Both are UB and therefore not valid C programs. Fuck off nagger.
fpbp
>For the purposes of these operators, a pointer to a nonarray object behaves the same as a pointer to the first element of an array of length one with the type of the object as its element type.
>ANSI C (1990) 6.3.6 Additive operators
>If an invalid value has been assigned to the pointer, the behavior of the unary * operator is undefined.
>ANSI C (1990) 6.3.3.2 Address and indirection operators
OP BTFO
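Spelled out in code, that's roughly this (a sketch of what the two clauses permit and forbid; the names are made up):
#include <stdio.h>

int main(void)
{
    int a = 42;
    int *b = &a;     /* per 6.3.6, a behaves like an array of length one  */
    int *p = b + 1;  /* so forming the one-past-the-end pointer is legal  */
    *p = 3;          /* but dereferencing it is undefined (6.3.3.2)       */
    printf("%d\n", a);
    return 0;
}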
fpbp and real programmer detected
I can only tell that you're overwriting the value at b+1 with 3, and then dereferencing it. I can't tell how this wouldn't crash though.
LULZ struggling with the concept of memory pages, exhibit six million and one.
The fact that I do not mention issues related to memory pages does not mean I do not understand the topic.
I did mention it was a guess. There are a myriad of ways either program could segfault.
Pretty sure it has nothing to do with memory pages; it's probably something related to alignment.
Six million? In just a few years? I don't think so
Both can segfault anytime.
Both of these programs are writing to memory addresses that are not guaranteed to be valid. If I were to hazard a guess -- which could very well be wrong, because this is undefined behavior and anything can happen... the first program is the cause of the segfault. My reasoning is as follows:
In the first program, space is allocated for two 4-byte integers on the stack. This can be accomplished by decrementing the stack pointer by 8 and placing the integers next to each other. When b+1 is written to, it clobbers x. When the slot one past x is written to, it clobbers something else. That something else might be a saved stack/base pointer, or a return address.
In the second program, space only needs to be allocated for one 4-byte integer. But on many 64-bit platforms the stack must stay 8-byte aligned, so you subtract 8 bytes anyway, and the out-of-bounds write lands in the unused padding.
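For reference, here's a guess at the two programs under discussion (OP's actual code never appears in this thread; the names a, x, b, y are taken from later posts, so treat this purely as a sketch):
#include <stdio.h>

static void first_program(void)   /* two ints on the stack */
{
    int x;
    int a;
    int *b = &a;
    *(b + 1) = 3;                 /* UB: one int past a; may land on x */
    printf("%d\n", *(b + 1));
    (void)x;
}

static void second_program(void)  /* a single int on the stack */
{
    int x;
    int *y = &x;
    *(y + 1) = 3;                 /* UB: may hit alignment padding, or fault */
    printf("%d\n", *(y + 1));
}

int main(void)
{
    first_program();
    second_program();
    return 0;
}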
/thread.
But he's wrong. The second, shorter program is the one that faults.
both are accessing memory that isn't ok to access. either could segfault.
he's not wrong. that anon's assumptions are completely valid.
>undefined behavior and anything can happen
Undefined behavior doesn't mean anything can happen. Undefined behavior simply means the standard does not define the behavior. So how it actually behaves is left up to the compiler vendors. And the behavior is necessarily defined by each compiler.
Which OP did not specify, therefore anon's wording is reasonable.
how did the 3 get there?
nasal demons put it there
Both of these are dereferencing an invalid pointer. if you're just a beginner asking an honest question, then the answer is:
> compile with warnings, `-Wall -Wextra` which will warn you about erroneous or suspicious code during compilation
> and sanitizers, `-fsanitize=address,undefined -g` which will do runtime checks for various UB and memory related issues.
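To make that concrete, a minimal harness (the file name oob.c is made up; the exact report wording varies by toolchain):
/* oob.c -- build with:
 *     cc -Wall -Wextra -fsanitize=address,undefined -g oob.c
 * ASan typically kills the process at the bad store with a
 * stack-buffer-overflow report instead of letting it corrupt memory. */
#include <stdio.h>

int main(void)
{
    int a = 0;
    int *b = &a;
    *(b + 1) = 3;                /* out-of-bounds write, caught at runtime */
    printf("%d\n", *(b + 1));
    return 0;
}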
So I learned C++ in high school in the 90s. Let's see what I remember.
Defining an integer a.
Defining a pointer to int, b, initialized to the address of a.
Setting the value at memory address b + 1 (which is also &a + 1) to 3.
Outputting the value at b + 1, which could be 3 but might be something else?
I don't think you should be fucking with the memory at b + 1.
this is a trick question. they both segfault. what do I win?
As others have said, it's undefined, so you'll have to look at it in ghidra or something.
how do I know you didn't cheat and just look at the other anon's answers?
this is assigning the value of 3 to the memory address that's next to a/x?
Replace
*(b + 1) = 3;
*(y + 1) = 3;
with
char brap = 1;
*(b + brap) = 3;
char brap = 1;
*(y + brap) = 3;
And the 2 programs shouldn't ever crash.
it just moves the reference a fourth of the distance of the original, for what?
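Worth noting: pointer arithmetic scales by the pointee's size no matter what integer type the index has, so the char index doesn't move the write a quarter of the distance, and the store is still out of bounds either way. A quick sketch to check:
#include <stdio.h>

int main(void)
{
    int a = 0;
    int *b = &a;
    char brap = 1;
    /* brap is promoted to int; b + brap still advances by sizeof(int) bytes */
    printf("%td\n", (char *)(b + brap) - (char *)b);  /* prints 4 on typical platforms */
    return 0;
}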
a and x aren't initialized to anything. So both are UB and bad programs.
>declares a variable but doesn't assign it a value
>immediately initializes a second variable with the value of the first
>wonders why his program has wonky, unexpected behavior
Either of them could segfault. The only reason you're not seeing a segfault, or seeing a segfault, is purely luck.
>initializes a second variable with the value of the first
idiot.
retarded question
semi related question
why does C allow UBs? It seems to me like every large enough program will be somewhat like the Incompleteness theorem where you'll never know definitively whether the bug is fixed or not
UB is more of an excuse than a rule, for cases where it is very much a thing you _can_ do but varies in the result depending on the platform and other factors you're running on top of
You're thinking of implementation-defined behavior. Undefined behavior like this is a free ticket to the compiler to completely omit parts of the code or return bogus shit
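A standard illustration of that free ticket (a sketch; whether a given compiler actually deletes the check depends on version and flags, but mainstream optimizers do exactly this):
int read_field(int *p)
{
    int v = *p;        /* UB if p is NULL */
    if (p == NULL)     /* the optimizer may assume p != NULL because of the */
        return -1;     /* dereference above, and delete this whole branch   */
    return v;
}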
I get what you're saying, but platform I didn't mean only architectures or OSes, but also a compiler or stdlibs. I know it's not the right way to refer to it
>why does C allow UBs
For optimizations and portability. Most other languages can get away with not having UB since they have at most 2-3 implementations and most of them run in a VM, but if C declared that divide by zero is not UB, then every C compiler would have to emit code to check whether a divide by zero happened and return an error value. That would be a lot of work considering every divide could be a divide by zero, and a similar argument applies to the other UBs. Also, if there are differences between platforms, then C would have to define what should always happen in those cases, which could favor some architectures over others.
>It seems to me like every large enough program will be somewhat like the Incompleteness theorem where you'll never know definitively whether the bug is fixed or not
We already have static analyzers, asan and ubsan that will catch these kinds of errors.
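For instance, if the standard mandated that division by zero return some error value, every plain a / b would have to expand to something like this sketch (the mandated value 0 here is made up):
#include <limits.h>

/* the branches a compiler would have to emit around every `a / b`
 * if divide-by-zero (and INT_MIN / -1) were defined to return 0 */
int checked_div(int a, int b)
{
    if (b == 0)
        return 0;
    if (a == INT_MIN && b == -1)   /* the other trapping case on two's complement */
        return 0;
    return a / b;
}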
Not a C fag, but aren't you just incrementing the address by one and changing the address at that location to 3?
So b+1 is now an address called 3. Printing it out should be fine, but your program would crash if you dereference it.
I'm a PLC Programmer so forgive my C ignorance.
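The mixup here is between changing the pointer and changing what it points at. A sketch of the difference:
#include <stdio.h>

int main(void)
{
    int a = 0;
    int *b = &a;

    /* what this anon describes: moving the pointer itself */
    /* b = b + 1;            b would now hold the address one past a  */

    /* what OP actually wrote: b stays put, 3 is stored at that address */
    *(b + 1) = 3;               /* this store is the dangerous part     */
    printf("%d\n", *(b + 1));   /* printing just reads the value back   */
    return 0;
}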
>I'm a PLC Programmer
Recently did a small gig in automation working with Beckhoff PLCs. Very cool stuff.
i would guess the second segfaults because the first may need a larger memory area and therefore wouldn't be accessing an invalid area.
but mainly i would guess the second one because it's the unintuitive answer, and the above is the first justification i could think of for that behaviour.
Depends on the compiler and the OS, possibly other factors.