Dynamic linking is a retarded meme, it was only needed because of the limitations in disk space back in the 70s. It's not needed today, and any benefits are entirely debatable.
>Dynamic linking is a retarded meme, it was only needed because of the limitations in disk space back in the 70s. It's not needed today, and any benefits are entirely debatable.
/thread
Isn't dynamic linking better for security? If foo has a vulnerability, instead of having to wait for all programs that depend on foo to rebuild their binary, you just need the maintainer of foo to rebuild, and all other things no longer have that vuln?
Yes which is why static linking supporters are all either "it works for games on windows and I have no use for computers beyond that" or "just build everything from 200 microservices".
With the latter being mostly go users coping with lack of compiler support, and a total of 0 useful pieces of software produced between the two of them.
irresponsible, impersonated, coerced or outright malicious maintainers are a large threat surface
static linking is best for open source, especially since changes aren't coordinated.
However, libraries used should be in a check file so you know if an app has risk of vulns. afaik nobody does this.
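Something like a lock file shipped next to the binary, in a purely hypothetical format (the names and versions are just examples), so anyone can diff what got baked in against CVE feeds:

# deps.lock for ./myapp -- everything statically linked in
zlib      1.3.1    sha256:<hash of the exact source used>
sqlite    3.45.1   sha256:<...>
mbedtls   3.5.2    sha256:<...>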
>Isn't dynamic linking better for security?
>If foo has a vulnerability, instead of having to wait for all programs that depend on foo to rebuild their binary, you just need the maintainer of foo to rebuild, and all other things no longer have that vuln?
No, this is horrible practice.
I'm not dealing with retards who come crying to me because the program stopped working because some freetard decided to updoot libsneeder420 and broke compatibility with the API because he thinks "you don't need that feature" and some functions behave differently because "this is what the official specification says and it wasn't compliant before."
If I release something, it means I tested it, and it works with whatever I tested it with. You are getting all the dependencies of the program with the installer (maybe as versioned dlls which you WILL load into memory no matter what). You don't have to install a metric fuckload of libraries, and the program will just work. If anybody tried to install ML programs via python or literally any linux program for that matter, it should be obvious why this practice is cancer.
So you just keep shipping broken software?
Never even implied that. I will fix it when I get around to it and it becomes important. I'll check out the changes, make modifications, test the software, and release a new version when it's ready. Not going to entertain issues from users who fuck their shit up and then they're surprised it doesn't work as intended.
Your users will always find ways to be retarded so I don't care, still unvendoring your libraries.
Meh, do whatever you want on your computer.
But relying on libraries maintained by literal whos being installed on the end user's machine is asking for trouble from a developer's perspective, so if you're not mentally ill, I don't see why you would do it.
Your philosophy makes sense only if you're the one building the binaries running on the user's machine. On Linux and FOSS systems in general this is mostly not the case because of the distribution model.
>Your philosophy makes sense only if you're the one building the binaries running on the user's machine.
I don't see why this would be the case. Programs distribute libraries written by others all the time. If my program needs sqlite, it comes with sqlite3.dll, simple as. The exact version I tested it with.
>On Linux and FOSS systems in general this is mostly not the case because of the distribution model.
It's an arbitrary decision to just rely on users installing all the stuff you use. You don't have to do it, but Linux people believe in muh unix philosophy the same way they believe in god, and that leads to dogmatism. They're like "mindless dynamic linking le good, period."
On Windows and Macintosh, when you get a program installer, it actually has everything you need to run the program, and it has just worked for decades.
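For the sqlite3.dll case above, a rough sketch of pinning the vendored copy, assuming Windows 8+ (or Win7 with the KB2533623 update) and the stock sqlite3 export names; this isn't anyone's actual installer code:

#define _WIN32_WINNT 0x0602
#include <windows.h>
#include <stdio.h>

typedef const char *(__cdecl *sqlite3_libversion_t)(void);

int main(void)
{
    /* search only the application's own directory, so we get exactly the
       sqlite3.dll that shipped with the program, not whatever happens to
       be on PATH or in the working directory */
    HMODULE lib = LoadLibraryExW(L"sqlite3.dll", NULL,
                                 LOAD_LIBRARY_SEARCH_APPLICATION_DIR);
    if (!lib) {
        fprintf(stderr, "sqlite3.dll is missing from the install dir\n");
        return 1;
    }

    sqlite3_libversion_t ver =
        (sqlite3_libversion_t)GetProcAddress(lib, "sqlite3_libversion");
    if (ver)
        printf("shipped sqlite version: %s\n", ver());

    FreeLibrary(lib);
    return 0;
}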
Even on Windows you sometimes (mostly on new systems) end up missing some .NET version, DirectX or VC++ dlls to run some things.
>sometimes (mostly on new systems) end up missing some .NET version, DirectX or VC++ dlls to run some things
You mean mostly on older systems.
It is very rare to get this on newer ones. I don't recall ever having to install any .net or directx at all for Windows 10, and VC++ is usually bundled with the installers and installed automatically.
MS doesn't ship those with Windows so it happens on any newly installed system. Often there's a separate installer for those libraries so maybe you didn't notice, but the license forbids just dropping the dlls into your program folder.
Yeah that's right. The installer usually just pops up during/after/before the program's installer. I think this is reasonable, I mean they ship everything you need with the software, and you only need each version once.
I'm not sure about .NET, but I know basic .NET support comes with Windows since like Vista and I don't remember installing too much .net stuff, it's usually VC++. The only thing I do recall installing an extra library for was XNA.
But it's not always there, and in any case you don't decide which exact version of the libraries your program gets to use. If the user updates any of those runtimes/libraries, that's what you are going to get. The same goes for graphics drivers and other things you can't reasonably link statically. On Windows this is actually harder than on Linux because on Linux the kernel syscall ABI is stable, while on Windows you need to go through a dll.
Few programs use .NET, so it's easy to miss.
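To make the stable-syscall-ABI point concrete, a Linux-only sketch that skips the libc wrapper entirely; on Windows the supported boundary is kernel32/ntdll, not the raw syscall numbers, so there's no real equivalent:

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
    const char msg[] = "hello straight from the kernel ABI\n";
    /* write(2) invoked by number; this keeps working across kernel and
       libc versions because Linux doesn't break the syscall interface */
    syscall(SYS_write, 1, msg, sizeof msg - 1);
    return 0;
}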
As far as I know, Visual C++ runtime libraries are maintained properly, and you don't have to worry about it because you can detect if it's present and prompt the user to install it if it's not.
Of course, you can't count on stuff like graphics drivers, you just have to use their APIs and hope for the best.
I think the problem is more about the fundamental difference in philosophy between how things are done on Linux vs. desktop OSs. On Linux, people will even dynamically link that one guy's personal github project with 50 downloads and 2 stars (one from you and one from his alt) and just expect it to get installed on your computer. Whenever you start an installer, it shits up your entire system with libraries. It introduces complexity, doesn't actually save significant space, and it is ultimately the reason why the Linux desktop will never just work. This is not on the same level as linking official MSVC++ libraries or GPU libraries.
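The detect-and-prompt part is simple enough; a sketch assuming the VS2015+ runtime (vcruntime140.dll) and that you bundled the redistributable as vc_redist.x64.exe next to your installer. Both of those names are assumptions, adjust to whatever you actually ship:

#include <windows.h>
#include <shellapi.h>   /* ShellExecuteW; link against user32 and shell32 */

static int have_vc_runtime(void)
{
    /* if the DLL loads, the runtime is installed system-wide */
    HMODULE m = LoadLibraryW(L"vcruntime140.dll");
    if (m) { FreeLibrary(m); return 1; }
    return 0;
}

int main(void)
{
    if (!have_vc_runtime() &&
        MessageBoxW(NULL,
                    L"The Visual C++ runtime is missing. Install it now?",
                    L"Setup", MB_YESNO | MB_ICONQUESTION) == IDYES) {
        /* vc_redist.x64.exe is whatever redistributable you bundled */
        ShellExecuteW(NULL, L"open", L"vc_redist.x64.exe",
                      L"/install /quiet /norestart", NULL, SW_SHOWNORMAL);
    }
    return 0;
}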
I always have to install those on new systems for games or company tooling, which is still mostly .NET framework shit.
Steam automated the installs for games so that's nice at least.
debian can take your source code for their package but in all reality it's extremely unlikely that my application will get enough users to warrant being included in the debian mainline repos.
distros compiling everything with shared libraries *for the coreutils* is probably fine, along with many core dependencies. but only for the system itself. appimages and static linking for 3rd party apps, tbh.
>I'm not dealing with retards who come crying to me because the program stopped working because some freetard decided to updoot libsneeder420 and broke compatibility with the API
So close the issue? You know you can do that, right?
one case i've heard is that rather than me running my malware.exe on your system, which will obviously raise alarms as i try to access resources/use compute, i can just alter a dynamically loaded library (usually the same as the original library but with extra malicious action). Then, you execute your normal applications, and my malicious library gets called instead. Makes attribution/detection much more difficult, since it looks like legit apps are doing malicious things
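Not going to write the payload for you, but the usual mitigation for that search-order hijack is to lock down where DLLs may be loaded from before you load anything yourself; a sketch assuming Windows 8+ (SetDefaultDllDirectories):

#define _WIN32_WINNT 0x0602
#include <windows.h>

int main(void)
{
    /* from this point on, LoadLibrary and delay-loads only search System32
       and the application's own directory; the working directory and PATH
       are ignored, so a planted evil copy of a library in e.g. the
       Downloads folder never gets picked up. Note this does not cover DLLs
       already bound via the import table before main() runs. */
    SetDefaultDllDirectories(LOAD_LIBRARY_SEARCH_SYSTEM32 |
                             LOAD_LIBRARY_SEARCH_APPLICATION_DIR);

    /* anything loaded now resolves against that restricted set only */
    HMODULE m = LoadLibraryW(L"dbghelp.dll");
    if (m) FreeLibrary(m);
    return 0;
}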
If you have write access to the system, you could as well write into the statically linked programs too.
Security fags are the worst, I don't care if the shit I downloaded has a vulnerability, I just want it to work.
I can understand if the software is meant for contractor work for feds or military, or needed for infrastructure/banking. But 99% of the time it's not, so these devs are just being overly paranoid and riding some fotm programming ethics.
dll injection blocks your path.
There are valid use-cases for dll injection.
Remember a month ago when libwebm turned out to have a security issue and you had to patch everything separately?
>Remember a month ago when libwebm turned out to have a security issue and you had to patch everything separately?
this
was about to post this
Remember a month ago when libwebm had a vulnerability and everything that linked to it was also vulnerable?
>multiple copies of same library in and out of RAM
>re-linking for every change
Anon, I …
>nooo not the heckin' 15 megabytes of duplicate librarinos on my neoarchsucklesswayland build reeeeeeeee
why don't you homosexuals understand that not everyone has multiple terabytes of disk space lying around. Esp. laptops are often sold with as little as 128GB storage and *some* exclusively statically linked languages can eat a lot of that up (a lot of rust projects easily reach multiple gigabytes in temporary resources that wouldn't be necessary with dynamic linking (also faster compile times))
Static linking doesn't scale.
If you keep everything small and suckless, sure static linking can be fine.
But if you start to include huge libraries, it blows up and now you have gigabyte binaries for hello world.
It doesn't scale.
Imagine not realizing that static linking only includes the routines needed and does not include the entire library.
Absolute retard.
Not him but I don't see the contradiction. Obviously doesn't apply to "hello world" but it's still extra code you're bringing in. If you're statically linking to qt or something like that the filesize will be noticeably bigger
The problem with this argument is that one routine in a library often depends on another routine in a library, next thing you know you're pulling in most of the library anyways.
An example from personal experience: the Boost C++ libraries
They are fuckhuge. Even though they claim to be "modular", if you pull in just ASIO or Beast, you will end up needing like 60% of the 180MB library.
And your program will work. How is requiring users to keep big libraries around that they may not even need for anything else better?
>Dynamic linking is a retarded meme
always was, anon.
>coping this hard
>Isn't dynamic linking better for security?
no.
>only reason
i think it is for windows itself but you can statically link executables on those systems.
This. Dependency hell is already bad inside ONE program. Imagine the dependency hell spanning over the whole OS.
>allow me to hack your program by editing the library
No thanks
Never happens. But you know what does?
>CVE discovered in library
>oops, gotta recompile everything
dynamic linking is the way says the freetard as he installs yet another containerization system to manage the utter shitheap a standard linux install is
>projecting
Dynamic linking results in performance bottlenecks and API breakage. No thanks.
Most Swift shit can't be statically linked when compiling on macOS.
Apple says you're wrong, so you're wrong.
>library gets updated
>program breaks
Thanks.
Just use an older lib
If packages were forced to maintain backwards compatibility then there would be zero reason to use static linking.
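For what it's worth, ELF shared libraries can keep old behaviour around forever via symbol versioning, which is how glibc pulls it off. A sketch assuming GCC/binutils; libfoo, sneed and the LIBFOO_* version nodes are made-up names:

/* build: gcc -shared -fPIC foo.c -Wl,--version-script=libfoo.map -o libfoo.so
   where libfoo.map just declares the two version nodes:
       LIBFOO_1.0 { };
       LIBFOO_2.0 { } LIBFOO_1.0;                                          */

/* old behaviour, kept forever for binaries linked against the 1.0 ABI */
int sneed_v1(int x) { return x; }
__asm__(".symver sneed_v1, sneed@LIBFOO_1.0");

/* new behaviour, the default for anything linked from now on */
int sneed_v2(int x) { return 2 * x; }
__asm__(".symver sneed_v2, sneed@@LIBFOO_2.0");

Old binaries keep calling the 1.0 code, new builds get 2.0, and nobody has to recompile anything.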
Fuck ~~*dynamic*~~ linking. It's one of the worst things to happen to the computing world and the reason why there will never be a year of the Linux desktop
>Fuck ~~*dynamic*~~ linking. It's one of the worst things to happen to the computing world and the reason why there will never be a year of the Linux desktop
Dynamic linking is the only reason Windows programs can run on 9x and NT. You are stupid.
If you implemented your shit properly against the Win32 API, your shit is still working decades later. Not many libraries offer that level of stability.
You do realize Linux uses dynamic linking too.
my robot OS is roughly 125 programs and 250 shared libraries spread among 104 code repositories.
that's on top of standard Qt, OpenCV, OpenGL, libstdc++, libc, libm, Proj, sqlite, and a full headless Linux install
I fully support dynamic linking
Your entire OS is a dynamically linked ball of dependency anyway.
/g/: static linking bad
also LULZ: crams a gorillion dynamic libraries into a docker container because it's the only way the software can coexist with anything else on your machine
just kill stuff like electron and stop shipping a full-blown 200 MB browser for your 500 KB application
The assembly programmer knows.
As someone that actually programs in assembly, I prefer dynamic linking.
Who wants to have a folder of 1200 files when I can just have 1 executable?