kbhearn wrote:
The major advantage to runtime linking is that any library bugs that get fixed don't require you to issue a new compile to be fixed in your executable. If it's a common enough library that it's used from a shared location, you may well not have to worry about it at all. As an example, say you were using OpenSSL in a program when the Heartbleed bug became public. If you had statically linked the OpenSSL library, then you have to patch, recompile, and get all your users to update your program. If you were using a shared library, their system security updates would take care of it for you.

Joost Buijs wrote:
Maybe I'm old-fashioned, but I still use MFC when I need to build a graphical interface; it almost never gives me problems. The only problem I had was when MFC switched to Unicode only; somewhat later Microsoft felt this was a mistake and released a separate MFC library that is still compatible with char and multi-byte strings.
I don't like it when my programs depend on dynamically linked libraries and runtime modules; this gives all kinds of problems with separate versions of the libraries installed (makes me think of Linux). The MFC library can be linked statically, and that is what I always do. Computer memory is so large nowadays that I don't see any advantage in linking at runtime.

You don't need to recompile if you want a new library version; relinking is sufficient. And testing. With dynamic linking the developer obviously doesn't get a chance to test, so quality and the user experience suffer as a consequence.
There is also, of course, the advantage with some sorts of low-level libraries that the "many different versions" may take advantage of local hardware differences without you having to explicitly code for them yourself. As long as you're using the libraries in the intended manner (and avoiding extremely new features unless you really need them), the existence of different versions should not bite you too hard or too often.
The Heartbleed example is contrived in the context of a chess program. How does a chess program user get impacted by that? There is no resource to "secure" in such a userland application that isn't already accessible by going to the OS directly. Libraries add nothing but convenience on top of the OS API.
The whole point Joost makes is that he chooses the OS API as the stable interface for his binaries, instead of trusting the library makers to keep their promises over the years. In practice the kernel API, be it Windows or POSIX, is rock stable, and library interfaces are not. In reality, dynamic linking means that three years down the road your executable doesn't run anymore, because some interface "improved", or some hidden assumption the maintainers didn't think about changed, or because the maintainers renamed the lib (e.g. libreadline on Ubuntu). If nobody screws up, it is fine. But you can't count on that; quite the opposite, you can count on library makers to make a mess of it. Kernel maintainers are far more responsible about keeping userland applications working than library makers, who are generally ignorant of compatibility and can only think about the latest and greatest.
One performance advantage of dynamic linking is that the text section is shared among processes, saving some memory and some load time during startup. But when load time matters, the lib would be cached anyway, so mapping in a statically linked executable will be faster than mapping a dynamically linked one (because the linking step can be skipped). The differences are so small as not to be noticeable, though, so it is only a memory difference now. Nowadays systems are not exactly shy on memory for text sections. Therefore static linking makes a great deal of sense. Dynamic linking still has a place, but not as a default for user applications.
The whole trend toward self-contained bundles, containers, VM images, etc. is an extension of the same idea: late binding to shared components is too much hassle for the user. The dependency problems are simply solved with space, which is cheap now.
