New project MVC 5 (why ?)

Discussion of chess software programming and technical issues.

Moderator: Ras

mvk
Posts: 589
Joined: Tue Jun 04, 2013 10:15 pm

Re: New project MVC 5 (why ?)

Post by mvk »

kbhearn wrote:
Joost Buijs wrote:Maybe I'm old fashioned but I still use MFC when I need to build a graphical interface, it almost never gives me problems. The only problem I had was when MFC switched to Unicode only, somewhat later Microsoft felt this was a mistake and they released a separate MFC library which is still compatible with char and multi-byte.

I don't like it when my programs depend on dynamically linked libraries and runtime modules; it causes all kinds of problems with separate versions of the libraries installed (makes me think of Linux). The MFC library can be linked statically, and this is what I always do. Computer memory is so large nowadays that I don't see any advantage in linking at runtime.
The major advantage of runtime linking is that library bug fixes don't require you to ship a new build of your executable. If it's a common enough library that it's used from a shared location, you may not have to worry about it at all. As an example, say you were using OpenSSL in a program when the Heartbleed bug became public. If you had statically linked the OpenSSL library, you would have to patch, recompile, and get all your users to update your program. If you were using a shared library, their system security updates would take care of it for you.

There is also, for some low-level libraries, the advantage that the 'many different versions' may take advantage of local hardware differences without you having to code for them yourself. As long as you're using the libraries in the intended manner (and avoiding very new features unless you really need them), the existence of different versions should not bite you too hard or too often.
You don't need to recompile to pick up a new lib version. Just relinking is sufficient. And testing. With dynamic linking the developer obviously doesn't get a chance to test, so quality and the user experience suffer as a consequence.

The Heartbleed example is contrived in the context of a chess program. How does a chess program user get impacted by that? There is no resource to "secure" in such a userland application that isn't already accessible by going to the OS directly. Libraries add nothing but convenience on top of the OS API.

The whole point Joost makes is that he chooses the OS API as the stable interface for his binaries, instead of trusting the library makers to keep their promises over the years. In practice the kernel API, be it Windows or POSIX, is rock stable, and library interfaces are not. In reality, dynamic linking means that three years down the road your executable doesn't run anymore, because some interface "improved", some hidden assumption the maintainers didn't think about changed, or the maintainers renamed the lib (e.g. libreadline on Ubuntu). If nobody screws up, it is fine. But you can't count on that; quite the opposite, you can count on library makers to make a mess of it. Kernel maintainers are far more responsible about keeping userland applications working than library makers, who are generally ignorant of this and can only think about the latest and greatest.

One performance advantage of dynamic linking is that the text section is shared among processes, saving some memory and some load time during startup. But when load time matters the lib would be cached anyway, so mapping in a statically linked executable will be faster than mapping a dynamically linked one (because the linking step can be skipped). The differences are too small to be noticeable, though, so it comes down to a memory difference, and systems nowadays are not exactly shy on memory for text sections. Therefore static linking makes a great deal of sense. Dynamic linking still has a place, but not as the default for user applications.
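The static-versus-dynamic trade-off shows up even in managed languages. A toy Java analogy (entirely illustrative, not from anyone's engine): a direct call is resolved at build time, like a statically linked symbol, while a lookup by name is resolved only at run time, like a shared-library symbol, and only the latter can break later when the named thing moves or changes.

```java
import java.lang.reflect.Method;

public class LinkDemo {
    public static void main(String[] args) throws Exception {
        // "Static": the compiler checks that Math.max exists with this
        // signature; a missing or changed dependency fails the build.
        System.out.println(Math.max(2, 3));

        // "Dynamic": the binding happens by name at run time. If the class
        // were renamed (the libreadline scenario above), this would compile
        // fine and blow up only on the user's machine.
        Class<?> cls = Class.forName("java.lang.Math");
        Method max = cls.getMethod("max", int.class, int.class);
        System.out.println(max.invoke(null, 2, 3));   // prints 3
    }
}
```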

The whole trend toward self-contained bundles, containers, VM images, etc. is an extension of the same idea: late binding to shared components is too much hassle for the user. The dependency problems are simply solved with space, which is cheap now.
[Account deleted]
SuneF
Posts: 127
Joined: Thu Sep 17, 2009 11:19 am

Re: New project MVC 5 (why ?)

Post by SuneF »

emadsen wrote: I agree, the pace of change can be dizzying. A healthy dose of skepticism helps (Do I really need dependency injection containers? Can't I just pass interfaces to class constructors and be done with it?), as does making a conscious decision when taking dependencies on frameworks or language features (a queryable object model is great, but LINQ makes it difficult to see the actual SQL query that's executing).

There's so much buzzword BS and bloated design out there that you have to be careful not to believe all the received wisdom. Sometimes a hammer is best; you don't need a damn hammer factory.
Right, I totally agree. Being a first mover is probably not worth it. If I use a new hyped-up framework, it has to really pack a punch and be worth the risk of maintenance issues.

What I usually do is keep an eye on new things and wait until they reach a certain level of maturity. Then I do some throwaway prototyping to play around with it, and only then would I even consider using it in a larger project.

Btw, how is C# for writing chess engines, nps-wise and all?
Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

Re: New project MVC 5 (why ?)

Post by Henk »

My engine, which is written in C#, only makes 70-200 kilo moves per second. It also counts null moves, but it doesn't count standing pat as a null move.

I don't know how Stockfish counts nodes, but by that measure Skipper is 20 times slower than Stockfish on my computer.
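Since the comparison hinges on what gets counted, here is a toy sketch (Java standing in for C#; nothing here is from Skipper or Stockfish) of the convention described above: the counter ticks whenever a node is entered, so a searched null move counts, while a stand-pat return in quiescence creates no new node and therefore no tick.

```java
public class NodeCountDemo {
    static long nodes = 0;

    // Toy "search": uniform tree, one null move + two real moves per node.
    static void search(int depth) {
        nodes++;                         // tick on entry: this is a node
        if (depth == 0) { quiesce(); return; }
        search(depth - 1);               // "null move": entered, so counted
        for (int move = 0; move < 2; move++)
            search(depth - 1);           // "real" moves
    }

    // Toy quiescence: standing pat just returns a score.
    static void quiesce() {
        // No nodes++ here: stand-pat is not treated as a new node, which is
        // why two engines with different rules report incomparable nps.
    }

    public static void main(String[] args) {
        search(3);                       // 1 + 3 + 9 + 27 nodes entered
        System.out.println(nodes);       // prints 40
    }
}
```

The point is only that "nps" depends on where that one increment sits, so cross-engine speed ratios are fuzzy.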
SuneF
Posts: 127
Joined: Thu Sep 17, 2009 11:19 am

Re: New project MVC 5 (why ?)

Post by SuneF »

Henk wrote:My engine, which is written in C#, only makes 70-200 kilo moves per second. It also counts null moves, but it doesn't count standing pat as a null move.

I don't know how Stockfish counts nodes, but by that measure Skipper is 20 times slower than Stockfish on my computer.
Well, if you use LINQ, foreach, lists, and all the cool stuff, I think there is a performance hit. On the other hand, if you avoid all the cool stuff you might as well write it in C :)

C# is good for fast development, large projects, web services, threads, ORMs, etc., whereas C/C++ is better for smaller projects and fast code.

Chess engines belong in the latter category, but it is still interesting to try it with C#.
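A rough illustration of that performance point, with Java streams standing in for LINQ (the names and data are mine, and this is a sketch, not a benchmark): the abstraction builds pipeline objects and routes every element through lambda calls, where a hot path like move scoring would use a bare loop.

```java
import java.util.stream.IntStream;

public class HotLoopDemo {
    public static void main(String[] args) {
        int[] scores = {10, -5, 30, 20};

        // The "cool stuff": concise, but each call constructs stream
        // pipeline machinery and dispatches through lambdas.
        int best1 = IntStream.of(scores).max().orElse(Integer.MIN_VALUE);

        // The plain loop: what you'd write in C, and what an engine's
        // inner loops want. Same result, no pipeline overhead.
        int best2 = Integer.MIN_VALUE;
        for (int s : scores) if (s > best2) best2 = s;

        System.out.println(best1 + " " + best2);   // prints "30 30"
    }
}
```

In glue code the overhead is irrelevant; it only matters in the few loops that run millions of times per second.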
kbhearn
Posts: 411
Joined: Thu Dec 30, 2010 4:48 am

Re: New project MVC 5 (why ?)

Post by kbhearn »

There was an effort to port Stockfish to C# and recover as much of the performance as possible, to see what the hit was.

https://github.com/bpfliegel/Portfish

He says he got it down to 2.7x slower.

Of course Stockfish was also ported to JavaScript, and even that only runs about 10x slower here, IIRC.
User avatar
emadsen
Posts: 441
Joined: Thu Apr 26, 2012 1:51 am
Location: Oak Park, IL, USA
Full name: Erik Madsen

Re: New project MVC 5 (why ?)

Post by emadsen »

SuneF wrote:Btw how is C# for writing chess engines, nps wise and all?
It's good enough for a middle-tier engine. I wrote my 1st engine (MadChess 1.0) in an object-oriented style. It can generate moves at 800 Knps and search the middlegame at 200 Knps. It plays bullet chess at 2100 ELO.

I'm writing my 2nd engine (MadChess 2.0), still in development, in a more procedural style. It generates moves at 4 Mnps and searches the middlegame at 1 Mnps. It plays bullet chess at 2300 ELO.

Both engines use a mailbox board representation. It's interesting that the reduction factor from move generation speed to search speed is 1/4 for both engines. I wonder if that's normal?

The author of NoraGrace has managed to write a C# bitboard engine that plays bullet chess at 2600 ELO. I'm not sure of its move generation or typical search speeds.
Erik Madsen | My C# chess engine: https://www.madchess.net
Henk
Posts: 7251
Joined: Mon May 27, 2013 10:31 am

Re: New project MVC 5 (why ?)

Post by Henk »

I was busy for half a day and the whole evening solving a bug: "Game did not show the names of the players." Finally I found it. I had not made the player property virtual, so the player object was set to null. Before that I had inspected the database and everything was OK.

When I consult the tutorial I find this:

"Navigation properties are typically defined as virtual so that they can take advantage of certain Entity Framework functionality such as lazy loading"

Typically defined ????
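For what it's worth, the reason EF wants navigation properties virtual is that it generates a proxy subclass at run time and overrides the property, so the first access can hit the database; a non-virtual property can't be overridden, so it silently stays null. A sketch of the mechanism in Java, where methods are overridable by default (all class names hypothetical; the DB query is faked):

```java
public class LazyDemo {
    static class Player {
        final String name;
        Player(String name) { this.name = name; }
    }

    // Your entity. getPlayer() is overridable (C#: "virtual"), which is
    // exactly what lets the ORM intercept it.
    static class Game {
        protected Player player;
        Player getPlayer() { return player; }
    }

    // The kind of proxy the ORM generates: load on first access.
    static class GameProxy extends Game {
        @Override
        Player getPlayer() {
            if (player == null) player = new Player("Kasparov"); // stand-in for a DB query
            return player;
        }
    }

    public static void main(String[] args) {
        Game game = new GameProxy();                 // the ORM hands you a proxy
        System.out.println(game.getPlayer().name);   // prints "Kasparov"
        // If getPlayer() were final (C#: non-virtual), GameProxy could not
        // override it, the base version would run, and player would stay
        // null: the bug described above.
    }
}
```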
SuneF
Posts: 127
Joined: Thu Sep 17, 2009 11:19 am

Re: New project MVC 5 (why ?)

Post by SuneF »

Henk wrote:With web development it is even more terrible. First they came with SOAP. After that it was web services. But that was not good enough, so they invented WCF. And now WCF is also obsolete and it should be Web API.
You're mixing things up a little bit here; I would say that WCF is not obsolete in any way. WCF is still the major framework to use for advanced communication, e.g. with duplex and binary bindings and the SOAP protocol. Web API only supports REST, so it's not a replacement for WCF.
Henk wrote:So no matter what you build on the web it is old fashioned within two years.
Yes, and this is GOOD; it means things are evolving at an immense speed. :)
Henk wrote: Also the security model in MVC 4 is obsolete and has been replaced with a new one in MVC 5. They want to keep us busy learning obsolete technology. When they introduce something they say it is fantastic and everybody should use it, and a few years later they make you feel like a fool if you are still using that old stuff.
Yes, for instance the new OWIN is simpler and enables ASP.NET web sites to be self-hosted outside IIS, so this is definitely GOOD :)
Henk wrote:With languages it is the same: C, C++, C#.

The C# language is not as easy as it used to be because of generics, lambda expressions, and LINQ being built in. With generics it is easy to create complicated nested type definitions.
C# has become a powerful language indeed. If C is a hammer, C# is like a toolbox filled with power tools. It takes a bit longer to master all of it, but you become more proficient once you do, and of course you can just start with a tiny subset, like in C. Support for advanced refactorings, auto-completion, etc. just saves you tons of time.
What I particularly like about C# is that you don't spend time writing boring boilerplate code, because the library is huge or you just find a NuGet package that has it.
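Henk's complaint about nested type definitions is easy to reproduce. A small Java sketch (the same pattern exists in C#; all names here are made up) of a type that is perfectly legal but hard to read, and the usual fix of naming the nested piece:

```java
import java.util.*;
import java.util.function.Function;

public class NestedGenerics {
    // Legal, concise to write, painful to read in every signature.
    static Map<String, List<Function<Integer, Optional<String>>>> handlers =
            new HashMap<>();

    // Naming the nested part (in C#: a small interface/class or a
    // using-alias) keeps the nesting out of the rest of the code.
    interface MoveFormatter extends Function<Integer, Optional<String>> {}
    static Map<String, List<MoveFormatter>> handlers2 = new HashMap<>();

    public static void main(String[] args) {
        MoveFormatter f = n -> Optional.of("move#" + n);
        handlers2.put("uci", List.of(f));
        System.out.println(handlers2.get("uci").get(0).apply(7).orElse("?"));
        // prints "move#7"
    }
}
```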
Henk wrote:Also design patterns can make your code difficult to understand. With design patterns you only see interfaces and abstractions and you wonder where you can find the concrete code that is executing here.
Oh boy. Decouple your code, SOLID, best practices and all that... oh wait, you're a chess programmer :)