1) For chess program ratings:
Using game length may increase the rating of programs that never resign (or, if the interface adjudicates games based on evaluations, it will increase the rating of programs that never show a very bad evaluation).
Well, it was just a suggestion of the kind of thing you can incorporate into rating estimation. Estimating the true strength of players as fast as possible requires you to look at many factors other than just the end result. I guess that for 'honest' computer evaluations within ±1 pawn, the GUI can force a draw even if the engines don't resign. This is used in practice, so the idea is workable.
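For illustration, here is a minimal sketch of that kind of evaluation-based draw adjudication; the centipawn threshold and the number of consecutive plies are my own illustrative assumptions, not the settings of any real GUI:

```python
# Minimal sketch of evaluation-based draw adjudication. The threshold
# (100 cp = 1 pawn) and the required number of plies are illustrative
# assumptions, not values from any particular GUI.

DRAW_THRESHOLD_CP = 100   # "no more than +/- 1 pawn", in centipawns
REQUIRED_PLIES = 10       # how long both evaluations must stay balanced

def should_adjudicate_draw(white_evals, black_evals):
    """Return True if both engines' last REQUIRED_PLIES evaluations
    (in centipawns, from White's point of view) stayed within the
    threshold, i.e. neither side claims a meaningful advantage."""
    recent = list(zip(white_evals, black_evals))[-REQUIRED_PLIES:]
    if len(recent) < REQUIRED_PLIES:
        return False
    return all(abs(w) <= DRAW_THRESHOLD_CP and abs(b) <= DRAW_THRESHOLD_CP
               for w, b in recent)
```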
If you want to use the PGNs of the games and not only the results, then it is better to use computer analysis of the games to calculate ratings. That way both players can earn rating points if they played better than their rating, and it is possible for both players to lose rating points if they played worse than their rating, based on the computer analysis.
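As a rough sketch of what such an update could look like: each player's rating moves toward a quality measure of their own play, independently of the opponent, so both can gain or both can lose in the same game. The function names, the linear mapping from average centipawn loss to a pseudo performance rating, and the K-factor below are all hypothetical:

```python
# Hypothetical sketch of an analysis-based update: each player's rating
# moves toward a quality score derived from their average centipawn loss,
# independently of the opponent, so both can gain or both can lose points.
# The linear cp-loss -> rating mapping and the K-factor are invented.

def performance_from_acpl(avg_cp_loss, base=3000.0, slope=20.0):
    """Map average centipawn loss per move to a pseudo performance rating."""
    return base - slope * avg_cp_loss

def analysis_update(rating, avg_cp_loss, k=0.1):
    """Nudge the rating toward the analysis-based performance."""
    return rating + k * (performance_from_acpl(avg_cp_loss) - rating)

# Example: both players played above their current ratings, so both gain.
print(analysis_update(2600.0, 15.0))  # 2600 -> 2610
print(analysis_update(2500.0, 20.0))  # 2500 -> 2510
```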
Note that I do not like this idea, because there is a problem with calculating the ratings of strong programs in this way. For example, if you use Houdini to analyze Houdini's games, it may inflate Houdini's rating, and if you want accurate results from computer analysis, you may need significantly more time to analyze the games than was used to play them.
I guess you can rate the quality of moves by analyzing, say, the top 5 with Houdini, and credit a good player for move quality by awarding more rating even when the end result is a loss. Note that in the end, whatever model you construct will be tested for predictive power on games that are not Houdini's. Clearly Houdini would be helped by this rating system, because its move will always be ranked first, even in games it loses. So you have to weigh move quality against the end result. At the end of the day, the model cannot be perfect unless the engine is perfect. This seems rather complicated, but game length seems workable for rewarding 'strong' wins. This is along the lines of BayesElo's improvement over EloStat: with the Bayesian approach, a 10-0 result is treated differently from 1-0.
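A sketch of that weighting idea; the rank-based quality score, the top-5 scoring scheme, and the 0.7 weight are all invented for illustration:

```python
# Sketch of blending move quality with the game result. A move scores by
# its rank in the analyzer's top-5 list (rank 0 = best); the 0.7 weight
# and the 5-rank scheme are illustrative assumptions. Note the bias
# discussed above: the analyzing engine's own moves always get rank 0.

def move_quality(ranks):
    """ranks: rank of each played move within the analyzer's top 5,
    or None if the move was outside the top 5. Returns a 0..1 score."""
    scores = [(1.0 - r / 5.0) if r is not None else 0.0 for r in ranks]
    return sum(scores) / len(scores)

def game_score(result, ranks, weight=0.7):
    """result: 1.0 win, 0.5 draw, 0.0 loss. Blend result with quality."""
    return weight * result + (1.0 - weight) * move_quality(ranks)

# A well-played loss still earns partial credit:
print(game_score(0.0, [0, 1, 0, 2, 0, 1]))  # quality ~0.87 -> score ~0.26
```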
2) For human chess ratings:
I am against all these ideas in human-human games because they encourage cheating.
Two players can simply prepare their game at home and earn rating points from their draw if you use computer analysis to calculate ratings.
I am also against the idea that two draws should not count the same as a win plus a loss for rating or ranking of humans, because I think this idea also encourages cheating: if a win plus a loss is not worth the same as two draws, then players of equal strength have a motivation to fix their results before the game, so that they get more than their expected 50% result.
Note that if the players are of equal strength, they are going to have an equal number of wins and losses, so their rating difference will always be 0. But for the same score with a different composition, e.g. 1W+3D instead of 2W+1D+1L, the ratings will differ depending on the draw model used. More draws are taken as indicative of equality, so 1W+3D will result in a smaller rating difference.
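To make the last point concrete, here is a small sketch using the Rao-Kupper draw model, which I believe is essentially the model behind BayesElo-style raters (theta controls draw frequency; theta = 2 here is an arbitrary choice). The same 2.5/4 score implies a smaller maximum-likelihood rating difference when it contains more draws:

```python
import math

def rao_kupper_probs(d, theta=2.0):
    """Win/draw/loss probabilities for a rating difference d (Elo) under
    the Rao-Kupper draw model; theta > 1 controls draw frequency."""
    x = 10 ** (d / 400.0)
    p_win = x / (x + theta)
    p_loss = 1.0 / (1.0 + theta * x)
    return p_win, 1.0 - p_win - p_loss, p_loss

def log_likelihood(d, wins, draws, losses, theta=2.0):
    pw, pd, pl = rao_kupper_probs(d, theta)
    return wins * math.log(pw) + draws * math.log(pd) + losses * math.log(pl)

def ml_rating_diff(wins, draws, losses):
    """Maximum-likelihood rating difference via a coarse grid search
    over -400..+400 Elo in 0.1 steps."""
    return max((step / 10.0 for step in range(-4000, 4001)),
               key=lambda d: log_likelihood(d, wins, draws, losses))

# Same 2.5/4 score, different composition, different inferred difference:
print(ml_rating_diff(1, 3, 0))  # 1W+3D    -> roughly +74 Elo
print(ml_rating_diff(2, 1, 1))  # 2W+1D+1L -> roughly +104 Elo
```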