I know this is an off-topic thread, but this is wrong. OpenGL is a replacement for the rendering, not the windowing system. If you're running a highly detailed realtime 3d game, this pattern might be what you choose, but OpenGL doesn't force it. If you were, say, drawing a 3d chessboard, you would likely still defer to the windowing system for invalidate events, and replace your paint handler with rendering an OpenGL scene. If you instead ran a constant rerender-and-flip loop, you would be overworking the graphics card for no good reason, as it would be pumping out hundreds of frames per second for a board that is static much of the time.

Michel wrote: Yes of course that is the best way to do it. You draw on a private copy and then the expose event handler just sends the relevant portion to the screen.

Actually it would be much better if all drawing took place outside the expose handler, but to a memory buffer rather than the display, so that the expose handler could simply copy that buffer to the screen. That would make it much simpler to copy only the part that was exposed (assuming the OS tells me what the damaged area was on a genuine expose event). In fact the code I just made does maintain such a copy, because the original animation code was reading information back from the screen to be able to restore it later (after the animated piece had evacuated the area). And this proved 2-3 orders of magnitude slower in cairo than in the original Xt code (probably because each time I want to read a dozen or so pixels from the screen it converts the entire board image from Xt into cairo format). So I now always draw everything static both to the screen and to the memory copy, so that I can read back from the copy to erase the animated piece. But this copy would be ideal as an image source for expose events.
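[For concreteness, a minimal cairo sketch of the backing-store scheme described above, assuming the program keeps two surfaces ('backup' in memory, 'screen' for the window) and is told the damaged rectangle; the names are hypothetical:]

    /* Copy only the damaged part of the off-screen backup to the
       screen surface, as an expose handler would. */
    #include <cairo.h>

    void copy_exposed_area(cairo_surface_t *backup, cairo_surface_t *screen,
                           int x, int y, int w, int h)
    {
        cairo_t *cr = cairo_create(screen);
        cairo_rectangle(cr, x, y, w, h);   /* limit the blit to the damage */
        cairo_clip(cr);
        cairo_set_source_surface(cr, backup, 0, 0);
        cairo_paint(cr);                   /* copy buffer -> screen */
        cairo_destroy(cr);
    }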
Well, this is the way it has always been (also on Windows/Mac). The system relies on being in control of the screen. If you invalidate this assumption you are in dangerous territory. But it is perfectly possible to hide this; toolkits do this. And it is also easy for a DrawingArea, if you draw on a private buffer.

Also, generating system events for no other purpose than to call my own expose event handler seems a pointless detour. If I want that handler to be called, I can simply call it myself. Or is the idea that events are queued, so that the redraw only starts after I am done handling the current input or timer event?
One reason, as you point out, is that the system can bundle exposure events if too many arrive. Another is that if the invalidated region happens to be covered, no expose event needs to be generated at all.
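[At the raw Xlib level, the "detour" in question looks roughly like the following sketch, assuming 'dpy', 'win' and the square geometry exist elsewhere in the program. XClearArea with the last argument True makes the server generate Expose events for the area, which it may merge or suppress as described above:]

    #include <X11/Xlib.h>

    /* Ask the server to invalidate one board square instead of
       redrawing it directly; the server decides if and how the
       resulting Expose events reach our handler. */
    void invalidate_square(Display *dpy, Window win, int file, int rank, int sq)
    {
        XClearArea(dpy, win, file * sq, rank * sq, sq, sq, True);
    }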
BTW, a notable exception to this design pattern seems to be OpenGL. Here the system gives you a buffer to draw on as quickly as possible (not using exposure events). The graphics hardware makes sure this buffer somehow ends up on the screen.
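[A sketch of that render-and-flip pattern with GLX, assuming a GL context has already been created and made current (setup omitted); render_scene() is a hypothetical function:]

    #include <GL/gl.h>
    #include <GL/glx.h>

    extern void render_scene(void);  /* hypothetical: redraws the whole scene */

    void render_loop(Display *dpy, Window win)
    {
        for (;;) {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            render_scene();            /* redraw everything, every frame */
            glXSwapBuffers(dpy, win);  /* flip the finished back buffer to screen */
        }
    }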
New XBoard alpha
Moderator: Ras
-
kbhearn
- Posts: 411
- Joined: Thu Dec 30, 2010 4:48 am
Re: New XBoard alpha
-
Michel
- Posts: 2292
- Joined: Mon Sep 29, 2008 1:50 am
Re: New XBoard alpha
kbhearn wrote: If you're running a highly detailed realtime 3d game, this pattern might be what you choose

Yes, this is what I had in mind. But I agree you can use OpenGL as a drawing system in the usual way.
-
kbhearn
- Posts: 411
- Joined: Thu Dec 30, 2010 4:48 am
Re: New XBoard alpha
hgm wrote: Well, I hope it is smart about combining events then. Because I'd rather redraw a1 and h8 in separate events than the whole board...

In practice, this should not be noticeable. While one can contrive a reason why tons of invalidate events might come in at once, it's seldom the case, and drawing a simple 2d chessboard is hardly a task that needs to be resource-optimised on a modern PC.
-
hgm
- Posts: 28452
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: New XBoard alpha
To get back on topic, I'm still not fully convinced that it is smart to always let the system reschedule my drawing requests as expose events. When I want to draw a narrow diagonal arrow, that is just asking for trouble. By drawing it on the memory copy, I destroy the information of where exactly it was, and in the best case I can now describe it only in a very coarse way to the expose-event handler, by invalidating a large number of squares to be redrawn because the arrow passed through them. Very cumbersome, and still a lot is drawn that was not touched. Much easier to draw the arrow both on the backup and directly to the screen, and not generate any clumsy expose events. That way I would only touch the screen pixels I wanted to change, with zero overhead. The chances that the system could usefully combine the arrow drawing with another expose event are next to zero...
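[A minimal cairo sketch of the "draw it twice" idea described above, assuming the program keeps a backing surface and a window surface (both names hypothetical):]

    #include <cairo.h>

    /* The same routine renders the arrow to whatever surface it is given. */
    static void draw_arrow(cairo_t *cr, double x1, double y1, double x2, double y2)
    {
        cairo_move_to(cr, x1, y1);
        cairo_line_to(cr, x2, y2);
        cairo_set_line_width(cr, 3.0);
        cairo_stroke(cr);
    }

    void draw_arrow_everywhere(cairo_surface_t *backup,
                               cairo_surface_t *window_surface,
                               double x1, double y1, double x2, double y2)
    {
        cairo_t *cr = cairo_create(backup);   /* memory copy */
        draw_arrow(cr, x1, y1, x2, y2);
        cairo_destroy(cr);

        cr = cairo_create(window_surface);    /* straight to the screen */
        draw_arrow(cr, x1, y1, x2, y2);
        cairo_destroy(cr);
    }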
-
kbhearn
- Posts: 411
- Joined: Thu Dec 30, 2010 4:48 am
Re: New XBoard alpha
a) drawing directly on the screen is never optimised. double-buffered drawing is always faster and more visually appealing.
b) you're trying to optimise something that doesn't need to be. the system is designed for simplicity that you only need to draw when you get a paint event.
-
hgm
- Posts: 28452
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: New XBoard alpha
I must be missing something. I am always 'drawing directly on the screen', right? Whether I am doing it from the expose handler or from my main program. It is the same code that does the drawing, and that code does not know where it was called from. So how can it be differently optimized?
And we might have different notions of 'simplicity'. When I have to draw an arrow on a buffer bitmap, and then add many times as much code to break the arrow up into a lot of square regions, so I can use an invalidate call on them to trigger expose events, I would not call that simpler than using the exact same drawing code with the screen as destination and forgetting about expose events altogether. A system designed for simplicity that forces me to write a lot of extra code to 'keep it simple' is designed wrong...
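[For comparison, the cumbersome variant being objected to would look something like the following sketch, sampling points along the arrow and invalidating every square it crosses; queue_square_redraw() is a hypothetical helper that marks one (file, rank) square as damaged, and 'sq' is the square size in pixels:]

    #include <stdlib.h>

    void invalidate_arrow_squares(void (*queue_square_redraw)(int file, int rank),
                                  int x1, int y1, int x2, int y2, int sq)
    {
        int n = 2 * (abs(x2 - x1) + abs(y2 - y1)) / sq + 1;  /* sample density */
        for (int i = 0; i <= n; i++) {
            int x = x1 + (x2 - x1) * i / n;
            int y = y1 + (y2 - y1) * i / n;
            queue_square_redraw(x / sq, y / sq);  /* squares may be marked twice */
        }
    }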
-
Michel
- Posts: 2292
- Joined: Mon Sep 29, 2008 1:50 am
Re: New XBoard alpha
hgm wrote: I am always 'drawing directly on the screen', right? Whether I am doing it from the expose handler or my main program.

If you draw in response to a draw signal, the toolkit knows you are going to draw and can prepare for it appropriately. If you do it directly, the toolkit has no control.

Doing it directly might seem to work, but if it is not mandated by the toolkit (and I believe in GTK it is not, unless you can point me to some official documentation that says otherwise), it will make your code less portable.

hgm wrote: And we might have different notions of 'simplicity'. When I have to draw an arrow on a buffer bitmap, and then add many times as much code to break up the arrow in a lot of square regions, to use an invalidate call on them to trigger expose events, ...

I am not sure _you_ have to do that. GTK has non-rectangular regions.
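[A sketch of what that would look like with GTK3's region API; the two rectangles stand in for squares a1 and h8 (coordinates illustrative), and only their union is queued for redraw, not the whole board:]

    #include <gtk/gtk.h>

    void queue_two_squares(GtkWidget *board, int sq)
    {
        cairo_rectangle_int_t a1 = { 0,      7 * sq, sq, sq };
        cairo_rectangle_int_t h8 = { 7 * sq, 0,      sq, sq };

        cairo_region_t *region = cairo_region_create_rectangle(&a1);
        cairo_region_union_rectangle(region, &h8);

        gtk_widget_queue_draw_region(board, region);  /* one expose, two squares */
        cairo_region_destroy(region);
    }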
-
kbhearn
- Posts: 411
- Joined: Thu Dec 30, 2010 4:48 am
Re: New XBoard alpha
hgm wrote: I must be missing something. I am always 'drawing directly on the screen', right? Whether I am doing it from the expose handler or my main program. It is the same code that does the drawing, and that code does not know where it was called from. So how can it be differently optimized?

And we might have different notions of 'simplicity'. When I have to draw an arrow on a buffer bitmap, and then add many times as much code to break up the arrow in a lot of square regions, to use an invalidate call on them to trigger expose events, I would not call that simpler than using the exact same drawing code with the screen as destination and forgetting about expose events altogether. A system designed for simplicity that forces me to write a lot of extra code to 'keep it simple' is designed wrong...

most people would just invalidate the whole window and let the paint handler redraw it. you're trying to save the machine effort that really doesn't need to be saved.
and no, you're not usually going to draw directly to the screen. usually you'd draw a new copy of the window in a memory buffer, and then flip buffers, though this would be hidden from you by many high-level graphics libraries.
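[In classic X11 terms that buffer-and-flip is roughly the following sketch, with all drawing having gone to an off-screen Pixmap first; 'dpy', 'win', 'gc', the buffer and the window size are assumed to be set up elsewhere:]

    #include <X11/Xlib.h>

    void present_frame(Display *dpy, Window win, GC gc,
                       Pixmap buffer, unsigned width, unsigned height)
    {
        /* Copy the completed off-screen frame to the visible window. */
        XCopyArea(dpy, buffer, win, gc, 0, 0, width, height, 0, 0);
        XFlush(dpy);  /* push the request to the server now */
    }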
-
hgm
- Posts: 28452
- Joined: Fri Mar 10, 2006 10:06 am
- Location: Amsterdam
- Full name: H G Muller
Re: New XBoard alpha
OK, I see. So something hidden is going on.

But if the toolkit is that smart, why should it burden me with doing the buffering and then generating expose events? If it knows whether I am drawing from the program or from the expose handler, it might as well have generated the expose events by itself when it knew I was drawing from the program.

Come to think of it, why should it require me to provide an expose handler in the first place? It could buffer any draw operation from the program in a buffer it allocated itself, and generate the corresponding expose events, which would call its own private expose handler that the application programmer knows nothing about, which merely copies from the widget's private memory buffer to the screen.

Non-rectangular regions would indeed make life easier. Note, however, that I am not using GTK, but only cairo. So far I have not been able to identify an 'invalidate region' call in cairo at all...
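[This is in fact close to what GTK3 does: the "draw" signal handler receives a cairo context that the toolkit has already clipped to the damaged region and backed by its own internal buffer, so the handler just draws. A hedged sketch; redraw_board() is hypothetical:]

    #include <gtk/gtk.h>

    extern void redraw_board(cairo_t *cr);  /* hypothetical repaint routine */

    static gboolean on_draw(GtkWidget *widget, cairo_t *cr, gpointer data)
    {
        redraw_board(cr);  /* cr is already clipped to the invalid region */
        return FALSE;      /* let GTK continue with default handling */
    }

    /* Connected once at setup time:
       g_signal_connect(board, "draw", G_CALLBACK(on_draw), NULL); */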
-
kbhearn
- Posts: 411
- Joined: Thu Dec 30, 2010 4:48 am
Re: New XBoard alpha
cairo looks like a low-level rendering library, so it wouldn't be responsible for invalidates at all; those would come from x11. then inside your handler for paint events, which would be triggered by the invalidates, you'd create a cairo surface and do your drawing on it.
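[Pieced together for a plain Xlib program (no toolkit), that flow might look like this sketch: on each Expose event, wrap the window in a cairo surface, clip to the damaged rectangle and redraw. redraw_board() is hypothetical, and the window size is assumed known:]

    #include <X11/Xlib.h>
    #include <cairo/cairo-xlib.h>

    extern void redraw_board(cairo_t *cr);  /* hypothetical repaint routine */

    void handle_expose(Display *dpy, Window win, XExposeEvent *ev,
                       int width, int height)
    {
        cairo_surface_t *s = cairo_xlib_surface_create(
            dpy, win, DefaultVisual(dpy, DefaultScreen(dpy)), width, height);
        cairo_t *cr = cairo_create(s);

        /* Restrict drawing to the area the server reported as damaged. */
        cairo_rectangle(cr, ev->x, ev->y, ev->width, ev->height);
        cairo_clip(cr);

        redraw_board(cr);  /* repaint; cairo discards anything outside the clip */

        cairo_destroy(cr);
        cairo_surface_destroy(s);
    }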