Where do you quote this from? I cannot find anything like this in the WriteFile docs I have at
http://msdn.microsoft.com/en-us/library ... S.85).aspx .
I can only make sense of what you quote if 'internal' here means internal to the operating system, meaning the user has no control whatsoever over its flushing. The OS has to buffer ('cache') data that is to be written to disk while it waits for the desired sector to pass underneath the write head, and there is no way this could be sped up.
Reading through the docs I must admit that they are a bit confusing. In normal operation, there are several levels of buffering / caching involved:
1) A user uses a high-level output routine like fprintf() to write to a file. This library routine usually buffers a few KB, because invoking the OS is relatively expensive, and users typically use printf for strings of only 10-100 bytes, so it accumulates them. This is what we normally refer to as 'buffered I/O', and this is what we want to switch off in engine <-> GUI communication, because we expect an immediate reply to our fprintf-ed messages, which will not come if they are not actually sent. If this buffering is in effect, the user has to explicitly flush the buffer before starting to wait for an answer. WriteFile() is a low-level I/O routine that never does this kind of buffering.
In fact, calling fflush() to empty the fprintf() buffer will use WriteFile() to hand its contents to the OS.
2) There is buffering ('caching') by the OS. Normally, a WriteFile() or fflush() by the user would lead the system to copy the user data to a system buffer, so that the WriteFile() can return immediately, even though the physical writing might take time to complete. This caching is normally fully transparent to the user: even when reading the file back before the data is physically written to disk, the OS will substitute the buffer contents and hand those to the reader. There is no way a user can flush such buffers. But there is an I/O mode that Microsoft calls 'unbuffered file I/O'. Basically this does not mean that such buffering no longer takes place, but that the copying to the OS buffer is skipped, and that the original data in the application's memory space is used as the buffer. When the time has arrived that it is physically possible to write the data, the OS takes it directly from the user buffer. This form of I/O is very tricky, and subject to all kinds of alignment and size restrictions on the user data (because it must match the format of the buffers normally used by the OS).
I don't think the latter unbuffered mode makes any sense on pipes. (It might not even exist there.)