Streaming media, caching web content, apps that use caching and virtual storage, cloud sync, etc.
No way this would be 164 GB every day. It's a laptop, not a server.
In cloud solutions there is usually a server at one end, and depending on change frequency and the size of the cloud storage, huge files (video, media, photo databases, etc.) can be pushed to the server from workstations and then synced to the laptop.
Milos wrote: ↑Sun Feb 28, 2021 5:03 pm 600 P&E cycles for QLC??? I'd say you are clueless about real-life performance of modern NAND flash.
That's what Samsung offers as warranty. They also have quite some over-provisioning, because what's sold isn't 512 GB but 500 GB in the small model, and one GB is counted as 1,000,000,000 bytes. The catch, however, is that the warranty is 5 years or 300 TBW, whichever is reached first. That means you'd have to write 164 GB per day on average to reach the 300 TBW within 5 years, and that's far more than what the average user writes.
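For reference, the 164 GB/day figure is just the warranty math, assuming 1 TB = 1,000 GB as drive vendors count:

```python
# Sanity check of the warranty math: 300 TBW spread over the
# 5-year warranty period, using 1 TB = 1,000 GB.
tbw_limit_gb = 300 * 1000          # 300 TBW expressed in GB
warranty_days = 5 * 365            # 5 years, ignoring leap days
daily_write_budget = tbw_limit_gb / warranty_days
print(f"{daily_write_budget:.0f} GB/day")  # ~164 GB/day
```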
That's a warranty on the device, not on the memory chips. The device has a controller, so let me explain a bit how it works in practice. There is more over-provisioning than just the difference between 1 billion bytes and 1 GB (usually around 10%), but it is almost exclusively used for wear leveling.
Some of the actual P&E cycles are masked by the local cache, but overall that won't increase written capacity by much. So they use a typical "trick": once you go over, let's say, 70% of the actual total P&E cycles across all the chips, the controller starts slowing down write speed by quite some margin. Typically, once you are above 90% of the actual total P&E cycles, it goes into crawling speed, so it would literally take you a couple of years (up to warranty expiration) to use the remaining 10% of written capacity.
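As an illustration only, the throttling behaviour described above could be modelled like this; the 70% and 90% thresholds come from the post, while the speed multipliers are hypothetical placeholders, since real firmware curves vary by vendor:

```python
def write_speed_factor(pe_used_fraction: float) -> float:
    """Illustrative throttle curve: full speed below 70% of total
    P&E cycles, reduced up to 90%, then a crawl near end of life.
    The multipliers 0.3 and 0.02 are made-up placeholders."""
    if pe_used_fraction < 0.70:
        return 1.0      # full write speed
    elif pe_used_fraction < 0.90:
        return 0.3      # slowed "by quite some margin"
    else:
        return 0.02     # crawling speed

print(write_speed_factor(0.50))  # 1.0
print(write_speed_factor(0.95))  # 0.02
```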
QLC NAND is pretty bad compared to TLC NAND: typically 200 P&E cycles versus 500.
There are of course a ton of other things that can be done to improve endurance, but these are typically reserved for enterprise devices. One very effective technique you might find in high-end consumer products is using a higher amount of over-provisioning (typically 20%) and then running that 20%, plus another 20% of the base capacity, in SLC mode only. That increases the number of P&E cycles by a factor of 20-50, so with ~20% of total capacity in SLC mode you essentially double the total written capacity in the ideal case. Of course this is not such an easy task in practice, because it requires a lot of intelligence in tracking cold and hot data, and the efficiency is quite data-dependent.
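The per-cell trade-off behind this is easy to sketch: a cell in SLC mode stores a quarter of the bits but gains 20-50x the cycles, so each physical cell can absorb several times more written data over its life. The system-level gain (the "doubling" above) is smaller because only part of the capacity runs in SLC mode. A rough back-of-envelope with the post's numbers, where the 30x gain is my own pick from the quoted 20-50x range:

```python
# Per-cell endurance comparison using the post's rough figures.
# All numbers are order-of-magnitude estimates, not datasheet values.
qlc_cycles = 200            # typical QLC P&E cycles (from the post)
slc_mode_gain = 30          # middle of the quoted 20-50x range
bits_qlc, bits_slc = 4, 1   # bits stored per cell in each mode

lifetime_bits_qlc = qlc_cycles * bits_qlc                   # 800 bit-writes/cell
lifetime_bits_slc = qlc_cycles * slc_mode_gain * bits_slc   # 6000 bit-writes/cell
print(lifetime_bits_slc / lifetime_bits_qlc)  # 7.5x written data per cell
```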
Ckappe wrote: ↑Sun Feb 28, 2021 7:14 pm
As far as I know, Apple does not use the Samsung drives you refer to in their M1s. Rumor has it that the soldered SSDs use TLC NAND from Western Digital, so the real TBW is really an unknown (I would seriously doubt it is over half of what you quote). (BTW, this is also an additional problem with Apple's "un"-openness: they do not clearly specify components and warranty details to their customers!)
In NAND flash, WD is actually Toshiba (just as Intel is actually Micron). They don't really own the fab technology. Apple mainly uses Toshiba and SK Hynix NAND flash. These days it's 90% QLC because of the price; I doubt they'd go with TLC in their laptops, but of course there is always a possibility.
Toshiba TLC is pretty good, but consumer level is consumer level, so you can't expect miracles.
Milos wrote: ↑Sun Feb 28, 2021 8:56 pm So they use a typical "trick". Once you go over lets say 70% of actual total P&E cycles over all the chips controller starts slowing down write speed by quite some margin. Typically once you are above 90% of actual total P&E cycles it goes into crawling speed so it would literally take you couple of years (up to warranty expiration) to use remaining 10% written capacity.
That's interesting. However, I remember some endurance tests where SSDs were pushed well beyond the TBW limit, and that wouldn't have been possible with such a slow-down, I think?
QLC NAND is pretty bad compared to TLC NAND, typically 200 P&E cycles compared to 500 P&E cycles.
But the Samsung 970 Evo Plus is TLC, not QLC, so rating it at 600 cycles doesn't seem too far-fetched.
in only SLC mode.
The 970 has that, too, though more for speed reasons (dubbed "TurboWrite").
Of course, as already mentioned, Apple's M1 does not use a Samsung 970; I just guesstimated it would be somewhat equivalent because both aim at the better-quality end of the consumer segment.
Milos wrote: ↑Sun Feb 28, 2021 8:56 pm Ofc this is not such an easy task in practice coz it requires a lot of intelligence in tracking cold and hot data and efficiency is quite data dependent.
Btw, since you mentioned you are testing flash: it's known that SSD flash may lose its data when unpowered, depending on ambient temperature and wear level. Usually this is not an issue; the "SSDs lose data after two weeks" story that went viral back then was quite misleading. However, the SSD firmware is also stored in the flash, I think. Suppose you store an SSD at 25°C on the shelf and leave it there as a backup part: after what time would firmware loss become a risk? I.e., how often should such parts be put into a computer and powered on to refresh themselves, and for how long would they need to be powered to achieve that?
Retention at room temperature is quite good, i.e. at least 3+ years; in practice, when powered down (shelved), much longer than that. Already at 40°C it becomes quite bad (~3 months). When powered up it refreshes by default (pretty standard in most controllers these days), but of course it takes a while.
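Just to make those two data points concrete, here is a crude exponential curve through them. It is only an interpolation of the numbers above (taking the lower bound of 3 years at 25°C, and 3 months at 40°C); real retention depends on wear level and NAND type, so this is not a datasheet model:

```python
import math

# Fit retention(T) = r1 * exp(-k * (T - t1)) through the two
# quoted data points; everything else is extrapolation.
t1, r1 = 25.0, 36.0   # (temperature in C, retention in months)
t2, r2 = 40.0, 3.0
k = math.log(r1 / r2) / (t2 - t1)   # decay rate per degree C

def retention_months(temp_c: float) -> float:
    return r1 * math.exp(-k * (temp_c - t1))

print(round(retention_months(30), 1))  # ~15.7 months by this crude fit
```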
Update about Apple M1 wearing out internal soldered SSDs:
I have stopped my engine tournaments to verify whether the cause could be:
1) Arena Chess running 24/7 on Windows 10 ARM64 under Parallels Desktop 16.3 on the M1.
2) A second tournament between native M1 engines in BanksiaGui on the Mac.
Luckily the remaining percentage is still 84% after a week, but obviously Big Sur has continued to write a lot of data. However, it cannot be directly related to hash tables or 6-man Syzygy tablebases (both installed on an external SSD, though the virtual machine swap file is on the internal SSD). I have read that it could just be a bug in the third-party tool measuring SSD usage, because the Mac mini M1's total power-on time isn't accurate either.
Are other people experiencing this worrying issue on the Mac M1 when running chess engine tests?
PS: If it is not fixed soon by Apple, I hope to solve it by buying an external Thunderbolt 3 M.2 512 GB SSD, so I won't risk compromising my Mac mini M1, and can simply replace the external SSD when it wears out.
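If the worry is that a third-party tool is misreporting, one way to cross-check is to read the NVMe health log with smartmontools and parse `smartctl`'s JSON output (whether the Apple internal NVMe is readable this way depends on the smartmontools version and platform). The sample JSON below is hypothetical, trimmed to the relevant fields and loosely matching an 84%-remaining reading:

```python
import json

# Hypothetical excerpt of `smartctl -a --json /dev/...` output,
# trimmed to the NVMe health fields relevant to wear tracking.
sample = json.loads("""
{
  "nvme_smart_health_information_log": {
    "percentage_used": 16,
    "data_units_written": 39000000
  }
}
""")

log = sample["nvme_smart_health_information_log"]
# NVMe counts data units in blocks of 1000 x 512 bytes = 512,000 bytes.
tb_written = log["data_units_written"] * 512_000 / 1e12
print(f"{log['percentage_used']}% used, {tb_written:.1f} TB written")
```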
Chess engines and dedicated chess computers fan since 1981 macOS Sequoia 16GB-512GB, Windows 11 & Ubuntu ARM64. ProteusSF Dev Forum
AlexChess wrote: ↑Thu Mar 04, 2021 9:50 am Update about Apple M1 breaking internal soldered SSDs: [...]
I recently sold most of my Apple stuff apart from an old iPad, mainly because I don't like how Apple restricts the use of the hardware, and their overall un-openness and security-by-obscurity proposition. That said, my friend has a 16 GB MacBook Air, and we concluded that the best approach is probably to use it fully as intended, and perhaps even stress the SSD more, so that at least the device failure is likely to happen within warranty and not outside the 3 years. Service is usually a pain once devices start getting older, especially with Apple's current stance of interfering as much as possible with third-party service and repairs.
Maybe you are using it for lots of other stuff than chess? If it's only for chess, I would probably just sell it on eBay and use the savings to buy a brand-new slim Ryzen-based gaming laptop for a similar price: better, more compatible, and less Apple tax in general. But that's me; I am pretty tired of paying hard-earned money to be a beta tester for Apple.
Ckappe wrote: ↑Fri Mar 05, 2021 6:11 pm we concluded that the best way is probably to use it fully as intended and perhaps even stress the SSD more so at least the device failure is likely to happen within warranty, and not outside the 3 years..
The problem comes if Apple launches a software update before the SSD has worn out - which would then leave him with a mostly, but not fully worn out (and non-replaceable) SSD by the end of the warranty. On the other hand, he can't complain because the "it's by Apple" warning sticker was clearly visible.
Ckappe wrote: ↑Fri Mar 05, 2021 6:11 pm Maybe you are using it for lots of other stuff than chess? If only for chess I would probably just sell it on eBay [...]
I have solved my issue by installing Big Sur on an external Samsung T5 USB 3.1 512 GB SSD. As you suggest, I'll use it until it breaks and then buy a new one (it doesn't report S.M.A.R.T. data, so I cannot even check its status). With the last update, Apple changed the recovery procedure a little, and it is now simpler to install macOS externally and boot from it. A lot of old posts said that USB wasn't bootable and that expensive Thunderbolt 3 was the only solution. Not true (anymore): USB 3.1 also works perfectly now, and it is consistently fast.
BUT I agree with you: Apple's policy of soldering vital components is unfair, and I hope users will start a class action for the SSD-gate too!
I have asked them, on the Apple Support Forum and on Twitter, to recall all defective M1 computers. I like macOS but NOT Apple; I'm looking forward to the next open-hardware ARM computers with upgradable storage and memory, to install Linux on: https://www.pine64.org/pinebook-pro/ https://en.wikipedia.org/wiki/List_of_o ... g_hardware.