Conventional Wisdom on Solid-State Drives

Every time I post about solid-state drives (SSDs) there’s always a naysayer warning about their “short life” and limited usability. It’s a huge misunderstanding of SSD wear-leveling and endurance to assume that a thousand program/erase (PE) cycles somehow makes the drive less durable than a conventional drive. This is wildly inaccurate.

The Old Way

Conventional drives store their information on spinning platters and use read/write heads on actuator arms to read and write magnetic data at specific locations on each platter. The arms are fragile. The movement of the platters is subject to environmental forces. A drop of only a fraction of an inch can toast your conventional drive. An hour baking in the car in front of Starbucks, or the moisture that makes it through your laptop bag when you walk between classes in the rain, can kill it. Some are even faulty by design (planned obsolescence), and even those that aren’t can suffer a random failure at any point in their life from dust, exposure to magnetism, or even sunlight. This is the fatal flaw of moving parts: in any entropic system, stuff will inevitably go wrong. The endurance you hope for is a gamble that it either won’t be you, or at least it won’t be now.

There have been dozens of studies of both conventional and solid-state drives. Most studies on conventional drives essentially conclude that some are better than others, but that they will all fail randomly at some point. Unfortunately, when it comes to conventional drives there’s really no guaranteed way to know how long your specific drive is going to last.

Even with the best SMART data, you can never really plan for when a conventional drive is going to fail. You can look at the brand or model and estimate in months or years, but actual operational time will vary even between devices made in the same factory at the same time in the same room. You just can’t plan for it.

New Tricks

Solid-state drives, however, don’t suffer from that randomness of never knowing whether the drive will even survive its first year. Thanks to their lack of vulnerable moving parts, vastly improved tolerances, and predictable wear-leveling, they have a calculable life that can be not only estimated but effectively planned and measured. You can proactively track wear through the drive’s own self-diagnostics and identify, if not the very hour, at least the week when your SSD will no longer accept writes (the data will usually still be readable).

SSDs provide several measures derived from their PE values to determine drive longevity. TBW (terabytes written) and DWPD (drive writes per day) are basically different faces of the same number: the volume of writes the drive is rated to handle before it begins to fail. The units differ, but the meaning is consistent between presentations: if each block can be written 1,100 times (a pretty close approximation based on current market values), then a 250GB drive could have 275TB written to it during its reliable life. A 960GB drive would be able to have just over 1PB (petabyte) written during its reliable life. If you measure the actual writes to your current drive over a couple of months (with PerfMon or SMART), you can see exactly how long it would take you to consume that amount. The drive won’t exactly crash and burn on that day; it will just fall out of the vendor-tested effectiveness in a “how many licks does it take to get to the center of a Tootsie Pop” way. Many SSDs will safely write twice as much data or more. You know, as long as you don’t bite into it. 😉
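To make that arithmetic concrete, here’s a minimal sketch in Python. The 1,100-cycle figure is the same assumption as above; real endurance ratings vary by vendor and NAND type, and this ignores write amplification:

```python
def rated_tbw(capacity_gb: float, pe_cycles: int = 1100) -> float:
    """Estimate total terabytes written (TBW) from capacity and P/E cycles.

    Assumes ideal wear-leveling (every block wears evenly) and ignores
    write amplification, so treat the result as a ballpark, not a spec.
    """
    return capacity_gb * pe_cycles / 1000  # GB of writes -> TB

print(rated_tbw(250))  # 275.0 TB for a 250GB drive
print(rated_tbw(960))  # 1056.0 TB (just over 1PB) for a 960GB drive
```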

SMART

Every drive made in the last 20+ years has supported some level of self-diagnostics via SMART (Self-Monitoring, Analysis and Reporting Technology), but the detail provided by SSDs is fantastic. SMART exposes potentially hundreds of attributes to identify, track, and observe drive usage and diagnostic information, and on an SSD those attributes let you see actual writes, reads, and remaining life. Get an SSD, use it for a couple of months, and you can effectively estimate its lifespan for your actual usage.
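As a sketch of what that looks like in practice, here’s one way to pull total host writes out of smartmontools’ `smartctl` with Python. The attribute ID and units are an assumption on my part – `Total_LBAs_Written` (attribute 241) counted in 512-byte sectors is common on Kingston and similar drives, but vendors differ, so check your model:

```python
import subprocess

def total_gb_written(device: str = "/dev/sda") -> float:
    """Estimate total GB written from SMART attribute 241 (Total_LBAs_Written).

    Assumes the raw value counts 512-byte sectors, which is common but
    not universal. Requires smartmontools and root privileges.
    """
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "241":       # attribute ID column
            return int(fields[-1]) * 512 / 1e9  # raw value is the last column
    raise ValueError(f"Attribute 241 not reported by {device}")

print(f"{total_gb_written():,.0f} GB written")
```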

For example, my current C: drive is a 240GB Kingston SSD. As of this writing the drive has been in use for 937 days (2.57 years) and has been restarted only 72 times (roughly twice per month – usually for software updates or installations). It has written 18,925 GB (<19 TB) in that time, which is about 20.2 GB/day. With the magic 1,100 PE number, we can safely assume it’ll be able to write about 264 TB in its life. This means the drive will likely survive another 33 years at my current usage. Give or take.
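Here’s the same back-of-the-envelope projection in Python, using the figures above:

```python
def years_remaining(capacity_gb: float, written_gb: float,
                    days_in_use: float, pe_cycles: int = 1100) -> float:
    """Project remaining drive life from the observed daily write rate."""
    rated_gb = capacity_gb * pe_cycles      # ~264,000 GB of rated writes
    gb_per_day = written_gb / days_in_use   # observed usage (~20.2 GB/day)
    return (rated_gb - written_gb) / gb_per_day / 365.25

# My Kingston drive: 240GB capacity, 18,925GB written over 937 days
print(f"{years_remaining(240, 18_925, 937):.1f} years")  # ~33.2
```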

Now it should be noted that I’m not the typical user, and I tune the crap out of my hardware (and my clients’ hardware) to ensure we get both the best experience and the best value out of it. I’m not a gamer, but I run more varied applications and services than anyone I know, keeping a lot in RAM and minimizing page-file usage to prevent unnecessary writes. Which is to say, a typical person with a stock install may only get a “mere” ten to fifteen years out of a similar SSD – in a computer where most of the rest of the hardware will be unsupported in ten years. Task-based users (email + web + Word) could get centuries out of it if it’s tuned properly. Hardcore gamers may only get a couple of years, but they will be fantastic years.

I love the performance of my SSD, but believe me when I say I hope I am not still using this drive as my C: drive in 30 years. New developments are made every year and I plan to offload this one into one of my workhorses when I upgrade my primary rig. 🙂

True Wisdom

Should everyone use an SSD as their operating system drive? Yes. Should it be used for everything? No. You wouldn’t haul manure in a Porsche 911, would you?

I use SSDs in all my computers, but for some tasks I use conventional drives as well. I even use a few drives I know are defective but that still have great caching capabilities. For example, I do a lot of video transcoding – converting and resampling video to improve quality and performance. This can write as much as 2 terabytes per day on one of my machines, which would kill my Kingston SSD in just over four months, so for these tasks I use cheap conventional drives that get disposed of when they inevitably fail. The SSD runs the apps, but the conventional drive acts as a read/write canvas for transcoding. It works very well. But why don’t I just use an SSD anyway – they’re faster, right? Because video transcoding with FFmpeg is capped at the speed of the CPU, so it’s never going to bottleneck on a disk read or write on a conventional drive, making an SSD a waste of valuable resources.
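That four-month figure is just the same TBW arithmetic applied to a heavier write load – a quick sanity check, assuming 2TB/day against what my drive has left:

```python
# Remaining rated writes on the 240GB Kingston (1,100-cycle assumption),
# minus the ~19TB it has already written.
remaining_gb = 240 * 1100 - 18_925   # ~245,075 GB left
days = remaining_gb / 2_000          # at 2TB/day of transcoding scratch
print(f"{days:.0f} days (~{days / 30:.1f} months)")  # ~123 days, just over 4
```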

The choice is yours, of course, but don’t base your decision about whether to buy a solid-state drive on uneducated FUD.

Regards,

Shawn K. Hall
https://SaferPC.info/
https://12PointDesign.com/
