It's for the "Fusion Drive." It was basically just used as a cache: you had a larger, slower drive behind it for capacity, and this just held frequently accessed data.
Sounds like Intel's Optane drives.
More or less. Fusion Drive was introduced when high-capacity SSDs were rather expensive, although the SSD part had 128 GB, IIRC. Apple stuck with Fusion Drive BTO options for way too long and also nerfed it to 32 GB, as evidenced here.
Man, Intel seriously needs to license Optane out. That technology represents a new paradigm for digital storage. It's simpler/cheaper to manufacture than flash memory, and its speed is closer to RAM than to flash: it's at least an order of magnitude faster than current NVMe drives. It's also three-dimensional, so there's potential to make super-fast terabyte, even petabyte, sized drives.
I wish the world was competing to make better Optane/xPoint drives like they are with flash, it's a shame the tech is locked behind a patent...
As someone who has performed data recovery on Optane systems: no, just no.
There's larger, slow mechanical storage;
faster flash and XPoint storage;
and RAM.
Any device can use the level above it as a cache and appear faster.
But RAM caching is generally better handled by the OS itself.
Flash caching isn't an awful idea, except that when it goes wrong you're back to "safely remove disk" being absolutely vital. So the OS needs to be aware, and cutting power at the wrong time can kill your install.
Every update says "do not turn off your computer" for a reason, but the actual redundancy we have now is leagues better than 10 years ago.
God forbid one component in an Optane chain becomes unreliable.
Ultimately everything needs to run in RAM. Everything needs persistent storage. A non-standard middle step between persistent and volatile memory is best avoided.
XPoint was an interesting experiment, but CXL replaced it. Ultimately the choice for data centres is to support more RAM; additional RAM replacing the Optane cache, holding data while it waits to be written, is more compatible and predictable.
You can now have terabytes of RAM, and if you rarely boot and have redundant systems, there's no need for the middle step.
The cost per gigabyte of memory versus SSD as a cache matters, but RAM error correction and other protocols give even more advantages to avoiding Optane.
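To make that hierarchy concrete, here's a minimal sketch of the read path described above, assuming invented tier sizes and a simple LRU-style promotion policy (an illustration only, not any particular product's algorithm):

```python
from collections import OrderedDict

class TieredStore:
    """Toy read path for the hierarchy above: RAM -> flash -> disk.
    Tier sizes are invented; 'disk' always keeps a persistent copy."""

    def __init__(self, ram_slots=2, flash_slots=4):
        self.ram = OrderedDict()    # fastest, smallest tier
        self.flash = OrderedDict()  # middle tier (the Optane/SSD role)
        self.disk = {}              # slow tier: source of truth
        self.ram_slots = ram_slots
        self.flash_slots = flash_slots

    def write(self, key, value):
        self.disk[key] = value      # persistence lives on the slow tier

    def read(self, key):
        for tier in (self.ram, self.flash):
            if key in tier:
                value = tier[key]
                break
        else:
            value = self.disk[key]  # full miss: the slow path
        self._promote(key, value)
        return value

    def _promote(self, key, value):
        # Hot items bubble up to RAM; items evicted from RAM drop to
        # flash; items evicted from flash survive only as the disk copy.
        self.ram[key] = value
        self.ram.move_to_end(key)
        if len(self.ram) > self.ram_slots:
            old_key, old_value = self.ram.popitem(last=False)
            self.flash[old_key] = old_value
            self.flash.move_to_end(old_key)
            if len(self.flash) > self.flash_slots:
                self.flash.popitem(last=False)

store = TieredStore()
store.write("photo.jpg", b"...")
store.read("photo.jpg")  # first read hits disk, later reads hit RAM
```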
This. I had one in an iMac a while back.
So it's pretty much like SSHDs nowadays, or like read caching?
Pretty much the same; a software solution vs. a hardware one.
Here's the section for the wikipedia article you mentioned in your comment:
There are two main "hybrid" storage technologies that combine NAND flash memory or SSDs, with the HDD technology: dual-drive hybrid systems and solid-state hybrid drives.
It’s probably an SSD for a Fusion Drive setup: https://en.m.wikipedia.org/wiki/Fusion_Drive
It seems to check out for an iMac in 2019.
Here's the summary for the wikipedia article you mentioned in your comment:
Fusion Drive is a type of hybrid drive technology created by Apple Inc. It combines a hard disk drive with a NAND flash storage (solid-state drive of 24 GB or more) and presents it as a single Core Storage managed logical volume with the space of both drives combined. The operating system automatically manages the contents of the drive so the most frequently accessed files are stored on the faster flash storage, while infrequently used items move to or stay on the hard drive. For example, if spreadsheet software is used often, the software will be moved to the flash storage for faster user access. In software, this logical volume speeds up performance of the computer by performing both caching for faster writes and auto tiering for faster reads.
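To illustrate the "auto tiering" part of that summary, here's a rough sketch of frequency-based placement. This is not Apple's actual Core Storage logic; the file names, sizes, and access counts are invented, and the 24 GB figure is just the minimum flash size quoted above:

```python
FLASH_CAPACITY = 24 * 10**9  # bytes; minimum flash size per the summary

def plan_tiers(files):
    """files: iterable of (name, size_bytes, access_count) tuples.
    Greedily fills the flash tier with the most-accessed files;
    everything that doesn't fit stays on the hard drive."""
    on_flash, on_hdd, used = [], [], 0
    for name, size, hits in sorted(files, key=lambda f: f[2], reverse=True):
        if used + size <= FLASH_CAPACITY:
            on_flash.append(name)
            used += size
        else:
            on_hdd.append(name)  # cold, or simply too big to fit
    return on_flash, on_hdd

flash, hdd = plan_tiers([
    ("spreadsheet.app", 2 * 10**9, 500),  # used often -> flash
    ("old_movies.mkv", 40 * 10**9, 3),    # huge and cold -> HDD
])
```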
A Wikipedia bot? Great! ❤️
Thanks.
You're the one who made it? Then it's me who should thank you! Anything that improves Lemmy is greatly appreciated.
Giving it a fancy name doesn't hide the fact that it would be much better to replace the HDD with an SSD of the same size. Typical Apple: marketing over substance.
Well, to be fair, they rolled these out back when SSDs were much more expensive, and typically you could choose to pay extra for a real SSD.
2019? Even if that was the last year for it, it should've been replaced by SSD years earlier. Small SSDs for caching made sense in ~2011, but not much later.
Lol wtf, this was in a 2019 computer?!
WHAT?
High-capacity M.2 drives have existed for over a decade. They were extremely affordable in 2019.
That's Apple for ya.
I would think it's for high-speed caching or something, like for disk cache or video editing. NVMe gives the CPU direct access to the flash, speeding it up even further. Most likely it's for an older system from before SSDs were cheap. That also means it's likely SLC instead of the MLC or 3D architectures we're mainly using now. That would translate to much faster relative speed (compared to similar generations of similar storage) and very good write endurance... again, compared to other examples of the same kind of storage from a similar generation.
There's a lot of good reasons to use small fast drives like this.
Back in the day, I took two very fast HDDs, put them in RAID 0, and attached the array to my system as my application storage. I was on Windows 2000 or Windows XP at the time, and I had to drop to safe mode to move the contents of the "Program Files" folder onto the array and remount the RAID as the folder in question. Took me a few tries to get it right, but application loading times were very fast after that.
In the early days of SSDs, I set up a small SSD for my OS and main Windows apps, and redirected my user files to a classic hard drive, since I don't generally need high-speed access to my music and videos and such. Windows has a facility for this where you can redirect your user folder to another location in its entirety, so my entire local user folder was on that drive, while everything else was on the SSD. I also pointed my Steam games to the HDD so I didn't have any issues with the size of game downloads.
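For anyone who wants to script that kind of redirect, here's a hedged sketch using an NTFS junction. The paths are hypothetical, and the built-in "Location" tab in a user folder's Properties does the same job more safely:

```python
import subprocess
from pathlib import Path

# Hypothetical layout: small SSD is C:, big HDD is D:.
link = Path(r"C:\Users\me\Music")    # where applications expect the folder
target = Path(r"D:\UserData\Music")  # where the files will actually live

target.mkdir(parents=True, exist_ok=True)

# Move the original folder's contents to the target first; mklink fails
# if the link path already exists. mklink is a cmd built-in (hence the
# "cmd /c" wrapper), and /J creates an NTFS junction, so the redirect
# is transparent to applications.
subprocess.run(["cmd", "/c", "mklink", "/J", str(link), str(target)],
               check=True)
```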
Now that SSDs are fairly inexpensive, I've rebuilt my system on all flash, so I don't need any weird disk configurations any longer.
I had my OS on two 15,000 RPM drives in RAID 0.
Nice.
Such a tiny storage space gave me a free netbook.
That netbook had 32 GB, of which Windows and Office took about 27. Then Windows decided it needed an 8 GB update. Game over, as the flash was soldered to the board.
Wiped Windows, installed Linux and LibreOffice and some extra things. Used about 5 GB.
FWIW, I had a tablet do this a couple of years back. After opening it up again recently, it looks like Windows added an option where you can still upgrade, but you have to plug in a flash drive so it has extra space to download the update. Then it patches the system and removes the Windows.old folder to free up space on the internal storage.
No need for that now. Windows is gone for good from that machine.
Anyone remember Intel Optane? Nah, just me.
Interesting to see that Apple's price-to-capacity ratio stayed the same.
Early Centrino laptops had a 16-32 GB solid-state drive in addition to mechanical disks, purely to speed up boot time. It was part of the spec in order to advertise Intel Centrino. SSDs were too expensive to use for the whole disk, so a simple RAID-type setup holding just the OS essentials made booting incredibly fast by 2010 standards.