With CacheCade, LSI offers a fast SSD read/write cache for RAID volumes. The functional approach is similar to Adaptec's maxCache technology. Detailed information about configuring a CacheCade volume can be found in the Configuring the LSI MegaRAID CacheCade guide.


LSI offers their CacheCade storage tiering technology, which allows SSD devices to be used as read and write caches to augment traditional RAID arrays.

Other vendors have adopted similar technologies: HP Smart Array controllers have SmartCache, and Adaptec has maxCache. There are also a number of software-based acceleration tools (sTec EnhanceIO, VeloBit, Fusion-io ioTurbine, Intel CAS, Facebook flashcache).

Coming from a ZFS background, I make use of different types of SSDs to handle read caching (L2ARC) and write caching (ZIL) duties. Different traits are needed for the respective workloads: low latency and high endurance for write caching, high capacity for read caching.
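
For comparison, this is roughly how that split is expressed in ZFS (the pool and device names below are placeholders):

```
# Mirrored low-latency SSD pair as the ZIL/SLOG (write cache):
zpool add tank log mirror /dev/sdb /dev/sdc

# One large SSD as L2ARC (read cache), where capacity matters more than endurance:
zpool add tank cache /dev/sdd
```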

  • Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?
  • When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.
  • Do writes go straight to SSD or do they hit the controller's cache first?
  • How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC function. Is there any insight into the CacheCade tiering process?
  • What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's really working?

I'm interested in opinions and feedback on the LSI solution. Any caveats? Tips?

ewwhite

3 Answers

Since CacheCade SSDs can be used for write and read cache, what purpose does the RAID controller's onboard NVRAM play?

If you leave the write caching feature of the controller enabled, the NVRAM will still be used primarily. The SSD write cache will typically only be used for larger quantities of write data, where the NVRAM alone is not enough to keep up.
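
To see what the controller's onboard cache is actually doing for a given logical drive, MegaCli can report the cache policy; a quick sketch, assuming adapter 0 (adjust -aN for your setup):

```
# Show cache policy (WriteBack/WriteThrough, ReadAhead, Cached/Direct IO) for all LDs:
MegaCli -LDGetProp -Cache -LAll -a0
```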

When used as a write cache, what danger is there to the CacheCade SSDs in terms of write endurance? Using consumer SSDs seems to be encouraged.

This depends on how often your writes actually cause the SSD write cache to come into play, i.e. whether or not your drives can handle the write load quickly enough that the NVRAM doesn't fill up. In most scenarios I've seen, the write cache gets little to no action most of the time, so I wouldn't expect a big impact on write endurance; most writes to the SSDs are likely to be part of your read caching.
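
If you want to keep an eye on wear anyway, SMART data for drives behind a MegaRAID controller can usually be read through smartctl's megaraid passthrough; the device ID after megaraid, and the wear attribute names vary by SSD model, so treat this as a sketch:

```
# Query SMART attributes of the SSD with MegaRAID device ID 5 (placeholder ID);
# look for vendor wear attributes such as Media_Wearout_Indicator or Wear_Leveling_Count.
smartctl -a -d megaraid,5 /dev/sda
```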

Do writes go straight to SSD or do they hit the controller's cache first?

Answered above: controller cache is hit first; the SSD cache is more of a second line of defense.

How intelligent is the read caching algorithm? I understand how the ZFS ARC and L2ARC functions. Is there any insight into the CacheCade tiering process?

Sorry, no knowledge to contribute on that one; hopefully someone else will have some insight.

What metrics exist to monitor the effectiveness of the CacheCade setup? Is there a method to observe a cache hit ratio or percentage? How can you tell if it's working?

It doesn't look like any monitoring tools are available for this, as there are with other SAN implementations of this feature set. And since the CacheCade virtual disk isn't presented to the OS, you may not have any way to monitor activity manually either. This may just require further testing to verify effectiveness.
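
About the only visibility I know of is the configuration view in the management tools, which shows the CacheCade virtual drive and its state but no hit statistics; the adapter and controller numbers below are placeholders, and the MegaCli command name is from memory of its CacheCade command set:

```
# MegaCli: display the CacheCade configuration on adapter 0:
MegaCli -CfgCacheCadeDsply -a0

# storcli: list all virtual drives; the CacheCade VD appears alongside regular VDs:
storcli /c0/vall show all
```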

Opinion/observation: In a lot of cases (when used correctly, read cache appropriately sized for the working data set) this feature makes things FLY. But in the end, it can be hit-and-miss.
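
Since the controller exposes no hit counters, the practical test is to measure from the OS with and without CacheCade enabled. fio is one way to generate a repeatable random-read load; the device path, block size, and runtime below are arbitrary choices:

```
# Read-only 4k random-read benchmark against the RAID volume.
# Run once with CacheCade disabled and once enabled, after letting the cache warm up.
fio --name=randread --filename=/dev/sdb --rw=randread --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 \
    --runtime=60 --time_based --group_reporting
```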

JimNim

Speaking of hardware solutions, I found no way to learn the exact hit ratio. I believe there are two reasons for that: the volume behind the controller appears as a single drive (and so it should 'just work'), and it is hard to count 'hits', which apply not to files but to HDD sectors, so there may be some hit rate even on an empty HDD, which could be confusing. Moreover, the algorithms behind the 'hybridization' are not public, so knowing the hit rate wouldn't help much. You just buy it and put it to work: low spend (compared to a pure SSD solution), nice speed impact.

The 'buy it and use it' approach is a reasonable thing to consider, but the fact is nobody knows for sure how to build the fastest combination: should we use several big HDDs and several big cache SSDs, or many small HDDs and several big SSDs? What's the difference between 100, 500, or 2000 GB of SSD cache (even 500 looks like overkill if the volume's hot data set is small)? Should it be 2x64 GB or 8x8 GB to parallelize data transfer? Again, each vendor uses its own algorithm and may change it in the next firmware update.

I write this mostly to say that my findings led me to a somewhat odd answer: if you run a general-purpose server with a general load profile, a hardware hybrid controller is fine even with relatively small SSDs; but if your workload is specific, you'd do better with a software solution (which you can choose yourself, since you're the only one who knows the load profile) or with high-priced PCIe flash storage.


Alexander

I tried it on a Dell R515 with a Dell PERC H700 RAID controller with 1 GB of memory and a couple of 500 MB/s SSDs.

I did my benchmarking a few hours after installation, and again after 48 hours.

I didn't see much improvement in write speed and only a little improvement in reads. I did the test a while ago and don't have the numbers now.

But it wasn't significant and I ended up using the storage box without this feature.

In my experience, most of this software is just a joke! If you need storage tiering, build your own: get reliable hardware from Dell and fill the box with SSDs.


At my workplace, storage tiering works really well with HP 3PAR and its Adaptive Optimization add-on; it works as advertised, but that solution is around $100K :)

user1007727


A related question: creating a RAID10 CacheCade volume with storcli

I'm trying to add a RAID10 SSD cache to a virtual drive using storcli.
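
Per the CacheCade syntax in the StorCLI reference, the command looks roughly like this; the controller number and enclosure:slot IDs are placeholders:

```
# Placeholder IDs: controller 0, enclosure 252, slots 1-4, write-back policy.
storcli /c0 add vd cachecade type=raid10 drives=252:1-4 WB
```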

The storcli reference says raid10 is possible, but the controller won't let me create a RAID10 array for a cache.

Any ideas?

ispirto

3 Answers

It seems RAID1 with 4 SSDs means RAID10 in LSI's world, per a reply I received from LSI.

ispirto

Did you try type=10?

I believe that you can only use CacheCade SSDs in RAID 0 or RAID 1. I'm not sure why you'd need RAID 1+0, though. The upper limit to the caching size per controller is 512GB, no?

ewwhite

Use 4 SSDs in MSM to create one CacheCade drive group and select RAID 1. There is no RAID 10 option; however, the created drive group will appear to be 2x the size of a single SSD, with write-back enabled by default. I strongly believe it is indeed RAID 10. P.S. You can also create two or more CacheCade drive groups with SSDs of different sizes, which might be a better solution.
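
For reference, a rough command-line equivalent with MegaCli might look like the following; the -r1 level flag and the enclosure:slot IDs are taken from memory of the MegaCLI CacheCade command set, so verify them against your MegaCli version's help output before use:

```
# Unverified sketch: 252:1-4 are placeholder enclosure:slot IDs.
MegaCli -CfgCacheCadeAdd -r1 WB -Physdrv[252:1,252:2,252:3,252:4] -a0

# Display the resulting CacheCade configuration:
MegaCli -CfgCacheCadeDsply -a0
```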

zhao
