MR2 Posted November 3, 2013 I'm curious whether anyone's tried SSDs, and if so, which ones and how it's going for you. We're having issues where our normal 10k 146GB disks are struggling to offload data fast enough to keep up with the incoming footage; incoming network traffic averages 250 Mbps and outgoing peaks at 310 to 330 Mbps. The problem with SSDs is that I can see the wear rate being an issue.
thewireguys Posted November 3, 2013 I have many servers running the OS on SSDs, but nothing recording to SSD. What VMS are you using?
MR2 Posted November 3, 2013 Is anyone actually offloading to nearline storage (7200 rpm or lower)? I've contemplated just dumping straight into the archive and avoiding having to double-handle the data. I do, however, like the idea of the latest 4-6 hours of data being on faster disks for going back to recent events.
thewireguys Posted November 3, 2013 Milestone/ONSSI are the only VMSes I know of that require high-speed drives for short-term storage. I just find it interesting that Avigilon pulls video faster than both of them using slower drives.
MR2 Posted November 3, 2013 It is interesting. Milestone seems to dump the footage as 2 MB .PIC files; what does the core Avigilon data look like?
buellwinkle Posted November 4, 2013 SSD drives are not really a good fit for this, as their write life is limited, especially with the low-cost multi-level cell (MLC) technology used in cheaper consumer-grade drives. If you want to do this and have it last more than a year or two, consider single-level cell (SLC) SSDs. If your NVR software has a database (I believe XProtect, for example, uses MS SQL Server), you can probably put the database on SSD for better performance, along with the software and OS.
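As a rough back-of-the-envelope check on the wear concern, here is a small sketch; the endurance figure is just an assumed consumer-MLC rating, not a spec for any particular drive, and it assumes the whole recording load lands on a single SSD.

# Rough SSD wear estimate from the thread's incoming rate and an assumed
# endurance rating -- check the actual TBW/DWPD figure on the drive's data sheet.
incoming_mbps = 250                                  # averaged incoming footage, megabits/s
write_rate_mb_s = incoming_mbps / 8                  # ~31 MB/s written to disk
writes_per_day_tb = write_rate_mb_s * 86400 / 1e6    # ~2.7 TB written per day

assumed_endurance_tbw = 150                          # hypothetical consumer-MLC rating, TB written
lifetime_days = assumed_endurance_tbw / writes_per_day_tb

print(f"write load: {writes_per_day_tb:.1f} TB/day")
print(f"estimated wear-out: {lifetime_days:.0f} days (~{lifetime_days / 365:.2f} years)")

On those assumptions a cheap MLC drive would be used up in a couple of months, which is why SLC or other high-endurance parts matter for a recording target.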
MR2 Posted November 4, 2013 Welp, my problem is I'm trying to figure out whether I've out-specced the server we have. We're at around 25 cameras, and at the very least the disks appear to be struggling; the disk queue doesn't go particularly high, but as mentioned the offload of data is slowing down. I'm splitting our archive array from 11 drives in a RAID 5 into a pair of 8-drive RAID 5 arrays, so with a bit of luck that'll speed it up enough.
buellwinkle Posted November 4, 2013 RAID 5 is good for reads but can be slower for writes, especially on large arrays. If you are having performance issues on the drives, you can configure RAID 0 or 10. But you are on the right track reducing the drive count per RAID 5 array; personally I would go with 2+1 RAID 5 sets, which gives much better performance, comparable to or maybe even better than RAID 10. Also, split the drives across separate controllers; putting 16 drives on one controller can slow things down.
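To put rough numbers on the write penalty, here is a quick sketch; the per-drive IOPS figure is just an assumed value for a 10k SAS disk, it ignores controller cache, and largely sequential video writes will behave better than these random-write numbers suggest.

# Effective random-write IOPS for a few layouts, using the classic RAID write
# penalties (RAID 5 = 4 back-end I/Os per write, RAID 10 = 2).
def raid_write_iops(drives, per_drive_iops, write_penalty):
    return drives * per_drive_iops / write_penalty

PER_DRIVE = 140  # assumed random-write IOPS for a 10k SAS drive

print("11-drive RAID 5   :", raid_write_iops(11, PER_DRIVE, 4))
print("2 x 8-drive RAID 5:", 2 * raid_write_iops(8, PER_DRIVE, 4))
print("8-drive RAID 10   :", raid_write_iops(8, PER_DRIVE, 2))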
MR2 Posted November 4, 2013 Yeah, I wish I had the dollars for something with a separate controller. It's a fairly beefy NAS unit, and I'm hoping splitting the disks works; if not, then maybe we'll need a second NAS and split the arrays between NAS units.
ssmith10pn Posted November 4, 2013 What method are you using to transport the data to the NAS? If you're doing Ethernet, that's your problem. We switched to HBAs, all our troubles went away, and it's been running for two years without a glitch.
MR2 Posted November 4, 2013 Yep, Ethernet. So you went with what... iSCSI? Fibre Channel?
buellwinkle Posted November 4, 2013 Hopefully at least 10GigE on its own VLAN. Your best bet is to get DAS (direct-attached storage), or, if you have enough internal capacity, to put say 8 drives in your server and then use the 8 internal drives plus the 8 on the NAS to get more disk bandwidth. We used to use a ratio of 3 NAS or SAN drives to equal the performance of 1 internal drive.
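To put a rough number on that mix: the ~100 MB/s per internal drive below is just an assumed streaming figure, and the 3:1 ratio is the rule of thumb above.

# Aggregate streaming bandwidth from mixing internal (DAS) and NAS drives,
# using the 3:1 NAS-to-internal rule of thumb mentioned above.
INTERNAL_MB_S = 100                 # assumed per-drive streaming rate
NAS_MB_S = INTERNAL_MB_S / 3        # 3 NAS drives ~= 1 internal drive

internal_drives, nas_drives = 8, 8
total = internal_drives * INTERNAL_MB_S + nas_drives * NAS_MB_S
print(f"approximate aggregate: {total:.0f} MB/s (before RAID overhead)")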
ak357 Posted November 4, 2013 HBA = host bus adapter.
MR2 Posted November 4, 2013 There are many different types of host bus adapter, and I'm curious which type he has. You can have direct SAS-attached storage or an FC-attached NAS... I'm after more info on what he's doing.
buellwinkle Posted November 4, 2013 HBAs imply a SAN, and I'm not sure that's any faster than a modern-day NAS. I tested EMC's SAN for a joint project, on and off for a year until the product was ready to take to market, and I can tell you that if you really tune a SAN or NAS, maybe you can get it to a third of the performance of direct-attached disk, and that's their low-end SAN, in the $300K range. What we did to mitigate the slowness was mix in internal drives to get the aggregate performance of both.
MR2 Posted November 4, 2013 Indeed, that's why I want more info on what he's running. I'd love to run direct-attached SAS, but the price is right out there. I suspect we're at the stage (if everything I'm trying doesn't work) of splitting off to multiple servers and just keeping each install/server to 20 cameras.
hardwired Posted November 4, 2013 I've been building my larger systems with direct-attached storage (using Areca 1882ix-series controllers), but typically using 7200 rpm SATA drives for storage. I usually try to keep my total bandwidth a little lower than what you are doing, though. A system I just finished used the Areca controller, four 15k Hitachi 600GB Ultrastars in RAID 10 for the OS, and 32 WD 3TB 7200 rpm SAS drives in RAID 6 across two 16-bay enclosures. That one I'm comfortable pushing a little harder. One plus for Avigilon: the outbound viewing traffic is usually quite a bit lower than what I was used to seeing with Milestone, unless you turn the resolution down quite a bit in the Milestone Smart Client (the default is full resolution, which tortures video cards and networks unnecessarily).
ak_camguy Posted November 4, 2013 Are you sure it's only 230 Mbps? That's not very fast, and even a 7200 rpm SATA 6 Gb/s drive should be able to handle it. Simple math: 230 Mbps = 28.75 MB/s. Your drive should easily manage that transfer speed; I would check whether the network is the bottleneck. I do around 5 MB/s transfers to my NAS over wireless using N150, and you should easily be able to write at 20 MB/s if you have a gig port on your NAS/SAN. Oh, does your array/server/NAS/SAN device have dual gig ports? If it supports link aggregation, I would look into configuring that; it should help with network speeds.
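For reference, here is the same conversion applied to the figures quoted earlier in the thread; these are the stated averages and peaks, not new measurements.

# Megabits-per-second to megabytes-per-second for the rates quoted in the thread.
def mbps_to_mb_s(mbps):
    return mbps / 8

for label, rate in [("figure above", 230), ("incoming average", 250), ("outgoing peak", 330)]:
    print(f"{label}: {rate} Mbps = {mbps_to_mb_s(rate):.1f} MB/s")

# Worst case the array has to sustain at once (write + read):
print(f"combined: {mbps_to_mb_s(250 + 330):.1f} MB/s")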
buellwinkle Posted November 4, 2013 He said 310-330 Mbps, but really, that sounds like a maxed-out GigE link once you account for TCP/IP overhead, so the real numbers could be higher. If breaking things into separate NAS devices, at least keep each on its own NIC and subnet on the server. I still think it's better to take the drives out, SATA or SAS, and get a DAS enclosure with a built-in RAID controller and use them that way.
MR2 Posted November 4, 2013 The network is OK; all these devices are connected to HP 5120s with 10-gig backbones. When I do a copy/paste between shares I get nothing less than 100 MB/s, averaging 103-104 MB/s start to finish. Copying from a mirrored pair of drives acting as the system's boot volume to the first and second arrays, I'm getting 70-80 MB/s once it fills the cache. Copying from the server's arrays to the NAS, a straight file copy from one of the arrays to both NAS arrays at the same time, I get a solid 55 MB/s (each). Maybe it's a software issue; I'm trying not to de-virtualize from 2012. Edit: I just did another test where, rather than uploading a single large file, I uploaded a whole pack of the Milestone files, and the copy speed was the same as Milestone gets, so I guess it is the array config. I might try increasing the array block size or see what else I can do.
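For anyone wanting to reproduce that kind of test, here is a minimal timing sketch; the target path and file sizes are placeholders, and the many-small-files run only roughly approximates Milestone's ~2 MB file pattern.

# Time sequential writes to an array: one large file vs. many ~2 MB files.
import os, time

TARGET = r"E:\write_test"            # placeholder path on the array under test
CHUNK = b"\0" * (2 * 1024 * 1024)    # 2 MB block

def timed_write(num_files, chunks_per_file):
    os.makedirs(TARGET, exist_ok=True)
    start = time.time()
    for i in range(num_files):
        with open(os.path.join(TARGET, f"test_{i}.bin"), "wb") as f:
            for _ in range(chunks_per_file):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())     # force data to disk so the OS cache doesn't flatter the result
    total_mb = num_files * chunks_per_file * 2
    print(f"{num_files} file(s): {total_mb / (time.time() - start):.1f} MB/s")

timed_write(num_files=1, chunks_per_file=1024)   # one ~2 GB file
timed_write(num_files=1024, chunks_per_file=1)   # 1024 x 2 MB files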
ssmith10pn Posted November 4, 2013 Sorry for the confusion. It's an EMC NS480 NAS, but since we are using HBA cards, the server sees it as a SAN. I agree direct-attached storage is the best; we do a lot of that too.
MR2 Posted November 4, 2013 Hah! You guys with your ridiculous budgets... I'm jealous. I'm going to try ReFS for the file system and see if that helps; if not, then change the host disk file system; if not, then try splitting the disks into pairs of arrays (going to be messy, but meh).
ssmith10pn Posted November 5, 2013 There are about 40 more boxes just like that on the same campus.
ilkevinli Posted November 5, 2013 Those are great photos. Thanks for sharing.
thewireguys Posted November 5, 2013 I love server PORN