mnirenberg Posted November 30, 2013 Hello. I have a client that requires 2-3 years of data to be accessible for liability purposes. We currently have a 20-drive array and we have had nothing but issues with crashing and drive failures. We are using OpenEye software on Windows 7. Does anyone have ideas on the right way to handle this type of client? Any help would be appreciated. Thanks, Mike
thewireguys Posted November 30, 2013 So how many TB of storage do you require?
mnirenberg Posted November 30, 2013 Enough to store 2 years plus. It's a 32-channel analog system.
ilk Posted November 30, 2013 "Enough to store 2 years plus. It's a 32-channel analog system." What model hard drives are you using, and what temperature are they running at? Ilkie
thewireguys Posted November 30, 2013 "Enough to store 2 years plus. It's a 32-channel analog system." OK, well, you should know how many TB of storage you need. Do you know how to calculate this? It can easily be done; for example, I am currently working on a system with 250TB of storage.
mnirenberg Posted November 30, 2013 40TB, I figure.
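(As a rough sketch of the math behind a figure like that: continuous-recording storage is just channels × average bitrate × retention time. The per-channel bitrate below is an assumption for illustration, not something stated in the thread.)

```python
# Rough storage estimate for continuous recording across all channels.
channels = 32
avg_bitrate_kbps = 160        # assumed average per analog channel (hypothetical)
retention_days = 2 * 365      # "2 years plus"

bytes_per_second = channels * avg_bitrate_kbps * 1000 / 8
total_tb = bytes_per_second * 86400 * retention_days / 1e12
print(f"{total_tb:.1f} TB")   # ~40 TB with these assumed numbers
```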
thewireguys Posted November 30, 2013 Just built a 40.9TB array (18 3TB Seagate drives) using an LSI RAID card configured as RAID 6 with one hot spare. The customer is running Avigilon and we haven't had any issues with the array.
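(For anyone checking the arithmetic: with one hot spare held out and two drives' worth of parity in RAID 6, that drive count works out to roughly the quoted usable figure once decimal terabytes are converted to the binary units the OS reports. A minimal sketch:)

```python
# Usable capacity of an 18 x 3TB RAID 6 array with one hot spare.
drives, hot_spares, parity_drives = 18, 1, 2
drive_bytes = 3e12                      # a "3TB" drive in decimal bytes

data_drives = drives - hot_spares - parity_drives
usable_bytes = data_drives * drive_bytes
print(usable_bytes / 2**40)             # ~40.9 (TiB, shown by the OS as "40.9TB")
```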
SectorSecurity Posted December 2, 2013 Follow thewireguys' advice; he is going down the right road. You will want hot-swappable drives with hot spare(s). Also, I suggest using server drivers, which are meant to stand up to this type of abuse, not just any desktop hard drive. You also have to think: 2 years is a long time. What if the building burns down? Do they require offsite storage? If so, how are you going to handle that?
mnirenberg Posted December 2, 2013 I assume you mean server drives, not drivers. If not, please explain. Thanks for your input!! I agree about the offsite.
SectorSecurity Posted December 2, 2013 Yes, I meant drives, not drivers.
mnirenberg Posted December 7, 2013 Would you mind listing the specs?
mnirenberg Posted December 7, 2013 Wireguys, would you mind telling me the RAID card and array that you used for the scenario you listed here?
survtech Posted December 7, 2013 I seriously recommend you consider a true hardware RAID box rather than a home-brew system. A well-designed RAID system can achieve 99.9999% ("six 9's") to 99.99999% ("seven 9's") reliability. Use smaller RAID groups (no more than 10 drives total, in an 8+2 RAID 6 configuration) with at least one global hot spare per chassis. Having worked with RAID for video for over 10 years, I can attest that well-designed systems rarely crash or lose data.
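(A sketch of how that sizing advice might play out; the 4TB drive size and 40TB usable target below are assumptions for illustration, not survtech's numbers.)

```python
# How many 8+2 RAID 6 groups (plus a global hot spare) reach a usable target.
import math

target_usable_tb = 40                       # assumed usable-capacity target
data_per_group, parity_per_group = 8, 2     # 8+2 RAID 6 groups
drive_tb = 4                                # assumed drive size
hot_spares_per_chassis = 1

groups = math.ceil(target_usable_tb / (data_per_group * drive_tb))
total_drives = groups * (data_per_group + parity_per_group) + hot_spares_per_chassis
print(groups, total_drives)                 # 2 groups, 21 drives in the chassis
```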
thewireguys Posted December 7, 2013 "Wireguys, would you mind telling me the RAID card and array that you used for the scenario you listed here?" I am currently working on a system the previous integrator installed. Most of the RAID cards are LSI, in 24- or 36-bay Supermicro custom boxes.
survtech Posted December 7, 2013 On the minus side, almost every IT-oriented RAID system I've encountered has had trouble handling high-bandwidth video storage. The key culprits are what some RAID manufacturers call "Drive Check Period" and "Maximum Drive Response Timeout". Drive Check Period is essentially how often the system checks the drives, and Maximum Drive Response Timeout is the length of time the system allows for a drive to respond to I/O requests. Systems recording high bitrates continuously often encounter drives that are slow to respond because they are performing bad-block relocation and other maintenance tasks while writing; the drives may be perfectly good but respond a bit too slowly, so the system fails perfectly good drives. This can result in, at best, way too many "failed" drives and, at worst, data loss. Some people I know recommend removing and re-inserting failed drives on the likelihood that they are not really bad. I think that is risky if you don't fully test them prior to re-inserting, so I prefer to just set the RAID's parameters accordingly.
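(To illustrate the mechanism being described, here is a generic sketch of that timeout behavior; it is not any vendor's actual firmware logic, and the slot names and response times are made up.)

```python
# Generic illustration of why an aggressive response timeout can "fail"
# drives that are merely slow (e.g. busy with bad-block relocation).
def flag_failed_drives(response_times_ms, max_response_timeout_ms):
    """Return the drives a controller would mark as failed."""
    return [drive for drive, ms in response_times_ms.items()
            if ms > max_response_timeout_ms]

observed = {"slot0": 45, "slot1": 8200, "slot2": 60}                 # hypothetical times
print(flag_failed_drives(observed, max_response_timeout_ms=7000))    # ['slot1'] flagged
print(flag_failed_drives(observed, max_response_timeout_ms=15000))   # [] - more forgiving
```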
thewireguys Posted December 7, 2013 The 40TB system has between 200 and 230 Mbps coming into the server and about 50 Mbps going out to clients. I built the RAID, which took about 30 minutes, then it continued to optimize in the background while the system was recording. I didn't see any performance issues with our VMS solution while it was optimizing. So far so good.
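(For reference, converting an ingest rate like that into storage written per day is straightforward arithmetic; the 215 Mbps below simply splits the quoted 200-230 Mbps range.)

```python
# Convert a sustained ingest rate into storage written per day.
ingest_mbps = 215                              # midpoint of the quoted 200-230 Mbps
bytes_per_day = ingest_mbps * 1e6 / 8 * 86400
print(f"{bytes_per_day / 1e12:.2f} TB/day")    # ~2.32 TB/day at that rate
```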
mnirenberg Posted December 10, 2013 Survtech, can you post a link to a RAID box that you would recommend? Thanks