Everything posted by amirm

  1. On recording to the internal SD card, keep in mind that (NAND) flash memory in general hates writes. It is the slowest mode of operation and degrades the cells as you use it. Multi-level cells, the cheaper and higher-density type, are much worse in this regard. And there will be serious card-to-card variation in how long they last, as some have smarter logic than others to reallocate near-failing cells. So while SD card storage in the camera is a very convenient mode of operation, unfortunately NAND flash memory is rather poorly matched to CCTV applications. Mobotix itself makes a strong point of this. If it were me, I would make sure the recording is infrequent and triggered by image change (mistakenly called motion sensing). Setting it to loop all the time means you will be changing cards every few months.
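As a very rough illustration of why continuous loop recording chews through cards, here is a small Python sketch. Every number in it (rated program/erase cycles, write amplification, bitrate, capacity) is an assumption for illustration only; real cards vary enormously in endurance and wear-leveling quality.

    # Rough sketch: days until a card's assumed write endurance is used up
    # by continuous recording. All figures below are illustrative assumptions.
    def card_lifetime_days(capacity_gb, pe_cycles, bitrate_mbps, write_amplification=2.0):
        total_writable_gb = capacity_gb * pe_cycles / write_amplification
        gb_written_per_day = bitrate_mbps / 8 * 3600 * 24 / 1000   # Mbit/s -> GB/day
        return total_writable_gb / gb_written_per_day

    # Example: 4 GB card, assumed 1000 P/E cycles, 2 Mbit/s continuous stream
    print(card_lifetime_days(4, 1000, 2.0))   # on the order of a few months

The point is only the order of magnitude: event-triggered recording cuts the daily write volume dramatically, which is why it stretches card life.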
  2. I think you already have the full story. Resolution is independent of picture quality. So yes, a camera with roughly 1 million pixels or more can be called megapixel. "HD" is a soft definition too, even though the specs for each scan rate are defined by SMPTE, ATSC and other organizations like them. Generally though, 720p and higher is considered "HD."
  3. Sure. I have to pack for a trip tomorrow so a quick response now, but let me know and I can provide a more detailed answer later. First, let me mention that hardly anybody outside of the CCTV world uses the term H.264 anymore. That naming more or less went away when ITU merged its activity with MPEG and the new name, "AVC", was born. I will use that term from here on, but the topic is the same. MPEG-4 AVC is an "advanced" form of video compression in that it provides much more efficient compression than MPEG-2 and MPEG-4 "ASP" (which has nothing to do with AVC despite also carrying the MPEG-4 name!). Part of this efficiency comes from new ways of doing things. But part of it comes from a Chinese menu of "tools" or features that the encoder can use to improve efficiency.

Let me give you an example. In these types of video codecs, the screen is divided into square blocks and each is compressed separately. This is why you see "blocking" artifacts when the data rate is not enough to keep up with source activity. In MPEG-4 AVC there is a choice of two block sizes: 4x4 and 8x8 pixels. In MPEG-2 the only block size was 8x8. For some images smaller blocks work better, and vice versa. The encoder may actually encode both ways, compare the results, and then decide which one to use. Unfortunately all of these tools make encoding very slow and "expensive" if you have to build hardware for them. To ease that pain, the specification makes many of these features optional. Both decoder and encoder can choose to implement only a subset of the features, and if so, you cannot exceed that capability.

This brings us to your question. To keep the permutations of implementations from going crazy, the standards organizations create "profiles" which group certain features together. Think of a convenience package in a car where you get leather seats and navigation together. Same idea here. The baseline profiles (BP/CBP) have the fewest features, the difference between the two being how error resilient one is versus the other. This is the lowest-cost implementation and leaves out many features, including the different block sizes explained above. MP, or Main Profile, was supposed to be the most obvious implementation point for AVC. But no sooner was the standard designed than they were forced to revise it and add the variable block size feature above (prompted by what my team at Microsoft did in building the competing VC-1 solution). This led to High Profile (HiP or HP). This is the profile used in the Blu-ray format and supports variable block size as explained. Variable block size can be quite handy for achieving sharper images/higher compression ratios but, due to the expense, I doubt we will see it in CCTV applications that often.

Above HP, we run into specialized profiles for video post production, archiving, and high-resolution source capture. In other words, not for consumer applications. In consumer land, each digital video sample has 8 bits for black and white (luminance) information and 8 bits for color (chrominance), and we have four times fewer color samples than black-and-white samples (i.e. color resolution is that much lower). Hi10P allows each sample to go up to 10 bits, which gives better precision in post production work (reduces the possibility of banding). Hi422P doubles the color sample resolution. Hi444P doubles the color sample resolution yet again. So hopefully this gets you started on figuring out what these things mean at a high level.
As to how you can tell if an encoder is compliant with any of this, the answer is that you can't. The specification doesn't mandate what an encoder must do. It only mandates what the decoder can do! I can design an MPEG-4 AVC encoder which is hardly better than MPEG-2 and still call it MPEG-4 AVC! Such an encoder could choose to not deploy any of the optimization tools and do a lazy job of using the mandatory features. The only way to tell then is to ask the company what encoder they have and what "tools" they support.
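One practical thing you can check, though, is which profile and level the stream itself claims. Here is a minimal Python sketch, assuming you can save a raw Annex-B H.264 dump from the camera (the file name is made up for illustration); it finds the first SPS NAL unit and reads the profile_idc and level_idc bytes that follow the NAL header:

    PROFILES = {66: "Baseline", 77: "Main", 88: "Extended",
                100: "High", 110: "High 10", 122: "High 4:2:2",
                244: "High 4:4:4 Predictive"}

    def sps_profile_level(data: bytes):
        i = 0
        while True:
            i = data.find(b"\x00\x00\x01", i)          # Annex-B start code
            if i < 0:
                return None
            nal = data[i + 3:]
            if len(nal) >= 4 and nal[0] & 0x1F == 7:   # nal_unit_type 7 = SPS
                profile_idc, _constraints, level_idc = nal[1], nal[2], nal[3]
                return PROFILES.get(profile_idc, str(profile_idc)), level_idc / 10
            i += 3

    with open("camera_dump.h264", "rb") as f:          # hypothetical capture file
        print(sps_profile_level(f.read()))

Keep in mind this only tells you what feature set the stream is allowed to use, not how well the encoder actually uses the tools within that profile, so it doesn't replace asking the vendor.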
  4. That is a strange term for them to use, but given your explanation it is the "keyframe distance." The encoder starts the compression with a full-frame image, compressed kind of like a JPEG, called an I-frame/keyframe. From then on, it sends incremental changes for the parts of the screen which have changed (so-called "B" and "P" frames). This is more efficient but also causes errors to accumulate, which you see as blurry images. The encoder will generate an I-frame when it thinks incremental frames no longer work well. This causes a sudden change in fidelity, especially in less than well implemented encoders, as you noted. The "refresh rate" above forces a keyframe whether the encoder thought it was necessary or not. A 1-second interval is kind of short though, so it is a shame that it is not programmable. But if the data rate is high enough it should not matter as much.
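To put some rough numbers on why a forced 1-second keyframe is tolerable at decent data rates, here is a back-of-envelope Python sketch. The P-frame size and the I-frame-to-P-frame size ratio are assumptions purely for illustration:

    def avg_bitrate_kbps(fps, keyframe_interval_s, p_frame_kbits=20.0, i_to_p_ratio=8.0):
        gop = max(1, int(fps * keyframe_interval_s))   # frames between keyframes
        i_kbits = p_frame_kbits * i_to_p_ratio         # assumed I-frame size
        return (i_kbits + (gop - 1) * p_frame_kbits) * fps / gop

    # 15 fps stream: forced keyframe every second vs. every 4 seconds
    print(avg_bitrate_kbps(15, 1), avg_bitrate_kbps(15, 4))   # ~440 vs ~335 kbit/s

With these assumed numbers the 1-second refresh costs roughly a third more bits than a 4-second one, which is noticeable but not fatal if the overall data rate is generous.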
  5. Thanks. Is it a protected line? I see Mobotix advertised everywhere but not Avigilon (and hence my question on relative cost). Their player is much more intuitive than Mobotix's and I like the fact that they adjust the dynamic range on still frames to make the image more viewable (it would be nicer if they gave me control of that though). I like their IR cut filter solution better than Mobotix's, which requires you to select color or black & white cameras separately (or buy a dual-head one). My application is the dome, which they already have, but theirs seem much taller than Mobotix's, which for residential applications is a negative. JPEG-2000 is better than MxPEG in image quality but less efficient. Another negative is the lack of online documentation as far as any SDK for third-party integration.
  6. How does the cost of Avigilon compare to Mobotix?
  7. You can download the software and play with it. It comes with dummy streams for the cameras. I crashed it once I think after playing with it a lot. But otherwise, it was 90% intuitive.
  8. MPEG standards do not specify any encoder functionality. All the spec says is how to decode the video. So you are right that the encoder is free to utilize as many "tools" as it likes. At the extreme, an MPEG-4 encoder can act simply like motion JPEG, for example (and first-generation MPEG-4 encoders did exactly that). As long as the video can be decoded, it can be called "MPEG-4." There are also different profiles like "ASP" (Advanced Simple Profile) which increase efficiency somewhat. What frame rate did you test the DCS-2102 at?
  9. Keyframe distance is variable. The encoder makes that decision on every frame. Note however that it is not just motion that will trigger a keyframe to be sent. Rather, it is the encoder deciding that sending the incremental change costs more than sending a keyframe. Take the example of the sun coming out all of a sudden. Nothing has moved but every pixel has changed. So instead of sending the difference between every pixel and the last keyframe, it is more efficient to simply generate a new keyframe. Fair enough. One note though. In MPEG-4 ASP there is sometimes accumulated error. That is, if you keep sending changes relative to the original keyframe, the image degrades over time. You can see this effect when watching a mostly static image and seeing it get better at regular intervals when the keyframe arrives. This is another reason keyframe distance is not set to too large a number.
  10. The answer is "neither". MPEG-4, like other interframe codecs, sends out a full frame of video called an I-frame. It then transmits differences between that reference frame and what is happening now. You are correct that in theory, if nothing changes, nothing needs to be transmitted. However, MPEG-4 encoders by default repeat the I-frame after a certain amount of time, whether there has been any motion or not. Check to see if there is a parameter called "I-frame" or "keyframe" distance. Increasing this number to a high value will reduce the frequency of I-frames. Note that setting the keyframe distance too high may make it difficult to seek into a specific portion of the recorded video later. And things like fast forward may not work nearly as well. Both of these are reasons for having frequent I-frames. By way of example, DVD and Blu-ray disc formats use roughly a 0.5-second keyframe distance to allow features like the above. Let me know if the above is not clear.
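To make the seeking point concrete, here is a tiny Python sketch (the frame rate and intervals are just example values): the longer the keyframe distance, the more frames a player may have to decode and throw away before it can show the frame you actually asked for.

    # Worst-case extra decode work to land on an arbitrary frame: the player
    # must decode forward from the previous keyframe.
    def worst_case_extra_decodes(fps, keyframe_interval_s):
        return int(fps * keyframe_interval_s) - 1

    for interval_s in (0.5, 2, 10, 60):
        print(interval_s, "s keyframes ->",
              worst_case_extra_decodes(30, interval_s), "extra frames at 30 fps")

That is the trade-off: fewer I-frames save bits, but every seek and fast-forward step gets proportionally slower.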
  11. Are you taking images by shooting through a glass window? If so, I would test by opening the window especially at night.
  12. Hello everyone. A while ago I wrote a detailed post on the performance of analog cameras. I have received a bunch of requests to do the same for IP cameras. Instead of just having a post which may be hard to find, I thought it might be best to put it in article form and post it someplace. Marc, one of the forum members, was kind enough to host the article for me (I have no business relationship with him or any camera maker). Here it is: http://www.monitoryourassets.com/ip-vs-analog/ I hope you find it useful. For the experienced crowd out there, I appreciate any feedback you might have.
  13. Cat5 and Cat6 have the same *network* bandwidth. Gigabit is supported just the same on both. Cat6, however, is a higher-quality cable, reducing the possibility of interference and doing better at longer lengths. So to the extent your cable is getting routed through a noisy environment, it is a better bet. Not an issue in this space, but for extending HD video over HDMI, Cat6 is recommended over Cat5 because there we don't have retransmissions.
  14. Just found this product: http://www.microseven.com/hrctech/front/productdetail.asp?productid=13 No experience with it, but it says it supports four channels and H.264. It seems to record to its own hard disk though.
  15. What is the highest CCTV camera resolution?

    "Better" is relative in this case as well... "better" for what? Better for fine dtail recognition. Telephotos as a rule have a much better MTF (Modulation Transfer Function: sharpness and contrast). They have much less tendency to suffer from CA (Chromatic Aberration: color bleeding). And tend to have less geometric distortion (pincushion and barrel distorion: lines bending). They also have better corner sharpness. All of this matters when you try to zoom in only to find mush instead of detail that the sensor is capable of. Here is the MTF for to lenses of comparable age as far as design era and relative cost. The flatter and higher the curves, the better the lens. First is the Canon 400F5.6 telephoto: Now the 20f2.8 wide angle: Huge difference, no? The corners of the wide angle (the curves on the right) have nearly no resolving power, compared to the telephoto which does not suffer hardly at all as you go from center (left of the curve) to the corner (right of the curve) of the lens. So if you have the bad guy in the corner of your camera, you better pray you are not using a wide angle like above! There are better wide angle lenses of course. Here is canon 35f1.4: Let's see how that contrasts with a higher end telephote, namely the 500F4.0: If you look, you can NOT find a single none telephoto lens which has the performance of the 500. CCTV world does get a free ride though. Given the tiny sensors used here, they don't push the corners of the lens as much as the above lenses do on a 35mm full-frame digital camera. True although one has to be aware of the design period. Once more, Canon's newer zooms tend to outperform their older prime (fixed focal) lenses in most criteria. For this reason, most pros use zooms without hesitation. Such was not the case 30 years ago when I got into photography.
  16. Just clarifying: the banding is different from noise due to high gain. The sensor noise would be random, not lines. It is a side effect of the input amp being at its highest gain point (and the signal at its lowest), which allows noise from the rest of the camera (which would have a more regular pattern) to be picked up. It is true that improper resizing causes quantization noise (banding as you call it). However, that kind of banding has a very different look. And further, it would occur whether the camera is picking up low light or full sun. If you look at the daytime shots, there is no banding at all. So no, this banding is not due to resizing distortion. I suspect you are right but I don't know for sure. If it is using the same three-megapixel sensor, then it will have a better signal-to-noise ratio than its SEC brother, since the process of resizing acts as noise reduction. The reason is that a proper resize algorithm filters out high frequencies, and the noise spectrum likewise gets filtered down. Let's look at this in detail. Imagine you resized the image by half by adding two adjacent pixels and dividing by two to get your new pixel. Now look at the scenario of one black pixel and one noisy pixel right next to it (a typical situation in low light). Add these two together and divide by two. What do you get? The new pixel has half the intensity of the noisy pixel ((0 + N) / 2 = N/2)! This kind of filter is not that great in practice so it typically is not used, but it should give you an idea of why a resizing filter like this reduces noise.
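If you want to see the effect for yourself, here is a quick NumPy sketch of the idea (the noise level is arbitrary, chosen only for illustration): averaging neighboring pixels during a downsize also averages the noise, so the smaller image ends up cleaner.

    import numpy as np

    rng = np.random.default_rng(0)
    dark_scene = np.zeros((480, 640))                          # ideal black frame
    noisy = dark_scene + rng.normal(0, 10, dark_scene.shape)   # sensor noise, sigma = 10

    # Downscale 2x by averaging each 2x2 block (the crude filter described above).
    half = noisy.reshape(240, 2, 320, 2).mean(axis=(1, 3))

    print(noisy.std(), half.std())   # the downscaled frame has roughly half the noise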
  17. OK, here is the answer to the quiz. When there is very little light, the camera increases the "gain" (amplification) of the sensor quite high in order to extract the tiny amount of signal coming from it. The lines get created by noise from the rest of the camera bleeding through the analog gain stage. The noise is random to the extent that what the camera is doing from line to line may be different (e.g. outputting something on the Ethernet port or not). So yes, power supply noise or reading the pixels using certain logic all contribute to noise which shows up in the final image. While we are on the topic of noise, the lower the temperature, the less of it in the image! So your camera will work better in winter than in summer. And a proper comparison of two cameras requires that they both be run at the same temperature...
  18. From the documentation, yes. The way they get more sensitivity is to remove the IR filter, which is otherwise necessary for proper color reproduction with color CMOS sensors.
  19. Another good guess. But no, that is not it. OK, that could also be remotely related. Come to think of it, power supply noise could also contribute to this...
  20. Good guess but no. Well, OK, it has a remote connection . Note that the camera fetches those pixels the same way all the time. So their appearance in this example versus others has to be something else... I will post the answer soon if no one else wants to guess....
  21. Thanks for the kind words Sawbones. While I am posting this, here is a pop quiz: what are those horizontal lines on the black and white picture Marc posted just above?
  22. Formula for days of recording.

    The math depends on one number but otherwise is very simple. You must know the data rate for each channel. Assuming you are using full-resolution analog cameras, we can use an assumption of 2 Mbit/sec per channel of continuous recording. This number, by the way, is selectable by you. It can be lowered, and with it the image quality, or raised for improved quality (I am assuming MPEG-4 compression here). Now to turn that number into days of recording, you need to do the following: 1. Divide the data rate by 8 to convert it to megabytes/sec (8 bits in one byte). 2. Multiply by 3600 to get the rate per hour, then multiply by 24 to get the rate per day. 3. Multiply by the number of channels you have. Now let's do the math for your machine: 2/8 * 3600 * 24 = 21,600 megabytes, or about 21.6 Gigabytes/day per camera. 21.6 * 12 cameras = about 259 Gigabytes/day. So your 160 Gigabyte drive would not even last one day, let alone 7! You can reduce the frame rate of some of the cameras to reduce their data rate, but you are so far away from making this work that I can't see that being enough. Fortunately, a 1.5 TByte drive goes for $130. Add a couple and you are good to go (assuming you don't need redundancy). Now someone double check my math.
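Here is the same arithmetic in a small Python sketch, so you can plug in your own disk size, channel count and bitrate (the numbers used below are just the example from this thread):

    def days_of_recording(disk_gb, channels, mbit_per_sec_per_channel):
        # Mbit/s -> GB/day for one channel, then spread the disk across all channels
        gb_per_day_per_channel = mbit_per_sec_per_channel / 8 * 3600 * 24 / 1000
        return disk_gb / (gb_per_day_per_channel * channels)

    # 160 GB disk, 12 cameras at 2 Mbit/s each:
    print(days_of_recording(160, 12, 2.0))   # about 0.6 days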
  23. Yes, big difference in reduction of compression artifacts. The two terms are not interchangeable. Let me explain. An interlaced sensor sends the horizontal pixels the same as a progressive sensor. However, in the vertical dimension, in one shot ("field") we get the odd lines, and in the next one, the even ones. Now, if the camera and subject are 100% stationary, then nothing is lost in using interlace. We paint the odd lines in 1/60th of a second and the even ones in the next 1/60th of a second. Your eye "filters" these into thinking all odd and even lines were sent at once. But life is not made up of static objects and rock-solid camera mounts. As soon as either moves, the odd and even lines are no longer from the same image, but different ones (due to things moving 1/60th of a second later). If the motion is fast enough, your vertical resolution drops from 480 lines to 240 lines (i.e. odd and even lines capture entirely different images). So in a nutshell, our broadcast standard has a variable vertical resolution. For motion, it has half the resolution of static objects. There are two solutions here: 1. Average the two "fields" and create one "frame" out of them. This smooths the differences between the lines, replacing the jagged lines with a softer, albeit lower-resolution, image. 2. Use motion tracking to attempt to synthesize what would have been there if we did not have interlace. This is hard to do right, hence my comment about it not being as common in this space. I don't know which scheme Axis is using, but neither is the same as having a progressive sensor at the start. Interlace is a complex transformation of the image and it is very hard to undo its effect after the fact. The ONLY way to avoid it is to use an IP camera with a progressive sensor. Having said this, the Axis is doing a nice job there.
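For what it's worth, here is a minimal NumPy sketch of the first approach (blending the two fields); the frame here is just a random placeholder, and real deinterlacers are considerably smarter than this:

    import numpy as np

    def blend_fields(frame):
        """Average each line with the line below it (which belongs to the other
        field). This smooths the 'combing' on motion, at the cost of some
        vertical resolution, matching the trade-off described above."""
        f = frame.astype(np.float32)
        out = f.copy()
        out[:-1] = (f[:-1] + f[1:]) / 2
        return out.astype(frame.dtype)

    interlaced = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame
    blended = blend_fields(interlaced)

The second approach (motion-compensated deinterlacing) cannot be summarized this briefly, which is part of why it is less common in this space.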
  24. Not the "IT" model presented here: Mobotix builds great cameras but has made a mess of explaining the resolutions/models of their cameras. Only the "Sec" models have megapixel resolution. It is hard to separate CA (lateral chromatic aberration) from NTSC/PAL artifacts. My money is on the latter though.
  25. You are correct on this point, but I believe both images are posted at their sensor resolution. The second one seems to be set to the PAL scan rate though, so it is a bit larger because of that. Here is a resize of it to VGA. It gets a bit sharper but all the issues remain: Indeed, these are all factors that also contribute. But they are harder to prove using just an image presented here. Nor do I believe they are the top reasons one image is subjectively so much worse than the other.