Sunday, December 28, 2014

PSE and the DPP - a whole lotta flashing going on

One thing that seems to have come to light since the introduction of the DPP delivery spec at the start of October is just how poorly folks understand the requirement for PSE (Photo Sensitive Epilepsy) testing with respect to TV deliverables. A lot of this is down to the fact that since the late '90s the industry has essentially had two standards - OFCOM and Harding.
I gave a talk to NBC-Universal's TV dept in July and you can snag the notes here - pay attention to p.12 to see why the important thing now is that the metadata of your AS-11 file needs to specify which algorithm was used and whether the material passed or failed.

For a (slightly noisy!) audio recording - download here.
For all the notes and test clips - Google Drive folder

Friday, December 19, 2014

Article in Broadcast Film & Video magazine, no.2

Wow - I'm appearing in this magazine every month at the moment! This article is all about "value-added systems integration"

Tuesday, December 16, 2014

Article in Broadcast Film & Video magazine

This is an article in Broadcast Film & Video which was a re-write of a presentation I did over the summer. You can see it as a PDF here.

Saturday, December 06, 2014

The Engineer's Bench "Video Compression 101"



Hugh and Phil go over the principles of the Discrete Cosine Transform as applied to video compression and the differences between IFrame and long-GOP codecs.
Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Tuesday, December 02, 2014

Why do manufacturers over-specify power requirements for broadcast equipment?

It's actually a rhetorical question and I'm glad they do. Most of the time I have to tell a customer's electrician and air-con contractor how much power the machine room will be pulling (and hence how much heat it will be generating). Most customers refuse to believe that 99.9% of the electrical power entering a server room/TV MCR leaves it as heat! Just think about it: a 1v video signal leaving the room and terminating into 75 ohms represents a tiny amount of energy. Everything winds up as heat, and so I've got to the point where I tell the electrician how many amps we'll need and the aircon guy how many BTUs of heat he'll have to move. By turning them into different units the customer stops complaining!
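As a back-of-the-envelope sketch, here's the amps-to-BTUs conversion; the 230V mains figure and the 0.8 power factor are assumptions for illustration, not measurements:

```python
# Sketch: turn a measured supply current into the two figures the
# electrician (amps -> watts) and the air-con contractor (BTU/hr) want.
# 230V mains and a 0.8 power factor are assumed for illustration.

def room_heat_load(amps, volts=230.0, power_factor=0.8):
    """Return (real power in watts, heat load in BTU/hr)."""
    watts = amps * volts * power_factor      # real power drawn
    btu_per_hour = watts * 3.412             # ~all of it leaves the room as heat
    return watts, btu_per_hour

watts, btus = room_heat_load(amps=10.0)
print(f"{watts:.0f} W -> {btus:.0f} BTU/hr")   # 1840 W -> 6278 BTU/hr
```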
Anyway - why are the numbers always so different? I've been installing Avid shared storage chassis since 1999 when Unity v 1.2 was considered clever - 500 Gigs across three arrays and usable by around ten edit rooms. Fast forward to 2014 and the ISIS range are what you'll buy from Avid and the new ISIS 2500 near-line storage is just the thing for cheaper, non-edit storage.


This is the rear of this monster - two supplies with 20A C19 inlet connectors, and you can see from the clamp-meter that the thing is pulling 1.3A. De-powering one of the supplies shows the current draw of the single remaining supply rise to 2.6A (so they are properly balanced). Re-powering the thing shows that the total draw across both PSUs rises to 3.3A for around thirty seconds but settles back to a total of 2.6A once everything is up and running. 
So, P=IV (not forgetting the inductive load, which has a power factor of 0.8) means we are seeing a bit less than a kW at maximum. However - on the Avid website;

 

Wednesday, November 19, 2014

Earth Leakage - in water!

I recently bought an Agilent U1191A clamp-meter. This is a piece of test equipment that can measure the current flowing in a conductor without having to break the circuit (as you would have to if all you had was a digital multimeter). The jaws couple around the conductor in question and, by induction, measure the electrical current flowing in it. 

Clamp meters have moved on somewhat since I last had to buy one (a Fluke, sometime in the late nineties). This one is a pretty competent DVM as well as being able to sample and hold min, max and average values across all settings. Most days it could definitely do double duty for my Amprobe 37XR multimeter - EXCEPT the Agilent doesn't have non-contact voltage detection (the Amprobe does!). Anyhow - how do you get four and a half digits of resolution across multiple ranges on a brand-name test set for less than a hundred quid? Engineers today, don't know they're born...!

Today I was called to a customer's site where they have three canal-barges, each with two or three edit rooms on board. In the bilges of each boat there is room for a little half-height equipment cabinet where they have the shared-storage chassis, network switches etc. They've been suffering an unusual number of equipment failures (motherboards dying etc.) and since they also seem to have RCDs tripping out as a regular feature my first thought was earth leakage.
Here is a picture of the electrical termination point for each boat - two 32A feeders go into the hull, one for the pumps and one for the mains distribution board. The cables are permanently suspended in the water (and have been for many years!) and the ones I inspected had clearly been submerged for so long that water had got well into their rubber jackets. One felt almost ready to crumble in my hands.

Insulation has both electrical resistance and capacitance - and it conducts current through both paths. Given the high resistance of insulation, very little current should actually leak. But if the insulation is old or damaged, the resistance is lower and substantial current may flow. Additionally, longer conductors have a higher capacitance, causing more leakage current. Attaching the clamp meter to the incoming earth bond (before the consumer unit) showed a massive 100mA of leakage current. This doesn't just risk the equipment being fed off this supply - the resulting imbalance between the live and neutral cores often upsets Class-1 equipment, and power supplies can pass this residual current to the earth-plane on PCBs.
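To put rough numbers on those two leakage paths, here's a sketch; the insulation resistance and cable capacitance below are made-up illustrative figures, not measurements from the boats:

```python
import math

# Sketch: leakage through cable insulation has a resistive path (V/R)
# and a capacitive path (V * 2*pi*f*C). The R and C values here are
# invented for illustration - degraded insulation, long submerged run.

def leakage_ma(volts=230.0, freq_hz=50.0,
               insulation_ohms=100e3,    # degraded insulation resistance
               capacitance_f=1.5e-6):    # long cable run -> high capacitance
    i_resistive = volts / insulation_ohms
    i_capacitive = volts * 2 * math.pi * freq_hz * capacitance_f
    # the two currents are 90 degrees out of phase, so add in quadrature
    return math.hypot(i_resistive, i_capacitive) * 1000  # total in mA

print(f"{leakage_ma():.0f} mA")
```

Even quite modest degradation gets you into the tens of milliamps - well past the trip threshold of a 30mA RCD.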

More worryingly, you've also compromised the safety action of any RCD (Residual Current Device) in the feed. 

So - my advice was; replace those 32A feeders with marine-grade power cable as soon as possible.

Friday, November 14, 2014

Perceptual video quality & compression - PSNR measurements

Compression is a fact of life; there have only been two production VTR formats that stored uncompressed video - D1 and D5 - and they are no longer used because they were both SD (and D1 was only eight-bit video). So the vast majority of the material we handle is compressed, and there should be a way of quantitatively judging it. There are three methods of numerically analysing how good pictures are, but for the most part an engineer or editor proclaiming "...those pictures look a bit soft" is what still passes for picture quality analysis! 

Don't regard this blog-post as definitive (which of my ramblings are?!), but this comes from a discussion with a couple of industry colleagues earlier in the week about the quality of satellite contribution circuits. They'd got into a bit of a to-and-fro with the carrier over bit-rates and chroma-sampling structures ("but 4:1:1 isn't as good as 4:2:0 for the same data rate" etc.) which for my money entirely misses the point. You have to assess the picture quality of a compressed link (satellite, IP, etc.) not on encoder settings but on perceived picture quality. A few thoughts;
  1. Modern codecs perform better than older codecs when considering data rate vs quality
  2. Progressive pictures compress nicer than interlaced pictures
  3. Statistical multiplexing always produces better results over multiplexed connections
  4. Long GOP codecs outperform iFrame codecs by a factor of 5:1 typically.
The three methods used for determining video quality are;
  • Peak signal-to-noise ratio - PSNR
  • The "Just Noticeable Difference" or JND; a very BBC-type measurement, commonly derived by asking a crowd of observers to assess picture quality. I like the idea of this and when I was at the BBC you'd often hear people specifying "...half a JND" as a required spec; that was also known as a "gnat's", as in "gnat's whisker"! It's problematic because it describes perception rather than the effect of exposure - an editor friend told me that he preferred to work on DigiBeta pictures over DVCam footage not because he could spot the difference between shots from the more expensive format and the cheaper one (all other things - camera, lens, lighting etc. - being equal), but because the more compressed pictures just made him feel more tired by the end of the day. The JND takes no account of the cumulative effect of looking at compressed footage that may at that moment look just as good but more subtly takes its toll on the viewer.
  • Mean Opinion Score - MOS; very similar to the JND but with a numeric score. I won't talk about this.
So, PSNR - to steal from Wikipedia;
...the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed in terms of the logarithmic decibel scale.
PSNR is calculated as a rolling set of differences between source material and the compressed version and is most easily defined via the mean squared error (MSE). Given a noise-free m×n monochrome image I and its noisy approximation K, MSE is defined as:

MSE = (1/(m·n)) · Σᵢ Σⱼ [I(i,j) − K(i,j)]²

and then PSNR = 10 · log₁₀(MAX_I² / MSE), where MAX_I is the maximum possible pixel value (255 for 8-bit images).
The point is that it can be calculated from the pixels. No observer bias is involved.
Engineers love quick rules of thumb, and PSNR for video images is no different;
  • For identical images the MSE is zero and hence the PSNR is infinite (more dBs = better pictures!)
  • 40dBs is considered to be indistinguishable from uncompressed production quality (that's where the BBC JND lives!)
  • 32dBs is considered desirable for quality broadcast link circuits
  • High twenties is what you can expect for over-the-air transmission - the 10Mbit DVB-T2 pictures you watch on Freeview or Sky.
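A minimal sketch of the calculation in Python - the "frames" here are just short lists of 8-bit pixel values I've invented for illustration; real tools run this over every pixel of every frame:

```python
import math

# Sketch: PSNR straight from the MSE definition, for 8-bit pixels.

def psnr_db(reference, compressed, max_value=255):
    mse = sum((i - k) ** 2 for i, k in zip(reference, compressed)) / len(reference)
    if mse == 0:
        return math.inf                      # identical images -> infinite PSNR
    return 10 * math.log10(max_value ** 2 / mse)

ref = [16, 64, 128, 200, 235]
bad = [18, 60, 131, 196, 240]                # a lightly "compressed" copy
print(f"{psnr_db(ref, bad):.1f} dB")         # 36.7 dB
print(psnr_db(ref, ref))                     # inf
```

Note that small per-pixel errors already land you in the mid-30s dB - right in the "quality broadcast link" territory of the rules of thumb above.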
 
So, to return to my point 1 (above) - here is a graph showing data rates against codec types for the same SD pictures. Venerable old MPEG2 (from the mid-90s!) up against vanilla MPEG4 (late nineties) and AVC (AKA H.264 / MPEG4 pt.10 - early noughties). A full 6dBs of quality (twice as good in layman's terms?) lies between those two codecs at 2.2Mbit/sec (all other things - use of a stat mux etc. - being equal). You could even dive further into MPEG2 and see the difference between the implementations of twenty years ago and what folks like MainConcept are doing in 2014. The decoders particularly are now much better at hiding macro-block edges and recovering from corrupt frames.

Point 2 (above) seems obvious, but only when comparing 1080i with 1080p pictures AT THE SAME FRAMERATE; so perhaps best to say 1080i vs 1080PsF. Interlaced pictures will always be a challenge as pixels (and hence macro-blocks) move within a frame, unlike progressive pictures. BUT, you still get better motion rendition within interlaced frames for the same framerate. Eventually we'll have moved to 1080 50/60P and so it'll be a moot point.
This graph shows data rate for HD pictures, we expect over-the-air HD to be at ten megabits in the UK.

So, how to make these assessments if you're worried about a contribution circuit or transmission path that you're responsible for? If you work in coding and mux then you probably already have the tools to assess it. PSNR is such an important part of delivery specs/SLAs in broadcast (you need to keep the accountants at bay after all!) that you'll have a Tektronix PQA600 or a Rohde & Schwarz DVMS-series test set. 
However, "traditional" video quality measurement needs access to the compressed signal and the original which may not be possible; particularly in the case of my pals who are at loggerheads with their satellite provider. What you need is a test signal that you can feed over the connection and then make an assessment from the picture content as to how badly the pictures are being degraded. I've banged on about the SRI Visualiser before but it has a compression multiburst that shows you lines-of-TV-resolution against perceived bit-depth. You can then relate the two worst-case lines/bit-depth figures to the table of PSNR values.

 
Forgive the voice-over!

Monday, November 10, 2014

The death of videotape; a long time coming.

Say it ain't so! I read the news today, oh boy.....
Sony is to stop selling its range of ½-inch tape machines and camcorders in just over a year’s time. The manufacturer has targeted March 2016 as the date by which it will cease sales and distribution of its professional VTRs and camcorders, owing to what it described as “the global trend of migration towards file-based operation”.

VTs have been a constant feature of my 26 years in broadcast engineering - I spent three years in VT maintenance when I was in BBC TV News, then all through my time in facilities in the nineties/early noughties, and my last dozen years working for a reseller. And still the most dense way of storing data (which is what video has been for twenty years) is on magnetic tape using a rotating head-drum. 

VTRs are mechanical and hence unreliable; you can't pull rust-on-sellotape (a crude description of videotape) over a rotating metal drum without things wearing out, and when I started I estimate that at least half of all broadcast engineering hours were spent fixing decks. I certainly enjoyed that mix of electronics and mechanics, and when I left BBC TV News my supervisor in VT maintenance had this made for me - at the time he reckoned I had done more than a hundred head-drums.

So, here are a few memories of the VT formats I have had to deal with. It was all analogue when I started, with the D1 format just starting to make inroads. By the mid-90s DigitalBetacam had become the predominant format for most production and post-production, and Sony continued their domination of tape formats with HDCam and HDCamSR in the late nineties/early noughties. Since then it's been disk-based (XDCam) and flash-based (SxS, P2 etc.) - I haven't done anything more than cleaning a tape path or head-drum in the last decade, but I used to be a pretty good VT-fixer!

  • 2" Quad; The original broadcast tape format, which was on the wane when I joined the Beeb. At Lime Grove studios we did have a couple of Ampex VR 2000 machines. These beasts needed a compressed air feed to hold the tape on the transversely rotating drum. They weren't used for editing, just for archiving P-as-B recordings. I remember watching an episode of "Star Trek" (the original!) being transmitted and marveling at how good composite pictures could look.
  • 1" C-format; specifically the Ampex VPR-2B (the BBC's 1" machine of choice), which was a bit more of a workhorse. BetaSP and Umatic HiBand were more prevalent at BBC News when I was there, but when I went out into the independent industry in 1993 1" was a lot more widely used - particularly the Sony BVH-series machines (the choice when I was at CTV in St John's Wood) and then the Ampex VPR3 when I got to Soho in 1994, although by then mastering to an analogue format was already diminishing.
  • BetacamSP; The BVW75P was the first piece of broadcast equipment I got to know to the component level. The summer after I joined the BBC they bought three hundred of them and I jumped on the overtime to do acceptance testing. Consequently I got to know the signal path and then in 1990 I got transferred to VTR maintenance and pretty much serviced the same machines I'd been taking delivery of two years before. They were the broadcast workhorse until the late nineties and I still see them. When I went to Nigeria the whole place was still running on them. Here are some photos of the insides.
  • D1; I didn't get into Soho until the second generation of D1s had arrived - the DVR2000 series (the 1000 series had half a rack of processing for trick-speed playback). D1 was the first 8-bit uncompressed SD VTR format. All the high-end facilities in Soho made good money out of them - when you could hire a 3-machine D1 room for £650 an hour! Many a time I heard engineers dismiss DigiBeta for being compressed, but I've NEVER seen Digi compression artifacts, whereas I have seen shallow-ramp banding on D1 (8-bit vs 10-bit). D1 decks were very expensive (£100k) and cost an arm and a leg to maintain. However, being able to do more than half a dozen generations on and off tape gave rise to all those effects-heavy pop videos in the early nineties.
  • D2; Sony quickly realized that they'd need a composite digital machine for run-of-the-mill TV production work (i.e. people who needed a drop-in replacement for 1") and so they bought the format off Ampex in some complex licensing deal that allowed Ampex to sell badged BVW75s. Ampex's VPR300 was a terrible machine; we had them at Oasis TV and you could often not get recordings made in the morning to play back on the same machine in the afternoon. After that the Sony DVR28 was an eye-opener. It could stripe tapes at high speed as well!
  • D3; Like D2 a digital composite machine, but like D5 it used a 1/2" tape path, which meant a practical all-in-one camcorder was possible. The BBC embraced the AJD350 the year I left, and according to a pal at Panasonic, of the first 98 machines they never got more than sixty working simultaneously. The format got really good after v.2 software, when stability, RF performance etc. improved. The operational side of the machine was entirely unlike Sony's, with a very complicated screen surrounded by buttons. Panasonic had to replace heads - almost no user-serviceable parts inside...!
  • D5; Channel Four were the only UK broadcaster to commit to D5, which was Panasonic's answer to D1 (but a 1/2" format with 10-bit uncompressed video - eventually there was an HD variant as well). You could tell the machine ran close to the edge in terms of heat performance. We had two of them at Oasis, and every morning I would pull out the long boards (below the tape transport) and re-seat all the chips - a day's worth of heat made all the socketed devices rise up. The machine shared mechanics with the AJD-350 D3 machine and with an optional decoder board would replay D3 tapes. The tape-stock was the same in both cases, and when I was at CTV we would send "D5" masters to Channel Four which were really D3 recordings with a D5 card in the tape sleeve! They never spotted it and it saved us hiring D5 machines (we owned D3). The rumor at the time was that C4 had been given the initial set of machines free to establish the format, which was (even then) viewed as entirely unsuitable for a broadcaster. I never did much maintenance on them save cleaning etc. You had to send them back to Panasonic for heads etc.
  • DigiBeta; Whereas the first-gen digital VTRs (uncompressed; 230Mbit/sec for D1 & D5, 155Mbit/sec for D2 & D3) required manual tracking for record, like 1", the DVW-series 1/2" Digi had a pilot tracking-tone system that allowed the machine to track itself for record. It could even do an insert edit if the control track was damaged, by driving the phase of the scanner-lock based on the difference in signal strength between the pilot tones at the head and tail of each video track. Consequently I rarely saw machines that made incompatible recordings (a constant feature of all the earlier digital VTR formats). The DVW was also the first machine to feature a Viterbi decoder in the bitstream path off tape, and so you tended to get a green channel light (no error correction or concealment) until around 1800 hours of tape wear, then over a couple of days it would go to orange (error correction) and finally to a red channel condition (error concealment). Compared to all those earlier formats (I used to clean the heads on D1s & D2s every day of use!) they had a very low TCO.
  • BetacamSX;
  • DVCPro / DVCam / miniDV;
  • IMX;
  • HDCam;
  • HDCamSR;
  • Umatic;
  • VHS;






Friday, October 24, 2014

Fibre 102; CWDM & encircled flux - The Engineer's Bench Podcast

Hugh and Phil talk about optical multiplexing as well as new methods for accurately testing fibre cables. A few tips on basic fibre cleaning as well.


Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Tuesday, October 21, 2014

Digital path & SDi does not colour-accurate pictures make...

Recent years have seen many prosumer camcorders with HDMI outputs so that you can get access to the uncompressed RGB sensor output rather than having to make do with the H.264 (typically) encoded Y, Cb, Cr data (from the flash or disk-based recorder). This makes lots of sense and has given rise to HDMI -> SDi converters like the ones Mr. Blackmagic sells;


These take any HDMI 1.4 resolution (all the way to 4k UHD - 3840 x 2160 at a maximum of 30P in 4:2:2 colour space) and convert to single-link (1.5G, 3G or BM's home-brewed 6Gbit/sec) SDi. Excellent, you'd think; and they are - so long as you only use them for video-type sources, camcorders etc. Don't assume they are of any use in turning the output of your computer into SDi!

So, here's the test rig; my Macbook Pro 15" running Mavericks with a Thunderbolt -> DVI breakout connecting to the HDMI input of the BM converter.


The SDi output is fed to my trusty Tektronix WFM7100-series and I'm running a known-good recording of 10-bit, 1080 50i 100% EBU colour bars on the 2nd display.


Now let's take a look at the state of the bars; not pretty - the luminance response is all over the place, with a very funky gamma that has really gone awry in the bright parts of the picture. The blue colour-difference channel is not so bad, but the Cr (red colour-difference) channel is really crushed at the cyan end of its response; look at the vector display (top-left).


That's not to say that the pictures don't look good on the monitor; but they aren't colour accurate in the way they need to be if you're using this as a method of profiling an SDi display - and I have seen people use this method with Light Illusion to derive the colour space of a display and then generate a LUT to make the display look how they want.


So, my first thought was, head over to the display profile and see if it's just using the wrong RGB numbers; OS-X and Windows both support standard profiles like Adobe RGB or sRGB which are more suited to print and web graphics prep but not necessarily our beloved broadcast Rec.709 colour space. Imagine my horror when I realised that the colour display profile that OS-X had used was the one that shipped with the Blackmagic! How did they not even get that right?!

To be fair, even the 709 profile that comes with the OS is wrong; the take-away is don't use these kinds of gadgets if you need accurate colourimetry. For XBoxes or just getting a high-quality desktop feed as SDi they are fine, but not if accurate broadcast pictures are needed.

Wednesday, September 10, 2014

Ongoing Barnfind notes

As I get more familiar with Barnfind's products I need to make a note of some of their gotchas!

  1. The Tx and Rx lights on all SFP ports are not data lights, rather they are clock lights and as such only light when you're dealing with synchronous broadcast signals - HD/SDi, MADI, AES etc.
  2. The CWDM multiplexer/de-multiplexer (they are exactly the same unit!) works both ways and if you have signals going bi-directionally on a fibre each port is an input or an output; that takes a bit of getting your head around!
  3. In a similar vein, the wavelength quoted on SFPs (1350nm in this image) only really applies to the transmitted signal; SFPs are "colourblind" - they don't mind what wavelength they receive. So, once a signal leaves the CWDM de/multiplexer you can take it into any of your SFPs for input to the crosspoint router - again, it's not particularly intuitive as we're used to "tuning" or "demultiplexing" other signals to the frequency they'll be used at.
  4. The BarnStudio software; the manual says that the unit comes hard-set to a 192.168.0.1 address. Ours didn't - it was set up for DHCP, and since it doesn't respond to multicast pings it took me a while to figure this out.
  5. If you are routing ethernet out to the multiplexer it is always directional (i.e. it takes up both the in and out of an SFP and needs two ports on the multiplexer). 
  6. See note 1 above (no activity lights) for ethernet.
  7. If you're using the same machine to drive BarnStudio and test an ethernet connection (by sending it via an ethernet SFP -> fibre -> multiplexer -> BarnMini -> SFP -> ethernet) you run the risk of an ethernet loop and subsequent broadcast-storm! Wesley & I suffered this and couldn't figure out why the entire workshop network was down. Much better to use your rucksack router & a RaspberryPi as a separate test network.
  8. The BarnStudio software - although good, is a tad hard to read initially as if you give the ins and outs proper names they re-order alphabetically rather than in order of the SFPs and BNCs. Just remember - inputs are down the left axis and outputs are along the top (most things are present in both).

Tuesday, September 09, 2014

HD playback - how we did it a quarter of a century ago

In 1988 I went to a demo of HDVS in Studio 2 at Television Centre; the left-hand image shows the 1" uncompressed RGB recorders that ran at around eight times the speed of regular C-Format videotape. On the right is a current-model Blackmagic Hyperdeck Shuttle - a small HD/SD record/playback box that uses SSDs and can handle uncompressed, ProRes and Avid DNxHD. It's in pieces 'cause I'd just finished fixing it and was testing it. BUT, given that they are two hundred quid you might wonder if they're worth fixing?! The Sony was definitely worth fixing, as each machine came in at >£100K.



Makes me wonder what I'll be doing by the end of my career?!

Monday, August 25, 2014

Barnfind's high-speed data router and optical CWDM for TV infrastructure

I've been a very slack blogger over the last five weeks due to work (installations, running training, getting trained!) and holiday (splendid). I spent a few days last week in Norway as the guest of Barnfind in Sandefjord.
Norway seems to be a lovely country, if a tad expensive (£24 for a round of three pints). Barnfind are a small company whose engineers used to be with Nevion - you've probably come across their VikinX range of HD/SDi and other facility routers. 

I had a long Skype chat with Barnfind a month ago and kind of 'got' their range. It's not a one-for-one replacement for any other specific product, rather a platform that nicely ties together all the digital signals within a facility: synchronous (SDi, MADI, AES etc.) and asynchronous (ethernet - copper & fibre - and fibre-channel). They also make CWDM very doable in a broadcast environment. As we move towards an entirely IP infrastructure these are the kind of platforms that allow an easy transition. 

The basic product (the BarnOne BTF1-01) is a 32x32 generic data router with 16 bi-directional SFP ports. The SFPs can be any MSA-compliant units, but Barnfind manufacture their own at very reasonable cost (much less than Cisco!). You could insert ethernet, SDi i/o, fibre or any of the around 150 variations they offer. This allows you to route SDi in and out over fibre, insert AES into an SDi stream, convert ethernet to/from fibre, etc. 
3G HD/SDi input/output SFP

Clearly some signal types don't sensibly convert; routing an SDi stream to a fibre-channel-equipped port won't replace an HP workstation running Avid! But where it does make sense, everything is taken care of for you. In the case of all video signals (composite, SDi and HDMI are all supported) the signal is converted in the SFP to 3G SDi before it is passed to the 32x32 router.

The BarnOne range extends to several variations - the lower board which carries the router and the first sixteen ports can be joined by two upper boards carrying BNCs, more SFP holes or (more interestingly) CWDM fibre modules. Essentially having BNCs on an upper board allows you to avoid SDi SFPs (it's marginally cheaper to do it on an 8-way board than an extra 8-holes with video-SFPs).
 
The other end of the link could be easily served by their BarnMini units - essentially replacing Blackmagic or AJA converters but integrating very nicely with the BarnOne. For less than £400 you can get either two BNCs with an SFP hole or two SFPs. 

The whole thing makes sense when you realise that all the signal intelligence is in the SFPs - the dual-port BarnMini can do anything that makes sense; maybe you need to route some ethernet coming in on single-mode fibre and send it out over existing multi-mode cable. Again, AES, SDi, MADI as well as all fibre and copper networking are supported. 

Before I start banging on about Coarse Wave Division Multiplexing it is worth including a photo of the insides of a BarnOne so you can see the control card they use.

That's right - it's a RaspberryPi! Why re-invent the wheel? They claim they tested a few Linux SoC boards and found the humble Pi to be the most reliable, and they make use of the watchdog timer to ensure it's always listening for config updates. Their BarnStudio software not only allows you to configure the system (including all the monitoring via SNMP) but also to control the router. They also support several manufacturers' generic control panels if that's what's needed.

CWDM

Single-mode cable is (more often than not) used to carry a network feed, a 3G video feed or some other data. We quote wavelengths for fibre (typically 1310nm) rather than frequency, and so often forget that the sidebands of the signal we put down a fibre are tiny relative to the centre frequency. A wavelength of 1400nm corresponds to around 200THz (yes, 200 x 10^12 Hz!), which makes the 4.5GHz bandwidth of the best-quality HD video coax look very modest. So, we can divide up the single-mode range into many wavelengths and use each for a different purpose; a bit like the radio stations on the VHF band.
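The arithmetic is just f = c/λ; a quick sketch:

```python
# Sketch: convert fibre wavelengths to frequency with f = c / lambda.

C = 299_792_458  # speed of light in m/s

def thz(wavelength_nm):
    return C / (wavelength_nm * 1e-9) / 1e12

print(f"1310 nm -> {thz(1310):.0f} THz")   # 1310 nm -> 229 THz
print(f"1400 nm -> {thz(1400):.0f} THz")   # 1400 nm -> 214 THz
```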
These are the standard wavelengths for CWDM working: sixteen channels, and by convention the two colours run in different directions (SDi input/output, or ethernet Tx and Rx, for example). This allows you to have SFPs that put the signals they send/receive onto specific wavelengths, which are then combined and split with a simple passive optical splitter/combiner - which is all a CWDM multiplexer is. With all this in place you can use your Barnfind kit to multiplex sixteen functions in and out of a single 9/125u fibre. If that's all you have between premises then this is a lifesaver. The multiplexers are in the £400 range. 
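The grid itself is nothing more exotic than a 20nm ladder; a sketch (for reference, the ITU-T G.694.2 grid actually defines 18 channels from 1271nm to 1611nm, of which kit like this typically uses sixteen):

```python
# Sketch: the CWDM wavelength grid is a simple 20 nm ladder.
# ITU-T G.694.2 defines 18 channels; gear commonly uses 16 of them.

grid_nm = [1271 + 20 * n for n in range(18)]
print(grid_nm[:4], "...", grid_nm[-1])   # [1271, 1291, 1311, 1331] ... 1611
```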

We opened one up last week and it was a thing of beauty; all passive optical engineering with tiny dichroic filters. So, by careful planning you could send multiple SDi signals, ethernet, fibre channel and other things to and from a remote site over a single strand of mono-mode fibre. It could be up to 80kms away. 
There is a further development of the multiplexing technology called Dense Wave Division Multiplexing (DWDM) which allows up to 192 channels and very long distances (by the use of Erbium-doped optical amplifiers), but in the case of Barnfind (and other manufacturers) the cost of DWDM vs CWDM is fivefold!


The BTF1-07 is the box I've ordered as my demo unit; it has sixteen SFP holes, eight bi-directional BNCs as well as a CWDM multiplexer.

Saturday, July 19, 2014

MPEG transport stream corruption; effect on pictures

I'm running training at the MET's video forensics lab soon and part of it is explaining how DCT-based compression works and particularly the effects of corruption on long-GOP MPEG transport streams as delivered in DVB-MUXes. One of the illustration videos is below. 
Three clips, each with a momentary corruption to the data stream; in each case you can see how the decoder can't reconstruct a proper picture until the next I-frame. The second half of the clip shows me framing through with the I, B or P frame type indicated top-left - you'll need to make it full screen to see that as the marker is quite small.
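A toy model of why that happens - the P and B frames here simply "inherit" damage from anything decoded since the last I-frame. This is a gross simplification of a real decoder, invented for illustration, but it shows the shape of the failure:

```python
# Sketch: in a long-GOP stream only I-frames decode standalone, so a
# single corrupt frame damages everything up to the next I-frame.

def decode(gop, corrupt_at):
    out, damaged = [], False
    for i, frame_type in enumerate(gop):
        if frame_type == "I":
            damaged = False              # I-frames reset the damage
        if i == corrupt_at:
            damaged = True               # corruption hits here...
        out.append("bad" if damaged else "ok")   # ...and persists
    return out

gop = list("IBBPBBPBBPBBIBBP")
print(decode(gop, corrupt_at=3))
```

Corrupting frame 3 trashes nine frames of output; only the I-frame at position 12 brings the pictures back.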

Wednesday, July 16, 2014

The challenges of modern picture quality analysis.

This is a recent article written for a trade magazine;

Engineers have sought to quantify the quality of the video signal since the birth of television. Since every aspect of the TV picture is represented first by voltages (analogue) and then by numbers in a bit stream (digital), you have to make measurements to really know anything about the quality of your TV pictures.
“When you can measure what you are speaking about, and express it in numbers, you can know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge but you have scarcely, in your thoughts, advanced to the stage of science.” - William Thomson, Lord Kelvin.
In the dim and distant days of monochrome tele the only things the TV engineer had to worry about were the blacks and the whites. Common consent had us placing the blacks (dark areas of the picture) as a low signal (at zero volts) and the whites (bright parts of the picture) up at 0.7 of a volt. In addition we allocated the 0.3 V below black to the synchronising pulses – the electronic equivalent of the sprocket holes in film; a mechanism that allows the receiving equipment to know when new lines and frames of video are starting, so that the picture is “locked” and not free-running (“try adjusting the vertical hold!”). Once all those things are well defined, Mr. Sony’s cameras work nicely with Mr. Grassvalley’s vision mixer, and the engineer at the broadcast centre can adjust the incoming signal from the OB truck so that it looks right on the waveform monitor; the pictures will then match what left the truck. Dark shadows and bright clouds will look like what the camera operator saw.


Fig.1 – monochrome TV signal, two lines.
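The levels described above are easy to sanity-check with a little arithmetic; here's a minimal sketch mapping picture level (0 = black, 100 = peak white) onto the 0-0.7 V range, with sync sitting 0.3 V below black:

```python
BLACK_V, WHITE_V, SYNC_V = 0.0, 0.7, -0.3   # levels from the article

def picture_level_to_volts(percent):
    """Map a picture level (0-100 %) onto the 0 V..0.7 V video range."""
    return BLACK_V + (WHITE_V - BLACK_V) * percent / 100.0

print(picture_level_to_volts(0))    # black sits at 0.0 V
print(picture_level_to_volts(100))  # peak white sits at 0.7 V
print(WHITE_V - SYNC_V)             # total excursion: the classic 1 V pk-pk
```

That 1 V peak-to-peak figure (0.7 V picture plus 0.3 V sync) is the composite video convention the rest of the article builds on.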

So far so good; but people wanted colour TV, and so all of a sudden the way colour is encoded needs to be considered. With colour comes grading and the “look” of pictures, and colourists need to see different representations of the colour parts of the signal for artistic reasons. Engineers need to ensure that the colour content of the picture is constrained to the legal gamut of colours that the transmission system can handle; nobody wants things to change colour as they get to air! Tektronix have always been the gold standard for TV test and measurement, and to this day if you ask an engineer or colourist what kind of test equipment they’d like, it’s going to be a Tek.


Fig.2 – colour TV signal, several types of display

As we moved from analogue to digital working in the 1990s, and then from standard definition to higher resolutions in the noughties, the principle of looking at the lines and fields of the TV signal remained; we assumed that if one frame got through the system with minimal/acceptable levels of distortion then all subsequent frames would, and as we know, the illusion of television is that many frames make a moving sequence.
However – with the introduction of “long GOP” (Group Of Pictures) video compression in the ’90s it became apparent that we don’t treat every frame of video the same. On a compressed video link there are I-frames (the ones from which the entire picture can be re-built) and other, more complex beasts called B-frames and P-frames; these serve to convey the differences (by not sending complete video frames, but merely the differences from previous and subsequent ones, we achieve video data-rate reduction, AKA compression). You’ve no doubt seen the on-air fault where some parts of a picture seem to have become “stuck” in the previous scene while other parts of the picture are behaving normally. Then, suddenly, the picture rights itself. What you have witnessed is a corrupt I-frame; all your set-top box can do is show the changes as they arrive, and you don’t get a re-built complete frame until the next I-frame arrives (typically half a second later). This is just one kind of “temporal” fault.


Fig.3 – MPEG multiplex fault
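The I-frame/P-frame idea (send one complete frame, then only the differences) can be sketched very simply; real codecs add motion compensation, DCT quantisation and B-frames, but the principle is the same:

```python
def encode(frames, gop=4):
    """Toy difference coder: every gop-th frame is an I-frame (sent whole);
    the rest are P-frames carrying only per-pixel differences."""
    out = []
    for i, f in enumerate(frames):
        if i % gop == 0:
            out.append(("I", list(f)))
        else:
            out.append(("P", [a - b for a, b in zip(f, frames[i - 1])]))
    return out

def decode(stream):
    """Rebuild frames; P-frames are applied on top of the previous frame."""
    frames = []
    for kind, payload in stream:
        if kind == "I":
            frames.append(list(payload))
        else:
            frames.append([p + d for p, d in zip(frames[-1], payload)])
    return frames

frames = [[10, 10, 10], [10, 12, 10], [11, 12, 10], [11, 12, 13], [50, 50, 50]]
assert decode(encode(frames)) == frames  # lossless round trip in this toy
```

Notice that a corrupted P-frame (or a corrupted I-frame it depends on) poisons every following frame until the next I-frame, which is exactly the "stuck picture" fault described above.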

So, now we have to consider things happening in time as well as the pixels, lines & fields of video.  The colour-bars you’re looking at coming across the satellite link may look splendid, but perhaps the link is only able to convey pictures that change minimally between frames and as soon as moving video arrives it looks awful. Perhaps the fields of video have been reversed (it’s a more common technical fault than you’d expect) and you’ll only see that on moving pictures; and it’s not nice!

Test signals have always served to “stress” the system they are being run through; the traditional colour bars have their colours set at the extreme ends of what the system can handle – you never see that amount of saturated yellow in pictures coming out of a TV camera! We want our test signal to show faults; if the pictures look good it should be because the OB link or editing workstation is capable of carrying the worst-case pictures.
So, now we need a test signal that is not just the same frame of video repeated endlessly; we need a test that changes, serves as a challenge to compression encoders, and is constructed in such a way as to have predictable picture effects that highlight when your production chain is sub-par. Ideally we could see these drop-offs on a picture monitor rather than a £10k Tektronix test set. Once we have an animated test signal we can test not just for the degradation of compression but also for those field-cadence problems. We can also test for lip-sync errors, exaggerating the effect of audio being late or early with respect to the video, and, more importantly, all of these tests can be done by an operator rather than an expensive engineer. We’d also like the sequence to be constructed such that faults are visible on a 24” video monitor from across the other side of a busy machine room or studio gallery.

Fig.4 – SRI Visualizer - a still frame; it moves normally!

  • Lip-sync: Measure and quantify the synchronization offset between audio and video – a single frame of sync is easily missed on camera pictures.
  • Bit depth: Detect 10-bit to 8-bit truncation – in a modern facility a mix of eight and ten bit video is a fact of life, but no client wants unnecessary loss of dynamic range.
  • Compression fidelity: Measure and quantify compression levels in real time; again, real camera pictures often make this effect hard to spot.
  • Colour matrix mismatch: Determine high-definition (709) and standard-definition (601) colour space conversion errors. These colour shifts are subtle until the director is shouting about that shade of red he wants!
  • Chroma subsampling: Determine the Chroma subsampling being used (4:2:2, 4:2:0, 4:1:1…)
  • Chroma upsampling: Reveal how missing chroma samples are interpolated
  • Field dominance: Definitively determine field order reversal.
  • Chroma motion error: Demonstrate incorrect processing of chroma in interlaced 4:2:0 systems
  • Subjective image fidelity: Perform a rapid check of system integrity
  • Colour conversion accuracy: Verify colour-space conversions
  • Display gamma: Measure monitor gamma quickly
  • Black clipping: Reveal black clipping and accurately set monitor black level
  • White clipping: Determine if highlights are being blown out
  • Noise: See how an encoder handles noise
  • Skipped frames: Detect repeated and dropped frames
The SRI Visualizer is available as a hardware product (the TG-100, which also includes equally innovative audio tests) which can be installed into a machine room to augment/replace existing SPG-type test signal generators. You can also purchase and download it as several kinds of video files, which can prove very useful in file-based workflows: inject the sequence at the start of the post-production workflow and confirm all is well at the very last stage. An hour spent paying attention at the start of a new production will pay dividends by highlighting exactly where any picture faults creep in. Did that colour shift occur during editing, grading, VFX or when the show was transcoded for distribution? Without a test system like the Visualizer these problems are hard to track down.

Tuesday, July 08, 2014

Temporal-based video tests; The SRI Visualizer

We've recently taken on SRI as a supplier and I'm very excited about their test system.


You can read the article I've written for a couple of industry magazines here.
SRI International

Thursday, July 03, 2014

Someone who has some actual experience of file-based deliverables!

There are an awful lot of people touting themselves as "DPP consultants" and "file-based technologists" in the UK TV industry at the moment. For the most part they are riding the gravy train of the DPP roadshow and the fact that, come October, all the big broadcasters will expect file-based delivery AND material QC'ed to the DPP specification. The majority of these folks have not delivered a single minute of material for terrestrial broadcast, but the one chap who really does know his stuff is my good pal Simon Brett (recently moved from National Geographic/Fox UK to NBC-Universal).


Here he is at a recent evening event we laid on; if you want to know some actual details (what bit of software to use to handle your metadata etc) then Simon is your man.  He's quite an engaging speaker (and he bigs-me-up for colourimetry!)

Tuesday, July 01, 2014

Near synchronous video over UDP/IP with Sony's NXL-IP55


Just about the only thing that caught my eye at the recent Beyond HD Masters day at BAFTA was Sony's IP Live system. This is a single product called the NXL-IP55 which puts four 1080i 4:2:2 signals over a gigabit connection - so a modest 6:1 compression with a well-defined single field of latency. The camera channels can go either way (so three source cameras and a return preview monitor, for example), and embedded audio plus tally, camera head (colour) and lens control are included. It's quite expensive ($10k per end) but it is the only video-over-IP device I've seen so far which is suitable for live production.


http://www.sony.co.uk/pro/press/pr-sony-nab-av-over-ip-interface
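That 6:1 figure is easy to verify: 1080i HD-SDI (SMPTE 292M) runs at a nominal 1.485 Gbit/s, so four uncompressed signals would need 5.94 Gbit/s, squeezed here into roughly a gigabit:

```python
hd_sdi_gbps = 1.485          # SMPTE 292M nominal rate for a 1080i signal
signals = 4                  # four camera channels through the NXL-IP55
link_gbps = 1.0              # a gigabit connection, roughly

raw = hd_sdi_gbps * signals
print(raw)                        # 5.94 Gbit/s of uncompressed SDI
print(round(raw / link_gbps, 1))  # ~5.9:1 - the "modest 6:1" quoted above
```

In practice the usable Ethernet payload is a little under 1 Gbit/s once framing overhead is accounted for, so the real ratio is slightly worse than 6:1, but still very gentle by contribution-link standards.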

Saturday, June 28, 2014

Measuring fibre cabling and the problem of encircled flux loss


Last week I went on a very interesting training day courtesy of Nexans - data cable & parts supplier. I went looking forward to learning all about the new standards surrounding category-8 cabling for 40 and 56 Gigabit Ethernet (a massive 1600 MHz of bandwidth down a twisted-pair cable!) and the new GG45 connector; but those things will have to wait for another blog post! The thing that really tickled my fancy is the new standard for measuring the response of multi-mode fibre.
Multi-mode fibre works in a fundamentally different fashion to single-mode (they are as different as twisted-pair and coaxial copper cable, but they look very similar). If you want a bit of a primer on fibre then Hugh & I did an episode of The Engineer's Bench a couple of years ago on the subject.



As we've gone from one gigabit to greater than 10 Gbit/s in OM3 and OM4 cable, engineers have often noted the lack of consistency between different manufacturers' light-source testers. You might get as much as 0.5 dB of difference between, say, an Owl and a JDSU calibrated light source and detector. We typically use a 20 dB(m) laser at 850 nm to test OM3, and we always just deliver the loss figures to the client, but it would be good to know whether your absolute reading is of any use at all.
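To put that 0.5 dB disagreement in context, insertion loss is simply reference power minus received power in dBm (or 10·log10 of the linear power ratio), and on a short OM3 link the whole loss figure may only be a couple of dB. A sketch (the example readings here are illustrative, not real measurements):

```python
import math

def loss_db(p_ref_dbm, p_recv_dbm):
    """Insertion loss: reference power minus received power, in dB."""
    return p_ref_dbm - p_recv_dbm

def loss_from_watts(p_in_w, p_out_w):
    """The same figure from linear power: 10*log10(Pin/Pout)."""
    return 10 * math.log10(p_in_w / p_out_w)

measured = loss_db(-20.0, -22.1)      # e.g. -20 dBm reference, -22.1 dBm received
print(measured)                        # 2.1 dB of link loss
print(measured - 0.5, measured + 0.5)  # the window if the tester is 0.5 dB out
```

With a tester uncertainty of 0.5 dB on a 2.1 dB measurement, your "absolute" reading could be anywhere in a window nearly half as wide as the loss itself - which is exactly why the launch-condition standardisation below matters.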

Well, the answer is that an LED or VCSEL (vertical-cavity surface-emitting laser) source will tend to "overfill" the fibre, and high-order modes of light travel (to a degree) down the cladding of the cable.
Launch conditions describe how optical power is launched into the fibre core when measuring fibre attenuation. Ideal launch conditions occur when the light is distributed through the whole fibre core.


Transmission of Light in Multimode Fiber in Underfilled Conditions 


Transmission of Light in Multimode Fiber in Overfilled Conditions


An overfilled launch condition occurs when the launch spot size and angular distribution are larger than the fiber core (for example, when the source is a light-emitting diode [LED]). Incident light that falls outside the fiber core is lost as well as light that is at angles greater than the angle of acceptance for the fiber core. Light sources affect attenuation measurements such that one that underfills the fiber exhibits a lower attenuation value than the actual, whereas one that overfills the fiber exhibits a higher attenuation value than the actual. The new parameter covered in the IEC 61280-4-1 Ed2 standard from June 2009 is known as Encircled Flux (EF), which is related to distribution of power in the fiber core and also the launch spot size (radius) and angular distribution.

All the manufacturers are producing EF-compliant testers, so you won't need to worry about inaccurate readings due to these high-order modes, but for now there are some suggestions.


Multimode launch cables allow the signal to achieve modal equilibrium, but this does not ensure the test equipment is EF-compliant per the IEC 61280-4-1 standard.
Multimode launch cables are used to reveal the insertion loss and reflectance of the near-end connection of the link under OTDR test. They also reduce the impact on the test of possible fibre anomalies near the light source.

If the fibre is overfilled, high-order mode power loss can significantly affect measurement results. Fibre mandrels act as "low-pass mode filters" and can eliminate power in high-order modes: a mandrel effectively strips all the loosely coupled modes generated by an overfilled light source while passing tightly coupled modes with little or no attenuation. This solution does not make test equipment EF-compliant.


Mode conditioning patch cords reduce the impact of differential mode delay on transmission reliability in Gigabit Ethernet applications, such as 1000Base-LX, and properly launch VCSEL laser light along a multimode fibre. This solution does not make test equipment EF-compliant either.

Monday, June 16, 2014

Oscilloscope Watch project update

I'm still monkeying around with the oscilloscope watch and hopefully I'll shoot a bit more video later this week, but it's still very much in beta. I got this update of what the final case will look like from Gabriel Anzziani, the developer.


Tuesday, June 10, 2014

HD/SDi physical layer measurements - gotta be Tektronix

Here's a brilliant document from Tek - it tells you all you need to know about eye pattern, jitter etc.
http://www.tek.com/document/how-guide/sdi-eye-and-jitter-measurements...

Monday, June 09, 2014

Quick & dirty RS422 VTR control from your laptop

If you need to control a deck (or test routes through an RS422 matrix) then a very quick and easy way of doing it is via your laptop. Modern machines don't have RS232 ports, so you can't just hook up an RS232-422 balancer, but USB serial devices are cheap - the Addenda RS-USB4 is the one I keep in my rucksack.



It's worth noting some of the settings you need for the serial port in Windows.


You then need a bit of software that will talk the P2 protocol. WSony II has been around forever but still works under Windows 7. You have to run it as admin for it to access the serial port.

Once it runs you have to tell it what port number Windows gave the adaptor - it shows up as Com1 on my machine by default, but bear in mind that Windows will often assign it to Com5 or greater. WSony II will only talk up to Com4, so you might need to change that (see the first image above, in the advanced tab).

Now you've got everything you need to drive a VT or test that an RS422 circuit is working.

Oh, if you don't have the CD that came with the Addenda adaptor cable you'll need to download the software here.

Remember; P2 over RS422 runs at 38,400 baud.
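If you'd rather script the deck than use WSony II, the 9-pin framing is simple enough to build by hand: the first byte carries the command group in its upper nibble and the data-byte count in its lower nibble, then the command byte, any data bytes, and finally a modulo-256 checksum. A sketch (the PLAY/STOP values below are from my reading of the Sony 9-pin protocol, so do verify against your deck's manual):

```python
def p2_packet(group, cmd2, data=b""):
    """Build a Sony 9-pin (P2) command packet.
    Byte 1: command group (high nibble) | data byte count (low nibble).
    Byte 2: command byte. Then data bytes, then a mod-256 checksum."""
    body = bytes([(group << 4) | len(data), cmd2]) + bytes(data)
    checksum = sum(body) % 256
    return body + bytes([checksum])

PLAY = p2_packet(2, 0x01)   # 20 01 21
STOP = p2_packet(2, 0x00)   # 20 00 20
print(PLAY.hex(), STOP.hex())

# To actually send one of these you'd open the USB-serial port at
# 38,400 baud, odd parity, 8 data bits, 1 stop bit (e.g. with the
# third-party pyserial package) and write PLAY to it.
```

The deck replies with an ACK (or a NAK plus an error byte) framed the same way, so the same builder/checksum logic works for parsing responses.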