Thursday, August 01, 2013

PSE detection, interlaced video and VidChecker

I've been preparing some training notes on PSE (hence last week's podcast) but noticed that VidChecker's analysis of a recent BBC recording suggested there were more flash events than were apparent to the eye. Have a look at this off-air MPEG2 from BBC News, 22nd July, and pay attention to the timecodes as shown in the analysis in the second screen-grab.

It looks like I've discovered a bug in VidChecker! The mixes to and from white that seem to provoke the PSE violation are at field rate (as you'd expect from any studio vision mixer), but VidChecker, by necessity, does a frame analysis, and so the magic "...no more than 20 Cd/m2" prohibition on frame-to-frame luminance changes (as measured on a display calibrated at 200 Cd/m2 for peak white) is twice as likely to be triggered.
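To see why a frame analysis doubles the jeopardy, here's a back-of-envelope sketch in Python; the fade length is a made-up number for illustration, not taken from the VidChecker file.

```python
# Why a frame-based PSE analysis sees twice the luminance step of a
# field-rate mix. All numbers are illustrative, not from the real file.

PEAK = 200.0       # Cd/m2 - calibrated peak white
LIMIT = 20.0       # Cd/m2 - maximum frame-frame luminance change
FADE_FIELDS = 10   # a vision-mixer fade to white lasting 10 fields

per_field_step = PEAK / FADE_FIELDS   # luminance change per field
per_frame_step = per_field_step * 2   # two fields combined into one frame

print(per_field_step, per_frame_step)  # 20.0 40.0
```

A mix that sits exactly on the 20 Cd/m2 limit at field rate reads as a 40 Cd/m2 step once two fields are weaved into one analysed frame - hence the false positives.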

I mentioned this to the guys at VidCheck and this was the response;

Phil

You are right that the first 3 below are fades to white and back again and should not really be picked up as flashing.  

Scott has taken a look at the file, and it appears that the problem is in the interlacing artefacts during the fades where alternate lines are lighter and darker.  He has put in a fix for the next release.

Thanks again for the file and for finding this!  Attached some docs. You may already have them

Regards

Simon

The helpful document they sent was ITU-R BT.1702, which has a few more pointers than the OFCOM spec I used in the podcast.

Tuesday, July 30, 2013

Engineer's Bench Podcast - Photosensitive Epilepsy

Phil goes over OFCOM, DPP, Harding and other aspects of PSE.

Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Friday, July 12, 2013

Some notes on monitor calibration software

Last week I was at a customer's site calibrating monitors and because it was before midday the colourist wasn't there. So - knowing that in the past I'd set the white point on their displays to 80 Cd/m2 and 6504K (as per BBC spec!) I balanced them thus.
Later in the day I got a very heated message from the production manager; the colourist wasn't happy and he wanted me back in there to match the Sony BVM to his 55" plasma "...which looks a lot better - much closer to correct". I asked him what standard he wanted the Sony set to and of course he had no idea.
Also - I've had a few people getting very excited about VirtualForge by SpectraCal; it's a test signal generator for a Mac with SDi o/p. The trick is that it talks over the network to their other product CalMan, which can talk to the various USB-attached photometers (the X-Rite etc.). They make great play of the fact that this is now a closed loop where the test patterns can be changed automatically by the probe software. Presumably it still has to tell you what adjustments to make to the display, so how that is any better than you looking at the measurement and making the changes yourself is beyond me.
It makes sense if you have a Sony probe attached to a Sony monitor - the monitor cycles through the various test patterns and reads the probe; it then tweaks the monitor's settings and you hopefully wind up with a properly calibrated display. Having to have two computers and two bits of software (as well as a network) seems convoluted.
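For what it's worth, the whole "closed loop" amounts to a simple feedback iteration. Here's a sketch; `read_probe` and the damping figure are stand-ins of my own invention (I've no idea what CalMan's internals actually look like):

```python
# Closed-loop luminance calibration as a damped feedback iteration.
# read_probe() is a fake, linear stand-in for a real photometer reading.

TARGET = 80.0     # Cd/m2 target peak white (the BBC spec figure)
TOLERANCE = 0.5   # Cd/m2 acceptable error

def read_probe(gain):
    # Hypothetical probe: pretend measured luminance is linear in gain.
    return 100.0 * gain

def calibrate():
    gain = 1.0
    for _ in range(20):                # bounded number of iterations
        error = read_probe(gain) - TARGET
        if abs(error) <= TOLERANCE:
            break
        gain -= error / 200.0          # damped correction step
    return gain, read_probe(gain)

gain, measured = calibrate()
print(gain, measured)
```

Whether the software then drives the monitor's controls itself (as the Sony probe-and-monitor combination does) or merely tells you which knob to turn is exactly the difference I'm grumbling about.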
I'll stick with my trusty PM5639s (I have LCD and CRT probes for them) and the occasional hire of a PhotoResearch PR655 when I really need a spectroradiometer rather than a photometer.

  • Test signals for monitor calibration aren't hard - 10% grey, 50% grey, 100% peak white, various saturated colour fields, 100% bars and PLUGE allow you to do anything to a monitor that doesn't need the covers taking off and you getting down to component level.
  • Cheap USB photometers that claim to cover different display technologies are plain wrong; LCDs, CRTs, Plasma and OLED all have different metamerisms - a spectroradiometer is the only gadget that is display-technology agnostic.
  • Computer monitors and TVs are not grading displays - the MacBook Pro that I'm typing this on is calibrated using Apple's colour tool to D65, but when I point the PM5639 at it the colour temperature is 7340K at 220 Cd/m2 (how wrong can it be for grading work?!)
  • LUTs can only decrease the dynamic range of a display device - never improve it. The best thing is to get the display calibrated before you start applying LUTs (and then only to simulate the look of a film stock etc).
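If you want to sanity-check a measured white point yourself, McCamy's approximation turns a CIE 1931 (x, y) chromaticity reading into correlated colour temperature. A quick sketch (the formula is standard; the accuracy remark is my own rule of thumb):

```python
# McCamy's approximation: correlated colour temperature (Kelvin) from
# CIE 1931 (x, y) chromaticity. Good to within a few tens of Kelvin
# for near-daylight sources.

def cct_mccamy(x, y):
    n = (x - 0.3320) / (0.1858 - y)
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# The D65 white point (x = 0.3127, y = 0.3290) should come out near 6504K.
print(round(cct_mccamy(0.3127, 0.3290)))
```

Point a probe at a display, read off (x, y), and this tells you how far from D65 you really are.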

Colour calibration isn't hard, but it requires understanding the nature of colour and vision and not just spending $395 on a bit of software.
Oh - BTW I carry all my test signals around on a little BlackMagic Hyperdeck. It's battery powered, fits in my rucksack and does proper 709 colour space between its HD/SDi and HDMI outputs.

Tuesday, June 25, 2013

RaspberryPi - OpenElec media centre, MPEG2 playback and iPlayer

A new version of OpenElec is out - it's reckoned to be the best build of XBMC for the Pi (very stable and lightweight). The two reasons I'd like to use the Pi (well, aside from it being a cheap AirPlay replacement) are that it can play back my MPEG2 off-air recordings and be a very competent BBC iPlayer.
Licensing for MPEG2 playback - it's a very modest couple of quid to enable hardware MPEG2 decoding (MPEG4 and H.264 are there already);

the root p/w for it is openelec

With that done it's trivial to grab the iPlayer app and install it - so far it's played very nicely!

The other VOD apps you might like are;
  • 4OD
  • TWiT
  • Blip.tv (feat. The Engineer's Bench podcast!)
  • Youtube
These are all available from the Programs->Install AddOns menu.

Sunday, June 23, 2013

Engineer's Bench "Ultra High Definition - 4k & 8k Television" next podcast

Hugh and Phil go into the emerging standards for Ultra High Definition television.

Find it on iTunes, vanilla RSS, YouTube or the show notes website. My notes for this episode (with relevant URLs)

Friday, June 21, 2013

After 'Beyond HD Masters 2013' - some notes, pixel-free and even frame-free video!

This video was shown in one of the sessions; the speaker was Professor Philip Willis of The University of Bath's Computer Science department.



It shows various pieces of video that have been converted to a contour/vector representation where instead of using pixels in a raster to represent video they use contours (which also have shading associated with them) and vectors (which dictate how the contours are moving). This is not an effort to compress the data load; although Prof. Willis was at pains to point out that they have not made any efforts to optimise or do any bit-rate reduction calculations on the data, rather it is a way of representing high resolution video in a pixel-free manner. This might provide a useful transport/mezzanine format for moving 4k and 8k television around, rendering the pictures at the resolution of the target display device. 
The upshot of this is that rendering at a higher resolution than the material was shot at shows none of the aliasing that you'd expect from pixel-based video. Although you can't get more detail than was there originally the codec fails gracefully such that the images are not unpleasant to look at (unlike the low-res YouTube clip above!).
Prof. Willis gave a tantalizing little extra in the Q&A session - he implied that they are looking to give the contours/vectors a time-based element so that they move not only in X-Y space but along the t-axis, such that the pixel-free video now becomes frame-free! You could render the resulting data as easily at 1920x1080 @60FPS as at 720x576 @50 fields without any aliasing in the spatial dimensions OR temporally; say goodbye to standards conversion!

The original paper is a bit heavyweight but if you're happy with vector maths it is understandable.

Sunday, June 16, 2013

After 'Beyond HD Masters 2013' - some notes, cable interfaces and frame-rates.

Today you can buy equipment that works at the TV "4k" resolution, which is also referred to as "quad-HD" because it has twice the number of pixels horizontally and twice the number of active lines; 3,840 x 2,160. Blackmagic have already implemented what they call 6G SDi - i.e. 4 x 1.5Gbit/sec 1920x1080 @30FPS (max) with 4:2:2 colour sampling. If you want 50 or 60P at 4:2:2 you'd need 12G, and should you want to go to 4:4:4 RGB at 12bit then you're looking at >20G!
Whilst a coax interface still (just!) works at 6G (and I'd point you towards some research I did in 2009) it seems like single-mode fibre is the only sensible interface that we'll have for synchronous video as 4K starts to be used for live production.
Richard Salmon from the BBC showed that with the huge amount of resolution that 4k brings, the human brain recoils if there isn't enough temporal resolution to make moving images look as good as static images. Imagine a rapid pan across the crowd at a football stadium. At sub-100 frames per sec you don't see enough detail in the picture (each pixel is smeared so as to make it look like a much lower resolution image) but when the camera stops the pan you suddenly notice the huge amount of detail. That difference in static and dynamic resolution can, in extreme cases, cause nausea. With this in mind it seems that the standard for live TV will be 4:2:2 colour encoding at 120 FPS! Anyone for 24Gbit/sec video?! v1.4 HDMI currently only supports sub-8 Gigabits/sec. So - it seems like we're going to have to wait for cable standards to catch up, and when they do it'll probably be 9/125µm fibre.
Take this to 8k (which is the second half of the proposed UHD TV standard) then we're looking at 96Gbits/sec! Even current standard fibre struggles with that! So - the other interesting technology that may well form the mezzanine format for moving over cables and networks is pixel-free video; but that's for another blog post!
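The figures above fall straight out of raster arithmetic. A sketch; note I'm extrapolating the 4k and 8k total rasters (active plus blanking) by doubling and quadrupling the 1080-line 2200 x 1125 raster, which is an assumption of mine rather than a quote from any SMPTE document:

```python
# Uncompressed link rate from total raster, frame rate, bit depth and
# samples per pixel. 4:2:2 carries 2 samples/pixel (Y plus alternating
# Cb or Cr); 4:4:4 RGB carries 3. The 4k/8k rasters below just double
# and quadruple the 1080-line 2200x1125 raster - an assumption.

def gbits_per_sec(width, height, fps, bits, samples_per_pixel):
    return width * height * fps * bits * samples_per_pixel / 1e9

print(gbits_per_sec(2200, 1125, 30, 10, 2))    # 1080p30 4:2:2 -> 1.485 (one HD-SDi link)
print(gbits_per_sec(4400, 2250, 60, 10, 2))    # 4k p60  4:2:2 10-bit -> ~12G
print(gbits_per_sec(4400, 2250, 60, 12, 3))    # 4k p60  4:4:4 12-bit -> >20G
print(gbits_per_sec(4400, 2250, 120, 10, 2))   # 4k p120 4:2:2 10-bit -> ~24G
print(gbits_per_sec(8800, 4500, 120, 10, 2))   # 8k p120 4:2:2 10-bit -> ~96G
```

The 1.485Gbit/sec figure for 1080p30 is exactly one HD-SDi link, which is a reassuring check on the arithmetic.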

Wednesday, June 12, 2013

After 'Beyond HD Masters 2013' - some notes, 4k TV colourimetry

I had a day at BAFTA listening to various speakers from the industry talking about 4K (quad-HD, UltraHD, etc etc) and the surrounding standards.
ITU Rec.2020 is the document that covers 4k TV (in fact it defines two resolutions - 3840 × 2160 and 7680 × 4320 - which I'll refer to as 4k and 8k television, but these aren't the same as the 4096- and 8192-pixel-wide resolutions used in digital film).

  • The colour space is monstrous! The 2020 triangle is even bigger than the P3 colour space (as defined by the DCI) - take that Mr. Dolby! It'll be a while before ANY display device can faithfully reproduce that gamut. Thankfully we stay with D65 for white (well, 6504K to be strictly correct - Planck's radiation constant was re-calculated in the 70s) and the primaries are;
  • red: 0.708, 0.292
  • green: 0.170, 0.797
  • blue: 0.131, 0.046
  • white: 0.3127, 0.3290
  • The new luma transfer function is: Y' = 0.2627R + 0.6780G + 0.0593B, and for the first time ever in television provision has been made for constant luminance. There is an almost philosophical argument by Charles Poynton and others that constant luminance is the way to go. Essentially the gamma response should be applied only to the derived luminance rather than to the three colour components. I suppose your feeling on that comes down to whether you think gamma is correcting for the camera response (that's what I was always taught at the Beeb in analogue SD days) OR whether gamma is a tool to give better dynamic range in the dark areas of the picture. I expect that luminance (proper Y as opposed to Y' / "luma") is best kept constant in the case of 12-bit video (where you have so much more dynamic range anyway) but pre-corrected RGB should remain for 10- and 8-bit 4k.
  • Frame rates are defined up to 120FPS with no interlaced frame rates - unfortunately the non-integer rates (23.98, 29.97, 59.94) are still hanging around like a bad smell! The Beeb's Richard Salmon showed a very convincing argument for >100FPS for sports footage. Essentially, as you have more resolution the difference between detail in static pictures and moving scenes becomes objectionable. The problem is that currently HDMI 1.4 only supports a maximum of 30 FPS at 4k and so we're waiting for HDMI 2.0.
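The constant-luminance argument is easier to see with numbers. A sketch using the Rec.2020 luma weights; I've used a plain power law for gamma, which is a simplification of the real piecewise OETF:

```python
# Rec.2020 luma weights, comparing non-constant luminance (weight the
# gamma-corrected R'G'B') against constant luminance (gamma the derived
# luminance Y). A pure power law stands in for the real piecewise OETF -
# a simplification for illustration only.

KR, KG, KB = 0.2627, 0.6780, 0.0593   # the Rec.2020 weights; they sum to 1
GAMMA = 1 / 2.4

def luma_non_constant(r, g, b):       # Y': gamma first, then weight
    return KR * r**GAMMA + KG * g**GAMMA + KB * b**GAMMA

def luma_constant(r, g, b):           # gamma applied to derived luminance Y
    return (KR * r + KG * g + KB * b) ** GAMMA

# Identical for neutral greys, but the two diverge on saturated colours:
grey_diff = luma_non_constant(0.5, 0.5, 0.5) - luma_constant(0.5, 0.5, 0.5)
blue_diff = luma_non_constant(0.0, 0.0, 1.0) - luma_constant(0.0, 0.0, 1.0)
print(grey_diff, blue_diff)
```

The divergence on saturated colours is exactly where non-constant luminance leaks luminance information into the chroma channels - Poynton's argument in a nutshell.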

Monday, June 10, 2013

Amulet with the Linux Wacom driver under sub v3.8 kernel

Amulet are a great company - I believe their KVM-over-IP extender to be MUCH better than Avocent, AdderLink and ThinkLogical's offerings, but like everyone else, once you packetise up DVI and USB and send it over a network you are going to have to deal with the funny effects of devices and drivers that play fast & loose with the specifications. This is from their engineering team;

The issue with the Wacom tablet and Red Hat Linux has been traced to a bug in the Red Hat Linux Wacom driver. The driver has a very obvious bug whereby inward data is declared as outward. The Wacom tablet simply ignores this when it is connected directly; although this works, it is actually because the tablet's USB stack is sloppy. The rack card, however, cannot ignore anything (else we wouldn't be able to operate at all), so it believes what the Red Hat driver is saying and waits for an event which then never happens.

As detailed below in our analysis of the Red Hat and main Linux code drivers this issue has been resolved in the latest Linux build, with the fix implementing exactly what we would expect to see. This however has not yet found its way in to Red Hat, although may do so later this year, again as per the details below.

To enable use of the Wacom tablet on the current version of Red Hat (5.3) you are using, we have added a workaround to the rack card firmware. It appears to work but will require testing in the lab over the next couple of weeks before we can release. The workaround has to circumvent many checks in the code that are designed to catch driver bugs just like this. As a result the workaround is very specific to Wacom devices and this exact bug, so we can't guarantee it will work if anything changes.

Red Hat 6.3 (Amulet Hotkey test version)
  • Red Hat version - Red Hat 6.3 (Santiago)
  • Kernel version - 2.6.32-279.el6.x86_64
  • RPM package - 2.6.32-279.el6.x86_64
  • Wacom driver source - wacom_sys.c
  • Driver bug - Incorrect bitmask causing incorrect data direction expectation in USB driver.
Code copied below shows the addition of the correct bit mask (USB_DIR_IN) in the latest Linux driver
  • Code in current driver - USB_REQ_GET_REPORT, USB_TYPE_CLASS | USB_RECIP_INTERFACE
  • Code in latest Linux v3.3 driver - USB_REQ_GET_REPORT, USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE
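The difference between the two lines is clearer if you build the bmRequestType byte up from the standard USB constants (values from the USB 2.0 spec; this is my own illustration, not Amulet's or the kernel's code):

```python
# bmRequestType for a USB control transfer is an OR of direction, type
# and recipient bits (constants as defined in the USB 2.0 spec).

USB_DIR_IN          = 0x80        # bit 7 set: device-to-host
USB_TYPE_CLASS      = 0x01 << 5   # 0x20: class-specific request
USB_RECIP_INTERFACE = 0x01        # addressed to an interface

buggy = USB_TYPE_CLASS | USB_RECIP_INTERFACE               # 0x21: host-to-device
fixed = USB_DIR_IN | USB_TYPE_CLASS | USB_RECIP_INTERFACE  # 0xA1: device-to-host

# GET_REPORT returns data FROM the device, so the direction bit must be
# set; without it the request declares the data as outward - which is
# the bug the rack card trips over.
print(hex(buggy), hex(fixed))
```

A real tablet shrugs off the wrong direction bit; the rack card, which has to take the declared direction at face value, cannot.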

Red Hat 5.3 (Client version)

Although version numbers differ from the Amulet Hotkey test version, the Wacom driver is the same and has the same bug present
  • Latest Red Hat release (6.4)
  • RPM package - kernel-2.6.32-358.el6
  • Wacom driver source - wacom_sys.c
  • Driver bug - Still present as per RPM package 2.6.32-279.el6.x86_64

Main Linux code base
  • Version - v3.2.45
  • Driver bug - Present
  • Version - v3.3
  • Driver bug - Resolved
Future Red Hat 

  • Wikipedia gives the time frame for the next major RHEL release (7.x) as Q3/4 2013
  • We can't be sure whether this would incorporate the fix to the Wacom driver however Wikipedia states "Red Hat Enterprise Linux 7 (?), 2013-6+ Will be based on Fedora 18,[17] which as of March 2013 uses Linux kernel 3.8.[18]". This would indicate that the upcoming release of Red Hat 7 will fix the issue, as it will use a newer kernel, v3.8

Friday, June 07, 2013

Timecode Toolbox - iPhone app review

I do occasional reviews for Sarah Lane's show iFive for the iPhone on the TWiT network - here's a recent one.

Tuesday, June 04, 2013

Fixing Macbook & MacbookPro computers

Pretty much every portable Apple computer I've owned has had a hard drive fail and the optical drives all go (normally a week outside of a year old!) - although they aren't hard to fix it does pay to have a walk through and this is where iFixit come in - they have guides for most machines and they carry the parts.
Before you start pulling current-gen Mac laptops apart make sure you have the necessary screwdrivers. Mine required the following (all in watchmaker sizes): Pozidriv, pentalobe, Torx and Trilobe.

Monday, May 27, 2013

New Engineer's Bench Podcast "Traditional Video QC with Tektronix"

Hugh and Phil go through some of the principles of traditional video QC using the Tektronix WFM and WVR series test sets. Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Saturday, May 25, 2013

Failure of the BBC's "Digital Media Initiative" and other large IT projects

I still describe myself as a Broadcast Engineer rather than a project manager - and in fairness I do spend more time looking at cable-schedules and schematics than Gantt charts. However, I am often responsible for other people's money in achieving what they want in TV and data facilities. Compared to the BBC DMI project the largest project I've had overall financial responsibility for came to a bit more than 1% of the size of that gig, so I am by no means an expert. However - I have worked for dozens of large and small broadcasters and I think I've seen some of the best and worst aspects of other people's project management styles.
I feel sorry for John Linwood - the BBC's CTO who has been suspended over the whole debacle. It's telling that the Beeb now have a CTO rather than a Chief Engineer as the latter term implies that you've had a career in broadcast engineering - you've calibrated monitors, fixed studio cameras (and then racked them in live productions), installed Media Composer (and supported the editors), replaced the head-drums in VTRs as well as the myriad other bits of experience that the top technical job at the world's most prestigious broadcaster would imply. John is a similar age to me and so I'd expect him to remember mk.2 telecines, tubed cameras and 1" VTRs, but he's a software guy; ex-Yahoo, ex-Microsoft, and it's there you'll find both the justification for him being the Beeb's top technical guy and the reasons for the pickle he's now in. The DMI was a software project BUT software projects have a huge propensity to fail. Around 30% of software projects in commercial industry fail - but that's OK; you have to take risks and great things don't come if there wasn't a danger of failure (that's why it's a risk!). However - in government IT projects the risk of failure is an awful lot higher - typically 70% for publicly-funded IT projects. You'd expect project managers who are being paid with tax-dollars to be more risk-averse but the opposite seems true. Clearly this has been the case at the BBC with the DMI.
Ross Anderson has written extensively about this kind of thing; he did an interview with Stephen Fry on Radio 4 a couple of years ago which makes a lot of these points; as an aside his brilliant book "Security Engineering" is now in the public domain.

Here are a few thoughts on big-organisation technical projects;
  • The danger of "not invented here" - when I was at the BBC most custom projects (i.e. equipment and solutions not bought in from external manufacturers) were often specified and implemented by Research Department. In fact too many projects were, because there was an attitude of "nobody understands what we do except us", and so consequently too many things were done by people who might be doing it for the first or second time (chief engineers of facilities will have designed/built maybe two or three machine rooms in their twenty-year career - I've done it dozens of times in the last decade!).
  • "Gold plating" everything - In the late eighties/early nineties there was a guy in BBC Research Department who liked the DEC MicroVax 3100-series running VMS (at the time it was a £35k industrial computer) and so whenever a project needed an external computer we'd see a MicroVax appear in the machine room. Automated upload of new weather symbols to the Quantel Paintbox - throw a MicroVax at it. Download realtime financial data from Reuters to make the Aston strap for the breakfast news financial segment; control the caption generator with a MicroVax! However - when it fell on the maintenance department to make something work they typically used a BBC Micro (we all had them at home and knew how to program/homebrew them) - the ASTED project to control external logo generators and make them work with the Aston caption machine for news programmes was all done via a £350 BBC Micro, not a £35k MicroVax.
  • In a similar vein, I had a friend who was working on the NHS unified records system for EDS - he spent eighteen months working on a new secure-VPN protocol; that's not a problem that needs solving! That one is already done, with a choice of closed and open-source solutions.
  • Don't despise project management methodologies. Lots of engineers have a distrust of Prince2 and its ilk, and for sub-£1m projects the overheads are too onerous, but there are so many valuable lessons that proper project managers bring. In a recent discussion with a colleague about how one project had turned quite painful we realised that the PM101 principle of fully involving the users had been almost totally missed by the customer, and it was proving hard to get the poor editors and assistants to buy into the new system once they saw the implementation.
  • "Good, Fast, Cheap - choose only two" - this is the warning PMs often give and it's regarded by customers as a prohibition rather than a good principle to run a project by. The tension of having to hold those three ideas and adjust the sliders as necessary means you don't push them all to the max and expect the best outcome. If they had heeded this principle there is no way they would have let the thing drag on for five years. Timely projects are the best kind.
  • Specify, specify, specify - the more you leave to fortune or to the contractors' discretion means you have too many undefined problems.
  • Don't disregard experience - the engineer who taught me all I really know about broadcast SI - Chris Clegg - had one thing he used to say about designing facilities; "...given a crew of qualified operators they should be able to run this studio/OB truck/machine room with only fifteen minutes instruction from the usual operator". You can't do that without intimately understanding how TV workflows interact with facilities and how operators and assistants operate. The point is that the best broadcast project managers have years of experience in those areas. Professional project managers (who aren't experienced engineers) don't have those insights.
One thing I can't understand is why Siemens were initially given the project; are they renowned for any of the things that were trying to be achieved? - big database, large video storage, transcoding, version management and the underlying filesystem to make it all hang together? Given that it also has to work with your editing, transmission, and VOD platforms why on Earth not get Avid, Isilon, Google etc involved; OneFS for the backend and GFS for the database sound like they were designed for this (and they also run most of the big Internet media sites).

It'll be interesting to see if the BBC takes on an experienced engineer as their next CTO.

Friday, May 24, 2013

AJA HD10AMA analogue audio pinouts


The HD10AMA is a great de/embedder with two HD/SDi outs; very useful gadget. Here is the D25 details.

Tuesday, May 21, 2013

Managing multiple identical sound devices in OS-X

I use Skype (although I may be looking for alternatives due to Microsoft's proved snooping - see here) and I like to have two sound devices so that the radio can keep playing through my speakers without me having to reach for the volume control when I take a call. Also, when podcasting, I use the same laptop to run the presentation, keep the Skype call going and make the recording (that chews up three sound devices!). So along with the laptop's internal sound chip I have two cheap external USB dongles. Since they are identical they show themselves with the same name in all apps, and invariably (especially if I've been away from my desk for a day and re-booted the OS without any of the USB devices attached) Skype picks up the wrong sound devices as default. It's trivial to change back but I always get it wrong ("...is the headset the first or second one?!")

In Utilities is the Audio MIDI setup application (which I've never used before) where you can set "aggregated sound devices" - presumably to allow the same audio to play through several outputs? But - it allows you to create a proxy for a device and give it a sensible name.

So, I made new devices for the two USB sound dongles and gave them sensible names.
This now means that when I look at available sound devices in other apps (particularly Skype) I see things I can distinguish!



Saturday, May 11, 2013

Engineer's Bench podcast - new episode; "File based QC for TV delivery"



Hugh and Phil go over some of the basics surrounding delivery of TV shows as files. We then do a QC pass using Vidchecker. Find it on iTunes, vanilla RSS, YouTube or the show notes website.

Tuesday, May 07, 2013

Raspberry Pi - firmware

Apparently they haven't been that thorough on delivering them with up-to-date firmware;


So, many thanks to my colleague Dave "the Don" Poves for the following;

Phil,

I do not know if you know this, but Raspbian will not update the firmware as part of system upgrades the way ArchLinux does. It is a manual process, but you can automate it.

To find out your current firmware release you need to issue:

# /opt/vc/bin/vcgencmd version

If the build is not from the last two weeks it is outdated.


You can update it by doing the following:

$ sudo apt-get install ca-certificates git-core

$ sudo wget http://goo.gl/1BOfJ -O /usr/bin/rpi-update

(The short URL points to this one, it just saves a lot of typing: https://github.com/raspberrypi/firmware/tree/master/boot)

$ sudo chmod +x /usr/bin/rpi-update


Once the above script has been installed you can get the latest firmware by typing:

$ sudo rpi-update



Also I am a big fan of Mosh (http://mosh.mit.edu/) and the use of keys so you do not need to type your credentials every time you remote into your Pi. It is a really amazing product. I am getting a new one for some more nefarious objectives. :)

Regards,
 David


There we go, much better!

Saturday, May 04, 2013

Adventures with the Raspberry Pi

Unless you've been under a rock for the last few months you can't have failed to have seen the Raspberry Pi; that credit-card sized ARM-based computer that sells for £25. Although it's modestly powered it does stand up as a small Linux computer for server, desktop or media-centre use. Remember; we had servers twenty years ago when no computer on Earth was this powerful!

So I've been monkeying around with a couple of these boards for a week or so and here are a few observations. To start with a few initial notes;
  • Make sure the SD card you use is both properly formatted and has a valid OS image on it; I battled with one board for days before re-formatting it and sticking a new copy of Debian on it. The board literally looks dead if the boot-loader is corrupt.
  • I didn't find any of the problems a lot of online folks claim are an issue - power supplies; I've tried everything from iPhone, Kindle and no-name USB chargers through to the USB service port on the back of my TV. All powered the boards fine. I haven't measured it but I imagine we're talking less than an amp at 5v.
  • It's a lot easier to use one of the many tools to format the card and copy on the OS image. I've been using RPi-sd card builder v1.1
So - there are numerous pre-compiled OS images for download and I suppose it depends on what you want to do with it. My first port of call was a Debian build called "Wheezy" which seems to be the general Linux distro of choice that comes with the KDE desktop. The card must be partitioned into a FAT32 formatted boot partition (64 megabytes) and the rest of the card (at least two gigs) as an ext4 Linux partition. YaST allows you to max-out the main partition when it runs. So long as you know what IP address gets assigned at boot-time you can SSH into the board;


So with Debian installed you can use it as a proper desktop machine or a network server. There are two USB2 ports and so it makes for a very powerful NAS head for a regular USB drive.

The other application that seems to attract the most attention is XBMC, for which there are several builds. I stuck OpenElec onto another SD card (this is the joy - you can switch £5 SD cards around and you've got a new machine).



The XBMC builds don't come with a desktop or even an X server (they write straight into the display buffer - and the board has hardware MPEG2 & H.264 decoding). But it is just the job for a Media Centre.
Another feature of the OpenElec build is support for the Apple Airplay protocol so you can "throw" media playback to it from iTunes or iPad/iPhone;


Several folks online have commented about the poor output quality of the analogue audio 3.5mm jack. The HDMI audio is fine and although the mini-jack ain't great I discovered a couple of things that improved it to tolerable;
  • Don't use a USB hub to power the Pi - most don't have great regulation of the 5v rail, which also tends to be noisy. A Kindle or Apple wall-wart suffices.
  • Use a good-quality USB cable; I had a cheapie cable powering the board and it covered the audio output in hiss. A Kindle cable did the job.  



Wednesday, April 17, 2013

Do you ever need timecode?

...and all you've got is your laptop or MP3 player? It happens to me sometimes, and then I'm glad of having a few minutes of timecode as a recording. It's here, on my Dropbox