Saturday, June 24, 2017

Replacing the laser on an Arcam 62 CD Player

A few months ago I bought a CD player so that, for the first time in ages, I had a decent hi-fi set-up at home.  Prior to this I was using a Chromecast Audio plugged into my Denon amplifier, streaming high-bitrate MP3 over UPnP, which isn't bad at all; but since most of my collection consists of physical discs, and secondhand players are cheap these days, I thought I'd take the plunge.  My local Oxfam had an Arcam Diva 62 player for £50, so it was in a good cause too.

Unfortunately the machine quickly failed.  It started with some clicks during playback; then it took a long time to read a loaded disc, and there were more unwelcome noises as the disc played.  Finally it refused to play any disc at all.  This is usually a problem with the laser (either dirty or simply worn out).  I'd grown to like the player, and replacement lasers are cheap, so I ordered a new one from eBay.

The laser that fits the Arcam 62 is a Sony KSS-213B.  I'd read elsewhere that it's a KSS-213C, so that's what I ordered, but the two are interchangeable.  I actually ordered the complete mechanism (KSS-213CDM) just in case I could replace the whole thing, but it wasn't the same size so I just used the laser.  Here's how it's done.

You'll need Torx T20 and T10 screwdrivers, a soldering iron (and ideally a solder pump), and about an hour.

Unplug the CD player.  If you have an anti-static wrist strap, so much the better; otherwise touch a radiator pipe from time to time to earth yourself.

Firstly remove the four T20 screws, two from each side of the unit, then the three T10 screws at the top of the back panel.  The cover will then slide off easily.

You then need to remove the three T10 screws holding in the main transport mechanism: there's one on each side (one shown below) and one at the rear.


The transport is connected to the main board by two ribbon cables.  You'll need to gently unplug the smaller, white cable at the main board end.  The grey cable needs to come out at the other end (the transport mechanism); just very gently pull the cable straight out, taking care not to kink it.
Unplug the smaller cable as marked above
Simply pull out the larger cable.  To refit it, push it gently back in.

The transport consists of three pieces: the top, the drawer and the main drive.  The top is held in place on each side by a couple of triangular prongs; squeeze these together gently and you'll be able to lift the top - and drawer - off.  It's likely at this point that one or more of the cogs that open the drawer will fall off.  Don't worry too much, so long as you reassemble them correctly when you put everything back together - see the picture below.


On to the laser replacement.  All that keeps the laser in place is the metal rod.

This can be eased out by pressing the white tab at the left end.

Carefully feed the rod out until it is fully removed; you can then gently take out the laser.  Disconnect the ribbon cable (like the grey cable earlier, this is a pull-out, push-in job).  Before fitting the new laser it's very likely that it will need a bit of soldering.  Most lasers come with a protective "short circuit" - I don't know why (perhaps to stop them shooting down spacecraft in transit?) - and this needs to be removed.  Below I have the two lasers side by side: the old one on the left, and on the right the new one with the protective short circuit in place.  With a soldering iron and (if you have one) a solder pump, remove as much solder as possible until there's a clear break between the two halves.

With that done, you can fit the new laser.  From here on everything is - as the Haynes Manuals used to say - the reverse of removal.  Connect the ribbon cable to the new laser and feed the small metal rod back through until it's fully in place.

Carefully reassemble the transport mechanism, starting with the drawer and then the top.  If it doesn't feel like it's coming together double-check that all the cogs fit together and that the white one is the right way up.  

Screw the transport back into the main body of the CD player, refit the cover, and you should - with luck - have a player that's as good as new.

I plucked up the courage to do all of this after reading this post from www.hifigear.co.uk about replacing the laser on a similar model.  In the end about the only part that was the same was the steps to remove the front cover - but anyway, I found that post helpful and I hope you find my write-up helpful too.



Monday, January 02, 2017

Messing around in UPnP with socat 

For the last year I've been working on a UPnP server framework in Ruby.  Mostly I've only been able to do this in short bursts punctuated by long periods of nothing - as the GitHub statistics will show.  But over the quiet bit between Christmas and New Year I've been able to make some progress, and the server will now respond to SSDP search requests.

To test it I've been using gupnp-universal-cp, a GUI that can act as a client for any UPnP server.  But it's not (or not easily) scriptable, so I can't write any automated tests.  Thanks to Javier Lopez I've found a scriptable alternative: the socat tool, which sends network packets to an address and optionally stores the response.

My first attempt was the command

socat -T5 -t5 STDIO UDP4-DATAGRAM:239.255.255.250:1900,ttl=4,sourceport=54321 < socat.in > search.out

which sends the contents of the socat.in file to the UPnP multicast address, the idea being that any servers out there would send a discovery response, which gets put into search.out.  You need to pick a random, free port (I chose 54321) for the servers to respond to.

socat.in contained a standard M-SEARCH request (everything between the two lines below)
_____________________________________________________________
M-SEARCH * HTTP/1.1
HOST: 239.255.255.250:1900
MAN: "ssdp:discover"
MX: 1
ST: ssdp:all

_____________________________________________________________


However I found that only some of the servers I was expecting to respond actually did.  I ran the socat command again wrapped in strace and saw that the responses were being generated and picked up by socat, but weren't being written to the output file.

Looking at the socat documentation and the strace output in detail, the reason became clear: some UPnP servers respond to an M-SEARCH from port 1900 and some from a random, ephemeral port.  The UPnP standard doesn't say which is correct, but the socat command above will only process responses whose source port is 1900, even if they are sent to the correct destination port (54321 in this case).

I'm not sure whether this behaviour can be overridden in socat, so to work around it I tried the following:

socat -T5 -t5 STDIO UDP4-SENDTO:239.255.255.250:1900,ttl=4,sourceport=54322,reuseaddr < socat.in & socat -T5 -t5 STDIO UDP4-RECV:54322,reuseaddr > socat.out2

This runs socat twice: once to send the M-SEARCH request, then immediately a second instance to receive the responses from any source port.  This worked much better, with every UPnP server on my network sending a response that was recorded in socat.out2.
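The same search can also be scripted in Ruby (the language of my framework) using only the standard library.  This is just a sketch: the request builder mirrors the socat.in file above, and I've guarded the live network part behind an SSDP_LIVE environment variable so the script can be loaded safely by automated tests.

```ruby
require 'socket'

SSDP_ADDR = '239.255.255.250'
SSDP_PORT = 1900

# Build a standard M-SEARCH request. SSDP requires CRLF line endings
# and a blank line to terminate the header block.
def build_msearch(mx: 1, st: 'ssdp:all')
  ['M-SEARCH * HTTP/1.1',
   "HOST: #{SSDP_ADDR}:#{SSDP_PORT}",
   'MAN: "ssdp:discover"',
   "MX: #{mx}",
   "ST: #{st}",
   '', ''].join("\r\n")
end

if ENV['SSDP_LIVE']
  sock = UDPSocket.new
  sock.send(build_msearch, 0, SSDP_ADDR, SSDP_PORT)
  # Collect responses for up to 5 seconds of silence; unlike the single
  # socat invocation, recvfrom accepts replies from any source port.
  while (ready = IO.select([sock], nil, nil, 5))
    data, addr = ready[0][0].recvfrom(65_536)
    puts "--- response from #{addr[3]} ---"
    puts data
  end
end
```

Run it with `SSDP_LIVE=1 ruby msearch.rb`; the responses can then be parsed for whatever assertions a test needs.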


Thursday, November 17, 2016

IPSO Facto Crapso




On Sunday 22nd May 2016, a month before the EU Referendum, the Express headline screamed "12M TURKS SAY THEY'LL COME TO THE UK".  The subheading began "Those planning to move.." (emphasis mine) and the story was based on a poll that had asked respondents whether they "would consider moving" to the UK if Turkey ever joined the EU.

Think about that for a minute. "Would consider" in the small print was enlarged to "planning" in the subhead and further to "will" in the main headline (without any single quotes to paraphrase, which is the headline-writer's usual way of qualifying a dubious allegation).

I have, in the past, considered becoming a concert pianist (I've never even got as far as Grade 1).  It doesn't mean I'm even planning to take lessons, much less that I will ever take to the stage.

The headline was clearly a blatant lie.  And you didn't have to be an Express purchaser or regular reader (I'm not) to read it - it would have been on display in shops everywhere and shown on newspaper review segments on television.  So it would have reached a much wider audience than the 370,000-odd people who bought the paper.

In January 2016 IPSO amended Clause 1 of the Editors' Code of Practice to make it clear that inaccuracy included publishing headlines that were not supported by the text.  An open and shut case here, surely?  So I - and apparently many others - complained.

The Legal Adviser at the Express acknowledged the complaint later that week.  On 16 June IPSO informed me that it was carrying out its own investigation.  Funnily enough, after three weeks of silence, the Express got back in touch a few hours later to say that
"It is now clear that the question we asked was flawed and the data produced by the polling company was therefore wrong."
and that a correction would be published on 19 June, the last Sunday before the referendum.  My issue wasn't with the nature of the question (although plenty of others had, with good reason, complained about it) but with the way the results had been blatantly misreported in the headline.  I replied to the Express to that effect.  They did not respond.

On 19 June they published their correction on page 2 of the newspaper, and halfway down the (incredibly long) homepage online.


Compare the size, placement and language of the correction headline ("Turkey poll findings flawed: clarification") to those of the original.  Never mind the text (which, incidentally, was spread across three columns rather than the four of the "news" article above - was this a deliberate attempt to put people off reading something that already looked rather dense and forbidding?); newspaper articles are emphasised by their headlines.  If they weren't, why would tabloids use so much front page space on the headline and so little on the actual article?  Prominence is determined by the headline.

The correction headline takes up approximately one tenth of the space of the original, and it's on an inside page.  I couldn't find advertising rates for the Express, but the Sunday Telegraph (who do publish their rate card online) charge a 60% premium for a front page advert over an inside page one, so it's reasonable to assume that a front page article is around 60% more prominent than an inside page equivalent.  Put another way: one tenth of the space, times the 1.6 front-page factor, makes the correction about sixteen times less prominent than the original.

IPSO's code says that corrections must be published "with due prominence" but does not define what that means.

The substance of the correction was - in between a lot of self-justification about sample sizes and the like - that by including family members in the question about considering migration, there was a risk that some responses would have been double-counted (had two members of the same family been questioned) or under-counted (had a respondent answered on behalf of his or her entire family).  So the question was indeed flawed, but I wasn't unhappy about the niceties of the polling method; I was unhappy about a headline that was an outright lie.  I put this to the Express and asked for confirmation that their own internal complaints procedure had been exhausted.  Again, they never bothered to reply.

IPSO carried on with its own investigation.

And on.

And on again.

Eventually, in late August, they advised me of the newspaper's response:
"Are you asking me whether I accept that the headline and sub-heading of an article that was inaccurate, were inaccurate because they could not be supported by the inaccurate text? I am not sure what the point of this further complaint is"
It's pretty clear that the Express weren't going out of their way to engage with the issue; I was surprised (but perhaps shouldn't have been) that their attitude to their own chosen regulator was so dismissive.

IPSO advised me that the matter would go before their Complaints Committee and that I would hear from them after the meeting on 12 October (it would be dealt with at a formal meeting rather than via correspondence).  So they were making all the right noises about taking this seriously and I was hopeful that they would agree with my argument.

I was naive.

On 31 October they advised me of their draft ruling; it's now published on their website.  And it's wrong on so many levels:

They have recorded my complaint as "upheld" because they agreed that the poll question (which I hadn't complained about) was flawed.  However they didn't uphold what I did raise: the misleading headline and the lack of prominence of the correction.

"..the page 2 clarification was sufficiently prominent, given its comprehensive nature, and bearing in mind that the newspaper had acted in a pro-active manner and, crucially, before the Referendum to address the inaccuracy quickly"

Apparently the prompt and proactive nature of the Express' published correction counts towards IPSO's measure of "prominence".  Prompt, in this case, meaning four weeks after the original article - more than enough time to investigate - and at the last possible moment before the EU Referendum vote on 23 June.  And proactive meaning after receiving a substantial number of complaints, and after IPSO had launched its own investigation.  Never mind that the correction headline was tiny and buried on an inside page.  Hundreds of thousands of people who don't buy the paper but saw the headline in shops or on TV would never see the correction.  But that's fine.

And the headline not being supported by the text of the article was, apparently, not a problem either, because the correct (or correct-ish) text appeared early in the article.

 "[The committee] noted that, in addition to setting out the exact wording of the question asked of respondents in the body of the text, the second paragraph made clear that those asked “would consider relocating” were Turkey to join the EU"

So their overall conclusion was that the action already taken by the Express was, conveniently, sufficient to atone for all the failings in the original article.

I requested that IPSO review both of these decisions, but they have refused - they only allow reviews if the original investigation was "procedurally flawed"; since their procedures are opaque there's no way of telling whether this is the case or not.

So what have we learned, or had confirmed, from this exercise?
  1. An earlier IPSO ruling about a Sun headline based on an inaccurate poll (a ruling also widely criticised for the prominence of the correction IPSO negotiated) doesn't appear to have inhibited newspapers from continuing to construct inflammatory rubbish from dubious polling.
  2. If a newspaper wishes to get away with a false headline, it merely needs to introduce a minor error into the story.  That way it can publish an obscure correction for the minor error, and escape further sanction for the major one.
  3. IPSO takes no account of people who have seen, but not bought, the paper when determining the impact of a falsehood and the way it should be corrected.
  4. A headline that's not supported by the text isn't misleading if the truth appears fairly early in the article.  In what circumstances IPSO would actually dare to determine that this part of the Editors' Code had been breached, I really don't know.
  5. A correction that's about sixteen times less prominent than the original article has "due prominence" according to IPSO. One wonders how small and hidden away a correction would have to be to fail their prominence test.
  6. Newspapers - or at least the Express - treat complainants and IPSO with contempt.  Remember, twice I didn't even get an acknowledgement from the Express, and their attitude towards IPSO was unhelpful to say the least.
  7. IPSO should rename themselves IPSLOW to set people's expectations.  The original article was published more than five months before their eventual ruling.
  8. They do not permit their reasoning - however illogical - to be challenged.  
  9. For some reason IPSO are keen to make it look like they have brought a satisfactory conclusion to this by claiming to have upheld a complaint about a controversial article - with complete disregard for the truth.
Is this regulator really fit for purpose? 

If you'd like to tell IPSO what you think of them you can reach them at inquiries@ipso.co.uk or @IpsoNews on Twitter.  In closing I'd like to acknowledge that the staff I've had contact with at IPSO have been unfailingly polite throughout, but that counts for little when the organisation as a whole is so laughably complacent and ineffective.

Friday, May 23, 2014

One more thing about Darktable and processing negatives..

I forgot to mention that Darktable does actually come with a module that inverts negative images automatically.  To do this you need to feed it a small sample of unexposed but developed film (e.g. the borders of a negative, or the blank frame at the start / end of the roll); it will then - theoretically - invert and neutralise that colour tint.  I say theoretically because I've never got it to work to my satisfaction; hence my method (which also allows processing in batches).

Wednesday, May 21, 2014

Converting Negatives using Darktable

Yes, it's been a while.  Again.

Over the past year I've been digitising my negatives using the method I outlined in earlier posts, and I've found a way of converting them in batches that gives better results and more control over the process.  Part of what drove me to look at alternatives was the discovery that different rolls of film weren't being converted consistently using my original method; some came out yellower or bluer than others.  I guess this is something to do with the age of the negative, the brand of film, or how it was developed - or a combination of all three.  My new workflow allows easier fine-tuning of the results and processing of batches of images in one go, which is the best of both worlds: you need only tweak the settings for one image in a set of negatives from the same film, then apply those settings to all the others.

The software I'm using is called Darktable.  It's open source (free) but only runs on Linux.  If you're not comfortable installing Linux directly onto your machine then you can - as I have done - use VirtualBox or similar virtualisation software to create a virtual machine to install Linux onto (I use the Ubuntu flavour, but there are loads to choose from).  The Darktable website has a guide to installation on Linux (your favourite version of Linux may already offer Darktable for installation, but it's likely to be an old version, so it's best to go to the website for the latest); and with VirtualBox it's easy to share folders between the host (Windows) machine and the guest (Linux) system.  Darktable won't run as fast in a virtual environment because it will have less memory to play with, and won't be able to exploit the processing power of your graphics card (assuming you have one capable of supporting OpenCL with 1GB or more of video RAM), but it works.

Darktable will run, grudgingly, on 32 bit computers with 2GB of RAM but 64 bit and more memory makes it a lot faster and more stable.  

I'll walk through the process of converting negatives using a couple of sample pictures.  Firstly, start Darktable, then on the top left under "import", click the "folder" button.  Navigate to the folder containing your images and press "open".  Your images will appear in the "lighttable" view which is a collection of the images you've imported.


Double-click on an image to open it for editing in "darkroom" view.  The left-hand side contains information about the image and the edits that have already been applied to it (more on this later); the right hand side allows you to access the various different editing tools.  The one we're most interested in is "tone curve".



The "tone curve" module maps input colours to output ones.  The curve (initially it's a straight line) describes how the mapping is made from the point on the x-axis of the graph to the equivalent point on the y-axis.  A straight line from bottom left to top right maps every value to its identical equivalent (ie doesn't change the image).  A straight line from top left to bottom right will effectively invert the image - high values (light colours) will get mapped to low values (dark) and vice versa.

By default the tone curve only works on the brightness (Luminosity) of the image.  However underneath the graph there's an option called "Scale Chroma" which you should switch to manual; this will then allow you to switch to the tabs above the graph marked a and b, and make changes to them.  Rather than working on brightness, the "a" graph works on the magenta/green characteristic of the image and the "b" graph on the blue/yellow.  So changing the line from bottom-left/top-right to top-left/bottom right on all three graphs will effectively invert the brightness and the colours, as you can see below.
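As a sketch of what flipping all three curves does (my own illustration, not darktable code): in Lab, lightness L runs 0-100, while a and b are centred on zero, so a full top-left to bottom-right line amounts to this per-pixel mapping:

```ruby
# Point-wise inversion performed by a top-left -> bottom-right tone curve.
# L (lightness) runs 0..100; a and b are centred on 0, so they simply negate.
def invert_lab(l, a, b)
  [100.0 - l, -a, -b]
end

# A light, slightly magenta pixel becomes a dark, slightly green one:
p invert_lab(90.0, 15.0, -10.0) # => [10.0, -15.0, 10.0]
```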



However, there's further fine-tuning required.  From here, go back to the lighttable by clicking on "lighttable" at the top right of the screen.  The image in the lighttable should change to reflect the inversion.

Darktable applies, by default, a set of enhancements to every RAW image.  These are "base curve" and "sharpen", and are meant to make your pictures appear in Darktable more like they would look straight from the camera in JPG format.  However "base curve" is another application of the tone curve; since we're trying to invert the image this is an interference we can do without.  Personally I prefer to leave sharpening for later as well.  So we need to get rid of these enhancements - unfortunately Darktable doesn't make it intuitive.  The way I've found that works is to select the edited image in lighttable, then press the "copy" button on the right (under the "history stack" menu).  Then un-tick "base curve" and "sharpen" and press OK. 


 Now click on another image in the lighttable, and under the "history stack" menu again change the option next to "paste all" to "overwrite".  Then click "paste all".  This will apply just your tone curve to the image, and remove the base curve and sharpening effects.  You can now click "copy all" to take the tone curve effect, then choose "invert selection" from the "select" menu and finally click "paste all" again to apply the change to all the other images in the lighttable.  All your images should now have the inversion tone curve applied to  them, and nothing else.

Now it's time to fine-tune.  Double-click on any image to open it in darkroom mode, and start adjusting the curves on the L, a and b graphs.  Your image will lack contrast, and you'll see from the histogram on the L graph that the pixels are all around medium brightness.  To increase the contrast, make the curve steeper; a shallower curve will reduce it.  Ensure that the curve starts and ends outside the histogram, otherwise you'll make the dark areas of the image too dark and/or the bright areas too bright.  Drag the curve downwards to make the overall image darker, upwards to make it brighter.


Steepening the curve on the "a" graph will increase colour saturation for magenta / green colours.  Moving the midpoint of the curve upwards will make the image more magenta, moving it lower will make it greener.  You can also drag the top half of the curve to adjust the magenta parts of the image (make them more or less saturated) whilst leaving green alone, and vice-versa.  The "b" graph operates similarly for blue / yellow.  Because I've used blue and green flash gels when photographing negatives to neutralise the orange colour cast I usually find that only small adjustments are needed upwards and downwards, but the curve needs to be quite steep to bring back the necessary colour saturation.


Clicking on the pipette at the top right hand corner of the tool will bring up a small square on the image which you can click and move around; numbers will appear on the graph showing how the input value of the pixel at the centre of the square maps to the output.  For the "a" and "b" graphs, the value 0 means colour-neutral (ie white or grey).  By using this on parts of the images known to be grey you can see how the curve should be adjusted to get a perfect result.

Beware of over-saturating the colours in the image (making the "a" and "b" curves too steep).  I've also found that parts of a picture meant to be dark brown (e.g. wood furniture) can be problematic to get right: if the "a" graph isn't carefully adjusted you can end up with greenish patches.  The solution is to raise the top of the curve (make things more magenta) until the green just disappears, but not so much that any faces in the images start to become blotchy.

There are other options in Darktable for sharpening and removing image noise; I've found that "denoise (profiled)" and the "equalizer" tool using the "denoise (subtle)" preset work well for this.  From this point forward, make whatever adjustments you want to perfect the image.  You can then go back to lighttable mode and use the copy all / invert selection / paste all method to apply the same changes to all your other images, before exporting.

Sunday, January 27, 2013

Digitising old negatives - Part 3 - processing

At the end of the last post I'd got a collection of digital images of colour negative film which I needed to turn into the digital equivalent of positive prints.  And I wanted to do it in a way that's as time-efficient as possible, so that I wouldn't have to work on each image individually unless it was to make some fine adjustment to a particularly valuable picture from the past.  Here's what I did.

Firstly, I took a sample of the images in JPG format and loaded them into the GIMP, which is a free equivalent of Photoshop (if you have Photoshop, Lightroom or something like it, that will work just as well).  I then flipped the image vertically (in GIMP the menu steps are Image..Transform..Flip Vertically) and inverted the colours to turn the negative into a basic positive (Colors..Invert).  This gave me a somewhat washed-out looking image with a bluish tinge (because I hadn't completely eliminated the orange tint from the original negative; the remaining orange had been inverted to blue).

Using the Colors..Curves commands I played around with the curves for the different channels (reducing the blue, increasing the red, and changing from a straight line to more of an S-shape) until the colour tone of the picture looked right.  I then saved those curve settings as a preset and applied it to a few more images I had taken; if one didn't look right I'd change the settings, save again, and undo / reapply to the previous images until I had a set of curve adjustments that did a good job across a wide range of pictures.  Here, for reference, is what worked for me.


I also used the Levels tool to change the gamma setting from 1 to 1.5; this basically made the mid-tones of the images brighter which better reflected the prints I was comparing them to (although those prints were - I think - quite contrasty.  There's more shadow detail coming out of the digitised negatives than was ever on the prints).

These adjustments are all very well, but they would be time-consuming to apply (even with saved presets) to hundreds of images and the GIMP is only an 8-bit editing tool which means that some colour detail will be lost when using curves in this way.  The final step was to set up a way of adjusting the 16-bit RAW files from my camera, with as little human intervention as possible.

I did this using a free command-line tool called ImageMagick (IM).  IM has a lot of good ways of adjusting pictures, and one of the most useful for this job is something called a Hald CLUT.  The idea is that you generate an image containing all possible colours, apply whatever colour adjustments you like to that image, and then use it as a means to apply the same adjustments to any other images you wish to process.

The first step is to get IM to generate the Hald image by using the Windows command prompt and entering

convert hald:8 hald.png

Then use GIMP (or equivalent) to make the same adjustments (except the flip) to hald.png as you made to the negatives - i.e. invert the colours, apply the curves and the gamma - and save the result as a .png file.  Although the hald.png file only has an 8-bit colour depth, IM will interpolate when processing 16-bit images against a Hald image, to retain 16 bits of colour detail.

I then created a file called conv.bat (downloadable here, but you'll need to rename the extension from .txt to .bat - Google won't allow me to upload it with the right extension) with the following single line of text

for %%n in (*.tif) do convert %%n -flip hald.png -hald-clut -quality 97 %%~nn.jpg

which means "for each file with a .tif extension, use IM to first flip the file along its vertical axis, then apply the colour adjustments 'encoded' in hald.png, and finally save the result as a JPG file with quality setting 97".
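If you're not on Windows, the same loop is easy to express in Ruby.  This is a sketch, assuming `convert` is on your path and hald.png is in the working directory as above; the RUN_CONVERT guard just keeps the script inert unless you ask it to run.

```ruby
# Build the ImageMagick command for one file, mirroring the conv.bat line:
# flip vertically, apply the Hald CLUT, save as a JPG at quality 97.
def convert_command(tif, clut: 'hald.png', quality: 97)
  jpg = File.basename(tif, '.tif') + '.jpg'
  ['convert', tif, '-flip', clut, '-hald-clut', '-quality', quality.to_s, jpg]
end

if ENV['RUN_CONVERT']
  Dir.glob('*.tif') do |tif|
    system(*convert_command(tif)) or warn "conversion failed for #{tif}"
  end
end
```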


Thereafter, it's a two-stage process with all of your images.  Firstly, convert the RAW files from your camera into 16-bit TIFF images, applying your favourite noise reduction software (I used Sony's Image Data Lightbox software for this; it came with my camera and will happily do a batch in one go).  Although IM will read RAW files directly, the Sony software is better at applying noise reduction.  I set it to apply only noise reduction, and not to try any other enhancements (white balance, exposure compensation etc.).

I copied the conv.bat file to the folder containing both the TIFF images and the hald.png file (you can download mine here), double-clicked on it to run, and it set off happily converting.  Each file only takes a few seconds and the results (based on the negatives I've converted so far) are pretty consistent.

Saturday, January 26, 2013

Digitising old negatives - Part 2 - exposure

In my last post I described how I obtained and assembled a negative or slide copier that will work on a crop-sensor (APS-C) DSLR.  In this part I'll explain how I got consistent results.  I was aiming for three things:

  1. The slide had (obviously!) to be in focus
  2. Exposure had to be more or less correct
  3. Colour negatives have a strong orange tint.  I wanted to remove this as much as possible (I could have left it all to post-processing on the computer, but figured that the less drastic the adjustments I made in software, the better the overall quality would be).
Focusing was easy, because the M42-to-Sony lens mount adapter I have has a "focus confirmation" chip, meaning that when you turn the focus ring on the lens a green dot will show up in the viewfinder when the image is sharply focussed.  This isn't usually perfect, but in this case it's good enough.  I set the aperture on my lens to its widest (lowest number - f/1.7), loaded a negative into the holder, pointed the camera at the light on the ceiling and adjusted the focus until the green dot appeared.  If your adapter or camera doesn't have the benefit of focus confirmation, "live view" or equivalent, then you may have to find the best focus point by trial and error.  Adding one or more extension tubes to a lens drastically reduces the depth of field (the range of distance for which an image is in focus), so be prepared for some fine tuning.

To get a consistent exposure, I decided to use my flashgun.  This also meant I could compensate for the orange tint in the negatives by putting flash gels (tinted pieces of plastic) in front of the flash.  I bought a few sets very cheaply from flashgels.co.uk and, rather than buy a clip for them, found it easiest simply to use the wide-angle diffuser that came with my flash to hold them in place.  My flash (a Sony F42AM) can be triggered wirelessly by the camera, but a cable-attached one would have worked too.

In order to get as sharp a picture as possible, I set the aperture on the lens down to f/8 (most lenses are at maximum sharpness around this point; the increased depth of field from a smaller aperture also meant it would matter less if the focus was slightly off).

I then experimented with various flash power settings and distances between the flash and the negative to get a reasonable exposure (one where the histogram displayed on the camera when reviewing pictures bulges more or less in the middle).  I found that setting my flash (which has a Guide Number of 42) on half power and placing it the length of a standard Bic biro (14.5 cm) from the negative gave a good result.  The inverse square law applies here (light intensity falls off by a factor of four as the distance between flash and subject is doubled), so full power at about 20cm distance should give an equivalent exposure.
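The inverse square law arithmetic above can be sketched in a few lines of Python.  This is just a back-of-envelope helper, not anything the camera or flash needs; the 14.5 cm / half power figures are the ones from my own trial and error above.

```python
# Sketch: equivalent flash distance for a change in power, using the
# inverse square law (illumination is proportional to power / distance²).

def equivalent_distance(base_distance_cm, base_power, new_power):
    """Distance at new_power giving the same exposure as
    base_power at base_distance_cm (inverse square law)."""
    return base_distance_cm * (new_power / base_power) ** 0.5

# Half power at 14.5 cm should match full power at about 20.5 cm.
print(round(equivalent_distance(14.5, 0.5, 1.0), 1))  # → 20.5
```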

Because I'm using a fixed aperture and setting the flash power myself, shutter speed is more or less irrelevant, but I put the camera into Manual mode and chose a shutter speed of 1/160.  A couple of minor camera settings helped (again to ensure consistency between shots): I set the white balance to "flash" and turned the "DRO" optimiser off.  Finally I set the camera to save both RAW and JPG files; Part 3 explains why.

To work out which combination of gels to use, I put an unexposed but developed part of a negative (from the start or end of the film) into the slide holder and took some shots, swapping various gels in and out, until the colour most closely resembled light grey; this took three Half-CTB (Lee filter 202) gels and one Quarter plus green (Lee 246).

My first few shots were spoiled by dust and dirt on the inside and outside of the diffuser glass at the end of the Accura; it's important to make sure that this - as well as the negative, of course - is as clean as possible before shooting.  My negatives were in strips of four, and to start with I was lining them up through the camera viewfinder.  This was tedious, as I had to open up the aperture so I could see enough to adjust the position of the negative, then close it down again (whilst remembering not to touch the focus ring), all while making sure the camera was positioned correctly relative to the flash.  After a while I got a feel for where the negative should be (going by the position of the sprocket holes on either side), so I was able to shoot, move to the next negative in the strip, shoot again, and so on reasonably quickly.  The results were decently sharp and well exposed - but how to process them back into good positive pictures?  That's the subject of Part 3.

One final point on the negatives - it's best to have the side with the film emulsion facing the camera which means the image will have to be "flipped" on its vertical axis later.  This means putting the negative into the holder (a) with the picture upside-down, and (b) with the lettering / numbers at the top of each frame showing back-to-front when looked at from behind the camera.
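The "flip" step can of course be done in any image editor, but if you have a whole batch of frames it's easy to script.  A minimal sketch, assuming the Pillow library is installed and using a hypothetical filename - any batch tool that mirrors an image left-to-right does the same job:

```python
# Sketch: mirror a captured frame on its vertical axis, since the
# emulsion side of the negative faced the camera when shooting.
from PIL import Image, ImageOps  # Pillow (assumed installed)

def unflip(path, out_path):
    """Mirror a scanned frame left-to-right and save the result."""
    with Image.open(path) as im:
        ImageOps.mirror(im).save(out_path)

# Hypothetical filenames for illustration:
# unflip("scan_0001.jpg", "scan_0001_mirrored.jpg")
```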

Digitising old negatives - Part 1 - kit

Over the past year or so I've been investigating how to transfer a set of old slides and negatives to the computer.  I was looking for a method that would provide reasonable quality, be fast, and fairly cheap.  And thanks to some information from here, here and here, I think I've found it.  (I'm very grateful to the authors of those articles for posting about their experiences, and it's in the same spirit that I've decided to document what I have done).

Although I have a scanner - one of those cheap HP all-in-one scanner / printer / copier devices - it isn't very good, and scanning film is, by all accounts, a slow process.  Instead I looked into using a slide copier (sometimes called a slide duplicator), a piece of kit that was common a few decades ago.  Slide copiers are basically tubes with a lens inside: one end has a holder for slides (and often negatives), the other screws into a compatible SLR camera body.  Most copiers used the M42 lens mount, which meant that they were compatible with a wide range of cameras.  There was no need to set focus because the slide would always be a fixed distance from the camera.  Secondhand copiers are quite easy to find on ebay.  My Digital SLR, a Sony, can use M42 lenses with the help of a cheap adapter (most other brands of DSLR can too).  So why not go down this route?

The reason is that these copiers were designed to produce a 1:1 copy of the slide on the 35mm film of an old SLR.  Most Digital SLRs have a sensor - often called APS-C size - that's smaller than 35mm film; they aren't true "35mm" cameras at all.  Only "full frame" DSLRs have a sensor of the right size, and they are expensive (£2000 upwards at the time of writing).  I don't have one.  The effect of using a slide copier on a camera with an APS-C sized sensor would be to crop the slide: the outer edges - about a third of the frame in each dimension, which works out at over half the total area - would not be copied.
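The cropping numbers are easy to check.  A quick sketch, assuming the common 1.5x crop factor for APS-C sensors (Sony and Nikon use 1.5; Canon's is 1.6):

```python
# Rough numbers behind the cropping problem with a 1:1 slide copier
# on an APS-C body (assumed crop factor of 1.5).
CROP_FACTOR = 1.5

# Fraction of the 35mm frame captured, per dimension and by area.
linear_fraction = 1 / CROP_FACTOR      # ~0.67 of the width and height
area_fraction = linear_fraction ** 2   # ~0.44 of the total area

print(round(1 - linear_fraction, 2))  # → 0.33 (a third lost per dimension)
print(round(1 - area_fraction, 2))    # → 0.56 (over half the area lost)
```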

Instead, I've opted for a different kind of duplicator that doesn't have its own lens and has some flexibility in its set-up, the Accura Variable Magnification Duplicator. This dates from the late 1960s / early 1970s and was also sold under the Miranda and Panagor names.  It is much rarer than the standard slide copiers but does turn up on ebay (usually from USA sellers) every few weeks.  It consists of a slide and negative holder which screws into a set of metal rings, the assembly then (possibly with the help of a filter ring step-up or step-down adapter) screws onto your camera lens.

My copy of the duplicator came with the holder, two rings labelled "6 Japan", an unlabelled one, and one labelled "49F7".  The "6 Japan" rings screw into the slide copier itself; they are extension tubes with a diameter matching the old "Cokin Series 6" filter size (sometimes written with the Roman numeral VI).  The unlabelled ring is an adapter between this and the "Series 7" (or VII) size, and the final ring is an adapter between Series 7 and a 49mm filter mount.  What this means in plain English is that by screwing all of these bits together in the correct order you can screw the assembly onto a lens with a 49mm filter thread and - just as importantly - put a bit of distance between your lens and the slide holder.  If your lens has a different filter thread size (55mm is common), adapter rings to convert between the two are easy to find on ebay or in photography shops; if your lens filter is larger than 49mm (and most will be) then you'll need to search for a step-down filter ring of the correct size (xxmm to 49mm).

The set-up I have just described is how the Accura was originally meant to be used; the slide would be at more or less the right distance from the lens to ensure that the whole slide was captured when a lens of the standard 50mm focal length was used to photograph it.  There is an adjustment screw on the Accura which allows for fine-tuning of the distance; the manual that came with it suggests that using this adjustment you can compensate for your lens having a focal length of anything between 50 and 58mm.

However, using this arrangement with today's DSLRs (unless they are full-frame) leaves us with two major problems to solve (and a minor one concerning exposure which I'll cover later).

  • In order to capture the full slide (or negative) on an APS-C size DSLR sensor you either need a lens with a shorter focal length (around 35mm), or you need to put more distance between your lens and the slide holder
  • Unless you have a specialist macro lens to hand, it's unlikely that you will be able to focus on the negative - the distance between it and the lens will be too short for the lens's optics to cope
Turning to the second problem first: the cheapest way to set up a close-focusing lens was to use an old M42 manual focus / manual aperture lens with an adapter for my DSLR, plus some extension tubes.  Extension tubes fit between the back of the lens and the adapter, and their effect is to dramatically shorten the minimum (and maximum) distance over which the lens will focus.  My M42 lens and extension tubes (which I already had - another reason for going down this route) cost around £20 in total and are widely available secondhand or on ebay.
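For the curious, there's a simple rule of thumb for how much magnification extension tubes buy you: with the lens itself focused at infinity, the added magnification is roughly the extension divided by the focal length (a thin-lens approximation).  A quick sketch with assumed frame sizes (36mm-wide 35mm film, roughly 23.6mm-wide APS-C sensor; the 33mm of extension is just an illustrative figure, not my exact tube stack):

```python
# Back-of-envelope magnification from extension tubes, using the
# thin-lens approximation: magnification ≈ extension / focal length.

def magnification(extension_mm, focal_length_mm):
    """Approximate magnification with the lens focused at infinity."""
    return extension_mm / focal_length_mm

# To fit a 36 mm wide frame onto a ~23.6 mm wide APS-C sensor we need
# roughly 0.66x magnification...
needed = 23.6 / 36
print(round(needed, 2))  # → 0.66

# ...which about 33 mm of tubes on a 50 mm lens would provide.
print(round(magnification(33, 50), 2))  # → 0.66
```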

My M42 lens had a 50mm focal length so I still needed to solve the first problem.  35mm M42 lenses are comparatively rare and expensive, and I was concerned that one might introduce a slight degree of "barrel distortion" on the final picture, so I chose to put more distance between the lens and the slide holder.  The easiest way to do this was to hit ebay (one last time) and buy a whole set of those filter step up / step down rings I mentioned earlier; for £10 (from a Chinese seller) I got a set that would adapt from 49mm all the way up to 82mm and back again.  All the rings screw together; by a bit of trial and error I found that I could achieve the right distance by putting the rings that went from 49mm to 62mm and back again between my lens filter thread and the "49F7" adapter.


The final piece of kit to add in was a flashgun - again I was able to make use of one I already owned.  Using that to control exposure will be the subject of Part 2 of this post.

I've described a set-up using a Sony DSLR and a manual focus lens, but there's no reason why this shouldn't work for other DSLRs, or even for compact digital cameras that take standard filter sizes on their lenses (or can be fitted with an adapter to do so).  A native-fitting macro lens of around 50mm focal length could be used (dispensing with the extension tubes); alternatively Canon, Nikon and Pentax DSLRs can use M42 lenses with an adapter.  The combination of Nikon and M42 isn't usually recommended because you can only achieve close focusing (i.e. not beyond a few metres), but since close focusing is exactly what we want for this exercise it shouldn't matter.