Peter Brett's Blog

Tuesday, December 10, 2013

Chilli and lime dark chocolate tarts

In the second round of the baking competition at work, I baked another invention of mine: sweet pastry tarts, filled with a dark chocolate ganache flavoured with chilli and lime, and decorated with candied chillies.

They didn't do very well with the judges — they thought there was too much chocolate filling and/or it was too rich, and they found the candied chillies too spicy. On the other hand, the whole batch got eaten, so it's not all bad news.

This time-consuming and labour-intensive recipe makes 8 tarts.

Ingredients

For the candied chillies:

  1. 1/2 cup water
  2. 1/2 cup sugar
  3. 1 lime
  4. 2 mild chillies

For the pastry cases:

  1. 250 g plain flour
  2. 35 g icing sugar
  3. 140 g cold unsalted butter
  4. 2 egg yolks
  5. 1.5 tbsp cold water

For the chilli and lime dark chocolate ganache filling:

  1. 100 ml double cream
  2. 25 g caster sugar
  3. 100 g dark chocolate
  4. 12 g butter
  5. 2 limes
  6. 2 bird's eye chillies

Candied chillies

Make the candied chillies first — they keep for ages, so you can make them a good while in advance.

Cut the chillies into thin, circular slices, and remove the seeds (tweezers are useful). Take the peel of about a quarter of a lime, and slice it into strips as thinly as possible.

In a heavy-bottomed saucepan, heat the water and sugar to make a syrup. When it gets to the boil, carefully add the lime peel and chilli slices and simmer for 20 mins.

Strain the sugar syrup to remove the chilli and lime — save the syrup for later — and lay the pieces out on a silicone baking sheet. Bake in the oven for an hour at about 90 °C, until they are dry to the touch.

Sweet pastry cases

Put the flour, icing sugar and butter in a food processor and pulse a few times until the mixture becomes about the consistency of breadcrumbs. Add the yolks and cold water and pulse until the mixture comes together. You may need to add a tiny bit more water. Knead the pastry a couple of times — literally only enough that it comes together into a ball — then wrap it in clingfilm and put it in the fridge to chill for about an hour.

Clear a shelf in the fridge and prepare 8 individual-size pastry tins (about 7.5–8 cm diameter).

Divide the dough into 8 equal portions. Roll each piece out to about 15 cm diameter and carefully place it in a pastry tin, pushing it out to fill the corners. If any holes appear, push the pastry back together again. There should be a couple of centimetres of excess pastry protruding from the edge of the tin; trim back any much more than this.

Prick the bottom of each case with a fork and place them in the fridge to chill for at least an hour. By making sure that the cases are well rested you will avoid the need to use baking beans.

Preheat the oven to 180 °C (fan) and place a baking tray in the oven to heat. When the pastry cases are rested, place them directly onto the hot baking tray and into the oven, and bake for approx. 12 min until golden. Be very careful that the pastry doesn't catch!

When the pastry cases come out of the oven, immediately trim the excess pastry from the cases with a sharp knife, before they become brittle. Leave them to cool in their tins on a cooling rack.

Chilli and lime chocolate ganache filling

Finely chop the chillies and zest the limes.

Place the cream, sugar, chillies and half the lime zest in a saucepan. Warm over a low heat. (The longer you infuse the cream, the stronger the filling will be).

Meanwhile, break the chocolate into pieces. Put the chocolate, butter and remaining lime zest in a mixing bowl.

When the cream is almost at boiling point, strain it onto the chocolate and butter. Whisk the mixture slowly until the chocolate and butter have melted and the ganache is smooth and glossy. If the chocolate doesn't quite melt, heat the mixing bowl over a pan of hot water (but make sure the bowl doesn't touch the water!).

If the filling isn't strong enough, you can add a couple of teaspoons of the chilli sugar syrup left over from making the candied chillies earlier.

While the ganache is still warm, carefully spoon it into the pastry cases. Decorate with the candied chillies.

N.b. the ganache will take at least a couple of hours to set; you can put it in the fridge to help it along, but it may make the top lose its glossy finish.

Sunday, December 01, 2013

Stripy chocolate, vanilla and coffee cake

At Sharp Labs we're having a baking competition to raise money for Helen & Douglas House. I foolishly decided to enter it.

There are three rounds. The first round, which took place on the 25th November, was sponge cakes. I invented a variation on a coffee cake. It's made up of six alternating layers of chocolate and vanilla sponge, bound together and coated with a coffee buttercream icing. This recipe is for a large cake which will happily make 16 slices.

Ingredients

For the vanilla sponge:

  • 165 g unsalted butter (at room temperature)
  • 165 g caster sugar
  • 3 large eggs
  • 165 g self raising flour, sifted
  • 1.5 tsp vanilla essence
  • Hot water (if required)

For the chocolate sponge:

  • 165 g unsalted butter (at room temperature)
  • 165 g caster sugar
  • 3 large eggs
  • 155 g self raising flour, sifted
  • 1 heaped tbsp cocoa powder, sifted
  • Hot water (if required)

For the coffee buttercream:

  • 600 g icing sugar
  • 375 g unsalted butter (at room temperature)
  • 150 ml strong espresso coffee (about 3 shots)

Method

Preheat the oven to 155 °C (fan). Position a shelf near the middle of the oven for the cakes. Line the bottoms of two deep 20 cm springform or sandwich tins with baking parchment.

Each of the sponge batters is prepared in the same way (it's best to prepare them in parallel in two bowls so that you can bake the cakes simultaneously):

  1. Cream butter and sugar together using an electric hand mixer until light and fluffy.
  2. In a measuring jug, beat the eggs. Then add them little by little to the butter & sugar mixture, making sure to fully combine each addition before the next. For the vanilla sponge, add the vanilla essence at this stage.
  3. Sift about a quarter of the flour (or flour and cocoa mixture) into the mixture, from a height of about 50 cm so as to air the flour well. Carefully and gently fold the flour in (you want to trap as much air as possible at this stage). Repeat until all the flour has been combined.

Transfer the sponge batters into the tins, and place the tins at mid-level of the oven near the front. Bake for 25–30 mins. When they are cooked, they'll (1) make a popping sound like rice crispies, (2) feel springy when lightly touched near the centre with a fingertip, and (3) let a sharp knife inserted all the way through come out clean.

About 1-2 mins after removing the cakes from the oven, turn them out, carefully peel off the baking parchment, and leave them to cool for about half an hour.

Carefully slice each of the cakes into three horizontal slices, approximately 1 cm in thickness. I found that a very sharp knife and a lot of patience were more successful than using a cake wire.

Make the buttercream by putting the butter and icing sugar into a bowl and beating them with an electric hand mixer while slowly adding the espresso.

Assemble the cake by putting a vanilla slice of sponge on a turntable, adding a thin layer of buttercream and levelling it off, then adding a chocolate slice on top, and continuing until all six slices are built up. Make sure on each layer to spread the buttercream all the way to the edge.

Use the remaining buttercream icing to smoothly coat the exterior of the cake. Use a side scraper and a turntable to get vertical sides and a horizontal top! You should have some icing left over.

Finally, you can optionally use cocoa powder and/or walnuts to decorate the finished cake.

Saturday, November 23, 2013

Black onion seed and rye crackers

Here's a recipe for some nice crunchy rye crackers. I adapted it from a rosemary cracker recipe that my father figured out. It makes about 24 large crackers, but it very much depends on how you cut them.

  • 160 g plain flour
  • 120 g rye flour
  • 80 ml cold water
  • 60 ml olive oil (+ extra for brushing)
  • 1 tsp baking powder
  • 0.5 tsp baking salt
  • 1.5 tsp black onion seeds
  • Crystal salt
  • Black pepper
  • Crushed, dried seaweed
  • Za'atar

Preheat the oven to 230 °C (fan), and put baking sheets in to heat up.

In a mixing bowl, combine the flours, baking powder, baking salt and black onion seeds. Add the water and olive oil and knead briefly to form a smooth dough. Do not overwork the dough; you do not want gluten strands to form.

Divide the mixture into three parts. Wrap two in clingfilm while you work with the third.

Using a rolling pin, roll one third of the dough out as thinly as possible onto a silicone sheet. Using a dough blade or palette knife, gently score across to divide the sheet into crackers.

Sprinkle the top with salt crystals, seaweed, coarsely-ground black pepper and a generous sprinkle of za'atar. Gently pass the rolling pin over the sheet again to press the toppings into the dough.

Transfer to the oven and bake for roughly ten minutes, or until the top begins to darken at the edges.

Sunday, May 12, 2013

The IEEE does not do Open Access

Summary: By the commonly-accepted definition of the term, IEEE journals offer real Open Access (OA) publishing options if and only if your funding body mandates Open Access publishing.

Introduction

This time last year, I posted a survey of journals and Open Access in the field of remote sensing. As I have been encouraged by my department to publish in the IEEE Transactions on Geoscience and Remote Sensing (where I currently have a paper going through its second review stage), I have spent the last year trying to determine what, exactly, IEEE Publishing means when it claims to offer "open access".

What is Open Access (OA)?

As I mentioned in my previous post, most people who are interested in widening the general public's access to scientific literature understand "Fully Open Access" to mean compliance with the Budapest Open Access Initiative definition (BOAI):

Free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of... articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.

OA publication of research results is the subject of quite a lot of public debate in the UK at the moment, due to the UK Research Councils (RCUK) issuing new guidelines and requirements on the topic. The new RCUK Policy on Open Access came into force on 1st April 2013, and contains a definition of OA.

RCUK defines Open Access as unrestricted, on-line access to peer-reviewed and published research papers. Specifically a user must be able to do the following free of any access charge:

  • Read published papers in an electronic format;
  • Search for and re-use the content of published papers both manually and using automated tools (such as those for text and data mining) provided that any such re-use is subject to full and proper attribution and does not infringe any copyrights to third-party material included in the paper.

Furthermore, RCUK clearly express a preference for publication using a Creative Commons Attribution (CC-BY) licence, and require such a licence to be used when RCUK funds are used to pay an Article Processing Charge (APC) for an OA paper. Specifically, they say that:

Crucially, the CC-BY licence removes any doubt or ambiguity as to what may be done with papers, and allows re-use without having to go back to the publisher to check conditions or ask for specific conditions.

As a researcher funded by EPSRC, I was of course very keen to determine whether the IEEE's "open access" publishing options comply with the new policy.

"Open access" at the IEEE

The IEEE claim to offer three options for OA publishing: hybrid journals, a new IEEE Access megajournal, and "fully OA" journals. On the bright side, the IEEE seems to treat all three the same way in terms of the general process, fees, etc., so I will not discuss the differences between them here.

Some aspects of the IEEE's approach to OA are quite clearly explained in the FAQ, and provide an interesting contrast with the policies at unambiguously fully OA journals such as PLOS ONE. The IEEE charge an APC of $1750 per paper; PLOS ONE charges $1350. The IEEE requires copyright assignment; PLOS ONE allows authors to retain their copyrights. The IEEE's licencing of APC-paid OA articles is almost impossible to determine; PLOS ONE is unambiguously CC-BY.

But what is that licence? Exactly how open are "OA" articles published in IEEE journals? With reference to RCUK's definition of OA, the first point is clearly satisfied — users can read the paper free of charge on IEEE Xplore. Trying to pin the second point down has been quite a quest.

The IEEE allows authors to distribute a "post-print" (the accepted version of a manuscript, i.e. their final draft of a paper after peer review but before it goes through the IEEE's editing process and is prepared for printing). This can be placed on a personal website and/or uploaded to an institutional repository. At the University of Surrey, for example, papers can be placed on Surrey Research Insight. Unfortunately, this "Green OA" approach does not satisfy the RCUK's requirement to enable re-use; the licence is very explicit. As per the IEEE PSPB Operations Manual, the IEEE requires the following notice to be displayed with post-prints:

© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

With Green OA clearly ruled out as an option, what about when an APC is paid (also known as "Gold OA")? This is the option preferred by RCUK. I initially tried to figure this out by e-mailing the IEEE intellectual property rights office, but I never received any reply. I also e-mailed the editor of TGRS, and this too elicited no response.

My last and most recent attempt involved e-mailing IEEE Xplore tech support, asking where on the website I could find licence information for a specific recent "open access" TGRS paper that I had downloaded.

I have been unsuccessfully attempting to determine the license under which "Open Access" journal articles from IEEE journals are available from IEEE Xplore.

For example, the following paper:

Zakhvatkina, N.Y.; Alexandrov, V.Y.; Johannessen, O.M.; Sandven, S.; Frolov, I.Y., "Classification of Sea Ice Types in ENVISAT Synthetic Aperture Radar Images," Geoscience and Remote Sensing, IEEE Transactions on , vol.51, no.5, pp.2587,2600, May 2013
doi: 10.1109/TGRS.2012.2212445

is allegedly an "open access" paper, but the IEEE Xplore web page gives no indication of whether it is actually being made available under a Budapest Open Access Initiative-compliant license (e.g. CC-BY), and an exploration of the pages linked from its web page leaves me none the wiser.

Could you please improve the IEEE Xplore website to display article licensing information much more clearly, especially in the case of your "open access" products?

This then got passed on to the IEEE's "open access team", who in turn attempted to pass it on to the IPR office to be ignored again. However, I now had an e-mail address to send a more specific request to:

Thank you for forwarding this query on. Needless to say, the IEEE IPR have not responded to the question, just the same as when I contacted them directly a few months ago.

Surely, as the IEEE Open Access team, you and your colleagues must have some idea of what level of openness IEEE are aiming for with their open access initiatives, especially given that you've just launched a new "open" megajournal! Your competitor OA megajournals make their licensing information really easy to find, and I don't understand why IEEE Publishing seems to be having a big problem with this.

As an IEEE member the lack of clarity here is really quite concerning.

Finally, I received a moderately-illuminating reply.

I will pass on your feedback that OA copyright information needs to be easier to find in Xplore.

The IEEE continues to review legal instruments that may be used to authorize publication of open access articles. The OACF now in use is a specially modified version of the IEEE Copyright Form that allows users to freely access the authors’ content in Xplore, and it allows authors to post the final, published versions of their papers on their own and their employers’ websites. The OACF also allows IEEE to protect the content by giving IEEE the legal authority to resolve any complaints of abuse of the authors’ content, such as infringement or plagiarism.

Some funding agencies have begun to require their research authors to use specific publication licenses in place of copyright transfer if their grants are used to pay article processing charges (APCs). Two examples are the UK's Wellcome Trust and the Research Councils of the UK, both of which this month began to require authors to use the Creative Commons Attribution License (CC BY). In cases like these, IEEE is willing to work with authors to help them comply with their funder requirements. If you have questions or concerns about the OACF, or are required to submit any publication document other than the OACF, please contact the Intellectual Property Rights Office at 732-562-3966 or at copyrights@ieee.org.

The IEEE IPR office has additional information about the OACF, including an FAQ, on our web site at http://www.ieee.org/publications_standards/publications/rights/oacf.html.

From this e-mail, it is clear that paying an APC for the IEEE's "open access" publishing options normally provides very little real benefit over simply self-archiving the accepted version of the manuscript. Either way, tools such as Google Scholar will allow readers to find a free-to-read version of the paper; if you are using the IEEE journals LaTeX templates, this version will be almost indistinguishable from the final version as distributed in printed form.

Furthermore, the IEEE APC-supported "open access" publishing option is not Open Access, by either the BOAI or RCUK definitions of the term, because re-use is forbidden. Gold OA is clearly also not normally an option when publishing with the IEEE.

The only exception to this is if you have a mandate from a funding body that says your publications must be distributed under a certain licence, in which case you may be able to persuade the IEEE to provide "real" Gold OA: the ability for the public to read and re-use your research at no cost and with no restrictive licensing terms. This would apply, for example, if you were funded by RCUK; in that case you should not sign the IEEE Copyright Form, and should contact the IEEE IPR office before submitting your manuscript in order to argue it out with them.

Conclusions

The IEEE claims to offer "fully Open Access" publishing options to all of their authors. In fact, they offer no such thing. Open Access means the ability to both read and re-use the products of research, and the IEEE's "open access" options prohibit re-use.

Self-archiving is allowed by the IEEE, but only with a copyright statement that forbids re-use. Paying an enormous APC to make your paper "open access" merely allows people to read it for free on IEEE Xplore. True Gold OA is only available if your funding body mandates real Open Access.

For the majority of researchers (in industry or funded by bodies without OA mandates in place), the IEEE provides no Open Access publishing option at all. The half-hearted and incomplete "open access" options that the IEEE provides can only be interpreted as a cynical attempt to both dilute the BOAI definition and to extract vastly-inflated APCs from authors who fail to read the fine print.

Wednesday, May 08, 2013

New projects, new software and a finished thesis

It's been a while since I last posted about my research, so I felt that it might be time for a bit of an update. I've been at Surrey Space Centre for almost four years now, and my PhD studentship is most definitely drawing to a close.

Most importantly, I finally managed to complete and submit my thesis, Urban Damage Detection in High Resolution SAR Images, and my viva voce examination will take place on 21st June. After having spent so long fretting about whether my research was "good enough", it's bizarre to find myself actually feeling quietly confident about the exam. On the other hand, I don't know how long that strange feeling of confidence will last!

My supervisor advised me not to publish the submitted version of my thesis, on the basis that the exam is quite soon and it would be better to take the opportunity incorporate any requested corrections before publication (and that it would be embarrassing if I fail the exam and the examiners ask me to submit a new thesis). However, I will definitely be making sure that I make it available online as soon as I have the final version ready.

On the other hand, I have already published the source code for the software developed during my PhD and described in my thesis. The git repositories have been publicly accessible on github for some time, and I've also more recently uploaded release tarballs to figshare. I've published three software packages:

  • ssc-ridge-tools (git repo) contains the ridgetool program for extracting bright curvilinear features from TIFF images, and a bunch of general tools for working with them (e.g. exporting them to graphical file formats, manually classifying them, or printing statistics).
  • ssc-ridge-classifiers (git repo) contains two different tools for classifying the bright lines extracted by ridgetool. They are designed for the task of identifying which bright lines look like the double reflection lines that are characteristic of SAR images of urban buildings.
  • ssc-urban-change (git repo) contains a tool for using curvilinear features and pre- and post-event SAR images to plot change maps.

All the programs in the packages contain manpages, README files, etc. Note that they require x86 or x86-64 Linux (they just won't work on Windows). If you wish to understand what the various algorithms are and (probably more importantly) how they can be used, you should probably read Earthquake Damage Detection in Urban Areas using Curvilinear Features.

In a follow-on from my main PhD research, Astrium GEO have very kindly agreed to give me some TerraSAR-X images of the city of Khash, Iran, where there was a very big earthquake about a month ago on April 16th. Hopefully, I'll be able to publish some preliminary results of applying my tools to that data shortly (it depends heavily on when I actually receive the image products)! The acquisition had been scheduled for 7th May, so hopefully I will be hearing from them soon. The current plan is to publish a short research report in PLoS Currents Disasters, even if the results are negative.

I've recently been working on a side project using multispectral imagery from the UK-DMC2 satellite to try and detect water quality changes in Lake Chilwa, Malawi during January 2013. It's been nice to have a change from staring at SAR data, and I've also had the opportunity to learn some new skills. This was particularly interesting, as it forms part of a MILES multidisciplinary project involving people from all over the University of Surrey. One of the things that I produced for this project was an image showing the change in Normalised Difference Vegetation Index between 3rd January and 17th January. Later this month, I'm also hoping to publish some brief reports describing the exact processing steps used: I'm not sure how much immediate use they will be, but might provide some pointers to other people trying to use DMC data in the future.

The only thing that I'm feeling particularly concerned about at the moment is the status of my IEEE Transactions journal paper, which seems to be taking forever to get through its peer review process. It's almost 11 months since I submitted it, and I really hope that it's at least accepted for publication by the time I have my viva.

All in all, though, my PhD research is more-or-less tied up, and I've produced a bunch of potentially interesting/useful outputs. Does that make it a success?

Saturday, December 22, 2012

Christmas 2012: Chorizo and roasted pumpkin risotto

In my quest to find interesting things to do with pumpkin, I came up with this chorizo-flavoured pumpkin risotto. Chorizo in a risotto base is something that I've been doing for about 3 years, but I found that the contrast between creamy risotto, smooth pumpkin, and tart lemon works remarkably well in this dish. Serves 4–6 as a main course.

  • 1 kg pumpkin
  • 2 sticks celery
  • 2 medium onions
  • 3 cloves garlic
  • 100 g unsliced chorizo sausage
  • 750 ml hot chicken stock
  • 150 ml white wine
  • 50 g Parmesan (or similar hard cheese)
  • 50 g butter
  • 1 lemon
  • Parsley
  • Olive oil

Preheat the oven to 200 °C (185 °C fan). Dice the pumpkin into 2–3 cm cubes. Spread the cubed pumpkin out on a baking sheet, use a pastry brush to roughly coat them with olive oil, and season generously. Put in the oven to roast for 35–40 min.

Finely chop the onions, celery, garlic and chorizo. In a wide-bottomed, covered pan, gently fry the onions, celery and chorizo in about 2 tbsp of the olive oil until very soft.

Next add the risotto rice and garlic, and fry for a further 3 min. Now turn up the heat, and add the white wine to the pan. Keep stirring the risotto and gradually adding the hot stock until the risotto is cooked. It's okay not to use all of the stock; if you find that you need more liquid, just use boiling water.

Remove from the heat and stir in the cheese and butter. Gently stir in the roast pumpkin cubes, and allow the risotto to rest for at least a minute. Serve garnished with lemon wedges and chopped parsley.

Friday, December 21, 2012

Christmas 2012: Spicy pumpkin and carrot soup

This Christmas, I'm in charge of the menu (and the cooking) at home, and I'll be posting recipes for some of the food I cook. First up is a lovely warm and spicy vegetable soup that's delicious and quick to cook, and makes a great lunch. This recipe serves 3–6 people depending on how hungry they are!

  • 2 medium onions
  • 2 cloves garlic
  • 1 stick celery
  • 4 carrots
  • 600 g pumpkin (approx)
  • 1 chili
  • 1/2 tsp paprika
  • 1 tsp cumin seed
  • 2 tbsp olive oil
  • 3 tsp vegetable bouillon powder
  • Large handful of red lentils

The key here is to chop the vegetables to appropriate sizes so that everything is ready to eat at the same time. Heat the olive oil in a large, heavy-based saucepan over a medium heat. Finely chop the onion, garlic and chili, cut the celery into pieces about 1 cm on a side, and gently fry them all in the oil with the cumin seed for about 5 minutes, stirring occasionally, until soft and translucent.

Meanwhile, boil a kettle. Dice the carrots into pieces about 5 mm in size, and add to the pan. Next, cube the pumpkin to about 15–20 mm and add to the pan. Add the paprika, and continue to fry the vegetables together for another 2–3 min.

Add about 750 ml of the boiling water from the kettle to the pan along with the bouillon powder, and season with salt and pepper to taste (the liquid should be just enough to cover the vegetables). Bring to the boil, and sprinkle the lentils in. Finally, cover the pan, and simmer for about 30 min until ready to serve — preferably with some crusty bread and a wedge of cheddar cheese.

Monday, December 03, 2012

Making schematics look good with "gaf export"

<CareBear\> peterbrett : hey. gaf export is f-ing awesome!

People who've been testing the gEDA "master" branch over the last few hours will have got a sneak preview of a cool new tool that will be arriving in gEDA/gaf 1.9.0. The new gaf export command-line utility lets you quickly and easily export your schematics and symbols to a variety of image formats.

I've been wanting to introduce a tool like this for a while, but it's only become possible thanks to finally finishing a couple of big features that have been cooking in my personal branches for a couple of years: a new Cairo-based rendering library for gEDA designed to be used for both rendering in gschem and for printing/exporting graphics, called "libgedacairo"; and a new gEDA configuration subsystem, which I'll write about in more detail another time.

To get started, suppose I want to create a PDF from a schematic called grey_counter_1.sch. It's very straightforward!

gaf export -o grey_counter_1.pdf grey_counter_1.sch

From the output filename that I passed to the "-o" option, gaf export will detect that I want a PDF. It'll detect the size of the drawing, centre it in the default paper (choosing some suitable margins) and generate a PDF file.

Batch generation of PostScript files

Many people previously used gschem along with the (relatively obscure) print.scm script for batch generation of PostScript files. Usually the command looked something like:

gschem -o grey_counter_1.ps -s /usr/share/gEDA/scheme/print.scm grey_counter_1.sch

Don't do this any more. It is slow (because it needs to load all of gschem's configuration), requires a graphical desktop to be running (because gschem can't start without trying to display its windows), and doesn't provide any way to directly customise formatting options without fiddling with Scheme scripts. Also, gaf export generates much nicer PDF output than PS, especially if you want to do anything with the output file other than printing. You can directly replace the gschem command above with something like:

gaf export -o grey_counter_1.pdf grey_counter_1.sch

A Makefile rule for creating PDF files from schematic files might look like:

%.pdf: %.sch
	gaf export -o $@ -- $<
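If you prefer plain shell to make, the same per-file conversion can be sketched as a loop. This is just an illustration (the `*.sch` filename pattern is an assumption about your project layout), and it only echoes each command as a dry run; drop the `echo` to actually invoke gaf export:

```shell
# Re-export any schematic whose PDF is missing or older than the
# schematic, mimicking the Makefile rule above.
for sch in *.sch; do
  pdf="${sch%.sch}.pdf"
  if [ ! -e "$pdf" ] || [ "$sch" -nt "$pdf" ]; then
    # Dry run: print the command instead of running it.
    echo gaf export -o "$pdf" -- "$sch"
  fi
done
```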

Of course, one advantage of the new tool is that it can do multi-page output. So rather than generating a whole bunch of separate PDF or PostScript files and stitching them together, you could directly generate a single PDF file with the whole of your design in it:

gaf export -o schematics.pdf grey_counter_1.sch filter_1.sch

Tweaking the output

gaf export also lets you tweak the output for different applications. Suppose I want to produce the PNG file displayed in this blog post. First, I don't care about paper sizes; I want the output file to be sized according to how large the drawing is. To do this, I can use -s auto. I can also set the margin on the output with -m 5px. I also want to print in colour (-c). So the overall command is:

gaf export -c -s auto -m 5px -o gaf_export__40160-1.png 40160-1.sym

It can also be useful to set the paper size (for example, to get suitable margins for larger paper sizes). By default, gaf export uses whatever GTK thinks the default paper size is on your system. For most people, this will be ISO A4. In addition to providing measurements directly via the -s option, the -p option lets you specify a PWG 5101.1-2002 paper name. For example, to use US "D" size paper:

gaf export -p na_d -o grey_counter_1.pdf grey_counter_1.sch

Changing default settings

The default settings for gaf export can be modified using the new gaf config command. For example, to set the default paper size for all your projects to US "Letter":

gaf config --user export paper na_letter

Or to make sure that all printing for a particular project is in colour:

gaf config -p /path/to/project/directory/ export monochrome false
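For reference, gaf config is a front end to gEDA's key-file configuration system, so the project-level command above should be equivalent to editing a geda.conf file in the project directory by hand. The exact file name and key layout below are from memory, so treat them as an assumption rather than gospel:

```ini
; geda.conf in the project directory (assumed layout)
[export]
monochrome=false
```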

Conclusion

gaf export is a fast, easy-to-use way of generating graphics files from your gEDA/gaf schematics and symbols. Along with several other new features, it will be available in the upcoming unstable gEDA/gaf 1.9.0 release.


Monday, May 14, 2012

Remote Sensing journals and open access

The Remote Sensing Applications Research Group at Surrey Space Centre is in the first stages of thinking about the new Research Excellence Framework (REF) system that will be used to assess the quality of our research.

We've been told by the University that we each need to demonstrate four "research outcomes" for REF. Initially, we were advised that an appropriate "outcome" would be a journal paper published in one of the "top five journals in our field", as determined by various arbitrary and generally misleading journal metrics. At a recent meeting to discuss this, however, we realised that there are a few problems with that approach. For example, the list of "remote sensing" journals as categorised by the ISI Web of Knowledge Journal Citation Reports includes quite a few journals that would be completely inappropriate for our work, while some highly relevant and high-profile journals, such as IEEE J-STARS, sit a long way down the list simply because they are newer and have not yet had time to accrue high-scoring metrics.

However, I noted, and was asked to investigate further, another potential problem with our list of target journals: forthcoming open access mandates from our UK funding bodies.

The 2001 Budapest Open Access Initiative defined open access as:

Free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of... articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.

The policy that the UK Research Councils (RCUK) are proposing to adopt in the near future would make it mandatory to publish results from research that is wholly or partially funded by the research councils in journals that meet RCUK standards for open access. This is a significant departure from the previous position, where open access publishing even of research council-funded results has been effectively optional. The key points from the draft policy seem to be:

  • A user must be able to access, read and re-use papers free of charge under an extremely permissive licence. RCUK explicitly identify the Creative Commons CC-BY licence as a model.
  • Open access to the paper may be provided directly by the publisher via the journal's website at the time of publication ("Gold OA"; publishers may charge the authors for this), or the author can archive the final version of the paper as accepted for publication in an online repository other than one run by the publisher ("Green OA"). Surrey Research Insight is an example of such a repository. Journals are allowed to impose an embargo of at most 6 months.
  • RCUK grant funding can be used to pay publishers for Gold OA publication, and researchers are recommended to request funding for this in grant applications.

The question, therefore, is: to what extent do "remote sensing" journals comply with this policy? To answer this, I examined the publication policies of all English-language journals in this category with respect to self-archiving of the accepted version of a paper (Green OA), the "normal" published paper, and (if applicable) paid-for open access publication (Gold OA), using the SHERPA RoMEO database, Ross Mounce's publisher licence spreadsheet, and publishers' websites. My results are shown in Table 1.

Table 1. Remote sensing journal compliance with proposed RCUK open access rules, sorted in descending order of impact factor. "R" indicates restrictions on "open access" options that prevent full compliance. Minimum publication fees are shown in brackets.
Name                               Publisher                         Regular    Green OA  Gold OA
Remote Sens. Environ.              Elsevier                          No         R         R ($3000)
IEEE Trans. Geosci. Remote Sens.   IEEE                              No         R         R ($3000)
ISPRS J. Photogramm. Remote Sens.  Elsevier                          No         R         R ($3000)
J. Geodesy                         Springer                          No         R         Yes ($3000)
Int. J. Appl. Earth Obs. Geoinf.   Elsevier                          No         R         R ($3000)
GPS Solut.                         Springer                          No         R         Yes ($3000)
Int. J. Digit. Earth               Taylor & Francis                  No         R         R ($3250)
IEEE Trans. Geosci. Remote Lett.   IEEE                              No         R         R ($3000)
Int. J. Remote Sens.               Taylor & Francis                  No         R         R ($3250)
IEEE J. STARS                      IEEE                              No         R         R ($3000)
GISci. Remote Sens.                Bellwether                        No         No        No
J. Appl. Remote Sens.              SPIE                              No         R         Unclear ($1500)
J. Spat. Sci.                      Taylor & Francis                  No         R         No
Can. J. Remote Sens.               Can. Aeronautics and Space Inst.  No         No        No
Radio Sci.                         AGU                               R ($1000)  R         R ($3500)
Photogramm. Eng. Remote Sens.      ASPRS                             No         Unclear   No
Photogramm. Rec.                   Wiley-Blackwell                   No         R         R ($3000)
Mar. Geod.                         Taylor & Francis                  No         R         No
Surv. Rev.                         Maney                             No         R         R ($2000)
Eur. J. Remote Sens.               Assoc. Ital. Telerilevamento      Yes        N/A       N/A

The most common restrictions encountered on Gold OA content were prohibition of commercial use (e.g. via explicit Creative Commons CC-BY-NC licensing), prohibition of redistribution, and field-of-use restrictions such as prohibition of text-mining. In addition to these restrictions, in several cases self-archiving was only permitted with an embargo period of more than 6 months. One somewhat bizarrely convoluted rule for Elsevier journals can be boiled down to: "You may archive the accepted version of your paper in your funding body's repository, but only if you don't have to archive it in your funding body's repository."

At this stage, Springer's recent change to CC-BY licensing of papers in their "Open Choice" system is particularly notable. It's also clear that our current target journals (IEEE Trans. Geosci. Remote Sens. and IEEE J-STARS) still have some way to go before they will be BOAI-compliant or compliant with the proposed RCUK publication requirements. In my opinion, a good outcome over the next few years would be for publishers like IEEE and Elsevier to standardise on CC-BY publication for Gold OA articles.

In the short term, I will be recommending to my group that we should consider submitting to open access megajournals such as PLoS ONE, many of which have considerably higher journal metrics than any of the dedicated remote sensing journals. Adding PLoS ONE to the Space Centre's list of preferred journals should not be particularly controversial, as it is already listed as a preferred journal for other research centres in the faculty.

In conclusion, I have demonstrated that the open access publishing options available in the field of remote sensing are limited, and that this may become a problem if stricter rules, similar to those set out by the Budapest Open Access Initiative, are laid down by the UK Research Councils. Either journal publishers will have to change their policies, or research groups in this field will need to consider different publishing strategies.

This post is made available under a Creative Commons Attribution (CC-BY) licence.


Friday, May 04, 2012

Planning the Guildford Cycle Network

Soon after arriving at the University of Surrey to begin my PhD studentship, and discovering the terrible state of cycling infrastructure in Guildford, I started attending Guildford Cycle Forum meetings to discuss what could actually be done about it. For most of that time, the meetings have had a rather predictable format: a chorus of Forum members pointing out problems experienced by cyclists and opportunities to fix them, countered by County and Borough Council officials explaining either that no budget exists for cycling improvements, or that the changes requested weren't in their department and they couldn't address them.

Recently, however, some money has finally become available via the government's Local Sustainable Transport Fund (LSTF), and Guildford is hoping to receive approx. £900,000 of it, a large chunk of which is intended to fund cycling improvements.

A major component of Guildford's bid is the establishment of a network of cycle routes within Guildford. On Thursday 3rd May, Alan Fordham, the "Sustainability Programme Delivery Officer", hosted a Guildford Cycle Forum meeting at the Guildford Borough Council offices to present and discuss the routes that are currently planned for the network.

A total of fourteen local cycle routes are planned, mostly radial routes fanning out to the north from the town centre, which actually lies in the southern part of the town. Unfortunately, the route maps that Alan handed round at the meeting aren't available online anywhere yet; I asked him to circulate digital copies by e-mail, but in case he doesn't get time to do so, I will try to copy them onto Google Maps or something.

In following blog posts, I plan to discuss the routes and the weaknesses that I see in them based on my experiences cycling to and from Guildford every day. However, in this post, I want to discuss some more general points about the plans.

The most important point which I haven't seen addressed is what the overall objective of the project is, and how it will be assessed. In my opinion, the logical objective is modal shift, where journeys currently made by car are transferred to other forms of transport, and both the design of the network and its performance should be judged by how well it achieves that. I think this is supported by the purpose of the LSTF, which is to help promote the use of and migration to sustainable transport.

Tying into this point, one of the things that really isn't clear to me is what type of cyclist the routes are intended for.

  • Are they intended to be used by regular cycle commuters? Many of this class of cyclists will be aiming to cycle quite quickly and travel at all times of year in all weather conditions. Quite often they will be travelling at rush hour, and given rush-hour congestion and aggressive driving, would likely welcome the addition of good new cycle routes. These cyclists desire routes that have good sight lines, are no more obstructed than roads, and facilitate bidirectional flow well. Unfortunately, these kind of requirements can be very difficult to accommodate well without new, purpose-built segregated cycle facilities or the provision of mandatory on-road cycle lanes. I am one of these users.
  • Are they intended as an 'easy option' to attract occasional cycle commuters? The provision of signposted routes might be the key to persuading people to take up cycling to work, but if the routes are much slower than driving, or have significant sections that put riders in conflict with rush hour traffic, they might be put off. To me, this is a core target group, as moving them to cycling will often directly replace a single-occupant car journey, and I suspect that the problem may not be so much getting them cycling as keeping them cycling.
  • What about parents taking their children to and from school? When I was in Cambridge, I used to see a couple who would cycle to work at the university on their tandem, taking their children with them in a trailer and dropping them off at primary school on the way. For this kind of user, the routes really need to be accessible either when towing a trailer or when using one of the Dutch-style family carrier bikes with a bay in the front (these are also really good for shopping, or so I hear). For these cyclists, who are often heavily laden, it is important to provide facilities that are wide enough to accommodate them and have few sharp corners. Even a single chicane like this one can make a route impassable.
  • Are the routes intended to be used for school travel by children old enough to ride their own bikes, but not experienced or confident enough to fall into one of the first two categories? For these users, good segregation of cycle routes from traffic is important, because they will commonly want to ride with their friends and might be easily distracted from paying attention to other vehicles. Another factor is that, unfortunately, many of these users will be using equipment that is incomplete or in poor condition (e.g. bad brakes, or no lights), and once again, good segregation may be key in keeping them out of danger.
  • Or are the routes intended for casual cyclists and cycle tourists? These users, who will usually be travelling with a flexible itinerary, in favourable weather conditions, and at times of day when traffic is relatively light, can be accommodated much more easily than any of the other types of user described above.

In following blog posts, I will try to consider the proposed routes with reference to how suitable they are for each of the above types of user. Unfortunately, one of my biggest worries about the network as currently envisaged is that it accommodates the last of those classes of user really well, but that many of the routes are fatally flawed for any of the other groups to depend on. Because of that, I worry that the objective of getting many people living in Guildford to switch to cycling might be compromised.

Another problem is that very little money is actually allocated under the plan to major improvement works (such as altering junctions to make them safer for cyclists, or changing road layouts to add cycle lanes of appropriate width), and the main Surrey County Council highway planning department doesn't seem to be involved in the process. As far as I can tell, this seems to limit the project to being mostly an exercise in putting up signposts to direct cyclists onto the least inadequate of the existing routes (and even then, one of the Cycle Forum members raised the "environmental concern" that "ugly" signs were "unnecessary"). Fortunately, however, there are a few improvements being made to some of the most obviously hopeless spots.

Overall, I think that the mere fact that this project is taking place is a major step forward for cycling in Guildford: a first step on the long road towards making Guildford a town that's genuinely accessible by bicycle.

In my next post, I will investigate Route 4: Wooden Bridge to Jacobs Well, and how well it holds up during rush hour.

