Tuesday, September 29, 2015

Using C library functions from LiveCode Builder

This blog post is part of an ongoing series about writing LiveCode Builder applications without the LiveCode engine.

Currently, the LiveCode Builder (LCB) standard library is fairly minimal. This means that there are some types of task for which you'll want to go beyond the standard library.

In a previous post, I described how to use LiveCode's foundation library. This lets you access plenty of built-in LiveCode functionality that isn't directly exposed to LCB code yet.

Someone else's problem

Often someone's already wrapped the functions that you need in another program, especially on Linux. You can run that program as a subprocess to access it. In LiveCode Script, you could use the shell function to run an external program. Unfortunately, the LCB standard library doesn't have an equivalent feature yet!

On the other hand, the standard C library's system(3) function can be used to run a shell command. Its prototype is:

int system(const char *command);

In this post, I'll describe how LCB's foreign function interface lets you call it.

Declaring a foreign handler

As last time, you can use the foreign handler syntax to declare the C library function. The com.livecode.foreign module provides some important C types.

use com.livecode.foreign

foreign handler _system(in pCommand as ZStringNative) \
      returns CInt binds to "system"

Some things to bear in mind here:

  • I've named the foreign handler _system because the all-lowercase identifier system is reserved for syntax tokens
  • The ZStringNative type automatically converts an LCB string into a null-terminated string in whatever encoding LiveCode thinks is the system's "native" encoding.
  • Because the C library is always linked into the LiveCode program when it's started, you don't need to specify a library name in the binds to clause; you can just use the name of the system(3) function.

Understanding the results

So, now you've declared the foreign handler, that's it! You can now just _system("rm -rf /opt/runrev") (or some other helpful operation). Right?

Well, not quite. If you want to know whether the shell command succeeded, you'll need to interpret the return value of the _system handler, and unfortunately, this isn't just the exit status of the command. From the system(3) man page:

The value returned is -1 on error (e.g., fork(2) failed), and the return status of the command otherwise. This latter return status is in the format specified in wait(2). Thus, the exit code of the command will be WEXITSTATUS(status). In case /bin/sh could not be executed, the exit status will be that of a command that does exit(127).

So if the _system handler returns -1, then an error occurred. Otherwise, it's necessary to do something equivalent to the WIFEXITED C macro to check if the command ran normally. If it didn't, then some sort of abnormal condition occurred in the command (e.g. it was killed). Finally, the actual exit status is extracted by doing something equivalent to the WEXITSTATUS C macro.

On Linux, these two macros are defined as follows:

#define WIFEXITED(status)     __WIFEXITED (__WAIT_INT (status))
#define WEXITSTATUS(status)   __WEXITSTATUS (__WAIT_INT (status))
#define __WIFEXITED(status)   (__WTERMSIG(status) == 0)
#define __WEXITSTATUS(status) (((status) & 0xff00) >> 8)
#define __WTERMSIG(status)    ((status) & 0x7f)
#define __WAIT_INT(status)    (status)

Or, more succinctly:

#define WIFEXITED(status)   (((status) & 0x7f) == 0)
#define WEXITSTATUS(status) (((status) & 0xff00) >> 8)

This is enough to be able to fully define a function that runs a shell command and returns its exit status.

module org.example.system

use com.livecode.foreign

private foreign handler _system(in pCommand as ZStringNative) \
      returns CInt binds to "system"

-- Run the shell command pCommand and wait for it to finish.
-- Returns the exit status if the command completed, and nothing
-- if an error occurred or the command exited abnormally.
public handler System(in pCommand as String) \
      returns optional Number

   variable tStatus as Number
   put _system(pCommand) into tStatus

   -- Check for error
   if tStatus is -1 then
      return nothing
   end if

   -- Check for abnormal exit
   if (127 bitwise and tStatus) is not 0 then
      return nothing
   end if

   -- Return exit status
   return 255 bitwise and (tStatus shifted right by 8 bitwise)
end handler

end module

Tip of the iceberg

This post has hopefully demonstrated the potential of LiveCode Builder's FFI. Even if you use only the C standard library's functions, you gain access to almost everything that the operating system is capable of!

Using a C function from LCB involves reading the manual pages to find out how the function should be used, and how best to map its arguments and return values onto LCB types; often, reading C library header files to understand how particular values should be encoded or decoded; and finally, binding the library function and providing a wrapper that makes it comfortable to use from LCB programs.

LiveCode Builder can do a lot more than just making widgets and — as I hope I've demonstrated — can do useful things without the rest of the LiveCode engine. Download LiveCode 8 and try some things out!

Wednesday, September 23, 2015

Roasted vegetable and chickpea tagine

It's been a while since I last posted a recipe here! Recently I've been having quite a lot of success with this Moroccan-inspired vegetarian recipe.

This recipe makes 6 portions.


For the roasted vegetables:

  • 350 g new potatoes, halved
  • 1 fennel bulb, trimmed & cut into batons
  • 1 medium carrot, cut into chunks
  • 1 large red pepper, cut into chunks
  • 1 large red onion, cut into chunks
  • 3 tbsp extra-virgin olive oil
  • 1 tsp cumin seeds
  • 1 tsp fennel seeds
  • 1 tsp coriander seeds, crushed

For the sauce:

  • 4 garlic cloves, chopped
  • 400 g canned chopped tomatoes
  • 400 g canned chickpeas, drained and rinsed
  • 250 ml red wine
  • 1 pickled lemon, finely chopped
  • 0.5 tbsp harissa paste
  • 1 tsp ras el hanout
  • 1 cinnamon stick
  • 40 g whole almonds
  • 10 dried apricots, halved

To serve:

  • Greek-style yoghurt
  • 2 tbsp coriander, finely chopped


Preheat the oven to 200 °C fan. Put all the ingredients for the roasted vegetables into a large, heavy roasting tin, season to taste, and toss together to coat the vegetables in oil and spices. Roast for 30 minutes until the potatoes are cooked through and the vegetables generally have a nice roasted tinge.

While the vegetables are roasting, heat a large pan over a medium heat. Fry the garlic for 20–30 seconds until fragrant. Add the remaining ingredients, bring to the boil, and simmer while the vegetables roast.

When the vegetables are roasted, add them to the sauce and stir. Return the sauce to the simmer for another 15–20 minutes.

Serve in bowls, topped with a dollop of yoghurt and some chopped coriander. Couscous makes a good accompaniment to this dish if you want to make it go further.

Monday, September 14, 2015

Compiling multi-module LiveCode Builder programs

This blog post is part of an ongoing series about writing LiveCode Builder applications without the LiveCode engine.

Multi-module programs

When writing a large program, it's often useful to break it down into more than one module. For example, you might want to make a module that's dedicated to loading and saving the program's data, which has quite a lot of internal complexity but exposes a very simple API with Load() and Save() handlers. This is handy for making sure that it's easy to find the source file where each piece of functionality is located.

However, it can become tricky to compile the program. Each module may depend on any number of other modules, and you have to compile them in the correct order or the compilation result may be incorrect. Also, if one module changes, you have to recompile all of the modules that depend on it. If you tried to do this all by hand, it would be nigh-on impossible to correctly compile your program once you got above about 10 source files.

Fortunately, there are two really useful tools that can make it all rather easy. GNU Make (the make command) can perform all the required build steps in the correct order (and even in parallel!). And to help you avoid writing Makefiles by hand, lc-compile has a useful --deps mode.

Most of the remainder of this blog post will assume some familiarity with make and common Unix command-line tools.

The --deps option for lc-compile

make lets you express dependencies between files. However, you already express the dependencies between LCB source files when you write a use declaration. For example:

use com.livecode.foreign

says that your module depends on the .lci (LiveCode Interface) file for the com.livecode.foreign module.

So, the LCB compiler (a) already knows all the dependencies between the source files of your project and (b) already knows how to find the files. To take advantage of this and to massively simplify the process of creating a Makefile for a LCB project, lc-compile provides a --deps mode. In --deps mode, lc-compile doesn't do any of the normal compilation steps; instead, it outputs a set of Make rules on standard output.

Consider the following trivial two-file program.

-- org.example.numargs.lcb

module org.example.numargs

public handler NumArgs()
   return the number of elements in the command arguments
end handler

end module
-- org.example.countargs.lcb

module org.example.countargs

use org.example.numargs

public handler Main()
   quit with status NumArgs()
end handler

end module

To generate the dependency rules, you run lc-compile with almost a normal command line — but you specify --deps make instead of an --output argument, and you list all of your source files instead of just one of them. See also my previous blog post about compiling and running pure LCB programs. For the "countargs" example program you could run:

$TOOLCHAIN/lc-compile --modulepath . --modulepath $TOOLCHAIN/modules/lci --deps make org.example.numargs.lcb org.example.countargs.lcb

This would print the following rules:

org.example.countargs.lci: org.example.numargs.lci org.example.countargs.lcb
org.example.numargs.lci: org.example.numargs.lcb

Integrating with make

You can integrate this info into a Makefile quite easily. There are two pieces that you need: 1) tell make to load the extra rules, and 2) tell make how to generate them. In particular, it's important to regenerate the rules whenever the Makefile itself is modified (e.g. to add an additional source file).

# List of source code files
SOURCES = org.example.countargs.lcb org.example.numargs.lcb

# Include all the generated dependency rules
include deps.mk

# Rules for regenerating dependency rules whenever
# the source code changes
deps.mk: $(SOURCES) Makefile
 $(TOOLCHAIN)/lc-compile --modulepath . --modulepath $(TOOLCHAIN)/modules/lci --deps make -- $(SOURCES) > $@

A complete Makefile

Putting this all together, I've created a complete Makefile for the example multi-file project. It has the usual make compile and make clean targets, and places all of the built artefacts in a subdirectory called _build.

# Parameters

# Tools etc.
LC_SRC_DIR ?= ../livecode
LC_BUILD_DIR ?= $(LC_SRC_DIR)/build-linux-x86_64/livecode/out/Debug
LC_LCI_DIR = $(LC_BUILD_DIR)/modules/lci
LC_COMPILE ?= $(LC_BUILD_DIR)/lc-compile
LC_RUN ?= $(LC_BUILD_DIR)/lc-run

BUILDDIR = _build

LC_COMPILE_FLAGS += --modulepath $(BUILDDIR) --modulepath $(LC_LCI_DIR)

# List of source code files.
SOURCES = org.example.countargs.lcb org.example.numargs.lcb

# List of compiled module filenames.
MODULES = $(patsubst %.lcb,$(BUILDDIR)/%.lcm,$(SOURCES))

# Top-level targets
all: compile

compile: $(MODULES)

clean:
 -rm -rf $(BUILDDIR)

.PHONY: all compile clean

# Build dependencies rules
include $(BUILDDIR)/deps.mk

$(BUILDDIR):
 mkdir -p $(BUILDDIR)

$(BUILDDIR)/deps.mk: $(SOURCES) Makefile | $(BUILDDIR)
 $(LC_COMPILE) $(LC_COMPILE_FLAGS) --deps make -- $(SOURCES) > $@

# Build rules
$(BUILDDIR)/%.lcm $(BUILDDIR)/%.lci: %.lcb | $(BUILDDIR)
 $(LC_COMPILE) $(LC_COMPILE_FLAGS) --output $@ -- $<

You should be able to use this directly in your own projects. All you need to do is to modify the list of source files in the SOURCES variable!

Note that you need to name your source files exactly the same as the corresponding interface files in order for this Makefile to work correctly. I'll leave adapting to the case where the source file and interface file are named differently as an exercise to the reader…

I hope you find this useful as a basis for writing new LiveCode Builder projects! Let me know how you get on.

Sunday, September 06, 2015

Accessing the Foundation library with LiveCode Builder

This blog post is part of an ongoing series about writing LiveCode Builder applications without the LiveCode engine.

The LiveCode Foundation library

LiveCode includes a "foundation" library (called, unsurprisingly, libfoundation) which provides a lot of useful functions that work on all the platforms that LiveCode supports. This is used to make sure that LiveCode works in the same way no matter which operating system or processor you're using. libfoundation is compiled into both the LiveCode engine and LiveCode Builder's lc-run tool, so it's always available.

libfoundation is written in C and C++. The functions available in the library are declared in the foundation.h header file.

Among other capabilities, libfoundation handles encoding and decoding text. This provides an opportunity to fix one of the problems with the "hello world" program I described in a previous post.

Foreign function access to libfoundation

The "hello world" program read in a file and wrote it out to the standard output stream. Unlike "hello world" programs seen elsewhere, it *didn't* write out a string, e.g.:

write "Hello World!" to the output stream

This doesn't work because write needs to receive Data, and converting a String to Data requires encoding (using a suitable string encoding). And unfortunately, the LiveCode Builder library doesn't supply any text encoding/decoding syntax, although I'm working on it.

However, and fortunately for this blog post, libfoundation supplies a suitable function, MCStringEncode. Its C++ declaration looks like:

bool MCStringEncode(MCStringRef string, MCStringEncoding encoding, bool is_external_rep, MCDataRef& r_data);

You can use it in a LiveCode Builder program by declaring it as a foreign handler. The com.livecode.foreign module provides some helpful declarations for C and C++ types.

use com.livecode.foreign

foreign handler MCStringEncode(in Source as String, \
      in Encoding as CInt, in IsExternalRep as CBool, \
      out Encoded as Data) returns CBool binds to "<builtin>"

CInt and CBool are C & C++'s int and bool types, respectively.

Encoding a string with UTF-8

Next, you can write a LiveCode Builder handler that encodes a string using UTF-8 (an 8-bit Unicode encoding). Almost every operating system will Do The Right Thing if you write UTF-8 encoded text to standard output; the only ones that might complain are some versions of Windows and some weirdly-configured Linux systems.

handler EncodeUTF8(in pString as String) returns Data
   variable tEncoded as Data
   MCStringEncode(pString, 4 /* UTF-8 */, false, tEncoded)
   return tEncoded
end handler

The "4" in there is a magic number that comes from libfoundation's kMCStringEncodingUTF8 constant. Also, you should always pass false to the IsExternalRep argument (for historical reasons).

A better "hello world" program

Putting this all together, you can now write an improved "hello world" program that doesn't get its text from an external file.

module org.example.helloworld2

use com.livecode.foreign

foreign handler MCStringEncode(in Source as String, \
      in Encoding as CInt, in IsExternalRep as CBool, \
      out Encoded as Data) returns CBool binds to "<builtin>"

handler EncodeUTF8(in pString as String) returns Data
   variable tEncoded as Data
   MCStringEncode(pString, 4 /* UTF-8 */, false, tEncoded)
   return tEncoded
end handler

public handler Main()
   write EncodeUTF8("Hello World!\n") to the output stream
end handler

end module

If you compile and run this program, you'll now get the same "Hello World!" message -- but this time, it's taking some text, turning it into Data by encoding it, and writing it out, rather than just regurgitating some previously-encoded data.

Other neat stuff

There's other cool (and, often, terribly unsafe) stuff you can do with direct access to libfoundation functions, like allocate Pointers to new memory buffers and directly manipulate LiveCode types & values. However, most of libfoundation's capabilities are already available using normal LiveCode Builder syntax.

The real power of foreign handler declarations becomes apparent when accessing functions that aren't in libfoundation — and this may be the subject of a future blog post!

Sunday, August 30, 2015

LiveCode Builder without the LiveCode bit

Since my last post almost two years ago, I've moved to Edinburgh. I now work for LiveCode as an open source software engineer.

Introducing LiveCode Builder

LiveCode 8, the upcoming release of the LiveCode HyperCard-like application development environment, introduces a new xTalk-like language for writing LiveCode extensions. It's called LiveCode Builder (or LCB). It shares much of the same syntax as the original LiveCode scripting language, but it's a compiled, strongly-typed language.

Most of the public discussion about LiveCode Builder has revolved around using it to extend LiveCode — either by creating new widgets to display in the user interface, or by writing libraries that add new capabilities to the scripting language. However, one topic that *hasn't* been discussed much is the fact that you can write complete applications using only LCB, and compile and run them without using the main LiveCode engine at all.

LiveCode Builder without the engine

This is actually pretty useful when writing simple command-line tools or services that don't need a user interface and for which the main LiveCode engine provides little value (for example, if you need your tool to start up really quickly). There are a couple of good examples that I've written during the last few months.

The LCB standard library's test suite uses a test runner written in LCB. This is quite a useful "smoke test" for the compiler, virtual machine, and standard library -- if any of them break, the test suite won't run at all!

More recently, I've written a bot that connects our GitHub repositories to our BuildBot continuous integration system. Every few minutes, it checks the status of all the outstanding pull requests, and either submits new build jobs or reports on completed ones. This is also written entirely in LCB. One of the main advantages of using LCB for this was that LCB has a proper List type that can contain arrays as elements.

"Hello World" in LCB

A pure LCB program looks like this:

module org.example.helloworld

public handler Main()
   write the contents of file "hello.txt" to the output stream
end handler

end module

It has a top-level module that contains a public handler called Main. Note that unlike in C or C++, the Main handler doesn't take any arguments (you can access the command-line arguments using `the command arguments`).

Next, you need to compile your application using the lc-compile tool. To do this, you need to locate the directory from the LiveCode installation that contains the `.lci` files -- these are LiveCode's equivalent to C or C++'s header files. For example, on my system, I could compile the example above as follows (let's assume I've saved it to a file called hello.lcb):

$ export TOOLCHAIN='/opt/runrev/livecodecommunity-8.0.0-dp-3 (x86_64)/Toolchain/'
$ "$TOOLCHAIN/lc-compile" --modulepath . --modulepath "$TOOLCHAIN/modules/lci" --output hello.lcm hello.lcb

These commands generate two files: hello.lcm, containing LCB bytecode, and org.example.helloworld.lci containing the interface.

Finally, you can run the program using lc-run. This is a really minimal tool that provides only the LCB virtual machine and standard library.

$ echo "Hello world!" > hello.txt
$ "$TOOLCHAIN/lc-run" hello.lcm
Hello world!

Finding out more

For more information on the standard library syntax available in LCB, visit the "LiveCode Builder" section of the dictionary in the LiveCode IDE. Note that the "widget", "engine" and "canvas" syntax isn't currently available to pure LCB programs. You should also check out the "Extending LiveCode" guide.

Tuesday, December 10, 2013

Chilli and lime dark chocolate tarts

In the second round of the baking competition at work, I baked another invention of mine: sweet pastry tarts, filled with a dark chocolate ganache flavoured with chilli and lime, and decorated with candied chillies.

They didn't do very well with the judges — they thought there was too much chocolate filling and/or it was too rich, and they found the candied chillies too spicy. On the other hand, the whole batch got eaten, so it's not all bad news.

This time-consuming and labour-intensive recipe makes 8 tarts.


For the candied chillies:

  1. 1/2 cup water
  2. 1/2 cup sugar
  3. 1 lime
  4. 2 mild chillies

For the pastry cases:

  1. 250 g plain flour
  2. 35 g icing sugar
  3. 140 g cold unsalted butter
  4. 2 egg yolks
  5. 1.5 tbsp cold water

For the chilli and lime dark chocolate ganache filling:

  1. 100 ml double cream
  2. 25 g caster sugar
  3. 100 g dark chocolate
  4. 12 g butter
  5. 2 limes
  6. 2 bird's eye chillies

Candied chillies

Make the candied chillies first — they keep for ages, so you can make them a good while in advance.

Cut the chillies into thin, circular slices, and remove the seeds (tweezers are useful). Take the peel of about a quarter of a lime, and slice it into strips as thinly as possible.

In a heavy-bottomed saucepan, heat the water and sugar to make a syrup. When it gets to the boil, carefully add the lime peel and chilli slices and simmer for 20 mins.

Strain the sugar syrup to remove the chilli and lime — save the syrup for later — and lay the pieces out on a silicone baking sheet. Bake in the oven for an hour at about 90 °C, until they are dry to the touch.

Sweet pastry cases

Put the flour, icing sugar and butter in a food processor and pulse a few times until the mixture becomes about the consistency of breadcrumbs. Add the yolks and cold water and pulse until the mixture comes together. You may need to add a tiny bit more water. Knead the pastry a couple of times — literally only enough that it comes together into a ball — then wrap it in clingfilm and put it in the fridge to chill for about an hour.

Clear a shelf in the fridge and prepare 8 individual-size pastry tins (about 7.5–8 cm diameter).

Divide the dough into 8 equal portions. Roll each piece out to about 15 cm diameter and carefully place them in the pastry tins, pushing the dough out to fill the corners. If any holes appear, push them back together again. There should be about 2 cm of excess pastry protruding from the edges of the tin; trim back any much more than this.

Prick the bottom of each case with a fork and place them in the fridge to chill for at least an hour. By making sure that the cases are well rested you will avoid the need to use baking beans.

Preheat the oven to 180 °C (fan) and place a baking tray in the oven to heat. When the pastry cases are rested, place them directly onto the hot baking tray and into the oven, and bake for approx. 12 min until golden. Be very careful that the pastry doesn't catch!

When the pastry cases come out of the oven, immediately trim the excess pastry from the cases before they become brittle, using a sharp knife. Leave them to cool in the tins on a cooling rack.

Chilli and lime chocolate ganache filling

Finely chop the chillies and zest the limes.

Place the cream, sugar, chillies and half the lime zest in a saucepan. Warm over a low heat. (The longer you infuse the cream, the stronger the filling will be).

Meanwhile, break the chocolate into pieces. Put the chocolate, butter and remaining lime zest in a mixing bowl.

When the cream is almost at boiling point, strain it onto the chocolate and butter. Whisk the mixture slowly until the chocolate and butter have melted and the ganache is smooth and glossy. If the chocolate doesn't quite melt, heat the mixing bowl over a pan of hot water (but make sure the bowl doesn't touch the water!)

If the filling isn't strong enough, you can add a couple of teaspoons of the chilli sugar syrup left over from making the candied chillies earlier.

While the ganache is still warm, carefully spoon it into the pastry cases. Decorate with the candied chillies.

N.b. the ganache will take at least a couple of hours to set; you can put it in the fridge to help it along, but it may make the top lose its glossy finish.

Sunday, December 01, 2013

Stripy chocolate, vanilla and coffee cake

At Sharp Labs we're having a baking competition to raise money for Helen & Douglas House. I foolishly decided to enter it.

There are three rounds. The first round, which took place on the 25th November, was sponge cakes. I invented a variation on a coffee cake. It's made up of six alternating layers of chocolate and vanilla sponge, bound together and coated with a coffee buttercream icing. This recipe is for a large cake which will happily make 16 slices.


For the vanilla sponge:

  • 165 g unsalted butter (at room temperature)
  • 165 g caster sugar
  • 3 large eggs
  • 165 g self raising flour, sifted
  • 1.5 tsp vanilla essence
  • Hot water (if required)

For the chocolate sponge:

  • 165 g unsalted butter (at room temperature)
  • 165 g caster sugar
  • 3 large eggs
  • 155 g self raising flour, sifted
  • 1 heaped tbsp cocoa powder, sifted
  • Hot water (if required)

For the coffee buttercream:

  • 600 g icing sugar
  • 375 g unsalted butter (at room temperature)
  • 150 ml strong espresso coffee (about 3 shots)


Preheat the oven to 155 ℃ (fan). Position a shelf near the middle of the oven for the cakes. Line the bottoms of two deep 20 cm springform or sandwich tins with baking parchment.

Each of the sponge batters is prepared in the same way (it's best to prepare them in parallel in two bowls so that you can bake the cakes simultaneously):

  1. Cream butter and sugar together using an electric hand mixer until light and fluffy.
  2. In a measuring jug, beat the eggs. Then add them little by little to the butter & sugar mixture, making sure to fully combine each addition before the next. For the vanilla sponge, add the vanilla essence at this stage.
  3. Sift about a quarter of the flour (or flour and cocoa mixture) into the mixture, from a height of about 50 cm so as to air the flour well. Carefully and gently fold the flour in (you want to trap as much air as possible at this stage). Repeat until all the flour has been combined.

Transfer the sponge batters into the tins, and place the tins at mid-level of the oven near the front. Bake for 25-30 mins. When they are cooked, they'll (1) make a popping sound like rice crispies, (2) feel springy when lightly touched near the centre with a fingertip and (3) let a sharp knife inserted all the way through come out clean.

About 1-2 mins after removing the cakes from the oven, turn them out, carefully peel off the baking parchment, and leave them to cool for about half an hour.

Carefully slice each of the cakes into three horizontal slices, approximately 1 cm in thickness. I found that a very very sharp knife and a lot of patience was more successful than using a cake wire.

Make the buttercream by putting the butter and icing sugar into a bowl and beating them with an electric hand mixer while slowly adding the espresso.

Assemble the cake by putting a vanilla slice of sponge on a turntable, adding a thin layer of butter cream and levelling it off, then adding a chocolate slice on top, and continuing until all six slices are built up. Make sure on each layer to spread the buttercream all the way to the edge.

Use the remaining buttercream icing to smoothly coat the exterior of the cake. Use a side scraper and a turntable to get vertical sides and horizontal top! You should have some icing leftover.

Finally, you can optionally use cocoa powder and/or walnuts to decorate the finished cake.

Saturday, November 23, 2013

Black onion seed and rye crackers

Here's a recipe for some nice crunchy rye crackers. I adapted it from a rosemary cracker recipe that my father figured out. It makes about 24 large crackers, but it very much depends on how you cut them.

  • 160 g plain flour
  • 120 g rye flour
  • 80 ml cold water
  • 60 ml olive oil (+ extra for brushing)
  • 1 tsp baking powder
  • 0.5 tsp baking salt
  • 1.5 tsp black onion seeds
  • Crystal salt
  • Black pepper
  • Crushed, dried seaweed
  • Za'atar

Pre-heat the oven to 230 ℃ fan. Put baking sheets into the oven to preheat.

In a mixing bowl, combine the flours, baking powder, baking salt and black onion seeds. Add the water and olive oil and knead briefly to form a smooth dough. Do not overwork the dough; you do not want gluten strands to form.

Divide the mixture into three parts. Wrap two in clingfilm while you work with the third.

Using a rolling pin, roll one third of the dough out as thinly as possible onto a silicone sheet. Using a dough blade or palette knife, gently score across to divide the sheet into crackers.

Sprinkle the top with salt crystals, seaweed, coarsely-ground black pepper and a generous sprinkle of za'atar. Gently pass the rolling pin over the sheet again to press the toppings into the dough.

Transfer to the oven and bake for roughly ten minutes, or until the top begins to darken at the edges.

Sunday, May 12, 2013

The IEEE does not do Open Access

Summary: By the commonly-accepted definition of the term, IEEE journals offer real Open Access (OA) publishing options if and only if your funding body mandates Open Access publishing.


This time last year, I posted a survey of journals and Open Access in the field of remote sensing. As I have been encouraged by my department to publish in the IEEE Transactions on Geoscience and Remote Sensing (where I currently have a paper going through its second review stage), over the last year I have been trying to determine what, exactly, IEEE Publishing means when it claims to offer "open access".

What is Open Access (OA)?

As I mentioned in my previous post, most people who are interested in widening the general public's access to scientific literature understand "Fully Open Access" to mean compliance with the Budapest Open Access Initiative definition (BOAI):

Free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of... articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.

OA publication of research results is the subject of quite a lot of public debate in the UK at the moment, due to the UK Research Councils (RCUK) issuing new guidelines and requirements on the topic. The new RCUK Policy on Open Access came into force on 1st April 2013, and contains a definition of OA.

RCUK defines Open Access as unrestricted, on-line access to peer-reviewed and published research papers. Specifically a user must be able to do the following free of any access charge:

  • Read published papers in an electronic format;
  • Search for and re-use the content of published papers both manually and using automated tools (such as those for text and data mining) provided that any such re-use is subject to full and proper attribution and does not infringe any copyrights to third-party material included in the paper.

Furthermore, RCUK clearly express a preference for publication using a Creative Commons Attribution (CC-BY) licence, and require such a licence to be used when RCUK funds are used to pay an Article Processing Charge (APC) for an OA paper. Specifically, they say that:

Crucially, the CC-BY licence removes any doubt or ambiguity as to what may be done with papers, and allows re-use without having to go back to the publisher to check conditions or ask for specific conditions.

As a researcher funded by EPSRC, I was of course very keen to determine whether the IEEE's "open access" publishing options comply with the new policy.

"Open access" at the IEEE

The IEEE claim to offer three options for OA publishing: hybrid journals, a new IEEE Access mega journal, and "fully OA" journals. On the bright side, the IEEE seems to treat all three the same way in terms of the general process, fees, etc., so I will not discuss the differences between them here.

Some aspects of the IEEE's approach to OA are quite clearly explained in the FAQ, and provide an interesting contrast with the policies at unambiguously fully OA journals such as PLOS ONE. The IEEE charge an APC of $1750 per paper; PLOS ONE charges $1350. The IEEE requires copyright assignment; PLOS ONE allows authors to retain their copyrights. The IEEE's licencing of APC-paid OA articles is almost impossible to determine; PLOS ONE is unambiguously CC-BY.

But what is that licence? Exactly how open are "OA" articles published in IEEE journals? With reference to RCUK's definition of OA, the first point is clearly satisfied — users can read the paper free of charge on IEEE Xplore. Trying to pin the second point down has been quite a quest.

The IEEE allows authors to distribute a "post-print" (the accepted version of a manuscript, i.e. their final draft of a paper after peer review but before it goes through the IEEE's editing process and is prepared for printing). This can be placed on a personal website and/or uploaded to an institutional repository. At the University of Surrey, for example, papers can be placed on Surrey Research Insight. Unfortunately, this "Green OA" approach does not satisfy the RCUK's requirement to enable re-use; the licence is very explicit. As per the IEEE PSPB Operations Manual, the IEEE requires the following notice to be displayed with post-prints:

© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

With Green OA clearly ruled out as an option, what about when an APC is paid (also known as "Gold OA")? This is the option preferred by RCUK. I initially tried to figure this out by e-mailing the IEEE intellectual property rights office, but I never received any reply. I also e-mailed the editor of TGRS, which likewise elicited no response.

My last and most recent attempt involved e-mailing IEEE Xplore tech support, asking where on the website I could find licence information for a specific recent "open access" TGRS paper that I had downloaded.

I have been unsuccessfully attempting to determine the license under which "Open Access" journal articles from IEEE journals are available from IEEE Xplore.

For example, the following paper:

Zakhvatkina, N.Y.; Alexandrov, V.Y.; Johannessen, O.M.; Sandven, S.; Frolov, I.Y., "Classification of Sea Ice Types in ENVISAT Synthetic Aperture Radar Images," Geoscience and Remote Sensing, IEEE Transactions on , vol.51, no.5, pp.2587,2600, May 2013
doi: 10.1109/TGRS.2012.2212445

is allegedly an "open access" paper, but the IEEE Xplore web page gives no indication of whether it is actually being made available under a Budapest Open Access Initiative-compliant license (e.g. CC-BY), and an exploration of the pages linked from its web page leaves me none the wiser.

Could you please improve the IEEE Xplore website to display article licensing information much more clearly, especially in the case of your "open access" products?

This then got passed on to the IEEE's "open access team", who in turn attempted to pass it on to the IPR office, where it was ignored again. However, I now had an e-mail address to which I could send a more specific request:

Thank you for forwarding this query on. Needless to say, the IEEE IPR have not responded to the question, just the same as when I contacted them directly a few months ago.

Surely, as the IEEE Open Access team, you and your colleagues must have some idea of what level of openness IEEE are aiming for with their open access initiatives, especially given that you've just launched a new "open" megajournal! Your competitor OA megajournals make their licensing information really easy to find, and I don't understand why IEEE Publishing seems to be having a big problem with this.

As an IEEE member the lack of clarity here is really quite concerning.

Finally, I received a moderately-illuminating reply.

I will pass on your feedback that OA copyright information needs to be easier to find in Xplore.

The IEEE continues to review legal instruments that may be used to authorize publication of open access articles. The OACF now in use is a specially modified version of the IEEE Copyright Form that allows users to freely access the authors’ content in Xplore, and it allows authors to post the final, published versions of their papers on their own and their employers’ websites. The OACF also allows IEEE to protect the content by giving IEEE the legal authority to resolve any complaints of abuse of the authors’ content, such as infringement or plagiarism.

Some funding agencies have begun to require their research authors to use specific publication licenses in place of copyright transfer if their grants are used to pay article processing charges (APCs). Two examples are the UK's Wellcome Trust and the Research Councils of the UK, both of which this month began to require authors to use the Creative Commons Attribution License (CC BY). In cases like these, IEEE is willing to work with authors to help them comply with their funder requirements. If you have questions or concerns about the OACF, or are required to submit any publication document other than the OACF, please contact the Intellectual Property Rights Office at 732-562-3966 or at copyrights@ieee.org.

The IEEE IPR office has additional information about the OACF, including an FAQ, on our web site at http://www.ieee.org/publications_standards/publications/rights/oacf.html.

From this e-mail, it is clear that paying an APC for the IEEE's "open access" publishing options normally provides very little real benefit over simply self-archiving the accepted version of the manuscript. Either way, tools such as Google Scholar will allow readers to find a free-to-read version of the paper; if you are using the IEEE journals LaTeX templates, this version will be almost indistinguishable from the final version as distributed in printed form.

Furthermore, the IEEE APC-supported "open access" publishing option is not Open Access, by either the BOAI or RCUK definitions of the term, because re-use is forbidden. Gold OA is clearly also not normally an option when publishing with the IEEE.

The only exception to this is if you have a mandate from a funding body that says your publications must be distributed under a certain licence, in which case you may be able to persuade the IEEE to provide "real" Gold OA: the ability for the public to read and re-use your research at no cost and with no restrictive licensing terms. This would apply, for example, if you were funded by RCUK; in that case you should not sign the IEEE Copyright Form, and should contact the IEEE IPR office before submitting your manuscript in order to argue it out with them.


The IEEE claims to offer "fully Open Access" publishing options to all of their authors. In fact, they offer no such thing. Open Access means the ability to both read and re-use the products of research, and the IEEE's "open access" options prohibit re-use.

Self-archiving is allowed by the IEEE, but only with a copyright statement that forbids re-use. Paying an enormous APC to make your paper "open access" merely allows people to read it for free on IEEE Xplore. True Gold OA is only available if your funding body mandates real Open Access.

For the majority of researchers (in industry or funded by bodies without OA mandates in place), the IEEE provides no Open Access publishing option at all. The half-hearted and incomplete "open access" options that the IEEE provides can only be interpreted as a cynical attempt to both dilute the BOAI definition and to extract vastly-inflated APCs from authors who fail to read the fine print.

Wednesday, May 08, 2013

New projects, new software and a finished thesis

It's been a while since I last posted about my research, so I felt that it might be time for a bit of an update. I've been at Surrey Space Centre for almost four years now, and my PhD studentship is most definitely drawing to a close.

Most importantly, I finally managed to complete and submit my thesis, Urban Damage Detection in High Resolution SAR Images, and my viva voce examination will take place on 21st June. After having spent so long fretting about whether my research was "good enough", it's bizarre to find myself actually feeling quietly confident about the exam. On the other hand, I don't know how long that strange feeling of confidence will last!

My supervisor advised me not to publish the submitted version of my thesis, on the basis that the exam is quite soon and it would be better to take the opportunity to incorporate any requested corrections before publication (and that it would be embarrassing if I fail the exam and the examiners ask me to submit a new thesis). However, I will definitely be making sure that I make it available online as soon as I have the final version ready.

On the other hand, I have already published the source code for the software developed during my PhD and described in my thesis. The git repositories have been publicly accessible on github for some time, and I've also more recently uploaded release tarballs to figshare. I've published three software packages:

  • ssc-ridge-tools (git repo) contains the ridgetool program for extracting bright curvilinear features from TIFF images, and a bunch of general tools for working with them (e.g. exporting them to graphical file formats, manually classifying them, or printing statistics).
  • ssc-ridge-classifiers (git repo) contains two different tools for classifying the bright lines extracted by ridgetool. They are designed for the task of identifying which bright lines look like the double reflection lines that are characteristic of SAR images of urban buildings.
  • ssc-urban-change (git repo) contains a tool for using curvilinear features and pre- and post-event SAR images to plot change maps.

All the programs in the packages contain manpages, README files, etc. Note that they require x86 or x86-64 Linux (they just won't work on Windows). If you wish to understand what the various algorithms are and (probably more importantly) how they can be used, you should probably read Earthquake Damage Detection in Urban Areas using Curvilinear Features.

In a follow-on from my main PhD research, Astrium GEO have very kindly agreed to give me some TerraSAR-X images of the city of Khash, Iran, where there was a very big earthquake about a month ago on April 16th. Hopefully, I'll be able to publish some preliminary results of applying my tools to that data shortly (it depends heavily on when I actually receive the image products)! The acquisition had been scheduled for 7th May, so hopefully I will be hearing from them soon. The current plan is to publish a short research report in PLoS Currents Disasters, even if the results are negative.

I've recently been working on a side project using multispectral imagery from the UK-DMC2 satellite to try and detect water quality changes in Lake Chilwa, Malawi during January 2013. It's been nice to have a change from staring at SAR data, and I've also had the opportunity to learn some new skills. This was particularly interesting, as it forms part of a MILES multidisciplinary project involving people from all over the University of Surrey. One of the things that I produced for this project was an image showing the change in Normalised Difference Vegetation Index (NDVI) between 3rd January and 17th January. Later this month, I'm also hoping to publish some brief reports describing the exact processing steps used: I'm not sure how much immediate use they will be, but they might provide some pointers to other people trying to use DMC data in the future.
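For readers unfamiliar with NDVI: it's computed per pixel from the red and near-infrared bands as (NIR − Red) / (NIR + Red), and a change map is simply the difference of the index between two acquisition dates. A minimal sketch of that arithmetic in Python/NumPy (this is illustrative only, not the actual processing chain used for the UK-DMC2 data, and the array names are hypothetical):

```python
import numpy as np

def ndvi(red, nir):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (nir - red) / (nir + red)

def ndvi_change(red_before, nir_before, red_after, nir_after):
    """Per-pixel NDVI difference between two co-registered acquisitions."""
    return ndvi(red_after, nir_after) - ndvi(red_before, nir_before)

# Toy 2x2 "images": vegetation greening raises NIR reflectance,
# so the NDVI change comes out positive.
red_jan03 = np.array([[0.10, 0.12], [0.11, 0.10]])
nir_jan03 = np.array([[0.30, 0.28], [0.29, 0.31]])
red_jan17 = np.array([[0.10, 0.12], [0.11, 0.10]])
nir_jan17 = np.array([[0.40, 0.38], [0.39, 0.41]])

change = ndvi_change(red_jan03, nir_jan03, red_jan17, nir_jan17)
```

In practice the bands must be co-registered and radiometrically calibrated before this subtraction means anything, which is where most of the real processing effort goes.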

The only thing that I'm feeling particularly concerned about at the moment is the status of my IEEE Transactions journal paper, which seems to be taking forever to get through its peer review process. It's almost 11 months since I submitted it, and I really hope that it's at least accepted for publication by the time I have my viva.

All in all, though, my PhD research is more-or-less tied up, and I've produced a bunch of potentially interesting/useful outputs. Does that make it a success?