<h1>Peter Brett's Blog</h1>
<p><i>Peter Brett writes about food and software, mostly.</i></p>
<h2>Deploying buildbot workers on Windows Server 2016</h2>
<p><i>22 February 2017</i></p>
<p>At LiveCode, we use a <a href="https://buildbot.net/">buildbot</a> system to perform our continuous integration and release builds. Recently, we moved from building our Windows binaries in a Linux container using <a href="https://www.winehq.org/">Wine</a> to building on a native Windows system running in an <a href="https://azure.microsoft.com/en-gb/">Azure</a> virtual machine.</p>
<p>Deploying buildbot on Windows is not totally straightforward, and the <a href="http://trac.buildbot.net/wiki/RunningBuildbotOnWindows">documentation for installing it</a> is quite hard to follow. It's quite important to us that our build infrastructure is reproducible, so we wanted to have a procedure that could bring up a buildbot worker on a newly-allocated server quickly and with as little manual intervention as possible.</p>
<p>This blog post provides step-by-step instructions for installing buildbot 0.8.12 on Windows Server 2016 Datacenter Edition, with explanations of what's going on at each step. The target configuration is a buildbot worker that runs as an unprivileged user and communicates with the buildbot master over an SSL tunnel. All of the commands are written using PowerShell. It's recommended to run them via the 'PowerShell ISE' application, running as a user in the 'Administrators' group. The <a href="https://gist.github.com/peter-b/5448b8121c020e34a886a91f9d80bf20">full script</a> is available as a GitHub Gist.</p>
<p>Although this describes installing buildbot 0.8.12, there's no reason it shouldn't work for buildbot 0.9.x. If you try it, please let me know how you get on in the comments.</p>
<p><strong>Note:</strong> Don't run these commands unless you've checked them very carefully first. They're adapted from the scripts used for our buildbot deployment, and may not work as you expect. You should use them as the basis of your own installation script and test it thoroughly before using it in production.</p>
<h3>Support functions</h3>
<p>First, ensure that the script stops immediately if any error is thrown, and that "verbose" messages are displayed.</p>
<pre class="code">
$VerbosePreference = 'Continue'
$ErrorActionPreference = 'Stop'
</pre>
<p>By default, PowerShell doesn't convert non-zero exit codes from subprocesses into errors, so define a helper function to accomplish this. By default, <tt>CheckLastExitCode</tt> will throw an error on any non-zero exit code, but if there are other exit codes that should be considered successful, you can pass in an array of permitted exit codes, e.g. <tt>CheckLastExitCode(@(0,10))</tt>.</p>
<pre class="code">
function CheckLastExitCode {
    param ([int[]]$SuccessCodes = @(0))
    if ($SuccessCodes -notcontains $LASTEXITCODE) {
        throw "Command failed (exit code $LASTEXITCODE)"
    }
}
</pre>
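<p>For example, <tt>robocopy</tt> famously uses exit codes 0–7 to indicate varying degrees of success, so a (hypothetical) copy step might permit all of them:</p>
<pre class="code">
robocopy C:\src C:\dst /E
CheckLastExitCode (0..7)
</pre>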
<p>Some of the later steps need to download resource files, such as the buildbot master's SSL certificate. For this to work, you'll need to implement a <tt>Fetch-BuildbotResource</tt> function that obtains a named resource file and places it in a given output location. Fill in the blanks (possibly with some sort of <a href="https://msdn.microsoft.com/powershell/reference/5.1/microsoft.powershell.utility/Invoke-WebRequest"><tt>Invoke-WebRequest</tt></a>):</p>
<pre class="code">
function Fetch-BuildbotResource {
    param([string]$Path,
          [string]$OutFile)
    # Your code goes here
}
</pre>
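<p>For example, if your resource files were published on an internal web server (the URL here is hypothetical), a minimal <tt>Invoke-WebRequest</tt>-based implementation might look like:</p>
<pre class="code">
function Fetch-BuildbotResource {
    param([string]$Path,
          [string]$OutFile)
    # Hypothetical internal resource server; substitute your own
    $t_base = 'https://resources.example.org'
    Invoke-WebRequest -Uri "$t_base/$Path" `
        -OutFile $OutFile -UseBasicParsing
}
</pre>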
<p>It's also a good idea to activate Windows. The virtual machines provisioned by Azure may not have been activated; this command will do so automatically.</p>
<pre class="code">
cscript.exe C:\Windows\System32\slmgr.vbs /ato
</pre>
<p>Finally, define variables with the root path for the buildbot installation and the IP or DNS address of the buildbot master, and create the buildbot worker's root directory.</p>
<pre class="code">
$k_buildbot_root = 'C:\buildbot'
$k_buildbot_master = 'buildbot.example.org'
New-Item -Path $k_buildbot_root -ItemType Container -Force | Out-Null
</pre>
<h3>Installing programs with Chocolatey</h3>
<p><a href="https://chocolatey.org/">Chocolatey</a> is a package manager for Windows that can automatically install a variety of applications and services in much the same way as the Linux <tt>apt-get</tt>, <tt>dnf</tt> or <tt>yum</tt> programs. Here, you can use it to install Python (for running buildbot) and the <tt>stunnel</tt> SSL tunnel service.</p>
<p>Install Chocolatey by the time-honoured process of "downloading a random script from the Internet and running it as a superuser".</p>
<pre class="code">
$env:ChocolateyInstall = 'C:\ProgramData\chocolatey'
# Install Chocolatey, if not already present
if (!(Test-Path -LiteralPath $env:ChocolateyInstall -PathType Container)) {
    Invoke-WebRequest 'https://chocolatey.org/install.ps1' -UseBasicParsing | Invoke-Expression
}
</pre>
<p>Next, use Chocolatey to install <tt>stunnel</tt> and Python 2.7:</p>
<pre class="code">
Write-Verbose 'Installing Python and stunnel'
choco install --yes stunnel python2
CheckLastExitCode
</pre>
<h3>Installing Python modules and buildbot</h3>
<p>It's easiest to install buildbot and its dependencies using the <tt>pip</tt> Python package manager.</p>
<pre class="code">
Write-Verbose 'Installing Python modules'
$t_pip = 'C:\Python27\Scripts\pip.exe'
& $t_pip install pypiwin32 buildbot-slave==0.8.12
CheckLastExitCode
</pre>
<p>The <tt>pypiwin32</tt> package installs some DLLs that are required for buildbot to run as a service, but when installed with <tt>pip</tt>, these DLLs are not automatically registered in the Windows registry. This caused me at least a day of wondering why my buildbot service was failing to start with the super informative message:</p>
<blockquote class="twitter-tweet" data-lang="en"><p lang="en" dir="ltr">'Buildbot Worker (Buildbot)' cannot be started due to the following error: Cannot start service Buildbot on computer '.'.<br><br>(╯°□°)╯︵ ┻━┻)</p>— Dr Peter Brett (@PeterTBBrett) <a href="https://twitter.com/PeterTBBrett/status/831122700202569728">February 13, 2017</a></blockquote>
<p>Luckily, <tt>pypiwin32</tt> installs a script that will set everything up properly.</p>
<pre class="code">
Write-Verbose 'Registering pywin32 DLLs'
$t_python = 'C:\Python27\python.exe'
& $t_python C:\Python27\Scripts\pywin32_postinstall.py -install
CheckLastExitCode
</pre>
<h3>SSL tunnel service</h3>
<p>You'll need to configure <tt>stunnel</tt> to run on your buildbot master, and listen on port 9988. I recommend configuring the buildbot master's <tt>stunnel</tt> with a certificate, and then making sure workers always fully authenticate the certificate when connecting to it. This will prevent people from obtaining your workers' login credentials by impersonating the buildbot master machine.</p>
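<p>For reference, the matching master-side configuration might look something like the sketch below (the certificate and key paths are hypothetical, and <tt>connect</tt> assumes the master's worker port is the buildbot default of 9989 on the same machine):</p>
<pre class="code">
; /etc/stunnel/buildbot.conf on the buildbot master (sketch)
[buildbot]
client = no
accept = 9988
cert = /etc/stunnel/master.crt
key = /etc/stunnel/master.key
connect = 127.0.0.1:9989
</pre>
<p>With that in place on the master, install and configure the worker-side tunnel:</p>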
<pre class="code">
Write-Verbose 'Installing buildbot-stunnel service'
$t_stunnel = 'C:\Program Files (x86)\stunnel\bin\stunnel.exe'
$t_stunnel_conf = Join-Path $k_buildbot_root 'stunnel.conf'
$t_stunnel_crt = Join-Path $k_buildbot_root 'buildbot.crt'
# Fetch the client certificate that will be used to authenticate
# the buildbot master
Fetch-BuildbotResource `
    -Path 'buildbot/stunnel/master.crt' -OutFile $t_stunnel_crt
# Create the stunnel configuration file
Set-Content -Path $t_stunnel_conf -Value @"
[buildbot]
client = yes
accept = 127.0.0.1:9989
cafile = $t_stunnel_crt
verify = 3
connect = $k_buildbot_master:9988
"@
# Register the stunnel service, if not already present
if (!(Get-Service buildbot-stunnel -ErrorAction Ignore)) {
    New-Service -Name buildbot-stunnel `
        -BinaryPathName "$t_stunnel -service $t_stunnel_conf" `
        -DisplayName 'Buildbot Secure Tunnel' `
        -StartupType Automatic
}
</pre>
<h3>The buildbot worker instance</h3>
<p>Creating and configuring the worker instance, and setting up buildbot to run as a Windows service, are the most complicated parts of the installation process. Before dealing with the Windows service, instantiate a worker with the info it needs to connect to the buildbot master.</p>
<p>First, set up a bunch of values that will be needed later. The worker's name will just be the name of the server it's running on, and it will be configured to use a randomly-generated password.</p>
<pre class="code">
Write-Verbose 'Initialising buildbot worker'
# Needed for password generation
Add-Type -AssemblyName System.Web
$t_buildbot_worker_script = 'C:\Python27\Scripts\buildslave'
$t_worker_dir = Join-Path $k_buildbot_root worker
$t_worker_name = $env:COMPUTERNAME
$t_worker_password = `
    [System.Web.Security.Membership]::GeneratePassword(12,0)
$t_worker_admin = 'Example Organisation'
</pre>
<p>Run buildbot to actually instantiate the worker. We have to manually check the contents of the standard output from the setup process, because the exit status isn't a reliable indicator of success.</p>
<pre class="code">
$t_log = Join-Path $k_buildbot_root setup.log
Start-Process -Wait -NoNewWindow -FilePath $t_python `
    -ArgumentList @($t_buildbot_worker_script, 'create-slave',
                    $t_worker_dir, '127.0.0.1', $t_worker_name,
                    $t_worker_password) `
    -RedirectStandardOutput $t_log
# Check log file contents
$t_expected = "buildslave configured in $t_worker_dir"
if ((Get-Content $t_log)[-1] -ne $t_expected) {
    Get-Content $t_log | Write-Error
    throw 'Build worker setup failed'
}
</pre>
<p>It's helpful to provide some information about the host and who administers it.</p>
<pre class="code">
Set-Content -Path (Join-Path $t_worker_dir 'info\admin') `
    -Value $t_worker_admin
Set-Content -Path (Join-Path $t_worker_dir 'info\host') `
    -Value (Get-WmiObject -Class Win32_OperatingSystem).Caption
</pre>
<p>While testing our Windows-based buildbot workers, I was getting "slave lost" errors during many build steps. Getting the workers to send very frequent "keep alive" messages to the build master almost entirely prevented this. I used a 10 second period, but you might find that unnecessarily frequent.</p>
<pre class="code">
$t_config = Join-Path $t_worker_dir buildbot.tac
Get-Content $t_config | `
    ForEach {$_ -replace '^keepalive\s*=\s*.*$', 'keepalive = 10'} | `
    Set-Content "$t_config.new"
Remove-Item $t_config
Move-Item "$t_config.new" $t_config
</pre>
<h3>Configuring the buildbot service</h3>
<p>Now for the final part: getting buildbot to run as a Windows service. It's a bad idea to run the worker as a privileged user, so this will create a 'BuildBot' user with a randomly-generated password, configure the service to use that account, and make sure it has full access to the worker's working directory.</p>
<p>Some of the commands used in this section expect passwords to be handled in the form of "secure strings" and some expect them to be handled in the clear. There's a fair degree of shuttling between the two representations.</p>
<p>Once again, begin by setting up some variables to use during these steps.</p>
<pre class="code">
Write-Verbose 'Installing buildbot service'
$t_buildbot_service_script = 'C:\Python27\Scripts\buildbot_service.py'
$t_service_name = 'BuildBot'
$t_user_name = $t_service_name
$t_full_user_name = "$env:COMPUTERNAME\$t_service_name"
$t_user_password_clear = `
    [System.Web.Security.Membership]::GeneratePassword(12,0)
$t_user_password = `
    ConvertTo-SecureString $t_user_password_clear -AsPlainText -Force
</pre>
<p>Create the 'BuildBot' user:</p>
<pre class="code">
$t_user = New-LocalUser -AccountNeverExpires `
    -PasswordNeverExpires `
    -UserMayNotChangePassword `
    -Name $t_user_name `
    -Password $t_user_password
</pre>
<p>You need to create the buildbot service by running the installation script provided by buildbot. Although there's a <tt>New-Service</tt> command in PowerShell, the <tt>pywin32</tt> support for services written in Python expects a variety of registry keys to be set up correctly, and it won't work properly if they're not.</p>
<pre class="code">
& $t_python $t_buildbot_service_script `
    --username $t_full_user_name `
    --password $t_user_password_clear `
    --startup auto install
CheckLastExitCode
</pre>
<p>It's still necessary to tell the service where to find the worker directory. You can do this by creating a special registry key that the service checks on startup to discover its workers.</p>
<pre class="code">
$t_parameters_key = "HKLM:\SYSTEM\CurrentControlSet\Services\$t_service_name\Parameters"
New-Item -Path $t_parameters_key -Force
Set-ItemProperty -Path $t_parameters_key -Name "directories" `
    -Value $t_worker_dir
</pre>
<p>Although the service is configured to start as the 'BuildBot' user, that user doesn't yet have the permissions required to read and write in the worker directory.</p>
<pre class="code">
$t_acl = Get-Acl $t_worker_dir
$t_access_rule = New-Object `
    System.Security.AccessControl.FileSystemAccessRule `
    -ArgumentList @($t_full_user_name, 'FullControl', `
                    'ContainerInherit,ObjectInherit', 'None', 'Allow')
$t_acl.SetAccessRule($t_access_rule)
Set-Acl $t_worker_dir $t_acl
</pre>
<h3>Granting 'Log on as a service' rights</h3>
<p>Your work is nearly done! However, there's one task that I have not yet worked out how to automate, and which still requires manual intervention: <a href="https://technet.microsoft.com/en-gb/library/cc794944(v=ws.10).aspx">granting the 'BuildBot' user the right to log on as a service</a>. Without this right, the buildbot service will fail to start with a permissions error. (See the sketch after this list for one possible automation approach.)</p>
<ol>
<li>Open the 'Local Security Policy' tool</li>
<li>Choose 'Local Policies' -> 'User Rights Assignment' in the tree</li>
<li>Double-click on 'Log on as a service' in the details pane</li>
<li>Click 'Add User or Group', and add 'BuildBot' to the list of accounts</li>
</ol>
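<p>If you want to experiment with automating this step, one possible (untested) sketch uses <tt>secedit</tt> to export the local security policy, append the 'BuildBot' account to the <tt>SeServiceLogonRight</tt> assignment, and re-import it:</p>
<pre class="code">
# Untested sketch: grant 'Log on as a service' via secedit
$t_inf = Join-Path $env:TEMP 'logon-as-service.inf'
secedit /export /cfg $t_inf /areas USER_RIGHTS
(Get-Content $t_inf) | ForEach {
    $_ -replace '^(SeServiceLogonRight\s*=.*)$', "`$1,$t_full_user_name"
} | Set-Content -Path $t_inf -Encoding Unicode
secedit /configure /db "$env:TEMP\secedit.sdb" /cfg $t_inf /areas USER_RIGHTS
</pre>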
<h3>Time to launch</h3>
<p>Everything should now be correctly configured!</p>
<p>There's one final bit of work required: you need to add the worker's username and password to the buildbot master's list of authorised workers. If you need it, you can obtain the username and password for the worker using PowerShell:</p>
<pre class="code">
Get-Content C:\buildbot\worker\buildbot.tac | `
    Where {$_ -match '^(slavename|passwd)' }
</pre>
<p>You can use the <tt>Start-Service</tt> command to start the <tt>stunnel</tt> and buildbot services:</p>
<pre class="code">
Start-Service buildbot-stunnel
Start-Service buildbot
</pre>
<h3>Conclusions</h3>
<p>You can view the <a href="https://gist.github.com/peter-b/5448b8121c020e34a886a91f9d80bf20">full script</a> described in this blog post as a GitHub Gist.</p>
<p>On top of installing buildbot itself, you'll need to install the various toolchains that you require. If you're using Microsoft Visual Studio, the "build tools only" installers provided by Microsoft for MSVC 2010 and MSVC 2015 are really useful. Many other dependencies can be installed using Chocolatey.</p>
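<p>For example, a few common build dependencies can be added in one go (the package names here are illustrative; check the Chocolatey repository for the tools you actually need):</p>
<pre class="code">
choco install --yes git 7zip cmake
CheckLastExitCode
</pre>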
<p>Installing buildbot on Windows is currently a pain, and I hope that someone who knows more about Windows development than I do can help the buildbot team make it easier to get started.</p>
<h2>How to stop mspdbsrv from breaking your continuous integration system</h2>
<p><i>21 February 2017</i></p>
<p>Over the last month, I've been working on getting the LiveCode build cluster to do Windows builds using Visual Studio 2015. We've been using Visual Studio 2010 since I originally set up the build service in mid-2015. This upgrade was prompted by needing support for some C++ language features used by the latest version of <tt>libskia</tt>.</p>
<p>Once the new Windows Server <a href="https://buildbot.net">buildbot</a> workers had their tools installed and were connected to the build service, I noticed a couple of pretty weird things going on:</p>
<ul>
<li>after one initial build, the build workers were repeatedly failing to clean the build tree in preparation for the next build</li>
<li>builds were getting "stuck" after completing successfully, and were then being detected as timed out and forcibly killed</li>
</ul>
<h3>Blocked build tree cleanup</h3>
<p>The first problem was easy to track down. I guessed that the clean step was failing because some process still had an open file handle to one of the files or directories that the worker was trying to delete. I used the Windows 'Resource Monitor' application (<tt>resmon.exe</tt>), which can be launched from the 'Start' menu or from 'Task Manager', to find the offending process. The 'CPU' tab lets you search all open file handles on the system by filename, and I quickly discovered that <tt>mspdbsrv.exe</tt> was holding a file handle to one of the build directories.</p>
<h3>What is <tt>mspdbsrv</tt>?</h3>
<p><tt>mspdbsrv</tt> is a helper service used by the Visual Studio C and C++ compiler, <tt>cl.exe</tt>; it collects debugging information for code that's being compiled and writes out <tt>.pdb</tt> databases. CL automatically spawns <tt>mspdbsrv</tt> if debug info is being generated and it can't connect to an existing instance. When the build completes, CL doesn't clean up any <tt>mspdbsrv</tt> that it spawned; it just leaves it running. There's no way to prevent CL from doing this.</p>
<p>So, it looked like the abandoned <tt>mspdbsrv</tt> instance had its current working directory set to one of the directories that the build worker was trying to delete, and on Windows you can't delete a directory if there's a process running there. So much for the first problem.</p>
<h3>Build step timeouts</h3>
<p>The second issue was more subtle -- but it also appeared to be due to the lingering <tt>mspdbsrv</tt> process! I noticed that <tt>mspdbsrv</tt> was actually holding a file handle to one of the buildbot worker's internal log files. It appears that buildbot doesn't close file handles when starting build processes, and these handles were being inherited by <tt>mspdbsrv</tt>, which was holding them open. As a result, the buildbot worker (correctly) inferred that there were still unfinished build job processes running, and didn't report the build as completed.</p>
<h3>Mismatched MSVC versions</h3>
<p>When I thought through this a bit further, I realised there was another problem being caused by lingering <tt>mspdbsrv</tt> instances. Some of the builds being handled by the Windows build workers need to use MSVC 2015, and some still need to use MSVC 2010. Each type of build should use the corresponding version of <tt>mspdbsrv</tt>, but by default CL always connects to any available service process.</p>
<h3>Steps towards a fix</h3>
<p>So, what was the solution?</p>
<ol>
<li>Run <tt>mspdbsrv</tt> explicitly as part of the build setup, and keep a handle to the process so that it can be terminated once the build has finished.</li>
<li>Launch <tt>mspdbsrv</tt> with a current working directory outside the build tree.</li>
<li>Force CL to use a specific <tt>mspdbsrv</tt> instance rather than just picking any available one.</li>
</ol>
<p>LiveCode CI builds are now performed using a <a href="https://github.com/livecode/livecode/blob/develop/buildbot.py">Python helper script</a>. Here's a snippet that implements all of these requirements (note that it hardcodes the path to the MSVC 2010 <tt>mspdbsrv.exe</tt>):</p>
<pre class="code">
import os
import subprocess
import uuid
# Find the 32-bit program files directory
def get_program_files_x86():
    return os.environ.get('ProgramFiles(x86)',
                          os.environ.get('ProgramFiles',
                                         'C:\\Program Files\\'))
# mspdbsrv is the service used by Visual Studio to collect debug
# data during compilation. One instance is shared by all C++
# compiler instances and threads. It poses a unique challenge in
# several ways:
#
# - If not running when the build job starts, the build job will
# automatically spawn it as soon as it needs to emit debug symbols.
# There's no way to prevent this from happening.
#
# - The build job _doesn't_ automatically clean it up when it finishes
#
# - By default, mspdbsrv inherits its parent process' file handles,
# including (unfortunately) some log handles owned by Buildbot. This
# can prevent Buildbot from detecting that the compile job is finished
#
# - If a compile job starts and detects an instance of mspdbsrv already
# running, by default it will reuse it. So, if you have a compile
# job A running, and start a second job B, job B will use job A's
# instance of mspdbsrv. If you kill mspdbsrv when job A finishes,
# job B will die horribly. To make matters worse, the version of
# mspdbsrv should match the version of Visual Studio being used.
#
# This class works around these problems:
#
# - It sets the _MSPDBSRV_ENDPOINT_ to a value that's probably unique to
# the build, to prevent other builds on the same machine from sharing
# the same mspdbsrv endpoint
#
# - It launches mspdbsrv with _all_ file handles closed, so that it
# can't block the build from being detected as finished.
#
# - It explicitly kills mspdbsrv after the build job has finished.
#
# - It wraps all of this into a context manager, so mspdbsrv gets killed
# even if a Python exception causes a non-local exit.
class UniqueMspdbsrv(object):
    def __enter__(self):
        os.environ['_MSPDBSRV_ENDPOINT_'] = str(uuid.uuid4())
        mspdbsrv_exe = os.path.join(get_program_files_x86(),
            'Microsoft Visual Studio 10.0\\Common7\\IDE\\mspdbsrv.exe')
        args = [mspdbsrv_exe, '-start', '-shutdowntime', '-1']
        print(' '.join(args))
        self.proc = subprocess.Popen(args, cwd='\\', close_fds=True)
        return self

    def __exit__(self, type, value, traceback):
        self.proc.terminate()
        return False
</pre>
<p>You can then use this when implementing a build step:</p>
<pre class="code">
with UniqueMspdbsrv() as mspdbsrv:
    # Do your build steps here (e.g. msbuild invocation)
    pass
# mspdbsrv automatically cleaned up by context manager
</pre>
<p>It took me a couple of days to figure out what was going on and to find an adequate solution. A lot of very tedious trawling through obscure bits of the Internet was required to find all of the pieces; for example, Microsoft do not document the arguments to <tt>mspdbsrv</tt> or the environment variables that it understands anywhere on MSDN.</p>
<p>Hopefully, if you are running into problems with your Jenkins or buildbot workers interacting weirdly with Microsoft Visual Studio C or C++ builds, this will save you some time!</p>
<h2>When C uninitialised variables and misleading whitespace combine</h2>
<p><i>5 December 2016</i></p>
<p>Recently, LiveCode Builder has gained a namespace resolution operator <tt>.</tt>. It allows LCB modules to declare functions, constants, and variables which have the same name, by providing a way for modules that import them to distinguish between them.</p>
<p>During this work, we ran into a problem: the modified LCB compiler (<tt>lc-compile</tt>) worked correctly in "Debug" builds, but reliably crashed in "Release" builds. More peculiarly, we found that whether <tt>lc-compile</tt> crashed depended on which compiler was used: builds using certain versions of GCC crashed reliably, while builds using clang worked fine. We spent a lot of time staring at output from gdb and Valgrind, and came to the conclusion that maybe it was a compiler bug in GCC.</p>
<p>It turned out that we were wrong. When we switched to using clang to build full LiveCode releases, the mysterious crashes popped up again. Since this had now become a problem that was breaking the build, I decided to dig into it again. Originally, we'd not been able to duplicate the crash in very recent versions of GCC and clang, so my first step was to try and make <tt>lc-compile</tt> crash when compiled with GCC 6.</p>
<p>The problem seemed to revolve around some code in the following form:</p>
<pre class="code">
class T;
typedef T* TPtr;

// (1) function returning true iff r_value was set
bool maybeFetch(TPtr& r_value);

void f()
{
    TPtr t_value;
    if (maybeFetch(t_value))
    {
        // (2) dereference t_value
    }
}
</pre>
<p><tt>lc-compile</tt> was sometimes, but not reliably, crashing at point (2).</p>
<p>Initially, when I compiled with GCC 6, I was not able to induce a crash. However, I did receive a warning that <tt>t_value</tt> might be used without being initialised. I therefore modified the implementation of <tt>f()</tt> to initialise <tt>t_value</tt> at its declaration:</p>
<pre class="code">
void f()
{
    TPtr t_value = nullptr;
    // ...
}
</pre>
<p>With that modification, the crash became reliably reproducible in all build modes using all of the compilers I had available. This drew my suspicion to the <tt>maybeFetch()</tt> function (1). The function's API contract requires it to return <tt>true</tt> if (and only if) it sets its out parameter <tt>r_value</tt>, and return <tt>false</tt> otherwise.</p>
<p>So, I had a look at it, and it looked fine. What else could be going wrong?</p>
<p>Much of <tt>lc-compile</tt> is implemented using a domain-specific language called Gentle, which generates <tt>bison</tt> and <tt>flex</tt> grammars, which are in turn used to generate some megabytes of C code that's hard to read and harder to debug.</p>
<p>I disappeared into this code for quite a while, and couldn't find anything to suggest that the Gentle grammar was wrong, or that the generated code was the cause of the segfault. What I <em>did</em> find suggested that there were problems with the values being provided by the <tt>maybeFetch()</tt> function.</p>
<p>Because explicit initialisation made the crashes reliable and reproducible, I came to the conclusion that <tt>maybeFetch()</tt> was sometimes returning <tt>true</tt> <em>without</em> setting its out parameter. So, what was <tt>maybeFetch()</tt> doing?</p>
<p>A simplified form of <tt>maybeFetch()</tt> as I found it was:</p>
<pre class="code">
bool maybeFetch(TPtr& r_value)
{
    for (TPtr t_loop_var = /* loop form ... */)
    {
        if (condition(t_loop_var))
            r_value = t_loop_var;
            return true;
    }
    return false;
}
</pre>
<p>Needless to say, when I saw the problem it was a moment of slightly bemused hilarity. This function had been reviewed several times by various team members, and all of us had overlooked the missing block braces <tt>{ ... }</tt>, hidden by misleading indentation.</p>
<pre class="code">
        if (condition(t_loop_var))
        { // missing open brace
            r_value = t_loop_var;
            return true;
        } // missing close brace
</pre>
<p>Once these braces had been inserted, all of the problems went away.</p>
<p>What lessons could be taken away from this?</p>
<ol>
<li>The bug itself eluded review because of misleading indentation. GCC 6 provides a "misleading indentation" warning which would have immediately flagged up this problem if it had been enabled. We do not use GCC 6 for LiveCode builds; even if we did, we wouldn't be able to enable the "misleading indentation" warning to good effect, because the LiveCode C++ sources don't currently use a consistent indentation style. This problem could maybe be avoided if LiveCode builds enforced a specific indentation style (in which case the bug would have been obvious in review), or if we regularly did builds with GCC 6 and <tt>-Werror=misleading-indentation</tt>.</li>
<li>The effect of the bug was an API contract violation, where the relationship between the return value and the value of an out parameter wasn't satisfied. The problem could have been avoided if the API contract was expressed in a way that the compiler could check. C++17 adds <a href="http://en.cppreference.com/w/cpp/utility/optional"><tt>std::optional<T></tt></a>, which combines the idea of "is there a value or not" with returning the value itself. If the function took the form <tt>std::optional<TPtr> maybeFetch()</tt> then it would have been impossible for it to claim to return a value without actually returning one (see the sketch after this list).</li>
<li>Finally, the problem was obfuscated by failing to initialise stack variables. Although, if <tt>maybeFetch()</tt> had been working correctly, the pointer on the stack <em>would</em> have been initialised before use, in this case it wasn't. Diagnosing the problem would have been much easier if we routinely initialised stack variables to suitably uninformative values at the point of declaration, even if we think they <em>should</em> get initialised via a function's out parameter before use.</li>
</ol>
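<p>As an illustrative sketch (not the actual <tt>lc-compile</tt> code), the <tt>std::optional</tt> version of <tt>maybeFetch()</tt> from point 2 might look like this. The compiler now ties "a value exists" to "a value was returned", so the original contract violation becomes impossible to express:</p>
<pre class="code">
#include <optional>  // C++17

std::optional<TPtr> maybeFetch()
{
    for (TPtr t_loop_var = /* loop form ... */)
    {
        if (condition(t_loop_var))
            return t_loop_var;  // value found and returned together
    }
    return std::nullopt;        // explicitly "no value"
}
</pre>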
<p>This was a very easy mistake to make, and an easy issue to miss in a code review, but it was very costly to clean up. I hope that we'll be able to make some changes to our development processes and our coding style to try and avoid things like this happening in the future.</p>
<p><b>Update:</b> My colleague points out another contributing factor to making the error hard to spot: the <tt>condition(t_loop_var)</tt> was a composite condition spread across multiple lines, i.e.</p>
<pre class="code">
        if (conditionA(t_loop_var) ||
            (conditionB(t_loop_var) &&
             conditionC(t_loop_var)))
            r_value = t_loop_var;
            return true;
</pre>
<p>This layout arguably makes it even less obvious where the body of the <tt>if</tt> ends.</p>
<h2>Playing with Bus1</h2>
<p><i>27 October 2016</i></p>
<p>David Herrmann and Tom Gundersen have been working on new, performant Linux interprocess communication (IPC) proposals for a few years now. First came their proposed kdbus system, which would have provided a DBus-compatible IPC system, but this didn't actually get merged because of several design issues that couldn't be worked around (mostly security-related).</p>
<p>So, they went back to the drawing board, and now have come back with a new IPC system called <a href="http://www.bus1.org/">Bus1</a>, which was described in a <a href="https://lwn.net/Articles/697191/">LWN article</a> back in August. Yesterday, they posted <a href="https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1258805.html">draft patches</a> to the Linux kernel mailing list, and the kernel module and userspace libraries are <a href="https://github.com/bus1">available on GitHub</a> for your convenience.</p>
<p>I decided to find out what's involved in getting the experimental Bus1 code up and running on my system. I run Fedora Linux, but broadly similar steps can be used on other Linux distributions.</p>
<h3>Installing tools</h3>
<p>The first thing to do is to install some development tools and headers.</p>
<pre class="code">
sudo dnf install git kernel-devel
sudo dnf builddep kernel
</pre>
<p>I'm going to need <tt>git</tt> for getting the source code, and the <tt>kernel-devel</tt> development headers for compiling the Bus1 kernel module. The special <tt>dnf builddep</tt> command automatically fetches all of the packages needed for compiling a particular package — in this case, we're compiling a kernel module, so just grabbing the tools needed for compiling the kernel should include everything necessary.</p>
<h3>Building the kernel module</h3>
<p>I need to get the <a href="https://github.com/bus1/bus1">Bus1 kernel module's source code</a> using git:</p>
<pre class="code">
mkdir ~/git
cd ~/git
git clone https://github.com/bus1/bus1.git
cd bus1
</pre>
<p>With all of the tools I need already installed, I can very simply run</p>
<pre class="code">
make
</pre>
<p>to compile the Bus1 module.</p>
<p>Finally, the Bus1 <tt>Makefile</tt> provides an all-in-one solution for running the module's tests and loading it into the running kernel:</p>
<pre class="code">
make tt
</pre>
<p>After several seconds of testing and benchmarking, I get some messages like:</p>
<pre class="code">
[ 1555.889884] bus1: module verification failed: signature and/or required key missing - tainting kernel
[ 1555.891534] bus1: run selftests..
[ 1555.893530] bus1: loaded
</pre>
<p>Success! Now my Linux system has Bus1 loaded into its kernel! But what can be done with it? I need some userspace code that understands how to use Bus1 IPC.</p>
<h3>Building the userspace library</h3>
<p>The Bus1 authors have provided a basic <a href="https://github.com/bus1/libbus1">userspace library</a> for use when writing programs that use Bus1. How about building it and running its tests to check that Bus1 is actually usable?</p>
<p>Some additional tools are needed for compiling <tt>libbus1</tt>, because it uses GNU Autotools rather than the kernel build system:</p>
<pre class="code">
sudo dnf install autoconf automake
</pre>
<p>As before, I need to check out the source code:</p>
<pre class="code">
cd ~/git
git clone https://github.com/bus1/libbus1.git
</pre>
<p>I can then set up its build system and configure the build by running:</p>
<pre class="code">
./autogen.sh
./configure
</pre>
<p>But there's a problem! I need to install a couple of obscure dependencies: David Herrmann's <a href="https://github.com/c-util/c-sundry">c-sundry</a> and <a href="https://github.com/c-util/c-rbtree">c-rbtree</a> libraries.</p>
<p>This is accomplished by something along the lines of:</p>
<pre class="code">
cd ~/git
git clone https://github.com/c-util/c-sundry.git
git clone https://github.com/c-util/c-rbtree
# Install c-sundry
cd ~/git/c-sundry
./autogen.sh
./configure
make
sudo make install
# Install c-rbtree
cd ~/git/c-rbtree
./autogen.sh
./configure
make
sudo make install
</pre>
<p>So, with the dependency libraries installed, it's now possible to build <tt>libbus1</tt>. Note that the <tt>configure</tt> script won't pick up the newly-installed dependencies, because on Fedora <tt>pkg-config</tt> doesn't scan the <tt>/usr/local/lib/pkgconfig</tt> directory by default, so I have to give it a bit of help.</p>
<pre class="code">
cd ~/git/libbus1
./autogen.sh
PKG_CONFIG_PATH=/usr/local/lib/pkgconfig ./configure
make
</pre>
<p>Amusingly, this failed the first time <a href="https://github.com/bus1/libbus1/issues/4">due to a bug</a> for which <a href="https://github.com/c-util/c-sundry/pull/1">I submitted a patch</a>. However, with the patch applied to <tt>c-sundry</tt>, I've got a successful build of <tt>libbus1</tt>!</p>
<p>I also ended up having to add <tt>/usr/local/lib</tt> to <tt>/etc/ld.so.conf</tt> so that the <tt>c-rbtree</tt> library got detected properly when running the <tt>libbus1</tt> test suite.</p>
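<p>For reference, adding the directory and refreshing the dynamic linker cache looks something like this (run as root; roughly what I did, rather than my exact shell history):</p>
<pre class="code">
echo /usr/local/lib >> /etc/ld.so.conf
ldconfig
</pre>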
<p>Even after that, unfortunately the test suite failed. Clearly the Bus1 userspace libraries aren't as well-developed as the kernel module! Maybe someone could do something about that...?</p>
<h2>Japanese Shioyaki-style mackerel</h2>
<p><i>4 November 2015</i></p>
<p><i>This is a guest post written by <a href="https://twitter.com/KateRGrant">Kathryn Grant</a>, who has a knack for picking out exotic yet easy-to-cook recipes!</i></p>
<p>This is a quick version of 鯖の塩焼き (<i>saba no shioyaki</i>), or salt-grilled mackerel, served with cucumber pickle and toasted sesame seeds. This recipe serves 2 people.</p>
<h4>Ingredients</h4>
<p>For the cucumber pickle:</p>
<ul>
<li>½ cucumber, halved lengthways and sliced</li>
<li>1 tsp cooking salt</li>
<li>50 ml rice wine (or white wine) vinegar</li>
<li>3 tbsp dark soy sauce</li>
<li>1 tbsp toasted sesame seed oil</li>
<li>1 tsp sugar</li>
<li>¼–½ tsp chilli powder</li>
<li>3 spring onions</li>
</ul>
<p>For the grilled mackerel:</p>
<ul>
<li>3 tbsp soy sauce</li>
<li>1 tbsp rice wine (or white wine) vinegar</li>
<li>1 tsp toasted sesame oil</li>
<li>2 fresh mackerel fillets</li>
<li>Sea salt</li>
<li>Vegetable oil</li>
</ul>
<p>For the rice:</p>
<ul>
<li>120–180 g rice (depending on hunger levels)</li>
<li>1 tbsp toasted sesame oil</li>
<li>Sea salt</li>
</ul>
<p>To serve:</p>
<ul>
<li>2 tbsp black sesame seeds</li>
<li>Lemon wedges</li>
<li>Finely sliced daikon radish or other radish (optional)</li>
</ul>
<h4>Method</h4>
<ol>
<li>Chop the cucumber and place in a bowl. Sprinkle 1 tsp cooking salt over the cucumber and leave for 5 minutes. Meanwhile, mix the marinade ingredients together: vinegar, soy sauce, toasted sesame oil, sugar and chilli powder. Chop the spring onion. Once the 5 minutes is up, rinse the cucumber thoroughly with cold water to remove the salt, drain and place back into the bowl. Pour over the marinade, add in the spring onions, cover with clingfilm and set aside somewhere cool.</li>
<li>Mix the marinade for the mackerel: soy sauce, vinegar and toasted sesame oil. Pour into a shallow dish. Wash the fish and place, skin-side up, in the shallow dish. Leave to marinate for 10 minutes.</li>
<li>Pre-heat the grill to a high heat.</li>
<li>Toast the black sesame seeds in the bottom of a dry pan for around 2 minutes, taking care not to burn them. Remove from the heat and set aside.</li>
<li>Shred the radish, if using.</li>
<li>Boil a kettle of water. Heat 1 tbsp of toasted sesame oil in a saucepan. Wash the rice thoroughly, until the water runs clear, then add to the saucepan. Fry for 1 minute, stirring continuously to make sure the rice does not burn. Cover the rice with water, season with a pinch of salt and simmer for approximately 12 minutes (check packet instructions).</li>
<li>Whilst the rice is cooking, remove the mackerel from the marinade and pat dry with paper towels to remove excess moisture. Sprinkle the non-skin side with sea salt and let the fish rest for 5 minutes. </li>
<li>Prepare a tray for grilling: line a baking tray with foil and grease with about 1 tbsp of vegetable oil.</li>
<li>After the fish has rested, place onto the baking tray (skin-side down) and grill for 5 minutes until the fish is cooked. The skin should be crispy and surface lightly browned. </li>
<li>Serve the cucumber pickle and rice sprinkled with the toasted sesame seeds. The excess cucumber marinade makes an excellent sauce for the rice. Serve the fish with lemon (or lime) wedges and shredded radish. The lemon or lime wedges really bring out the flavour of the fish.</li>
</ol>
<h2>Beetroot risotto</h2>
<p><i>19 October 2015</i></p>
<p>One of my most popular dishes is beetroot risotto. It's both the recipe that I get asked for most often, and the recipe that people go out of their way to tell me that they enjoyed making. Here's the (quite simple!) recipe so that you can enjoy it too!</p>
<p>This recipe serves two, and is especially good with some slices of pan-roasted duck breast on top. Yum.</p>
<h4>Ingredients</h4>
<ul>
<li>1 beetroot</li>
<li>1 large carrot</li>
<li>1 small onion</li>
<li>Half a celery stick</li>
<li>1 garlic clove</li>
<li>1 tbsp olive oil</li>
<li>100 g risotto rice</li>
<li>30 g Parmesan cheese</li>
<li>Butter</li>
<li>Fresh parsley</li>
<li>Salt & pepper</li>
</ul>
<h4>Method</h4>
<p>First, peel the beetroot and carrot, and cut them into 1 cm cubes. Put them in a saucepan with a pinch of salt and enough water to cover them, bring them to the boil, and let them simmer for about 20 minutes.</p>
<p>While they're cooking, finely chop the onion, garlic and celery. Heat the olive oil in a large frying pan, and saute the chopped vegetables gently in the olive oil until they're soft and translucent. Also, grate the Parmesan, chop the parsley, and boil a full kettle.</p>
<p>Once the beetroot and carrot are cooked, strain off the liquid <b>into a jug</b> and set the vegetables to one side.</p>
<p>Turn up the heat in the frying pan, and gently fry the rice with the onion, garlic and celery for 1–2 minutes. Then add a little of the stock from cooking the beetroot and carrot (that you saved earlier in a jug), and stir the rice until almost all the liquid has been absorbed. Repeat until you run out of liquid. Add the root vegetables into the pan, and continue to gradually add hot water (from the kettle) while gently stirring until the rice is cooked.</p>
<p>Take the risotto off the heat, and stir in the Parmesan, the parsley, and a knob of butter. Let it rest for a minute, and serve in bowls with some freshly-ground black pepper on top!</p>
<h2>Pan-roast venison haunch with pumpkin risotto</h2>
<p><i>12 October 2015</i></p>
<p>The rather awesome K. and I have been going out for three years! We made a special dinner to celebrate.</p>
<p>This recipe, unsurprisingly, serves two. Best accompanied by a nice Pinot Noir!</p>
<h4>Ingredients</h4>
<p>For the venison:</p>
<ul>
<li>12 oz (350 g) venison haunch, in one piece</li>
<li>1 tbsp sunflower oil</li>
<li>30 g butter</li>
<li>25 ml gin</li>
<li>1 tsp plain flour</li>
<li>150 ml red wine</li>
<li>300 ml lamb stock</li>
<li>1 bay leaf</li>
<li>1 small sprig rosemary</li>
<li>5 juniper berries</li>
<li>Salt & pepper</li>
</ul>
<p>For the risotto:</p>
<ul>
<li>1 tbsp olive oil</li>
<li>1 onion</li>
<li>2 cloves garlic</li>
<li>1 celery stick</li>
<li>300 g pumpkin</li>
<li>Some kale (a generous handful)</li>
<li>100 g risotto rice</li>
<li>150 ml white wine</li>
<li>500 ml vegetable stock</li>
<li>30 g Parmesan cheese</li>
<li>Butter</li>
<li>Salt & pepper</li>
</ul>
<p>To serve:</p>
<ul>
<li>Parsley leaves</li>
<li>Parmesan shavings</li>
</ul>
<p><b>You will need</b> a digital kitchen thermometer.</p>
<h4>Method</h4>
<p>I'm listing the two methods separately, but you'll need to do them simultaneously. Make sure you have all the equipment and ingredients ready before you start!</p>
<p>For the venison:</p>
<ol>
<li>At least an hour in advance, remove the venison from the fridge, remove all packaging, and pat dry with a clean paper towel. Place it on a clean chopping board and leave to dry in the air.</li>
<li>Put a roasting tin in the oven and preheat to 120 °C fan. Heat the sunflower oil in a heavy-based frying pan over a high heat.</li>
<li>Season the venison with salt and pepper. Fry the venison for 1–2 minutes on each side until sealed and browned. Add the butter to the pan and baste continuously for 3 minutes, turning occasionally, then transfer it to the preheated roasting tin in the oven.</li>
<li>While the venison is in the oven, make sure to <b>check it periodically with the thermometer</b> — the aim is to reach 63 °C in the centre of the meat [1], but don't let it get any hotter than that, or it'll dry out! It'll need about 15–20 minutes in the oven.</li>
<li>Deglaze the frying pan with the gin, then add the flour and mix to a paste. Add the red wine and herbs, and simmer over a high heat until reduced by half.</li>
<li>Remove the rosemary (because otherwise it can overpower the other flavours), and add the lamb stock. Continue reducing until a sauce-like consistency is achieved. Sieve the gravy and set it aside (but keep it warm!).</li>
<li>Once the venison reaches the target temperature, remove it from the oven and cover it in foil to rest. Make sure to rest it for <em>at least</em> 5 minutes.</li>
</ol>
<p>For the risotto:</p>
<ol>
<li>Finely chop the onion and celery, and crush the garlic. Dice the pumpkin into 1 cm cubes, and finely shred the kale. Grate the Parmesan.</li>
<li>Heat the olive oil over a medium heat in a large, non-stick pan, and add the onion and celery. Saute the vegetables gently for about 5 minutes until they are soft but not browning.</li>
<li>Add the garlic and rice, and continue to cook for 2–3 minutes.</li>
<li>Turn the heat up to high, and add the white wine and some salt. Continue to cook, while stirring regularly and adding stock when the risotto starts to dry out.</li>
<li>When the rice is starting to soften, add the pumpkin and kale. Continue to cook the risotto, adding liquid when needed, until the rice is soft but not mushy.</li>
<li>Stir in the Parmesan and a generous knob of butter, and leave the risotto to rest for at least a minute.</li>
</ol>
<p>To serve, carve the venison into thick, even rounds. Arrange the risotto and venison on pre-heated plates. Spoon a little of the gravy onto the venison, and top the risotto with freshly-ground black pepper, parsley leaves and Parmesan shavings.</p>
<p>[1] Getting the centre of the venison to 63 °C is recommended if you want to make sure that any bacteria or other nasties are fully killed off, and will result in having venison that's "done" — with a centre that's slightly pink but still deliciously tender. If you'd like medium-rare, aim for 57 °C.</p>
<h2>Using C library functions from LiveCode Builder</h2>
<p><i>29 September 2015</i></p>
<p><i>This blog post is part of <a href="http://blog.peter-b.co.uk/search/label/LiveCode">an ongoing series</a> about writing <a href="https://livecode.com/">LiveCode</a> Builder applications without the LiveCode engine.</i></p>
<p>Currently, the LiveCode Builder (LCB) standard library is fairly minimal. This means that there are some types of task for which you'll want to go beyond the standard library.</p>
<p>In a previous post, I described <a href="http://blog.peter-b.co.uk/2015/09/foundation-library-livecode-builder.html">how to use LiveCode's foundation library</a>. This lets you access plenty of built-in LiveCode functionality that isn't directly exposed to LCB code yet.</p>
<h3>Someone else's problem</h3>
<p>Often someone's already wrapped the functions that you need in another program, especially on Linux. You can run that program as a subprocess to access it. In LiveCode Script, you could use the <tt>shell</tt> function to run an external program. Unfortunately, the LCB standard library doesn't have an equivalent feature yet!</p>
<p>On the other hand, the standard C library's <a href="http://linux.die.net/man/3/system"><b>system(3)</b></a> function can be used to run a shell command. Its prototype is:</p>
<pre class="code">
int system(const char *command);
</pre>
<p>In this post, I'll describe how LCB's foreign function interface lets you call it.</p>
<h3>Declaring a foreign handler</h3>
<p>As last time, you can use the <tt>foreign handler</tt> syntax to declare the C library function. The <tt>com.livecode.foreign</tt> module provides some important C types.</p>
<pre class="code">
use com.livecode.foreign
foreign handler _system(in pCommand as ZStringNative) \
    returns CInt binds to "system"
</pre>
<p>Some things to bear in mind here:</p>
<ul>
<li>I've named the foreign handler <tt>_system</tt> because the all-lowercase identifier <tt>system</tt> is reserved for syntax tokens</li>
<li>The <tt>ZStringNative</tt> type automatically converts an LCB string into a null-terminated string in whatever encoding LiveCode thinks is the system's "native" encoding.</li>
<li>Because the C library is always linked into the LiveCode program when it's started, you don't need to specify a library name in the <tt>binds to</tt> clause; you can just use the name of the <b>system(3)</b> function.</li>
</ul>
<h3>Understanding the results</h3>
<p>So, now you've declared the foreign handler, that's it! You can now just <tt>_system("rm -rf /opt/runrev")</tt> (or some other helpful operation). Right?</p>
<p>Well, not quite. If you want to know whether the shell command succeeded, you'll need to interpret the return value of the <tt>_system</tt> handler, and unfortunately, this isn't just the exit status of the command. From the <b>system(3)</b> man page:</p>
<blockquote>
The value returned is -1 on error (e.g., <a href="http://linux.die.net/man/2/fork"><b>fork(2)</b></a> failed), and the return status of the command otherwise. This latter return status is in the format specified in <a href="http://linux.die.net/man/2/wait"><b>wait(2)</b></a>. Thus, the exit code of the command will be <tt>WEXITSTATUS(status)</tt>. In case <tt>/bin/sh</tt> could not be executed, the exit status will be that of a command that does <tt>exit(127)</tt>.
</blockquote>
<p>So if the <tt>_system</tt> handler returns -1, then an error occurred. Otherwise, it's necessary to do something equivalent to the <tt>WIFEXITED</tt> C macro to check if the command ran normally. If it didn't, then some sort of abnormal condition occurred in the command (e.g. it was killed). Finally, the actual exit status is extracted by doing something equivalent to the <tt>WEXITSTATUS</tt> C macro.</p>
<p>On Linux, these two macros are defined as follows:</p>
<pre class="code">
#define WIFEXITED(status) __WIFEXITED (__WAIT_INT (status))
#define WEXITSTATUS(status) __WEXITSTATUS (__WAIT_INT (status))
#define __WIFEXITED(status) (__WTERMSIG(status) == 0)
#define __WEXITSTATUS(status) (((status) & 0xff00) >> 8)
#define __WTERMSIG(status) ((status) & 0x7f)
#define __WAIT_INT(status) (status)
</pre>
<p>Or, more succinctly:</p>
<pre class="code">
#define WIFEXITED(status) (((status) & 0x7f) == 0)
#define WEXITSTATUS(status) (((status) & 0xff00) >> 8)
</pre>
<p>This is enough to be able to fully define a function that runs a shell command and returns its exit status.</p>
<pre class="code">
module org.example.system

use com.livecode.foreign

private foreign handler _system(in pCommand as ZStringNative) \
        returns CInt binds to "system"

/*
Run the shell command <pCommand> and wait for it to finish.
Returns the exit status if the command completed, and nothing
if an error occurred or the command exited abnormally.
*/
public handler System(in pCommand as String) \
        returns optional Number
    variable tStatus as Number
    put _system(pCommand) into tStatus

    -- Check for error
    if tStatus is -1 then
        return nothing
    end if

    -- Check for abnormal exit
    if (127 bitwise and tStatus) is not 0 then
        return nothing
    end if

    -- Return exit status
    return 255 bitwise and (tStatus shifted right by 8 bitwise)
end handler

end module
</pre>
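<p>To round things off, here's a hypothetical module that uses the wrapper (the module name and command are just for illustration):</p>
<pre class="code">
module org.example.systemdemo

use org.example.system

public handler RunDemo()
    variable tStatus as optional Number
    put System("echo hello") into tStatus
    if tStatus is nothing then
        -- error, or the command exited abnormally
    else if tStatus is 0 then
        -- the command succeeded
    end if
end handler

end module
</pre>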
<h3>Tip of the iceberg</h3>
<p>This post has hopefully demonstrated the potential of LiveCode Builder's FFI. Even if you use only the C standard library's functions, you gain access to almost everything that the operating system is capable of!</p>
<p>Using a C function from LCB involves reading the manual pages to find out how the function should be used, and how best to map its arguments and return values onto LCB types; often, reading C library header files to understand how particular values should be encoded or decoded; and finally, binding the library function and providing a wrapper that makes it comfortable to use from LCB programs.</p>
<p>LiveCode Builder can do a lot more than just making widgets and — as I hope I've demonstrated — can do useful things without the rest of the LiveCode engine. Download <a href="https://downloads.livecode.com/">LiveCode 8</a> and try some things out!</p>
<h2>Roasted vegetable and chickpea tagine</h2>
<p><i>23 September 2015</i></p>
<p>It's been a while since I last posted a recipe here! Recently I've been having quite a lot of success with this Moroccan-inspired vegetarian recipe.</p>
<p>This recipe makes 6 portions.</p>
<h4>Ingredients</h4>
<p>For the roasted vegetables:</p>
<ul>
<li>350 g new potatoes, halved</li>
<li>1 fennel bulb, trimmed & cut into batons</li>
<li>1 medium carrot, cut into chunks</li>
<li>1 large red pepper, cut into chunks</li>
<li>1 large red onion, cut into chunks</li>
<li>3 tbsp extra-virgin olive oil</li>
<li>1 tsp cumin seeds</li>
<li>1 tsp fennel seeds</li>
<li>1 tsp coriander seeds, crushed</li>
</ul>
<p>For the sauce:</p>
<ul>
<li>4 garlic cloves, chopped</li>
<li>400 g canned chopped tomatoes</li>
<li>400 g canned chickpeas, drained and rinsed</li>
<li>250 ml red wine</li>
<li>1 pickled lemon, finely chopped</li>
<li>0.5 tbsp harissa paste</li>
<li>1 tsp ras el hanout</li>
<li>1 cinnamon stick</li>
<li>40 g whole almonds</li>
<li>10 dried apricots, halved</li>
</ul>
<p>To serve:</p>
<ul>
<li>Greek-style yoghurt</li>
<li>2 tbsp coriander, finely chopped</li>
</ul>
<h4>Method</h4>
<p>Preheat the oven to 200 °C fan. Put all the ingredients for the roasted vegetables into a large, heavy roasting tin, season to taste, and toss together to coat the vegetables in oil and spices. Roast for 30 minutes until the potatoes are cooked through and the vegetables generally have a nice roasted tinge.</p>
<p>While the vegetables are roasting, heat a large pan over a medium heat. Fry the garlic for 20–30 seconds until fragrant. Add the remaining ingredients, bring to the boil, and simmer while the vegetables roast.</p>
<p>When the vegetables are roasted, add them to the sauce and stir. Return the sauce to the simmer for another 15–20 minutes.</p>
<p>Serve in bowls, topped with a dollop of yoghurt and some chopped coriander. Couscous makes a good accompaniment to this dish if you want to make it go further.</p>
<h2>Compiling multi-module LiveCode Builder programs</h2>
<p><i>14 September 2015</i></p>
<p><i>This blog post is part of <a href="http://blog.peter-b.co.uk/search/label/LiveCode">an ongoing series</a> about writing <a href="https://livecode.com/">LiveCode</a> Builder applications without the LiveCode engine.</i></p>
<h3>Multi-module programs</h3>
<p>When writing a large program, it's often useful to break it down into more than one module. For example, you might want to make a module that's dedicated to loading and saving the program's data, which has quite a lot of internal complexity but exposes a very simple API with <tt>Load()</tt> and <tt>Save()</tt> handlers. This is handy for making sure that it's easy to find the source file where each piece of functionality is located.</p>
<p>However, it can become tricky to compile the program. Each module may depend on any number of other modules, and you have to compile them in the correct order or the compilation result may be incorrect. Also, if one module changes, you have to recompile all of the modules that depend on it. If you tried to do this all by hand, it would be nigh-on impossible to correctly compile your program once you got above about 10 source files.</p>
<p>Fortunately, there are two really useful tools that can make it all rather easy. GNU Make (the <b>make</b> command) can perform all the required build steps in the correct order (and even in parallel!). And to help you avoid writing Makefiles by hand, <b>lc-compile</b> has a useful <tt>--deps</tt> mode.</p>
<p>Most of the remainder of this blog post will assume some familiarity with <b>make</b> and common Unix command-line tools.</p>
<h3>The --deps option for lc-compile</h3>
<p><b>make</b> lets you express dependencies between files. However, you <em>already</em> express the dependencies between LCB source files when you write a <tt>use</tt> declaration. For example:</p>
<pre class="code">
use com.livecode.foreign
</pre>
<p>says that your module depends on the <tt>.lci</tt> (LiveCode Interface) file for the <tt>com.livecode.foreign</tt> module.</p>
<p>So, the LCB compiler (a) already knows all the dependencies between the source files of your project and (b) already knows how to find the files. To take advantage of this and to <em>massively</em> simplify the process of creating a Makefile for a LCB project, <b>lc-compile</b> provides a <tt>--deps</tt> mode. In <tt>--deps</tt> mode, <b>lc-compile</b> doesn't do any of the normal compilation steps; instead, it outputs a set of Make rules on standard output.</p>
<p>Consider the following trivial two-file program.</p>
<pre class="code">
-- org.example.numargs.lcb
module org.example.numargs

public handler NumArgs()
    return the number of elements in the command arguments
end handler

end module
</pre>
<pre class="code">
-- org.example.countargs.lcb
module org.example.countargs
use org.example.numargs
public handler Main()
quit with status NumArgs()
end handler
end module
</pre>
<p>To generate the dependency rules, you run <b>lc-compile</b> with <em>almost</em> a normal command line — but you specify <tt>--deps make</tt> instead of an <tt>--output</tt> argument, and you list all of your source files instead of just one of them. See also my previous blog post about <a href="http://blog.peter-b.co.uk/2015/08/livecode-builder-without-livecode-bit.html">compiling and running pure LCB programs</a>. For the "countargs" example program you could run:</p>
<pre class="code">
$TOOLCHAIN/lc-compile --modulepath . --modulepath $TOOLCHAIN/modules/lci --deps make org.example.numargs.lcb org.example.countargs.lcb
</pre>
<p>This would print the following rules:</p>
<pre class="code">
org.example.countargs.lci: org.example.numargs.lci org.example.countargs.lcb
org.example.numargs.lci: org.example.numargs.lcb
</pre>
<h3>Integrating with make</h3>
<p>You can integrate this info into a Makefile quite easily. There are two pieces that you need: 1) tell <b>make</b> to load the extra rules, and 2) tell <b>make</b> how to generate them. In particular, it's important to regenerate the rules whenever the Makefile itself is modified (e.g. to add an additional source file).</p>
<pre class="code">
# List of source code files
SOURCES = org.example.countargs.lcb org.example.numargs.lcb
# Include all the generated dependency rules
include deps.mk
# Rules for regenerating dependency rules whenever
# the source code changes
deps.mk: $(SOURCES) Makefile
	$(TOOLCHAIN)/lc-compile --modulepath . --modulepath $(TOOLCHAIN)/modules/lci --deps make -- $(SOURCES) > $@
</pre>
<h3>A complete Makefile</h3>
<p>Putting this all together, I've created a complete Makefile for the example multi-file project. It has the usual <tt>make compile</tt> and <tt>make clean</tt> targets, and places all of the built artefacts in a subdirectory called <tt>_build</tt>.</p>
<pre class="code">
################################################################
# Parameters
# Tools etc.
LC_SRC_DIR ?= ../livecode
LC_BUILD_DIR ?= $(LC_SRC_DIR)/build-linux-x86_64/livecode/out/Debug
LC_LCI_DIR = $(LC_BUILD_DIR)/modules/lci
LC_COMPILE ?= $(LC_BUILD_DIR)/lc-compile
LC_RUN ?= $(LC_BUILD_DIR)/lc-run
BUILDDIR = _build
LC_COMPILE_FLAGS += --modulepath $(BUILDDIR) --modulepath $(LC_LCI_DIR)
# List of source code files.
SOURCES = org.example.countargs.lcb org.example.numargs.lcb
# List of compiled module filenames.
MODULES = $(patsubst %.lcb,$(BUILDDIR)/%.lcm,$(SOURCES))
################################################################
# Top-level targets
all: compile
compile: $(MODULES)
clean:
	-rm -rf $(BUILDDIR)
.DEFAULT: all
.PHONY: all compile clean
################################################################
# Build dependencies rules
include $(BUILDDIR)/deps.mk
$(BUILDDIR):
	mkdir -p $(BUILDDIR)
$(BUILDDIR)/deps.mk: $(SOURCES) Makefile | $(BUILDDIR)
	$(LC_COMPILE) $(LC_COMPILE_FLAGS) --deps make -- $(SOURCES) > $@
################################################################
# Build rules
$(BUILDDIR)/%.lcm $(BUILDDIR)/%.lci: %.lcb | $(BUILDDIR)
	$(LC_COMPILE) $(LC_COMPILE_FLAGS) --output $@ -- $<
</pre>
<p> You should be able to use this directly in your own projects. All you need to do is to modify the list of source files in the <tt>SOURCES</tt> variable!</p>
<p>Note that you need to name each source file exactly the same as the corresponding interface file in order for this Makefile to work correctly. I'll mostly leave adapting to the case where the source file and interface file are named differently as an exercise to the reader, but one possible starting point is sketched below.</p>
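<p>For instance, if the module <tt>org.example.countargs</tt> lived in a file called <tt>countargs.lcb</tt>, you could add an explicit rule for it alongside the pattern rule. This is an untested sketch, and the file and module names are just for illustration:</p>
<pre class="code">
# Untested sketch: explicit rule for a module whose source file
# name doesn't match the module name
$(BUILDDIR)/org.example.countargs.lcm $(BUILDDIR)/org.example.countargs.lci: countargs.lcb | $(BUILDDIR)
	$(LC_COMPILE) $(LC_COMPILE_FLAGS) --output $(BUILDDIR)/org.example.countargs.lcm -- $<
</pre>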
<p>I hope you find this useful as a basis for writing new LiveCode Builder projects! Let me know how you get on.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com2Edinburgh, Edinburgh, UK55.953252 -3.188266999999996255.8109675 -3.5109904999999961 56.0955365 -2.8655434999999962tag:blogger.com,1999:blog-5144379.post-42642748275605106502015-09-06T17:38:00.000+01:002015-09-06T17:38:00.235+01:00Accessing the Foundation library with LiveCode Builder<p><i>This blog post is part of <a href="http://blog.peter-b.co.uk/search/label/LiveCode">an ongoing series</a> about writing <a href="https://livecode.com/">LiveCode</a> Builder applications without the LiveCode engine.</i></p>
<h3>The LiveCode Foundation library</h3>
<p>LiveCode includes a "foundation" library (called, unsurprisingly, <b>libfoundation</b>) which provides a lot of useful functions that work on all the platforms that LiveCode supports. This is used to make sure that LiveCode works in the same way no matter which operating system or processor you're using. libfoundation is compiled into both the LiveCode engine and LiveCode Builder's <b>lc-run</b> tool, so it's always available.</p>
<p>libfoundation is written in C and C++. The functions available in the library are declared in the <a href="https://github.com/runrev/livecode/blob/develop/libfoundation/include/foundation.h"><tt>foundation.h</tt></a> header file.</p>
<p>Among other capabilities, libfoundation handles encoding and decoding text. This provides an opportunity to fix one of the problems with the "hello world" program I described in <a href="http://blog.peter-b.co.uk/2015/08/livecode-builder-without-livecode-bit.html">a previous post</a>.</p>
<h3>Foreign function access to libfoundation</h3>
<p>The "hello world" program read in a file and wrote it out to the standard output stream. Unlike "hello world" programs seen elsewhere, it *didn't* write out a string, e.g.:</p>
<pre class="code">
write "Hello World!" to the output stream
</pre>
<p>This doesn't work because <tt>write</tt> needs to receive <b>Data</b>, and converting a <b>String</b> to <b>Data</b> requires encoding (using a suitable string encoding). And unfortunately, the LiveCode Builder library doesn't supply any text encoding/decoding syntax, although <a href="https://github.com/runrev/livecode/pull/1754">I'm working on it</a>.</p>
<p>However, and fortunately for this blog post, libfoundation supplies a suitable function, <a href="https://github.com/runrev/livecode/blob/develop/libfoundation/include/foundation.h#L1927">MCStringEncode</a>. Its C++ declaration looks like:</p>
<pre class="code">bool MCStringEncode(MCStringRef string, MCStringEncoding encoding, bool is_external_rep, MCDataRef& r_data);</pre>
<p>You can use it in a LiveCode Builder program by declaring it as a <b>foreign handler</b>. The <tt>com.livecode.foreign</tt> module provides some helpful declarations for C and C++ types.</p>
<pre class="code">
use com.livecode.foreign
foreign handler MCStringEncode(in Source as String, \
in Encoding as CInt, in IsExternalRep as CBool, \
out Encoded as Data) returns CBool binds to "<builtin>"
</pre>
<p><b>CInt</b> and <b>CBool</b> are C & C++'s <tt>int</tt> and <tt>bool</tt> types, respectively.</p>
<h3>Encoding a string with UTF-8</h3>
<p>Next, you can write a LiveCode Builder handler that encodes a string using UTF-8 (an 8-bit Unicode encoding). Almost every operating system will Do The Right Thing if you write UTF-8 encoded text to standard output; the only ones that might complain are some versions of Windows and some weirdly-configured Linux systems.</p>
<pre class="code">
handler EncodeUTF8(in pString as String) returns Data
variable tEncoded as Data
MCStringEncode(pString, 4 /* UTF-8 */, false, tEncoded)
return tEncoded
end handler
</pre>
<p>The "4" in there is a magic number that comes from libfoundation's <a href="https://github.com/runrev/livecode/blob/develop/libfoundation/include/foundation.h#L1815"><tt>kMCStringEncodingUTF8</tt></a> constant. Also, you should always pass <tt>false</tt> to the <tt>IsExternalRep</tt> argument (for historical reasons).</p>
<h3>A better "hello world" program</h3>
<p>Putting this all together, you can now write an improved "hello world" program that doesn't get its text from an external file.</p>
<pre class="code">
module org.example.helloworld2
use com.livecode.foreign
foreign handler MCStringEncode(in Source as String, \
in Encoding as CInt, in IsExternalRep as CBool, \
out Encoded as Data) returns CBool binds to "<builtin>"
handler EncodeUTF8(in pString as String) returns Data
variable tEncoded as Data
MCStringEncode(pString, 4 /* UTF-8 */, false, tEncoded)
return tEncoded
end handler
public handler Main()
write EncodeUTF8("Hello World!\n") to the output stream
end handler
end module
</pre>
<p>If you compile and run this program, you'll now get the same "Hello World!" message, but this time it's taking some text, turning it into <b>Data</b> by encoding it, and writing it out, rather than just regurgitating some previously-encoded data.</p>
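<p>Concretely, the compile-and-run steps are the same as in my earlier post. Assuming you've set <tt>$TOOLCHAIN</tt> to your LiveCode toolchain directory and saved the module as <tt>helloworld2.lcb</tt> (the filename here is just an example), something like this should work:</p>
<pre class="code">
$ "$TOOLCHAIN/lc-compile" --modulepath . --modulepath "$TOOLCHAIN/modules/lci" --output helloworld2.lcm helloworld2.lcb
$ "$TOOLCHAIN/lc-run" helloworld2.lcm
Hello World!
</pre>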
<h3>Other neat stuff</h3>
<p>There's other cool (and, often, terribly unsafe) stuff you can do with direct access to libfoundation functions, like allocate <b>Pointer</b>s to new memory buffers and directly manipulate LiveCode types & values. However, most of libfoundation's capabilities are already available using normal LiveCode Builder syntax.</p>
<p>The real power of <tt>foreign handler</tt> declarations becomes apparent when accessing functions that <i>aren't</i> in libfoundation — and this may be the subject of a future blog post!</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com1Edinburgh, Edinburgh, UK55.953252 -3.188266999999996255.8109675 -3.5109904999999961 56.0955365 -2.8655434999999962tag:blogger.com,1999:blog-5144379.post-7082815729958947252015-08-30T12:09:00.000+01:002015-08-31T21:47:38.845+01:00LiveCode Builder without the LiveCode bit<p><i>Since my last post almost two years ago, I've moved to Edinburgh. I now work for <a href="https://livecode.com/">LiveCode</a> as an open source software engineer.</i></p>
<h3>Introducing LiveCode Builder</h3>
<p>LiveCode 8, the upcoming release of the LiveCode HyperCard-like application development environment, introduces a new xTalk-like language for writing LiveCode extensions. It's called <b>LiveCode Builder</b> (or LCB). It shares much of the same syntax as the original LiveCode scripting language, but it's a compiled, strongly-typed language.</p>
<p>Most of the public discussion about LiveCode Builder has revolved around using it to extend LiveCode — either by creating new widgets to display in the user interface, or by writing libraries that add new capabilities to the scripting language. However, one topic that <em>hasn't</em> been discussed much is the fact that you can write complete applications using only LCB, and compile and run them without using the main LiveCode engine at all.</p>
<h3>LiveCode Builder without the engine</h3>
<p>This is actually pretty useful when writing simple command-line tools or services that don't need a user interface and for which the main LiveCode engine provides little value (for example, if you need your tool to start up really quickly). There are a couple of good examples that I've written during the last few months.</p>
<p>The LCB standard library's test suite uses <a href="https://github.com/runrev/livecode/blob/develop/tests/lcb/_testrunner.lcb">a test runner</a> written in LCB. This is quite a useful "smoke test" for the compiler, virtual machine, and standard library; if any of them break, the test suite won't run at all!</p>
<p>More recently, I've written a bot that connects our GitHub repositories to our BuildBot continuous integration system. Every few minutes, it checks the status of all the outstanding pull requests, and either submits new build jobs or reports on completed ones. This is also written entirely in LCB. One of the main advantages of using LCB for this was that LCB has a proper <tt>List</tt> type that can contain arrays as elements.</p>
<h3>"Hello World" in LCB</h3>
<p>A pure LCB program looks like this:</p>
<pre class="code">
module org.example.helloworld
public handler Main()
write the contents of file "hello.txt" to the output stream
end handler
end module
</pre>
<p>It has a top-level module that contains a public handler called <tt>Main</tt>. Note that unlike in C or C++, the <tt>Main</tt> handler doesn't take any arguments (you can access the command-line arguments using <tt>the command arguments</tt>).</p>
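<p>For example, here's a tiny variation that uses its exit status to report how many command-line arguments it was given:</p>
<pre class="code">
module org.example.exitargs
public handler Main()
-- exit with the number of command-line arguments as the status
quit with status the number of elements in the command arguments
end handler
end module
</pre>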
<p>Next, you need to compile your application using the <a href="https://github.com/runrev/livecode/blob/develop/toolchain/lc-compile.1.md">lc-compile</a> tool. To do this, you need to locate the directory in the LiveCode installation that contains the <tt>.lci</tt> files; these are LiveCode's equivalent of C or C++'s header files. For example, on my system, assuming I've saved the module to a file called <tt>hello.lcb</tt>, I could compile it like this:</p>
<pre class="code">
$ export TOOLCHAIN='/opt/runrev/livecodecommunity-8.0.0-dp-3 (x86_64)/Toolchain/'
$ "$TOOLCHAIN/lc-compile" --modulepath . --modulepath "$TOOLCHAIN/modules/lci" --output hello.lcm hello.lcb
</pre>
<p>These commands generate two files: <tt>hello.lcm</tt>, containing LCB bytecode, and <tt>org.example.helloworld.lci</tt>, containing the interface.</p>
<p>Finally, you can run the program using <a href="https://github.com/runrev/livecode/blob/develop/toolchain/lc-run.1.md">lc-run</a>. This is a really minimal tool that provides only the LCB virtual machine and standard library.</p>
<pre class="code">
$ echo "Hello world!" > hello.txt
$ "$TOOLCHAIN/lc-run" hello.lcm
Hello world!
</pre>
<h3>Finding out more</h3>
<p>For more information on the standard library syntax available in LCB, visit the "LiveCode Builder" section of the dictionary in the LiveCode IDE. Note that the "widget", "engine" and "canvas" syntax isn't currently available to pure LCB programs. You should also check out the "<a href="https://github.com/runrev/livecode-ide/blob/develop/Documentation/guides/Extending%20LiveCode.md">Extending LiveCode</a>" guide.</p>
Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com7Edinburgh, Edinburgh, UK55.953252 -3.188266999999996255.8109675 -3.5109904999999961 56.0955365 -2.8655434999999962tag:blogger.com,1999:blog-5144379.post-20448573468957629812013-12-10T21:44:00.003+00:002013-12-10T21:44:34.619+00:00Chilli and lime dark chocolate tarts<p>In the second round of the baking competition at work, I baked another invention of mine: sweet pastry tarts, filled with a dark chocolate ganache flavoured with chilli and lime, and decorated with candied chillies.</p>
<p>They didn't do very well with the judges — they thought there was too much chocolate filling and/or it was too rich, and they found the candied chillies too spicy. On the other hand, the whole batch got eaten, so it's not all bad news.</p>
<p>This time-consuming and labour-intensive recipe makes 8 tarts.</p>
<h4>Ingredients</h4>
<p>For the candied chillies:</p>
<ul>
<li>1/2 cup water</li>
<li>1/2 cup sugar</li>
<li>1 lime</li>
<li>2 mild chillies</li>
</ul>
<p>For the pastry cases:</p>
<ul>
<li>250 g plain flour</li>
<li>35 g icing sugar</li>
<li>140 g cold unsalted butter</li>
<li>2 egg yolks</li>
<li>1.5 tbsp cold water</li>
</ul>
<p>For the chilli and lime dark chocolate ganache filling:</p>
<ul>
<li>100 ml double cream</li>
<li>25 g caster sugar</li>
<li>100 g dark chocolate</li>
<li>12 g butter</li>
<li>2 limes</li>
<li>2 bird's eye chillies</li>
</ul>
<h4>Candied chillies</h4>
<p>Make the candied chillies first — they keep for ages, so you can make them a good while in advance.</p>
<p>Cut the chillies into thin, circular slices, and remove the seeds (tweezers are useful). Take the peel of about a quarter of a lime, and slice it into strips as thinly as possible.</p>
<p>In a heavy-bottomed saucepan, heat the water and sugar to make a syrup. When it gets to the boil, carefully add the lime peel and chilli slices and simmer for 20 mins.</p>
<p>Strain the sugar syrup to remove the chilli and lime — save the syrup for later — and lay the pieces out on a silicone baking sheet. Bake in the oven for an hour at about 90 °C, until they are dry to the touch.</p>
<h4>Sweet pastry cases</h4>
<p>Put the flour, icing sugar and butter in a food processor and pulse a few times until the mixture becomes about the consistency of breadcrumbs. Add the yolks and cold water and pulse until the mixture comes together. You may need to add a tiny bit more water. Knead the pastry a couple of times — literally only enough that it comes together into a ball — then wrap it in clingfilm and put it in the fridge to chill for about an hour.</p>
<p>Clear a shelf in the fridge and prepare 8 individual-size pastry tins (about 7.5–8 cm diameter).</p>
<p>Divide the dough into 8 equal portions. Roll each piece out to about 15 cm diameter and carefully place it in a pastry tin, pushing it out to fill the corners. If any holes appear, push them back together again. There should be 2–3 cm of excess pastry protruding from the edges of the tin; trim off anything much more than this.</p>
<p>Prick the bottom of each case with a fork and place them in the fridge to chill for at least an hour. By making sure that the cases are well rested you will avoid the need to use baking beans.</p>
<p>Preheat the oven to 180 °C (fan) and place a baking tray in the oven to heat. When the pastry cases are rested, place them directly onto the hot baking tray and into the oven, and bake for approx. 12 min until golden. Be very careful that the pastry doesn't catch!</p>
<p>When the pastry cases come out of the oven, <em>immediately</em> trim the excess pastry from them with a sharp knife, before they become brittle. Leave them to cool in the tins on a cooling rack.</p>
<h4>Chilli and lime chocolate ganache filling</h4>
<p>Finely chop the chillies and zest the limes.</p>
<p>Place the cream, sugar, chillies and half the lime zest in a saucepan. Warm over a low heat. (The longer you infuse the cream, the stronger the filling will be).</p>
<p>Meanwhile, break the chocolate into pieces. Put the chocolate, butter and remaining lime zest in a mixing bowl.</p>
<p>When the cream is almost at boiling point, strain it onto the chocolate and butter. Whisk the mixture slowly until the chocolate and butter have melted and the ganache is smooth and glossy. If the chocolate doesn't quite melt, heat the mixing bowl over a pan of hot water (but make sure the bowl doesn't touch the water!)</p>
<p>If the filling isn't strong enough, you can add a couple of teaspoons of the chilli sugar syrup left over from making the candied chillies earlier.</p>
<p>While the ganache is still warm, carefully spoon it into the pastry cases. Decorate with the candied chillies.</p>
<p>N.b. the ganache will take at least a couple of hours to set; you can put it in the fridge to help it along, but it may make the top lose its glossy finish.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-28402847978155323172013-12-01T09:41:00.000+00:002013-12-01T09:41:56.008+00:00Stripy chocolate, vanilla and coffee cake<p>At <a href="http://www.sle.sharp.co.uk/">Sharp Labs</a> we're having a baking competition going on to raise money for <a href="http://www.helenanddouglas.org.uk/">Helen & Douglas House</a>. I foolishly decided to enter it.</p>
<p>There are three rounds. The first round, which took place on the 25th November, was sponge cakes. I invented a variation on a coffee cake. It's made up of six alternating layers of chocolate and vanilla sponge, bound together and coated with a coffee buttercream icing. This recipe is for a large cake which will happily make 16 slices.</p>
<h4>Ingredients</h4>
<p>For the vanilla sponge:</p>
<ul>
<li>165 g unsalted butter (at room temperature)</li>
<li>165 g caster sugar</li>
<li>3 large eggs</li>
<li>165 g self raising flour, sifted</li>
<li>1.5 tsp vanilla essence</li>
<li>Hot water (if required)</li>
</ul>
<p>For the chocolate sponge:</p>
<ul>
<li>165 g unsalted butter (at room temperature)</li>
<li>165 g caster sugar</li>
<li>3 large eggs</li>
<li>155 g self raising flour, sifted</li>
<li>1 heaped tbsp cocoa powder, sifted</li>
<li>Hot water (if required)</li>
</ul>
<p>For the coffee buttercream:</p>
<ul>
<li>600 g icing sugar</li>
<li>375 g unsalted butter (at room temperature)</li>
<li>150 ml strong espresso coffee (about 3 shots)</li>
</ul>
<h4>Method</h4>
<p>Preheat the oven to 155 °C (fan). Position a shelf near the middle of the oven for the cakes. Line the bottoms of two deep 20 cm springform or sandwich tins with baking parchment.</p>
<p>Each of the sponge batters is prepared in the same way (it's best to prepare them in parallel in two bowls so that you can bake the cakes simultaneously):</p>
<ol>
<li>Cream butter and sugar together using an electric hand mixer until light and fluffy.</li>
<li>In a measuring jug, beat the eggs. Then add them little by little to the butter & sugar mixture, making sure to fully combine each addition before the next. For the vanilla sponge, add the vanilla essence at this stage.</li>
<li>Sift about a quarter of the flour (or flour and cocoa mixture) into the mixture, from a height of about 50 cm so as to aerate the flour well. Carefully and gently fold the flour in (you want to trap as much air as possible at this stage). Repeat until all the flour has been combined.</li>
</ol>
<p>Transfer the sponge batters into the tins, and place the tins at mid-level of the oven near the front. Bake for 25–30 mins. When they are cooked, they'll (1) make a popping sound like rice crispies, (2) feel springy when lightly touched near the centre with a fingertip, and (3) allow a sharp knife inserted all the way through to come out clean.</p>
<p>About 1–2 mins after removing the cakes from the oven, turn them out, carefully peel off the baking parchment, and leave them to cool for about half an hour.</p>
<p>Carefully slice each of the cakes into three horizontal slices, approximately 1 cm in thickness. I found that a very very sharp knife and a lot of patience was more successful than using a cake wire.</p>
<p>Make the buttercream by putting the butter and icing sugar into a bowl and beating them with an electric hand mixer while slowly adding the espresso.</p>
<p>Assemble the cake by putting a vanilla slice of sponge on a turntable, adding a <em>thin</em> layer of buttercream and levelling it off, then adding a chocolate slice on top, and continuing until all six slices are built up. Make sure on each layer to spread the buttercream all the way to the edge.</p>
<p>Use the remaining buttercream icing to smoothly coat the exterior of the cake. Use a side scraper and a turntable to get vertical sides and horizontal top! You should have some icing leftover.</p>
<p>Finally, you can optionally use cocoa powder and/or walnuts to decorate the finished cake.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-24412454827365910502013-11-23T12:06:00.000+00:002013-11-23T12:06:01.701+00:00Black onion seed and rye crackers<p>Here's a recipe for some nice crunchy rye crackers. I adapted it from a rosemary cracker recipe that my father figured out. It makes about 24 large crackers, but it very much depends on how you cut them.</p>
<ul>
<li>160 g plain flour</li>
<li>120 g rye flour</li>
<li>80 ml cold water</li>
<li>60 ml olive oil (+ extra for brushing)</li>
<li>1 tsp baking powder</li>
<li>0.5 tsp baking salt</li>
<li>1.5 tsp black onion seeds</li>
<li>Crystal salt</li>
<li>Black pepper</li>
<li>Crushed, dried seaweed</li>
<li>Za'atar</li>
</ul>
<p>Pre-heat the oven to 230 °C fan. Put baking sheets into the oven to preheat.</p>
<p>In a mixing bowl, combine the flours, baking powder, baking salt and black onion seeds. Add the water and olive oil and knead briefly to form a smooth dough. Do not overwork the dough; you do not want gluten strands to form.</p>
<p>Divide the mixture into three parts. Wrap two in clingfilm while you work with the third.</p>
<p>Using a rolling pin, roll one third of the dough out <em>as thinly as possible</em> onto a silicone sheet. Using a dough blade or palette knife, gently score across to divide the sheet into crackers.</p>
<p>Sprinkle the top with salt crystals, seaweed, coarsely-ground black pepper and a generous sprinkle of za'atar. Gently pass the rolling pin over the sheet again to press the toppings into the dough.</p>
<p>Transfer to the oven and bake for roughly ten minutes, or until the top begins to darken at the edges.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-89859286766334349002013-05-12T09:36:00.001+01:002013-05-12T09:36:54.085+01:00The IEEE does not do Open Access<p><b>Summary:</b> By the commonly-accepted definition of the term, IEEE journals offer real Open Access (OA) publishing options if and only if your funding body mandates Open Access publishing.</p>
<h4>Introduction</h4>
<p>This time last year, I posted <a href="http://blog.peter-b.co.uk/2012/05/remote-sensing-journals-and-open-access.html">a survey of journals and Open Access</a> in the field of remote sensing. As I have been encouraged by my department to publish in the <em>IEEE Transactions on Geoscience and Remote Sensing</em> (where I currently have a paper going through its second review stage), over the last year I have been trying to determine what, exactly, IEEE Publishing means when it claims to offer "open access".</p>
<h4>What is Open Access (OA)?</h4>
<p>As I mentioned in my previous post, most people who are interested in widening the general public's access to scientific literature understand "Fully Open Access" to mean compliance with the Budapest Open Access Initiative (BOAI) definition:</p>
<blockquote>Free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of... articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.</blockquote>
<p>OA publication of research results is the subject of quite a lot of public debate in the UK at the moment, due to the UK Research Councils (RCUK) issuing new guidelines and requirements on the topic. The new <a href="http://www.rcuk.ac.uk/research/Pages/outputs.aspx">RCUK Policy on Open Access</a> came into force on 1st April 2013, and contains a definition of OA.</p>
<blockquote>
<p>RCUK defines Open Access as unrestricted, on-line access to peer-reviewed and published research papers. Specifically a user must be able to do the following free of any access charge:</p>
<ul>
<li>Read published papers in an electronic format;</li>
<li>Search for and re-use the content of published papers both manually and using automated tools (such as those for text and data mining) provided that any such re-use is subject to full and proper attribution and does not infringe any copyrights to third-party material included in the paper.</li>
</ul>
</blockquote>
<p>Furthermore, RCUK clearly express a preference for publication using a Creative Commons Attribution (CC-BY) licence, and require such a licence to be used when RCUK funds are used to pay an Article Processing Charge (APC) for an OA paper. Specifically, they say that:</p>
<blockquote>Crucially, the CC-BY licence removes any doubt or ambiguity as to what may be done with papers, and allows re-use without having to go back to the publisher to check conditions or ask for specific conditions.</blockquote>
<p>As a researcher funded by <a href="http://www.epsrc.ac.uk">EPSRC</a>, I was of course very keen to determine whether the IEEE's "open access" publishing options comply with the new policy.</p>
<h4>"Open access" at the IEEE</h4>
<p>The IEEE claim to offer three <a href="http://www.ieee.org/publications_standards/publications/authors/open_access.html">options for OA publishing</a>: hybrid journals, a new <em>IEEE Access</em> megajournal, and "fully OA" journals. On the bright side, the IEEE seems to treat all three the same way in terms of the general process, fees, etc., so I will not discuss the differences between them here.</p>
<p>Some aspects of the IEEE's approach to OA are quite clearly explained in the FAQ, and provide an interesting contrast with the policies at unambiguously fully OA journals such as <a href="http://www.plosone.org/">PLOS ONE</a>. The IEEE charge an APC of $1750 per paper; PLOS ONE charges $1350. The IEEE requires copyright assignment; PLOS ONE allows authors to retain their copyrights. The IEEE's licensing of APC-paid OA articles is almost impossible to determine; PLOS ONE is unambiguously CC-BY.</p>
<p>But what is that licence? Exactly how open <em>are</em> "OA" articles published in IEEE journals? With reference to RCUK's definition of OA, the first point is clearly satisfied — users can read the paper free of charge on IEEE Xplore. Trying to pin the second point down has been quite a quest.</p>
<p>The IEEE allows authors to distribute a "post-print" (the accepted version of a manuscript, i.e. their final draft of a paper after peer review but before it goes through the IEEE's editing process and is prepared for printing). This can be placed on a personal website and/or uploaded to an institutional repository. At the University of Surrey, for example, papers can be placed on <a href="http://epubs.surrey.ac.uk/">Surrey Research Insight</a>. Unfortunately, this "Green OA" approach does <em>not</em> satisfy the RCUK's requirement to enable re-use; the licence is very explicit. As per the <a href="http://www.ieee.org/publications_standards/publications/rights/rights_policies.html">IEEE PSPB Operations Manual</a>, the IEEE requires the following notice to be displayed with post-prints:</p>
<blockquote>© 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.</blockquote>
<p>With Green OA clearly ruled out as an option, what about when an APC is paid (also known as "Gold OA")? This is the option preferred by RCUK. I initially tried to figure this out by e-mailing the IEEE intellectual property rights office, but I never received any reply. I also e-mailed the editor of <em>TGRS</em>, which likewise elicited no response.</p>
<p>My last and most recent attempt involved e-mailing IEEE Xplore tech support, asking where on the website I could find licence information for a specific recent "open access" <em>TGRS</em> paper that I had downloaded.</p>
<blockquote>
<p>I have been unsuccessfully attempting to determine the license under which "Open Access" journal articles from IEEE journals are available from IEEE Xplore.</p>
<p>For example, the following paper:</p>
<blockquote>
Zakhvatkina, N.Y.; Alexandrov, V.Y.; Johannessen, O.M.; Sandven, S.; Frolov, I.Y., "Classification of Sea Ice Types in ENVISAT Synthetic Aperture Radar Images," Geoscience and Remote Sensing, IEEE Transactions on , vol.51, no.5, pp.2587,2600, May 2013<br />
doi: 10.1109/TGRS.2012.2212445
</blockquote>
<p>is allegedly an "open access" paper, but the IEEE Xplore web page gives no indication of whether it is actually being made available under a Budapest Open Access Initiative-compliant license (e.g. CC-BY), and an exploration of the pages linked from its web page leaves me none the wiser.</p>
<p>Could you please improve the IEEE Xplore website to display article licensing information much more clearly, especially in the case of your "open access" products?</p>
</blockquote>
<p>This then got passed on to the IEEE's "open access team" who then in turn attempted to pass it on to the IPR office to be ignored again. However, I now had an e-mail address to e-mail with a more specific request:</p>
<blockquote>
<p>Thank you for forwarding this query on. Needless to say, the IEEE IPR have not responded to the question, just the same as when I contacted them directly a few months ago.</p>
<p>Surely, as the IEEE Open Access team, you and your colleagues must have some idea of what level of openness IEEE are aiming for with their open access initiatives, especially given that you've just launched a new "open" megajournal! Your competitor OA megajournals make their licensing information really easy to find, and I don't understand why IEEE Publishing seems to be having a big problem with this.</p>
<p>As an IEEE member the lack of clarity here is really quite concerning.</p>
</blockquote>
<p>Finally, I received a moderately-illuminating reply.</p>
<blockquote>
<p>I will pass on your feedback that OA copyright information needs to be
easier to find in Xplore.</p>
<p>The IEEE continues to review legal instruments that may be used to
authorize publication of open access articles. The OACF now in use is a
specially modified version of the IEEE Copyright Form that allows users to
freely access the authors’ content in Xplore, and it allows authors to post
the final, published versions of their papers on their own and their
employers’ websites. The OACF also allows IEEE to protect the content by
giving IEEE the legal authority to resolve any complaints of abuse of the
authors’ content, such as infringement or plagiarism.</p>
<p>Some funding agencies have begun to require their research authors to use
specific publication licenses in place of copyright transfer if their
grants are used to pay article processing charges (APCs). Two examples are
the UK's Wellcome Trust and the Research Councils of the UK., both of which
this month began to require authors to use the Creative Commons Attribution
License (CC BY). In cases like these, IEEE is willing to work with authors
to help them comply with their funder requirements. If you have questions
or concerns about the OACF, or are required to submit any publication
document other than the OACF, please contact the Intellectual Property
Rights Office at 732-562-3966 or at <a href="mailto:copyrights@ieee.org">copyrights@ieee.org</a>.</p>
<p>The IEEE IPR office has additional information about the OACF, including an
FAQ, on our web site at
<a href="http://www.ieee.org/publications_standards/publications/rights/oacf.html">http://www.ieee.org/publications_standards/publications/rights/oacf.html</a>.</p>
</blockquote>
<p>From this e-mail, it is clear that paying an APC for the IEEE's "open access" publishing options normally provides very little real benefit over simply self-archiving the accepted version of the manuscript. Either way, tools such as Google Scholar will allow readers to find a free-to-read version of the paper; if you are using the IEEE journals LaTeX templates, this version will be almost indistinguishable from the final version as distributed in printed form.</p>
<p>Furthermore, the IEEE APC-supported "open access" publishing option is <em>not Open Access</em>, by either the BOAI or RCUK definitions of the term, because re-use is forbidden. Gold OA is clearly also not normally an option when publishing with the IEEE.</p>
<p>The only exception to this is if you have a mandate from a funding body that says your publications must be distributed under a certain licence, in which case you may be able to persuade the IEEE to provide "real" Gold OA: the ability for the public to read and re-use your research at no cost and with no restrictive licensing terms. This would apply, for example, if you were funded by RCUK; in that case <em>you should not sign the IEEE Copyright Form</em>, and should contact the IEEE IPR office before submitting your manuscript in order to argue it out with them.</p>
<h4>Conclusions</h4>
<p>The IEEE claims to offer "fully Open Access" publishing options to all of their authors. In fact, they offer no such thing. Open Access means the ability to both read <em>and re-use</em> the products of research, and the IEEE's "open access" options prohibit re-use.</p>
<p>Self-archiving is allowed by the IEEE, but only with a copyright statement that forbids re-use. Paying an enormous APC to make your paper "open access" merely allows people to read it for free on IEEE Xplore. True Gold OA is only available if your funding body mandates real Open Access.</p>
<p>For the majority of researchers (in industry or funded by bodies without OA mandates in place), the IEEE provides no Open Access publishing option at all. The half-hearted and incomplete "open access" options that the IEEE provides can only be interpreted as a cynical attempt to both dilute the BOAI definition and to extract vastly-inflated APCs from authors who fail to read the fine print.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-63192741676802006722013-05-08T21:04:00.000+01:002013-05-08T21:04:33.307+01:00New projects, new software and a finished thesis<p>It's been a while since I last posted about my research, so I felt that it might be time for a bit of an update. I've been at Surrey Space Centre for almost four years now, and my PhD studentship is most definitely drawing to a close.</p>
<p>Most importantly, I finally managed to complete and submit my thesis, <em>Urban Damage Detection in High Resolution SAR Images</em>, and my <em>viva voce</em> examination will take place on 21st June. After having spent so long fretting about whether my research was "good enough", it's bizarre to find myself actually feeling quietly confident about the exam. On the other hand, I don't know how long that strange feeling of confidence will last!</p>
<p>My supervisor advised me not to publish the submitted version of my thesis, on the basis that the exam is quite soon and it would be better to take the opportunity to incorporate any requested corrections before publication (and that it would be embarrassing if I fail the exam and the examiners ask me to submit a new thesis). However, I will definitely make it available online as soon as I have the final version ready.</p>
<p>On the other hand, I have already published the source code for the software developed during my PhD and described in my thesis. The git repositories have been publicly accessible on <a href="https://github.com/peter-b">github</a> for some time, and I've also more recently uploaded release tarballs to <a href="http://figshare.com/authors/Peter%20Brett/99155">figshare</a>. I've published three software packages:</p>
<ul>
<li><a href="http://dx.doi.org/10.6084/m9.figshare.695957"><strong>ssc-ridge-tools</strong></a> (<a href="https://github.com/peter-b/ssc-ridge-tools">git repo</a>) contains the <tt>ridgetool</tt> program for extracting bright curvilinear features from TIFF images, and a bunch of general tools for working with them (e.g. exporting them to graphical file formats, manually classifying them, or printing statistics).</li>
<li><a href="http://dx.doi.org/10.6084/m9.figshare.695958"><strong>ssc-ridge-classifiers</strong></a> (<a href="https://github.com/peter-b/ssc-ridge-classifiers">git repo</a>) contains two different tools for classifying the bright lines extracted by <tt>ridgetool</tt>. They are designed for the task of identifying which bright lines look like the double reflection lines that are characteristic of SAR images of urban buildings.</li>
<li><a href="http://dx.doi.org/10.6084/m9.figshare.698224"><strong>ssc-urban-change</strong></a> (<a href="https://github.com/peter-b/ssc-urban-change">git repo</a>) contains a tool for using curvilinear features and pre- and post-event SAR images to plot change maps.</li>
</ul>
<p>All the programs in the packages contain manpages, README files, etc. Note that they require x86 or x86-64 Linux (they just won't work on Windows). If you wish to understand what the various algorithms are and (probably more importantly) how they can be used, you should probably read <em><a href="http://peter-b.co.uk/downloads/brett_guida_2012a.revised.pdf">Earthquake Damage Detection in Urban Areas using Curvilinear Features</a></em>.</p>
<p>In a follow-on from my main PhD research, Astrium GEO have very kindly agreed to give me some TerraSAR-X images of the city of Khash, Iran, where there was a very big earthquake about a month ago on April 16th. Hopefully, I'll be able to publish some preliminary results of applying my tools to that data shortly (it depends heavily on when I actually receive the image products)! The acquisition had been scheduled for 7th May, so hopefully I will be hearing from them soon. The current plan is to publish a short research report in <a href="http://currents.plos.org/disasters/">PLoS Currents Disasters</a>, even if the results are negative.</p>
<p>I've recently been working on a side project using multispectral imagery from the UK-DMC2 satellite to try and detect water quality changes in Lake Chilwa, Malawi during January 2013. It's been nice to have a change from staring at SAR data, and I've also had the opportunity to learn some new skills. This was particularly interesting, as it forms part of a <a href="http://www.miles.surrey.ac.uk/">MILES</a> multidisciplinary project involving people from all over the University of Surrey. One of the things that I produced for this project was an image showing the <a href="http://dx.doi.org/10.6084/m9.figshare.686058">change in Normalised Difference Vegetation Index</a> between 3rd January and 17th January. Later this month, I'm also hoping to publish some brief reports describing the exact processing steps used: I'm not sure how much immediate use they will be, but might provide some pointers to other people trying to use DMC data in the future.</p>
<p>The only thing that I'm feeling particularly concerned about at the moment is the status of my IEEE Transactions journal paper, which seems to be taking forever to get through its peer review process. It's almost 11 months since I submitted it, and I really hope that it's at least accepted for publication by the time I have my viva.</p>
<p>All in all, though, my PhD research is more-or-less tied up, and I've produced a bunch of potentially interesting/useful outputs. Does that make it a success?</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-70828603350891009132012-12-22T18:30:00.000+00:002015-09-29T20:18:48.357+01:00Christmas 2012: Chorizo and roasted pumpkin risotto<p>In my quest to find interesting things to do with pumpkin, I came up with this chorizo-flavoured pumpkin risotto. Chorizo in a risotto base is something that I've been doing for about 3 years, but I found that the contrast between creamy risotto, smooth pumpkin, and tart lemon works remarkably well in this dish. Serves 4–6 as a main course.</p>
<ul>
<li>1 kg pumpkin</li>
<li>2 sticks celery</li>
<li>2 medium onions</li>
<li>3 cloves garlic</li>
<li>100 g unsliced chorizo sausage</li>
<li>750 ml hot chicken stock</li>
<li>150 ml white wine</li>
<li>50 g Parmesan (or similar hard cheese)</li>
<li>50 g butter</li>
<li>1 lemon</li>
<li>Parsley</li>
<li>Olive oil</li>
</ul>
<p>Preheat the oven to 200 °C (185 °C fan). Dice the pumpkin into 2–3 cm cubes. Spread the cubed pumpkin out on a baking sheet, use a pastry brush to roughly coat them with olive oil, and season generously. Put in the oven to roast for 35–40 min.</p>
<p>Finely chop the onions, celery, garlic and chorizo. In a wide-bottomed, covered pan, gently fry the onions, celery and chorizo in about 2 tbsp of the olive oil until very soft.</p>
<p>Next add the risotto rice and garlic, and fry for a further 3 min. Now turn up the heat, and add the white wine to the pan. Keep stirring the risotto and gradually adding the hot stock until the risotto is cooked. It's okay not to use all of the stock; if you find that you need more liquid, just use boiling water.</p>
<p>Remove from the heat and stir in the cheese and butter. Gently stir in the roast pumpkin cubes, and allow the risotto to rest for at least a minute. Serve garnished with lemon wedges and chopped parsley.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-49316227136370795012012-12-21T14:00:00.000+00:002012-12-23T19:52:52.923+00:00Christmas 2012: Spicy pumpkin and carrot soup<p>This Christmas, I'm in charge of the menu (and the cooking) at home, and I'll be posting recipes for some of the food I cook. First up is a lovely warm and spicy vegetable soup that's delicious and quick to cook, and makes a great lunch. This recipe serves 3–6 people depending on how hungry they are!</p>
<ul>
<li>2 medium onions</li>
<li>2 cloves garlic</li>
<li>1 stick celery</li>
<li>4 carrots</li>
<li>600 g pumpkin (approx)</li>
<li>1 chilli</li>
<li>1/2 tsp paprika</li>
<li>1 tsp cumin seed</li>
<li>2 tbsp olive oil</li>
<li>3 tsp vegetable bouillon powder</li>
<li>Large handful of red lentils</li>
</ul>
<p>The key here is to chop the vegetables to appropriate sizes so that everything is ready to eat at the same time. Heat the olive oil in a large, heavy-based saucepan over a medium heat. Finely chop the onion, garlic and chilli, cut the celery into pieces about 1 cm on a side, and gently fry them all in the oil with the cumin seed for about 5 minutes, stirring occasionally, until soft and clear.</p>
<p>Meanwhile, boil a kettle. Dice the carrots into pieces about 5 mm in size, and add to the pan. Next, cube the pumpkin to about 15–20 mm and add to the pan. Add the paprika, and continue to fry the vegetables together for another 2–3 min.</p>
<p>Add about 750 ml of the boiling water from the kettle to the pan along with the bouillon powder, and season with salt and pepper to taste (the liquid should be just enough to cover the vegetables). Bring to the boil, and sprinkle the lentils in. Finally, cover the pan, and simmer for about 30 min until ready to serve — preferably with some crusty bread and a wedge of cheddar cheese.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-20768762036837323392012-12-03T11:23:00.001+00:002012-12-04T13:19:42.974+00:00Making schematics look good with "gaf export"<blockquote><CareBear\> peterbrett : hey. gaf export is f-ing awesome!</blockquote>
<p><a href="http://www.peter-b.co.uk/blog/uploaded_images/gaf_export__40160-1.png"><img style="float:right; margin:0 10px 10px 0;" src="http://www.peter-b.co.uk/blog/uploaded_images/gaf_export__40160-1.png" border="0" alt="" /></a>People who've been testing the gEDA "master" branch over the last few hours will have got a sneak preview of a cool new tool that will be arriving in gEDA/gaf 1.9.0. The new <strong>gaf export</strong> command-line utility lets you quickly and easily export your schematics and symbols to a variety of image formats.</p>
<p>I've been wanting to introduce a tool like this for a while, but it's only become possible thanks to finally finishing a couple of big features that have been cooking in my personal branches for a couple of years: a new <a href="http://www.cairographics.org">Cairo</a>-based rendering library for gEDA designed to be used for both rendering in gschem and for printing/exporting graphics, called "libgedacairo"; and a new gEDA configuration subsystem, which I'll write about in more detail another time.</p>
<p>To get started, suppose I want to create a PDF from a schematic called <tt>grey_counter_1.sch</tt>. It's very straightforward!</p>
<code>
gaf export -o grey_counter_1.pdf grey_counter_1.sch
</code>
<p>From the output filename that I passed to the "<tt>-o</tt>" option, <strong>gaf export</strong> will detect that I want a PDF. It'll detect the size of the drawing, centre it in the default paper (choosing some suitable margins) and generate a PDF file.</p>
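<p>The same goes for other formats; the output backend is chosen purely from the filename extension. For example, to get an SVG instead (assuming your Cairo build includes SVG support):</p>
<code>
gaf export -o grey_counter_1.svg grey_counter_1.sch
</code>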
<h4>Batch generation of PostScript files</h4>
<p>Many people previously used <strong>gschem</strong> along with the (relatively obscure) <tt>print.scm</tt> script for batch generation of PostScript files. Usually the command looked something like:</p>
<code>
gschem -o grey_counter_1.ps -s /usr/share/gEDA/scheme/print.scm grey_counter_1.sch
</code>
<p><strong>Don't do this any more</strong>. It is slow (because it needs to load all of gschem's configuration), requires a graphical desktop to be running (because gschem can't start without trying to display its windows), and doesn't provide any way to directly customise formatting options without fiddling with Scheme scripts. Also, <strong>gaf export</strong> generates much nicer PDF output than PS, especially if you want to do anything with the output file other than printing. You can directly replace the <strong>gschem</strong> command above with something like:</p>
<code>
gaf export -o grey_counter_1.pdf grey_counter_1.sch
</code>
<p>A Makefile rule for creating PDF files from schematic files might look like:</p>
<code><pre>
%.pdf: %.sch
	gaf export -o $@ -- $<
</pre></code>
<p>Of course, one advantage of the new tool is that it can do multi-page output. So rather than generating a whole bunch of separate PDF or PostScript files and stitching them together, you could directly generate a single PDF file with the whole of your design in it:</p>
<code>
gaf export -o schematics.pdf grey_counter_1.sch filter_1.sch
</code>
<h4>Tweaking the output</h4>
<p><strong>gaf export</strong> also lets you tweak the output for different applications. Suppose I want to produce the PNG file displayed in this blog post. First, I don't care about paper sizes; I want the output file to be sized according to how large the drawing is. To do this, I can use <tt>-s auto</tt>. I can also set the margin on the output with <tt>-m 5px</tt>. I also want to print in colour (<tt>-c</tt>). So the overall command is:</p>
<code>
gaf export -c -s auto -m 5px -o gaf_export__40160-1.png 40160-1.sym
</code>
<p>It can also be useful to set the paper size (for example, to get suitable margins for larger paper sizes). By default, <strong>gaf export</strong> uses whatever GTK thinks the default paper size is on your system. For most people, this will be ISO A4. In addition to providing measurements directly via the <tt>-s</tt> option, the <tt>-p</tt> option lets you specify a <a href="ftp://ftp.pwg.org/pub/pwg/candidates/cs-pwgmsn10-20020226-5101.1.pdf">PWG 5101.1-2002</a> paper name. For example, to use US "D" size paper:</p>
<code>
gaf export -p na_d -o grey_counter_1.pdf grey_counter_1.sch
</code>
<h4>Changing default settings</h4>
<p>The default settings for <strong>gaf export</strong> can be modified using the new <strong>gaf config</strong> command. For example, to set the default paper size for all your projects to US "Letter":</p>
<code>
gaf config --user export paper na_letter
</code>
<p>Or to make sure that all printing for a particular project is in colour:</p>
<code>
gaf config -p /path/to/project/directory/ export monochrome false
</code>
<h4>Conclusion</h4>
<p><strong>gaf export</strong> is a fast, easy-to-use way of generating graphics files from your gEDA/gaf schematics and symbols. Along with several other new features, it will be available in the upcoming unstable gEDA/gaf 1.9.0 release.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-5761579777561691352012-05-14T16:03:00.000+01:002012-05-16T09:45:30.596+01:00Remote Sensing journals and open access<p>The Remote Sensing Applications Research Group at Surrey Space Centre is in the first stages of thinking about the new <a href="http://www.ref.ac.uk/">Research Excellence Framework</a> (REF) system that will be used to assess the quality of our research.</p>
<p>We've been told by the University that we each need to demonstrate four "research outcomes" for REF. Initially, we've been given the advice that an appropriate "outcome" would be a journal paper published in one of the "top five journals in our field", as determined by various arbitrary and generally misleading journal metrics. Unfortunately, at a recent meeting to discuss this, we realised that there were a few problems with this; for example, the list of "remote sensing" journals as categorised by the <a href="http://admin-apps.webofknowledge.com/JCR/JCR">ISI Web of Knowledge Journal Citation Reports</a> included quite a few journals that would have been completely <i>inappropriate</i> for our work, while some highly relevant and high-profile journals such as <a href="http://www.grss-ieee.org/publications/jstars/">IEEE J-STARS</a> were a long way down the list due to being newer and not yet having had time to accrue high-scoring metrics.</p>
<p>However, I noted and was asked to further investigate another potential problem with our list of target journals: the problem of up-and-coming open access mandates from our UK funding bodies.</p>
<p>The 2001 <a href="http://www.soros.org/openaccess/read">Budapest Open Access Initiative</a> defined open access as:</p>
<blockquote>Free availability on the public internet, permitting any users to read, download, copy, distribute, print, search, or link to the full texts of... articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.</blockquote>
<p>The policy that the UK Research Councils (RCUK) are proposing to adopt in the near future would make it <em>mandatory</em> to publish results from research that is wholly or partially funded by the research councils in journals that meet RCUK standards for open access. This is a significant departure from the previous position, where open access publishing even of research council-funded results has been effectively optional. The key points from the <a href="http://www.openscholarship.org/upload/docs/application/pdf/2012-03/rcuk_proposed_policy_on_access_to_research_outputs.pdf">draft policy</a> seem to be:</p>
<ul>
<li>A user must be able to access, read and re-use papers free of charge under an extremely permissive licence. RCUK explicitly identify the <a href="http://creativecommons.org/licenses/by/3.0/">Creative Commons CC-BY licence</a> as a model.</li>
<li>Open access to the paper may be provided directly by the publisher via the journal's website at the time of publication ("Gold OA"; publishers may charge the authors for this), or the author can archive the final version of the paper as accepted for publication in an online repository other than one run by the publisher ("Green OA"). <a href="http://epubs.surrey.ac.uk/">Surrey Research Insight</a> is an example of such a repository. Journals are allowed to impose an embargo of at most 6 months.</li>
<li>RCUK grant funding can be used to pay publishers for Gold OA publication, and researchers are recommended to request funding for this in grant applications.</li>
</ul>
<p>The question, therefore, is: to what extent do "remote sensing" journals comply with this policy? To answer this, I examined the publication policies of all English-language journals in this category with respect to self-archiving of the accepted version of a paper (Green OA), the "normal" published paper, and (if applicable) paid-for open access publication (Gold OA), using the SHERPA <a href="http://www.sherpa.ac.uk/romeo/">RoMEO</a> database, Ross Mounce's <a href="https://sites.google.com/site/rossmounce/misc/a-survey-of-open-access-publisher-licenses">publisher licence spreadsheet</a>, and publishers' websites. My results are shown in Table 1.</p>
<table>
<caption>Table 1. Remote sensing journal compliance with proposed RCUK open access rules, sorted in descending order of impact factor. "R" indicates restrictions on "open access" options that prevent full compliance. Minimum publication fees are shown in brackets.</caption>
<thead>
<tr><th>Name</th><th>Publisher</th><th>Regular</th><th>Green OA</th><th>Gold OA</th></tr>
</thead>
<tbody>
<tr><td>Remote Sens. Environ.</td><td>Elsevier</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>IEEE Trans. Geosci. Remote Sens.</td><td>IEEE</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>ISPRS J. Photogramm. Remote Sens.</td><td>Elsevier</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>J. Geodesy</td><td>Springer</td><td>No</td><td>R</td><td>Yes ($3000)</td></tr>
<tr><td>Int. J. Appl. Earth Obs. Geoinf.</td><td>Elsevier</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>GPS Solut.</td><td>Springer</td><td>No</td><td>R</td><td>Yes ($3000)</td></tr>
<tr><td>Int. J. Digit. Earth</td><td>Taylor & Francis</td><td>No</td><td>R</td><td>R ($3250)</td></tr>
<tr><td>IEEE Geosci. Remote Sens. Lett.</td><td>IEEE</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>Int. J. Remote Sens.</td><td>Taylor & Francis</td><td>No</td><td>R</td><td>R ($3250)</td></tr>
<tr><td>IEEE J. STARS</td><td>IEEE</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>GISci. Remote Sens.</td><td>Bellwether</td><td>No</td><td>No</td><td>No</td></tr>
<tr><td>J. Appl. Remote Sens.</td><td>SPIE</td><td>No</td><td>R</td><td>Unclear ($1500)</td></tr>
<tr><td>J. Spat. Sci.</td><td>Taylor & Francis</td><td>No</td><td>R</td><td>No</td></tr>
<tr><td>Can. J. Remote Sens.</td><td>Can. Aeronautics and Space Inst.</td><td>No</td><td>No</td><td>No</td></tr>
<tr><td>Radio Sci.</td><td>AGU</td><td>R ($1000)</td><td>R</td><td>R ($3500)</td></tr>
<tr><td>Photogramm. Eng. Remote Sens.</td><td>ASPRS</td><td>No</td><td>Unclear</td><td>No</td></tr>
<tr><td>Photogramm. Rec.</td><td>Wiley-Blackwell</td><td>No</td><td>R</td><td>R ($3000)</td></tr>
<tr><td>Mar. Geod.</td><td>Taylor & Francis</td><td>No</td><td>R</td><td>No</td></tr>
<tr><td>Surv. Rev.</td><td>Maney</td><td>No</td><td>R</td><td>R ($2000)</td></tr>
<tr><td>Eur. J. Remote Sens.</td><td>Assoc. Ital. Telerilevamento</td><td>Yes</td><td>N/A</td><td>N/A</td></tr>
</tbody>
</table>
<p>The most common restrictions encountered on Gold OA content were prohibition of commercial use (e.g. via explicit <a href="http://creativecommons.org/licenses/by-nc/3.0/">Creative Commons CC-BY-NC licensing</a>), prohibition of redistribution, and field-of-use restrictions such as prohibition of text-mining. In addition to these restrictions, in several cases self-archiving was only permitted with an embargo period of more than 6 months. One somewhat bizarrely convoluted rule for Elsevier journals can be boiled down to: "You may archive the accepted version of your paper in your funding body's repository, but only if you don't have to archive it in your funding body's repository."</p>
<p>At this stage, Springer's recent change to CC-BY licensing of papers in their "Open Choice" system is particularly notable. It's also clear that our current target journals (IEEE Trans. Geosci. Remote Sens. and IEEE J-STARS) still have some way to go before they will be BOAI-compliant or compliant with the proposed RCUK publication requirements. In my opinion, a good outcome over the next few years would be for publishers like IEEE and Elsevier to standardise on CC-BY licensing for Gold OA publications.</p>
<p>In the short term, I will be recommending to my group that we should consider submitting to open access megajournals such as PLoS ONE, many of which have considerably higher journal metrics than any of the dedicated remote sensing journals. Adding PLoS ONE to the Space Centre's list of preferred journals should not be particularly controversial, as it is already listed as a preferred journal for other research centres in the faculty.</p>
<p>In conclusion, I have demonstrated that the open access publishing options available in the field of remote sensing are limited, and that this may become a problem if stricter rules, similar to those set out by the Budapest Open Access Initiative, are laid down by the UK Research Councils. Either journal publishers will have to change their policies, or research groups in this field will need to consider different publishing strategies.</p>
<p><i>This post is made available under a <a href="http://creativecommons.org/licenses/by/3.0/">Creative Commons Attribution (CC-BY)</a> licence.</i></p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com4Surrey Space Centre, University of Surrey, Guildford, UK51.243582824559219 -0.5882835388183593851.242340324559223 -0.59075103881835933 51.244825324559216 -0.58581603881835942tag:blogger.com,1999:blog-5144379.post-86457645296957875322012-05-04T22:02:00.000+01:002012-05-04T22:02:40.203+01:00Planning the Guildford Cycle Network<p>Since soon after arriving at the University of Surrey to begin my
PhD studentship and discovering the terrible state of cycling
infrastructure in Guildford, I have been attending Guildford Cycle
Forum meetings to discuss what could actually be done about it. For
most of that time, the meetings have had a rather predictable format:
a chorus of Forum members pointing out problems experienced by
cyclists and opportunities to fix them, countered by County and
Borough Council officials explaining either that no budget exists for
cycling improvements, or that the changes requested weren't in their
department and they couldn't address them.</p>
<p>Recently, however, some money <em>has</em> finally become available
via the government's Local Sustainable Transport Fund (LSTF), and
Guildford is hoping to receive approx. £900,000 of it, a large chunk
of which is intended to fund cycling improvements.</p>
<p>A major component of Guildford's bid is the establishment of a
network of cycle routes within Guildford. On Thursday 3rd May, Alan
Fordham, the "Sustainability Programme Delivery Officer", hosted a
Guildford Cycle Forum meeting at the Guildford Borough Council offices
to present and discuss the routes that are currently planned for the
network.</p>
<p>Fourteen local cycle routes are planned, mostly radial routes
fanning out to the north from the town centre, which actually lies in
the southern part of the town. Unfortunately,
the route maps that Alan handed round at the meeting aren't available
online anywhere yet; I asked him to circulate some digital copies by
e-mail, but in case he doesn't get time to do so I will try to copy
them onto Google Maps or something.</p>
<p>In subsequent blog posts, I plan to discuss the routes and the
weaknesses that I see in them based on my experiences cycling to and
from Guildford every day. However, in this post, I want to discuss
some more general points about the plans.</p>
<p>The most important point that I haven't seen addressed is what the
overall objective of the project is, and how it will be assessed. In
my opinion, the logical objective is modal shift, where journeys
currently made by car are transferred to other forms of transport, and
both the design of the network and its performance should be judged by
how well it achieves that. I think this is supported by the purpose of
the LSTF, which is to help promote the use of and migration to
sustainable transport.</p>
<p>Tying into this point, one of the things that really isn't clear to
me is what type of cyclist the routes are intended for.</p>
<ul>
<li>Are they intended to be used by regular cycle commuters? Many of
this class of cyclists will be aiming to cycle quite quickly and
travel at all times of year in all weather conditions. Quite often
they will be travelling at rush hour, and given rush-hour congestion
and aggressive driving, would likely welcome the addition of good
new cycle routes. These cyclists desire routes that have good sight
lines, are no more obstructed than roads, and facilitate
bidirectional flow well. Unfortunately, these kinds of requirements
can be very difficult to accommodate without new, purpose-built
segregated cycle facilities or the provision of mandatory on-road
cycle lanes. I am one of these users.</li>
<li>Are they intended as an 'easy option' to attract occasional
cycle commuters? The provision of signposted routes might be the
key to persuading people to take up cycling to work, but if the
routes are too much slower than driving, or have significant
sections that put them in conflict with rush hour traffic, they
might be put off. To me, this is a core target group, as moving them
to cycling will often directly replace a single-occupant car
journey, and I suspect that the problem may not be so much getting
them cycling as <em>keeping</em> them cycling.</li>
<li>What about parents taking their children to and from school?
When I was in Cambridge, I used to see a couple who would cycle to
work at the university on their tandem, taking their children with
them in a trailer and dropping them off at primary school on the
way. For this kind of user, the routes really need to be accessible
either when towing a trailer or when using one of the <a
href="http://www.dutchbike.co.uk/">Dutch-style family carrier bikes
with a bay in the front</a> (these are also really good for
shopping, or so I hear). For these cyclists, who are often heavily
laden, it is important to provide facilities that are wide enough to
accommodate them and have few sharp corners. Even a single chicane
<a href="http://www.cyclestreets.net/location/31069/">like this
one</a> can make a route impassable.</li>
<li>Are the routes intended to be used for school travel by children
old enough to ride their own bikes, but not experienced or confident
enough to fall into one of the first two categories? For these
users, good segregation of cycle routes from traffic is important,
because they will commonly want to ride with their friends and might
be easily distracted from paying attention to other vehicles.
Another factor is that, unfortunately, many of these users will be
using equipment that is incomplete or in poor condition (e.g. bad
brakes, or no lights), and once again, good segregation may be key
in keeping them out of danger.</li>
<li>Or are the routes intended for casual cyclists and cycle
tourists? These users, who will usually be travelling with a
flexible itinerary, in favourable weather conditions, and at times
of day when traffic is relatively light, can be accommodated much
more easily than any of the other types of user described above.
</li>
</ul>
<p>In subsequent blog posts, I will try to consider the proposed
routes with reference to how suitable they are for each of the above types of
user. Unfortunately, one of my biggest worries about the network as a
whole as it is currently envisaged is that it accommodates the last of
those classes of user really well, but that many of the routes have
fatal flaws that make them impossible for any of the other groups to
depend on. Because of
that, I worry that the objective of getting many people living in
Guildford to change to cycling might be compromised.</p>
<p>Another problem is that there is very little money actually
allocated under the plan to major improvement works (such as altering
junctions to make them safer for cyclists, or changing road layouts to
add cycle lanes of appropriate width), and that the main Surrey County
Council highway planning department doesn't seem to be involved in the
process. As far as I can tell, this seems to limit the project mostly
to an exercise in putting up signposts to direct cyclists onto the least
inadequate of the existing routes (and even then, one of the Cycle
Forum members raised the "environmental concern" that "ugly" signs
were "unnecessary"). Fortunately, however, there are a few
improvements being made to some of the most obviously hopeless
spots.</p>
<p>Overall, I think that just the fact that this project is taking
place is a major step forward for cycling in Guildford: a first move
along the long road towards a town that's genuinely accessible by
bicycle.</p>
<p>In my next post, I will investigate Route 4: Wooden Bridge to
Jacobs Well, and how well it holds up during rush hour.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-34716035704215957712012-03-10T13:42:00.002+00:002012-03-10T14:02:49.447+00:00Leek and potato soup<p>It's been a while since I last posted a recipe. Here's one for a delicious leek and potato soup that I first tried out last Christmas. Serves 3-4.</p>
<ul>
<li>1 medium onion</li>
<li>2 cloves garlic</li>
<li>2 large leeks</li>
<li>350 g potatoes</li>
<li>250 ml white wine</li>
<li>150 ml double cream</li>
<li>1 tbsp olive oil</li>
<li>2 tsp vegetable bouillon powder</li>
</ul>
<p>Peel and dice the potatoes.</p>
<p>Finely chop the onion and leeks and, using a large heavy-based saucepan, fry gently in the olive oil over a medium heat for 2-3 minutes. Crush and add the garlic, and fry for a further 1 minute. Meanwhile, boil a kettle.</p>
<p>Add the potatoes, wine and bouillon powder to the pan, with enough boiling water to just cover the vegetables. Add salt and pepper to taste.</p>
<p>Turn the heat down to a simmer, and cook for 30-40 mins until the potatoes are nice and soft. Blend or mash the soup, and finally stir in the double cream just before serving.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-25983819393024674282012-01-31T15:20:00.003+00:002012-01-31T15:26:22.437+00:00Paper for EUSAR 2012<p>I'll be presenting a paper in an invited session on urban remote sensing at this year's European Conference on Synthetic Aperture Radar (EUSAR 2012). The preprint is now available on my website.</p>
<ul><li>P.T.B. Brett and R. Guida. Geometry-Based SAR Curvilinear Feature Selection for Damage Detection. In 9th European Conference on Synthetic Aperture Radar - Invited Papers (EUSAR 2012 - Invited Papers), 23-26 April 2012. <a href="http://peter-b.co.uk/downloads/brett_guida_2012.preprint.pdf">[Preprint]</a></li></ul>
<p>In other news, I will be joining <a href="http://thecostofknowledge.com/">the boycott of Elsevier and their journals</a>, and I encourage other researchers to do the same. Their behaviour is actively damaging to science.</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0tag:blogger.com,1999:blog-5144379.post-1921254989255230902011-12-28T16:39:00.003+00:002011-12-28T19:18:17.948+00:00Coming up to gEDA 1.7.2<p>Later this week, gEDA/gaf 1.7.2 will be released! Hopefully, this will be the final unstable release before gEDA/gaf 1.8.0, and it will contain a few neat new things that we've been working on over the last six months.</p>
<ul>
<li>As I've <a href="http://blog.peter-b.co.uk/2011/06/scheme-api-merge-and-keybinding-in.html">discussed previously</a>, this release will incorporate all of the basic Scheme API functionality, to make it easier to write extensions for gschem in pure Scheme (there's a small sketch of what an extension might look like after this list). I've had quite a few e-mails and feature requests from people who have been doing interesting things with the new API, and I'm looking forward to hearing about more tools for enhancing gschem! All of the new functions are documented in the <a href="http://geda.peter-b.co.uk/geda-scheme/"><i>gEDA Scheme Reference Manual</i></a>.</li>
<li>In terms of documentation, I've completely re-written the <a href="http://geda.seul.org/wiki/geda:gschem_ug"><i>gschem User Guide</i></a>, more fully describing many features in gschem that were previously poorly documented, and hopefully making it easier to find the information you need. Gareth Edwards has done a fantastic job of adding man pages to gEDA, and now all of the programs we install have a properly-written man page.</li>
<li>The first parts of my rework of keymapping will also be included. Now key names will be properly internationalised when they're displayed, and having Caps Lock enabled will no longer mess up your keybindings. You can also now bind key combinations that use the Mod, Super or Hyper modifier keys.</li>
<li>You can now read gEDA documentation on Windows! (About time, really).</li>
</ul>
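<p>To give a flavour of what this makes possible, here's a minimal sketch of a gschem extension written in pure Scheme: an action that counts the objects on the page currently being edited, bound to a key using the reworked keymapping system. A caveat: this is an illustrative sketch, not tested code. The <tt>count-objects</tt> procedure is entirely hypothetical, and the module and procedure names (<tt>(geda page)</tt>, <tt>(gschem window)</tt>, <tt>active-page</tt>, <tt>page-contents</tt>, <tt>page-filename</tt>, <tt>global-set-key</tt>) and the key name syntax are written from memory, so check the <i>gEDA Scheme Reference Manual</i> before pasting anything into your gschemrc.</p>
<pre class="code">
;; Modules providing access to pages and the gschem window (module
;; names as I recall them; verify against the reference manual).
(use-modules (geda page)
             (gschem window))

;; A hypothetical action: report how many objects there are on the
;; page currently being edited.
(define (count-objects)
  (let ((page (active-page)))
    (format #t "Page ~s contains ~a objects~%"
            (page-filename page)
            (length (page-contents page)))))

;; Bind the action to a key combination. The "<Super>o" key name is
;; an assumed example of the new Super/Hyper modifier support.
(global-set-key "<Super>o" 'count-objects)
</pre>
<p>The appealing thing about this style of extension is that it should need no C code and no rebuild of gschem: drop the definitions into your local gschemrc, and they ought to be available the next time gschem starts.</p>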
<p>What's next once 1.7.2 has been released? Well, the most important thing is going to be to get a new stable release out. Many distributions are keen to move to Guile 2.0.x, and the current stable version of gEDA (1.6.2) doesn't support it. My current hope is to go into a string freeze for gEDA 1.8.0 very soon after gEDA 1.7.2 is released, and aim to get all the translations updated in time for a stable release at the end of January 2012.</p>
<p>There are many exciting plans for the 1.9.x branch of gEDA, but I'll leave them for another time...</p>Peter Bretthttp://www.blogger.com/profile/13507691713687465296noreply@blogger.com0