Wednesday, 31 December 2014

Npower refusing to comply with orders from the ombudsman

I have to say I've been quite impressed so far with the Energy Ombudsman's handling of my Npower case.  On 26th November they ordered Npower to:
  • Send me a letter of apology
  • Credit my account with the discounts that I was entitled to, which they failed to apply (the Ombudsman actually calculated this to be slightly more than I had)
  • Ensure that an unexplained bill adjustment is corrected
  • Credit the account with a "goodwill" payment
  • Refund me all of the remaining credit on the account
The goodwill payment wasn't as much as I'd hoped, given the amount of time and inconvenience that they had cost me, but all in all this seemed not a bad resolution to the problem.

Npower were required to comply with this order by Christmas Eve...  so today I've written to the ombudsman to report that they haven't bothered to comply...  I've had no communication from Npower at all since the ombudsman issued the order 5 weeks ago.

With levels of competence like this, I do wonder why the regulator doesn't just rescind Npower's licence.

Wednesday, 26 November 2014

Are exclusivity deals good for the consumer?

Opendium has been going for over 9 years now, and over that time we've gained a number of schools as very happy customers through word of mouth (and as a testament to their satisfaction, no school has ever left us!)  We've spent that time working with our customers to build a very capable product, which is also somewhat cheaper than most of our competitors, and we're actively working to promote our product to more schools.

We're primarily marketing to independent schools, since they have much more freedom to make their own decisions, and there are a number of organisations that represent the British independent schools which we're actively engaging with.  This should be good for both us and their members.  Just last month we sponsored and attended the Welsh Independent Schools Council's annual conference, which was a good experience for us and brought up some interesting ideas for our product roadmap.  In fact, we've already implemented some of those ideas, and they are now going through the testing phase of our development cycle.

However, I was surprised by the Independent Schools Association's attitude when we contacted them - they refuse to work with us as they say they already have exclusive contracts with "preferred suppliers".  They are supposed to be working for their members, which seems at odds with any kind of supplier exclusivity.  Competition is almost always good for the consumer - it leads to innovation and lower prices, and conversely exclusivity almost always leads to stagnation and high prices.  Surely if they truly are working for the interests of their members, they would be trying to foster as much competition as possible and giving their members a wide variety of suppliers to choose from to meet their individual needs?

Thankfully, ISA's attitude doesn't seem to be shared by the other people that we have been in discussions with, so we're looking forward to working with them and benefiting their members.

Monday, 13 October 2014

Small suppliers picking up the tab for support

I've been having some thoughts about how the support load is distributed between large suppliers such as Microsoft, Apple and Google vs. smaller suppliers such as ourselves.

It seems that people buy in services from big providers, but when problems hit they can't get the necessary level of support from them, so turn to the smaller providers with whom they have a not-entirely-related support contract.

This happens to us all the time - we have all manner of kludges in our software to make Apple devices work reliably with it due to bugs in Apple's software, for example.  In an ideal world, the customer would call Apple and Apple would diagnose the problem (possibly with our help) and fix their software.  In a less than ideal world, we would do a temporary work-around to get our customers up and running, then report these bugs to Apple, who would fix them.  Back in the real world, Apple never fix the bugs and the work-arounds become permanent bodges that are an ongoing minefield for us.

We used to report all the Apple bugs we found to Apple with the expectation that they would be interested in fixing them.  These days we don't bother - they have never shown any interest in fixing a bug we've reported.  Reporting usually went like this: after spending hours diagnosing a problem, we would send them a comprehensive report of what was happening and how to reproduce it, often with network traffic dumps clearly showing the problem.  They would respond asking for exactly the same information we had just provided, but in a different format.  So we would spend hours reproducing the problem again, send them everything they asked for, and never hear anything back.  I've got no problem with spending some time collecting information for them if they are actually going to use it, but it's a complete waste of our time to do debugging that they will ignore every time.

We're currently having problems with Microsoft's web servers - they have a bug in them that means certain clients can't connect (pretty much anything using OpenSSL on Scientific Linux 6.5).  In particular, our proxy server software can't connect without some work-arounds.  There is no well publicised address for reporting bugs, but we found a promising looking address and sent a comprehensive bug report.  We even prefaced the report with a "if this is the wrong address, please forward it on to the right department" note.  Instead, we have simply been bounced from department to department, many refusing to hand out email addresses and instead insisting on us phoning - the phone operators are completely ill-equipped to handle this kind of bug report and inevitably bounce us on to another department.

So, much like Apple, Microsoft seem uninterested in actually looking at bug reports.  Our limited experience of dealing with Google is much the same - they are just too big to be interested in resolving problems that don't affect hundreds of thousands of customers.

So we're back to my initial thoughts - customers buy expensive products from the big guys, who pocket the profits and refuse to support them properly, leaving us to pick up the pieces despite it not really being part of our remit, because telling a customer "we're not going to help you" isn't really an option for us.  Yet when we are unable to work around the problems, it is somehow seen as our fault and reflects badly on us - no one ever stops buying from the big guys because of this stuff.

Friday, 26 September 2014

ICO correspondence

Many people don't realise, but sending unsolicited email is unlawful here in the UK.  There are several ways companies go about doing bulk email marketing:
  1. Collect the recipient's details via an existing business transaction, giving them the opportunity to opt-out of marketing emails at the same time.
  2. Collect the recipient's details via an existing business transaction, giving them the opportunity to opt-in to marketing emails.
  3. Buy/acquire a mailing list from someone else.

The Privacy and Electronic Communications (EC Directive) Regulations 2003 state that (1) - the so-called "soft opt-in" - only permits marketing of "similar products and services", so you're on dodgy ground if you stray beyond that.  (2) is the safest option, and (3) is never lawful.  Also, any marketing email is required to contain a valid address at which recipients can contact the sender.

Whenever I give a company my details, I always ensure that I opt-out of marketing if there's an option to do so, and I never opt-in, so in theory I should get no spam.  Unfortunately, these regulations are widely disregarded, even by big corporations, so I send a standardised response to spam email that I receive from British companies (usually to several of their email addresses):
This is an unsolicited communication by means of electronic mail transmitted to an individual subscriber for direct marketing purposes. This is contrary to section 22 of The Privacy and Electronic Communications (EC Directive) Regulations 2003.
Please do not send any further unsolicited emails. A charge of £25 per email will be made for any further unsolicited emails received and your sending of any such emails will be deemed as acceptance of these terms.
I am also making a subject access request under Section 7 of the Data Protection Act 1998 for all the data / information you hold on me and from where you obtained it.
I suggest you remove me from your list and review your marketing methods with a qualified lawyer. Please confirm the receipt of this email. Failure to respond will result in your organisation being reported to the Office of the Information Commissioner.
I want to know how they came upon my details and why they think I've opted in to their spam, so as part of the above email I make a subject access request (SAR) under the Data Protection Act - companies have a maximum of 40 days to respond.  Usually I get no response at all, and usually the spamming continues.

At the weekend I tidied up my email a bit, and took the opportunity to actually file complaints with the Information Commissioner's Office.  They have the power to follow up these complaints and fine the companies responsible.

In total I made 5 complaints under the PECR - these were companies who had spammed me, been sent the above warning and had continued to send spam regardless.  I also made 6 complaints under the DPA - companies who had spammed me and had not responded to the SAR.

All of the complaints I filed followed the same format - I filled in the appropriate form provided by the ICO and attached it to an email as a PDF.  To the same email I attached all of the relevant emails I had sent or received as message/rfc822 attachments - this means they include all of the headers added by the email client and email servers.
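As a sketch of that format (using Python's standard email library; all of the addresses here are invented for illustration), attaching the received message itself - rather than pasting its text - keeps the headers verbatim:

```python
from email.message import EmailMessage

# The spam as originally received (addresses invented for illustration).
spam = EmailMessage()
spam["From"] = "marketing@example.com"
spam["To"] = "me@example.org"
spam["Subject"] = "Unsolicited offer"
spam.set_content("Buy our stuff!")

# The complaint email itself.
complaint = EmailMessage()
complaint["From"] = "me@example.org"
complaint["To"] = "casework@example.net"
complaint["Subject"] = "PECR complaint - evidence attached"
complaint.set_content("Complaint form attached; the original spam follows.")

# Attaching the whole message (rather than its text) keeps every header
# intact - Received, Message-ID, dates and all.
complaint.add_attachment(spam)   # becomes a Content-Type: message/rfc822 part
```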

Today the ICO sent me their first response - they tell me they can't investigate my DPA complaint against Halfords because I didn't include any of the email headers and therefore they don't know what date I made the SAR on...  I'm not sure if they're incompetent or looking for an excuse to not do their job - all the emails I forwarded to them had the complete headers.

Thursday, 11 September 2014

Diagnosing Sharepoint Breakage

Every so often you get a proper puzzle to solve, and this morning is one of those times.  One of our customers reported that they were unable to contact the Microsoft Sharepoint servers through their proxy server.  A quick test on my test system confirmed the same issue, so we spent about 3 hours delving right into the nitty gritty to figure out what was going on.

The proxy was reporting "connection reset by peer" during the TLS handshake - TLS (Transport Layer Security) is the cryptography protocol used to secure HTTPS web sites, and TLS problems tend to be a pain since the OpenSSL library usually doesn't give especially verbose error messages.  It was clear this wasn't going to be a trivial problem to solve, so we immediately disabled HTTPS interception for the Sharepoint site to get it up and running again.  The customer confirmed that this resolved the issue, which takes the pressure off a bit but raises a question: why does it work when the browser negotiates the encryption, but not when the proxy does?

The first port of call was to capture some network traffic and load it into Wireshark for analysis.  This showed that the proxy was sending a TLS "Client Hello" handshake and the server was returning a TCP ACK, but no TLS response.  30 seconds later the server tore down the connection with a TCP RST.  The ACK confirmed that the server got the "Client Hello", and since you'd usually expect the response to be sent in the same packet as the ACK, it looked like the packet wasn't being dropped by intermediate network hops - the server simply never sent a handshake response.

Time to make things simpler - instead of using the proxy server, let's ask OpenSSL to connect directly:
openssl s_client -showcerts -connect
This failed in the same way when we tried it on the test server, but succeeded when run on my Fedora workstation.  Comparing the network traffic between the working and non-working tests showed that the most obvious difference was that the non-working handshake presented a few more ciphers for the server to choose from - maybe one of those extra ciphers was confusing the Sharepoint server.

We tried adjusting the list of cipher suites, but every adjustment we tried succeeded, and we couldn't pin down anything specific that would break it.  We needed to start with the broken handshake and edit it bit by bit until it started working - that would let us figure out exactly what needed to change to make it work.

So we took the captured network traffic and dumped it out as hex:
tcpdump -r capture.pcap -x > capture.hex
We're not interested in the TCP layer stuff, so the first three packets can be ignored (SYN, SYN ACK, ACK) - these are the normal TCP three-way handshake.  The next packet contains the "Client Hello" which we're interested in, but it also contains the Ethernet, IP and TCP headers.  Using Wireshark it's trivial to identify the start of the payload, and we just trimmed everything before that off the hex dump.

Now to replay it and make sure it still fails:
(sed -e 's/#.*$//' capture.hex | xxd -r -p ; sleep 5) | nc 443
The sed bit at the start just strips off anything after a # so we can put comments in the hex file.  xxd converts it back into binary and we used nc to connect to the web server and send the data.
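For what it's worth, the same replay can be done in a few lines of Python rather than shell - a rough equivalent of the pipeline above, with the host and file name as placeholders:

```python
import socket

def replay(hexfile: str, host: str, port: int = 443) -> bytes:
    """Replay a hand-edited hex dump at a server, like the sed/xxd/nc
    pipeline above, and return whatever comes back (empty if the server
    just sits there, as the broken one did)."""
    with open(hexfile) as f:
        # Strip '#' comments; bytes.fromhex ignores the remaining whitespace.
        payload = bytes.fromhex("".join(line.split("#")[0] for line in f))
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(payload)
        s.settimeout(5)           # stand-in for the pipeline's `sleep 5`
        try:
            return s.recv(65536)
        except socket.timeout:
            return b""
```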

We checked the traffic in Wireshark - all looks as expected and the web server still didn't respond, so far so good.

Again, using Wireshark we can identify the various parts of the packet, and set about modifying them.  Of interest are four headers indicating the length of various sections - the TLS Record Layer has an overall length header, within that the "Client Hello" data has its own length header, and within the "Client Hello" are a cipher suite list and an extension list, which again have their own headers indicating their respective lengths.  The record, cipher suite list and extension list length headers are each 16 bits (so can hold values up to 65535); strictly speaking, the Client Hello's own length field is 24 bits.
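For the curious, here's a sketch of pulling those length fields out of a captured record with Python (offsets per RFC 5246; `data` is assumed to start at the TLS record layer, i.e. the payload trimmed out of the hex dump):

```python
import struct

def client_hello_lengths(data: bytes):
    """Return the four length fields from a raw Client Hello capture."""
    assert data[0] == 0x16                          # handshake record
    record_len = struct.unpack(">H", data[3:5])[0]  # 16-bit record length
    assert data[5] == 0x01                          # Client Hello
    hello_len = int.from_bytes(data[6:9], "big")    # 24-bit handshake length
    p = 9 + 2 + 32                                  # skip version + random
    p += 1 + data[p]                                # skip session ID
    suites_len = struct.unpack(">H", data[p:p+2])[0]
    p += 2 + suites_len
    p += 1 + data[p]                                # skip compression methods
    ext_len = struct.unpack(">H", data[p:p+2])[0]
    return record_len, hello_len, suites_len, ext_len
```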

As mentioned, we were interested in the cipher suites - in particular the extra ones that were presented in the broken handshake but not in the working one.  So we set about removing them one by one - each cipher suite is 16 bits long, so removing one involves deleting it from the cipher suite list, and then reducing the cipher suite list length, Client Hello length and TLS record length headers by 2 each.
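That edit-and-fix-up procedure looks roughly like this in Python - a sketch that only handles dropping the first suite, with offsets per RFC 5246:

```python
def drop_first_suite(data: bytes) -> bytes:
    """Remove the first cipher suite from a raw Client Hello and fix up
    the three enclosing length fields, mirroring the manual hex editing."""
    out = bytearray(data)
    # Find the cipher-suite list: skip record header (5), handshake header (4),
    # client_version (2), random (32), then the variable-length session ID.
    p = 5 + 4 + 2 + 32
    p += 1 + out[p]
    suites_off = p
    # Shrink the 16-bit TLS record length (offset 3) by 2.
    out[3:5] = (int.from_bytes(out[3:5], "big") - 2).to_bytes(2, "big")
    # Shrink the 24-bit Client Hello length (offset 6) by 2.
    out[6:9] = (int.from_bytes(out[6:9], "big") - 2).to_bytes(3, "big")
    # Shrink the cipher-suite list length by 2, then delete the suite itself.
    out[suites_off:suites_off+2] = (
        int.from_bytes(out[suites_off:suites_off+2], "big") - 2
    ).to_bytes(2, "big")
    del out[suites_off+2:suites_off+4]
    return bytes(out)
```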

Each time we removed a cipher suite, we replayed the data to the server and looked to see what happened.  After removing two cipher suites, the server suddenly started responding with a "Server Hello"!  We put those ciphers back and removed two others to see if it was specifically one of those ciphers confusing the server, but that didn't break it again - the server was still happy.

The broken handshake that we started out with had a TLS record length of 258 octets and removing two ciphers (16 bits each) reduced it to 254 - a number that will fit in a single octet, whereas 258 requires two octets.  So we tried adding all the ciphers back in and removing one of the records from the extensions list (5 octets) instead.  Again, the server responded and was happy.

So there we go.  It looks like Microsoft's Sharepoint server has a bug in it that breaks any client that tries to handshake with a TLS record more than 255 octets long.  Evidently the proxy presents a larger selection of cipher suites to the server than most web browsers, so it works fine from the browser but not from the proxy.

We have contacted Microsoft, although I have no idea whether we've reached the right department - hopefully it will get passed on to the right people.

IP Based Controls

Over the years, our Iceni servers have undergone a number of design changes in order to accommodate the changing nature of devices being used on networks.  In particular, authentication of web traffic has constantly needed special attention.

In the old days, software usually had really good support for authenticating with web proxy servers.  Windows clients would silently authenticate each web request using NTLM or Kerberos, non-Windows stuff used HTTP Basic authentication (it pops up a username/password box when you start a session, but you can use the "remember password" checkbox to stop this getting annoying).  Every so often we came across a rare example of software that couldn't handle proxy authentication and we'd have to tweak the proxy configuration a bit to bypass the authentication and filtering, but in general life was good.

We're increasingly seeing software support for web proxy servers getting poorer though - quite a lot of software just plain ignores the system-wide settings and bypasses the proxy, and an increasing amount of software can't handle proxy authentication at all.  In fact, this latter point has often shown just how poorly built some modern software is: Windows 8, for example, tries to log in to Microsoft's servers when you log into a machine, and if the proxy asks for authentication the machine hangs and has to be hard-reset!  Apple devices seem particularly bad too - if the proxy asks an iPhone to authenticate when it tries to synchronise its calendar, it just retries immediately, and keeps going indefinitely - hundreds of times a second!

A few years ago we did a lot of work to work around these broken devices:  Firstly, we introduced a transparent proxy to deal with the software that completely ignores the proxy settings.  Then we started caching the last known user for each IP address, and for client software that's known to be broken we just reuse those details rather than asking it to authenticate.  We also added a captive portal and support for WISPr authentication to help the Apple devices along a bit.

Along the way we hit some surprising problems - for example, you would expect every web request to be independent of the others, but we found that if we avoided authenticating certain iPhone traffic, then completely unrelated traffic from the same device - traffic that would normally cope with authentication just fine - suddenly stopped being able to cope with it too!

The move to Iceni 2 saw more changes - administrators can now tell the system to only use the captive portal/WISPr authentication for certain problem URIs, or disable authentication entirely in some cases - for example, by default Iceni 2 servers don't authenticate or filter certain known-problematic services at all.

All this work has gone a long way towards avoiding the problems that were cropping up, but increasingly there's a feeling that things like phones and tablets have such poor support for HTTP proxy authentication that it's probably preferable to turn it off entirely for those devices and rely on the captive portal and WISPr.  But how do you do that just for those devices, and not for things like the on-domain Windows machines which still work fine?

This brings me on to the latest stuff I've just finished working on, which is now going through QA testing (soon to be released to customers, all being well!):  We now allow you to define a network - a network address and netmask - and drop it into a user group as if it were a user.  This means you can do things like disabling authentication for all devices on a particular network - your wifi network, for example.
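In Python terms the idea is something like the following sketch, using the standard ipaddress module (the group names and address ranges are invented):

```python
import ipaddress

# A "network" entry dropped into a user group, matched against the
# client address before any authentication happens.
GROUPS = {
    "Anonymous": [ipaddress.ip_network("10.20.0.0/16")],  # e.g. the wifi network
    "Staff":     [ipaddress.ip_network("10.30.0.0/24")],
}

def groups_for(client_ip: str) -> list[str]:
    """Which groups does this device fall into purely by its address?"""
    ip = ipaddress.ip_address(client_ip)
    return [g for g, nets in GROUPS.items() if any(ip in n for n in nets)]
```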

The bonus of this is that, if your network is split up appropriately, you can also tweak filtering based on the workstation's location - you can relax the filtering for classrooms that are well supervised, for example.

We've also got rid of the "Guest" user and renamed the "Guests" group to "Anonymous" to better reflect what it means.

Over all I really like the new model, and I have plans to extend it to the mail server component as well.

However, my new job for today is to fix a locking bug in the web filter - joy of joys!

Monday, 1 September 2014

Incompetent billing

I have billing disputes with no less than three separate companies at the moment.  It's depressing that this kind of thing is probably going to go completely unnoticed by a lot of customers and result in these companies actually making money out of their mistakes...


BT
I switched my POTS line away from BT on July 25th.  My annual "line rental saver" contract expired on July 23rd so this should have been ok.  Except BT emailed me to say I was going to be billed £22.22 as an "early cancellation charge" because they thought I was still in contract.  I gave them a call and was assured by the call centre agent that the email had been sent by mistake and I wasn't actually going to be charged.  He said he had made a note on my account to that effect to ensure it got reviewed before actually billing.

Then they took that charge from my bank account by direct debit.  So I called them again and they refused to refund it, saying that the only note on my account was something very generic such as "customer had a billing question, explained it to him" or words to that effect.  After about 45 minutes of shouting at them they finally agreed to review their call recording.

I got a call back a few days later saying they had reviewed the call recording.  They stated that I had never been told that it was a mistake, never told that I wouldn't be charged and never told that a note had been made on my account.

So I emailed them the recording that I had made of the call...  Suddenly they refund the charge, no questions asked.  (Ok, they miscalculated the refund and I had to tell them to fix it, but still...)

Originally I would've had no problem switching back to BT in the future, but now I've come to regard them as a company that will outright lie to make money fraudulently so long as the customer can't produce any evidence to prove they are lying...  Pretty bad.

(Yeah, I know I could've enacted the DD guarantee to get my money back, but they would've just recorded it as a default so best to get it sorted at the source).

FalconNet / Merula

When I switched away from BT, I moved my internet connection and POTS over to FalconNet (who are a trading name of Merula Ltd).  £22/month (inc. VAT) for the internet connection (40Mbps down, 10Mbps up, FTTC) and £9.50/month inc. VAT for the POTS line.

They are invoicing me £25 ex. VAT and £8.20 ex. VAT respectively.  Emailing them to point out that they're billing me incorrectly has resulted in a grand total of no response at all.  Not impressed.  We'll wait to see what they take by direct debit and possibly ask the bank to back charge the DD if I can't get any response.

Edit: All cleared up and a credit note has been applied to my account.

Npower
The troubles with Npower seem to be continuing, even after I have stopped being their customer.  Over an 18 month period I have received 13 separate bills from them (bearing in mind they are supposed to be billing twice a year...)  Most of the bills are wrong and the next bill (usually also wrong) starts by cancelling the previous bill.  On the whole I've ended up with a massively confusing mess of bills that has taken me a considerable amount of time to go through and understand exactly what they have billed me for.

The latest bill says I owe £144.07, but as far as I can tell they are about £166.30 out and actually owe me £22.23.

Here's the message I just sent to them...  I'm through trying to sort this stuff out over the phone, it's just a complete waste of my time...

My latest bill states that I owe you £144.07.  As I have previously mentioned to you over the phone, I do not intend to pay you until you send me a correct bill - the bill you have sent is incorrect, just like numerous previous bills:

1. My direct debit discount is supposed to amount to £100/year.  In January 2014, the discount for 2013 did not appear on my bill, so I called you and was told that it would be credited to my next bill 6 months later.  This did not happen, so I called again on June 9th and was told the direct debit discount would be refunded.  I called again on June 25th and was told it was "still being handled".  My latest bill, received last week, still shows that this direct debit discount has not been refunded.

2. Although I was a customer until the middle of July this year, I have only received £21.09 as a pro-rata direct debit discount for 2014. It is true that my bill has largely not been paid by direct debit this year. That is your fault though - the direct debit was set up, you just didn't bother to charge it.  I still expect to receive a pro-rata discount of around £50.

3. The statement dated October 22 2013 states that my closing balance is £403.50.  The following statement, dated January 29 2014 lists the opening balance at £419.80.  The £16.30 difference appears to be unaccounted for.

On the whole, I have received an extremely confusing mess of bills, cancelled bills and amended bills since January 2013 - I have received no less than 13 separate bills over this period, most of them wrong in one way or another, and it has been very difficult and time consuming to piece together exactly what you've done.

Thursday, 7 August 2014

Defending Telesales?

I came across this post, which I thought was a remarkably fun idea:

Essentially, the guy was cold-called by a recorded message that asked if he wanted to talk to someone about his mortgage.  So he opted to talk to someone, pretended to be the cold-caller's IT department and convinced him to factory reset his phone.  If the phone is auto-provisioned then this is a few seconds' inconvenience for the cold-caller, but if it is manually provisioned then his phone is out of action until someone can set it all up again.

What really surprised me were the number of people who were coming out to defend the cold-caller.

"The poor guy was doing the job he was told to so he could get a paycheck."

So let's look at this rationally:
  1. Cold callers cost the person they are calling time, inconvenience or money (depending on the situation), and there is no way to preemptively stop them.
  2. This was a recorded message.  I'm not sure what the laws are like in Canada, but in the UK it is illegal to use a recorded message when cold-calling people.
  3. The caller was ignoring the national "Do Not Call" registers (i.e. Canada's equivalent of our TPS).
  4. The caller was sending a fake caller ID to disguise their identity.
  5. Given that they have gone out of their way to disregard the law and disguise their identity, it's probably reasonable to assume that whatever they're selling is a scam - if you were operating a legitimate business you would want people to find you, not try to hide your identity.

So at best, cold callers are an annoyance - they know they are annoying the people they call, yet they choose to do the job.  At worst, they are breaking the law (certainly sounds like that was the case this time).  They are pretty much universally a drain on society.

I don't buy the whole "he was just doing his job" and "maybe it was the only job he can get to feed himself" arguments - you could apply these to any criminal.  Should we be defending the drug dealers and the burglars because "maybe they couldn't get a better job?"

I also don't understand the "he probably didn't realise it was unlawful" arguments that I often hear.  Firstly, ignorance is no defence - if you're arrested for doing something illegal then "I didn't know it was illegal" isn't going to keep you out of jail; and secondly, I just don't believe that none of the thousands of people who have been called have informed the cold-caller of their legal position.  Whenever I have been cold-called and pointed out to them that they were breaking the law, they have always told me that I was wrong, even when I quoted the relevant legislation at them.  Unless you don't care about the law, the sensible thing to do when someone tells you what legislation you're breaking is to actually go and look at it and see if they're right.

If your employer tells you to do something, it is your responsibility to figure out if it is legal before doing it.  "I was just following orders" doesn't cut it.

Thursday, 17 July 2014

Data Retention and Investigatory Powers (DRIP) bill

In April, the European Court of Justice ruled that the routine collection of location and traffic data about phone calls, texts, emails and internet use and its retention for between six months and two years meant a very detailed picture of an individual's private life could be constructed, that this amounted to a severe incursion of privacy and it therefore contravened EU law.

In response, the British government have put forward the Data Retention and Investigatory Powers (DRIP) bill to restore their ability to snoop on everyone.  Not wanting this bill to come under too much scrutiny, it was passed by the commons after a single afternoon's debate.  51 MPs voted to do their job properly and take time to make a decision, but they were overruled by 441 votes to just push it through as quickly as possible.  Assuming the Lords agree, this will pass into law tomorrow.

Once enacted, this legislation will allow the government to require internet service providers, internet application providers and telephone companies to record and retain metadata on any members of the public, without needing a warrant.  The bill would also apply to non-UK companies like Facebook (although quite how they expect to enforce this I'm not sure).

"The police will not know who that suspect is until they come to the police’s attention, at which point they have to get historical evidence. These days, part of that historical evidence will be in data records. They have to be able to access everybody’s data records in order to find those of one particular person, because the police, no more than the rest of us, are not given powers of clairvoyance with which to anticipate who is and who is not to be a suspect. Unless or until I hear from opponents of this Bill and of data retention how the police can be expected to identify in advance those who are going to be suspected of crime, I have to say that the whole logical basis of their argument completely falls away." - Jack Straw

Traditionally, a certain amount of information was kept by an ISP/telco/whatever for normal day to day business purposes. For example, a telco may keep call metadata records for a certain amount of time for billing purposes. If the police have a suspect then I see no issue with them getting a warrant to access that data.

If someone is a suspect, then I also see no problem with the police getting a warrant to record extra information that wouldn't normally be recorded/retained. e.g. they may get a warrant to have the ISP log web requests made by a suspect, or have the telco record call audio.

However, there is a distinction between the above examples, which require the police to suspect someone and convince a judge to issue a warrant, and what the government is increasingly trying to do, which is to capture data about *everyone's* activities, specifically for law enforcement purposes, just in case they later become a suspect.  That is something that's fundamentally wrong IMHO - as someone who has committed no crime, I have a right to privacy, and that right is being violated by having data recorded and retained for law enforcement purposes.

The whole "but it won't be used unless you become a suspect" argument is flawed - once the data is there, I have no confidence that access to it will be tightly controlled. Data may be leaked by accident, on purpose (illegally), the laws regarding under what circumstances it can be accessed may not be robust enough to prevent legal access for questionable purposes, and the whole thing is subject to feature creep - the access controls may be ok now, but I can't demand the historical data be deleted if their scope expands in the future.

Once upon a time, people were considered innocent until proven guilty, but these days it seems that everyone is treated as guilty from the start and just happens to be allowed some freedom until the authorities can figure out what crime they committed.

All three main parties are backing this attack on our privacy, but is an afternoon of debate really enough to decide to throw away everyone's freedom?

Wednesday, 9 July 2014

Decoding Freesat, Part 2

As I mentioned in the last post, I'm reverse engineering the Freesat transmissions in order to extract the channel numbers so I can automatically update my MythTV system to use sensible channel numbers.

I've now managed to figure out most of the important bits:  Transport stream 2315 is broadcast on 11.428GHz Horizontal, at 27.5 Mbaud, FEC 2/3 and contains a stream with PID 3002.  This stream transmits a carousel of service description tables (SDTs) and bouquet association tables (BATs).  The SDTs aren't especially interesting, so I'm ignoring them for now.

Freesat tailor their channels to groups of consumers by grouping them into bouquets - each of the four countries (England, Wales, Scotland and Northern Ireland) gets three bouquets - one for standard definition receivers, one for high definition receivers and one for G2 (second generation) HD receivers - so 12 bouquets in total for the time being.  (But more about regionalisation later).

Each BAT is packetised into one or more sections, and there is one BAT for each of Freesat's bouquets.  To collect all the information, you just keep watching the carousel until you've seen all of the sections for all of the BATs.
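That collection loop can be sketched in Python - note the field names here are my own, and a real implementation would read the bouquet ID, section number and last-section number from the DVB section headers:

```python
def collect_bats(sections):
    """Accumulate BAT sections from the carousel until each bouquet's
    table is complete.  'sections' yields dicts holding the bouquet ID,
    section number, last section number and section payload."""
    tables = {}      # bouquet_id -> {section_number: payload}
    complete = set()
    for s in sections:
        parts = tables.setdefault(s["bouquet_id"], {})
        parts[s["section_number"]] = s["payload"]
        # a BAT is complete once sections 0..last_section_number have been seen
        if len(parts) == s["last_section_number"] + 1:
            complete.add(s["bouquet_id"])
    return tables, complete

# Two sections for bouquet 272, but only one of bouquet 258's two sections
carousel = [
    {"bouquet_id": 272, "section_number": 0, "last_section_number": 1, "payload": b"aa"},
    {"bouquet_id": 272, "section_number": 1, "last_section_number": 1, "payload": b"bb"},
    {"bouquet_id": 258, "section_number": 0, "last_section_number": 1, "payload": b"cc"},
]
tables, complete = collect_bats(carousel)
print(complete)  # {272} - bouquet 258 is still missing a section
```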

A BAT consists of a header, zero or more "descriptors" (lumps of data that have an ID that identifies the type of data they hold) and zero or more "transport streams".  Each "transport stream" in the BAT contains zero or more descriptors that contain information relating to a DVB transport stream (i.e. satellite transponder).

The top level descriptors in the BAT include standard descriptors (bouquet name, country availability, private data specifier descriptor) and some non-standard ones:
Descriptor ID   Description
0xd4            Region table
0xd5 - 0xd7     Unknown
0xd8            Category table

I haven't investigated any of these except for the region table.  0xd5 - 0xd7 appear to be binary data.  0xd8 looks like a list of (ID, language, category name) tuples, but I'm not sure what the category IDs are referenced by; the category names are stuff like "Entertainment", "News", "Shopping", etc.

0xd4 is the one that's of interest to me.  The bouquets are geographically pretty coarse, and Freesat tailor the channels to much smaller regions: the south of England gets BBC One South on channel 101, whilst the East Midlands gets BBC One East Midlands on channel 101, etc.  Descriptor 0xd4 contains a list of the regions that are served by the bouquet.  (I don't understand why bouquets are used at all for regionalisation, since I can't see a reason for not handling it all through this fine-grained regionalisation system.)  The data in this descriptor is a series of variable length chunks concatenated together, with the header of each chunk containing its size so the next chunk can be found.  The data format of the chunks is:

Offset (octets) Length (bits) Description
0 16 Region ID
2 24 Language
5 8 Length of region name
6 Variable Region name

"Language" is a three letter text string and is always "eng" at the moment.
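A minimal parser for that chunk format might look like this - the layout is as reverse engineered above (so an assumption, not anything official), and the example payload is synthetic:

```python
def parse_region_table(payload: bytes):
    """Parse the 0xd4 region-table descriptor payload: a run of chunks,
    each holding a 16-bit region ID, 3-character language code, name
    length byte and the region name itself."""
    regions = {}
    i = 0
    while i + 6 <= len(payload):
        region_id = int.from_bytes(payload[i:i + 2], "big")
        language = payload[i + 2:i + 5].decode("ascii")
        name_len = payload[i + 5]
        name = payload[i + 6:i + 6 + name_len].decode("ascii")
        regions[region_id] = (language, name)
        i += 6 + name_len  # skip to the next chunk
    return regions

# Synthetic payload: region 15, language "eng", name "E Midlands/Central E"
name = b"E Midlands/Central E"
payload = (15).to_bytes(2, "big") + b"eng" + bytes([len(name)]) + name
print(parse_region_table(payload))  # {15: ('eng', 'E Midlands/Central E')}
```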

Now, as I mentioned above, the BAT also contains a list of transport streams, each with a bunch of descriptors.  Looking at the descriptors within a transport stream, alongside a few standard ones there is descriptor ID 0xd3, which maps service IDs to channel numbers.  This contains a series of variable length chunks concatenated together, with the header of each chunk containing its size.  The data format of the chunks is:

Offset (bytes) Length (bits) Description
0 16 Service ID
2 16 Unknown
4 8 Length of remainder of the chunk
5 Variable LCN/region mappings

The "LCN/region mappings" data is a concatenated set of fixed length subchunks as follows:

Offset (bytes)  Length (bits)  Description
0.5             12             Logical channel number
2               16             Region ID

So, we can select the appropriate bouquet (e.g. if we're using an HD receiver in England, we would choose bouquet 272, which is England HD) and pick our region (such as region 15 - "E Midlands/Central E").  In theory we can filter down the data to get a list of channel numbers and what transport ID and service ID (i.e. what channel) they are assigned to.
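The region selection logic, including the fallback behaviour described below, could be sketched like this (the function and variable names are mine, not anything from the broadcast data):

```python
def channels_for_region(services, region_id, default_region=0xFFFF):
    """Pick each service's LCN for a region, falling back to the mapping
    for region 65535 (which appears to act as a default) if the chosen
    region has no mapping of its own."""
    result = {}
    for service_id, mappings in services.items():
        by_region = {r: lcn for lcn, r in mappings}
        if region_id in by_region:
            result[service_id] = by_region[region_id]
        elif default_region in by_region:
            result[service_id] = by_region[default_region]
    return result

# ITV 1 London: channel 103 in region 1 (London), 977 everywhere else
services = {10060: [(103, 1), (977, 0xFFFF)]}
print(channels_for_region(services, 1))   # {10060: 103} - in London
print(channels_for_region(services, 15))  # {10060: 977} - elsewhere
```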

There are a couple of gotchas:

Firstly, one service can be assigned to multiple channel numbers.  The BBC regions are all typically available on 9xx channel numbers, but your local region is on 10x as well.

Region number 65535 appears to be a fallback or default region.  So, for example, in the England HD region, service ID 10060 (ITV 1 London) is assigned to channel 103 in regions 1, 18, 27, 31 and 38 and channel 977 in region 65535.  Region 1 is London, the other regions don't appear in the region list so I assume they are legacy IDs.  So in this example, if you're in London then ITV 1 London appears on channel 103, but if you're anywhere else it is on channel 977.

Region number 0 is a complete unknown...  See below!

Open questions

Region 0 - I can't figure it out at all.  It only seems to be used for BBC One (logical channel numbers 101 and 108), and it seems that multiple channels can end up assigned to a single channel number in region 0.

For example, looking at the Wales HD bouquet (274), BBC One London, BBC One West Midlands and BBC One South are all assigned to channel 108 in region 0.  Channel 108 isn't assigned in any other region.

Similarly, the Wales SD bouquet (258) assigns these same three channels to 101 for region 0.  We clearly can't just ignore region 0 because there's no other way to assign a channel to 101 in this case, but I can't see how set top boxes can choose between the three assigned channels.  Also, I note that the SD version of BBC One Wales (service ID 10311) isn't listed in the BAT at all - do SD Freesat receivers in Wales no longer get a regional BBC One?

Even more confusing are bouquets 272 and 280 (England HD and England G2), which seem to have BBC One Scotland HD (service ID 8901) assigned to channel 108 in region 0!

It would certainly be interesting to look at a branded Freesat decoder and see what channels appear on it; unfortunately I don't have one.

Monday, 7 July 2014

Decoding Freesat, Part 1

To watch/record TV, I use MythTV connected to a satellite receiver.  Unfortunately, MythTV's handling of channel numbering is a bit bonkers... So I've been doing a bit of reverse engineering of the data transmitted by Freesat to try and automagically pull out the local channel numbers and update the MythTV channels database...  Information on the internet seems thin on the ground, so this is what I've figured out so far:

The "FreeSat Home" transponder is the interesting one (transport stream ID 2315).  This is located at 11.428GHz Horizontal, with a symbol rate of 27500 and FEC 2/3.  PID 3002 on this transponder transmits a bouquet association table (BAT).

For each transport stream, there is an entry in the BAT, containing a descriptor tag 0xd3 and an associated lump of data.  The data is a set of variable length chunks concatenated together, with each chunk containing a length value so the offset of the next chunk can be calculated.

The chunk format appears to be:

Offset (octets) Length (bits) Description
0 16 Service ID
2 16 Unknown
4 8 Length of remainder of the chunk
5 4 Unknown
5 + 1 nybble 12 Local channel number
7 Variable Unknown

I haven't been able to figure out how the channels are selected by region - for example, local channel number 101 is allocated to BBC 1 London if you're in London, BBC 1 Wales if you're in Wales, etc. but I haven't found this information in the BAT yet. Compare:

BBC One London BBC One West
Service ID 18 9d (6301) 18 c5 (6341)
Unknown 1 81 f9 (33273) 82 01 (33281)
Size 08 (8) 08 (8)
Unknown 2 d (13) d (13)
LCN 3 b6 (950) 3 c5 (965)
Unknown 3 ff ff f0 6c 00 00 ff ff f0 6c 00 00
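Pulling the 4-bit unknown field and the 12-bit LCN apart is simple bit-twiddling; a small sketch, using the bytes from the BBC One columns above:

```python
def split_lcn(b5: int, b6: int):
    """Split bytes 5-6 of a chunk into the unknown 4-bit nybble and the
    12-bit logical channel number (packing as guessed above)."""
    unknown = b5 >> 4
    lcn = ((b5 & 0x0F) << 8) | b6
    return unknown, lcn

print(split_lcn(0xD3, 0xB6))  # (13, 950) - BBC One London
print(split_lcn(0xD3, 0xC5))  # (13, 965) - BBC One West
```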

Edit: "Unknown 2", "LCN" and "Unknown 3" appear to form an array mapping LCNs to regions:

ITV 1 London ITV 1 Granada
Service ID 27 4c (10060) 27 60 (10080)
Unknown 1 83 f3 83 f2
Size 18 (24) 0c (12)
Unknown 2 d (13) d (13)
LCN 067 (103) 067 (103)
Region 00 01 00 07
Unknown 2 d (13) d (13)
LCN 067 (103) 067 (103)
Region 00 12 00 27
Unknown 2 d (13) d (13)
LCN 067 (103) 067 (103)
Region 00 1b 00 2b
Unknown 2 d (13)
LCN 067 (103)
Region 00 1f
Unknown 2 d (13)
LCN 067 (103)
Region 00 26
Unknown 2 d (13)
LCN 3d1 (977)
Region ff ff

Descriptor 0xd4 in the BAT seems to translate the 16 bit region IDs into human readable strings (is it me, or does 16 bits sound a bit excessive for region IDs?)

Changing ISP...

BT just sent round a reminder for me to renew my annual "line rental saver"...  it seems to have gone up significantly - £159.84 (equivalent to £13.32/month) - and it's kicked me into having a look at my options.  Currently I pay BT for the POTS line and then UK Free Software Network (an EntaNet reseller) get £23.70 for my internet connection.  The internet connection is a plain old ADSL2+ connection* with a /29 static IPv4 subnet and a /56 static IPv6 subnet and is currently synced at about 6.7Mbps down, 960Kbps up.

(* It's supposedly ADSL2+, but my TP Link ADSL modem won't resync properly when the noise floor increases, so I actually have to run it in G.DMT mode...  There isn't a huge difference in speed though).

So anyway, all in I'm basically paying £37.02/month for POTS and internet.  The only reason I need the POTS bit at all is because it's required for the ADSL connection - I get free evening/weekend calls from BT, but that's not really worth the cost of the line.  In fact, I think it's bonkers that BT are putting their prices up, given the increasingly wide selection of alternative providers.  SIPGate, for example, charge 1.19p/minute for geographic calls, and even my pay as you go mobile is only 3p/minute.

Unfortunately, UKFSN don't appear to do the POTS bit themselves, expecting you to use BT for that, but I have been pretty happy with them so I've been looking at other EntaNet resellers.  One that has stood out is FalconNet - they are offering FTTC internet connections (40Mbps down, 10Mbps up) for £22 and POTS for £9.50, totalling £31.50/month.  Their installation cost is £96 - given that my "line rental saver" has to be paid up front, a £96 up-front cost doesn't seem bad at all - I basically end up in credit for the first 8 months.  I fired off an email to FalconNet and they confirmed that they do IPv6 and a static IPv4 /32.

This is pretty compelling: amortised over 18 months, I get a much faster internet connection for about the price I'm already paying and everything after that is a saving; and no more up-front annual fees.  I just lose the free evenings and weekends calls - FalconNet charge 1.14p/min for geographic calls, so the amount I'm saving can pay for about 8 hours of calls a month.  Although truth be told, for the sake of 0.05p/minute I'll probably just use SIPgate (or another SIP gateway).
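The sums above, checked in a few lines of Python (prices taken straight from the quotes in this post):

```python
# Monthly cost comparison, in GBP
current = 13.32 + 23.70     # BT line rental saver (monthly equivalent) + UKFSN
falconnet = 9.50 + 22.00    # FalconNet POTS + FTTC
saving = current - falconnet
print(round(saving, 2))     # 5.52 per month

# How many call minutes that saving buys at FalconNet's 1.14p/min rate
minutes = saving / 0.0114
print(round(minutes / 60, 1))  # 8.1 - i.e. about 8 hours of calls a month
```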

Seems like a no brainer.

Monday, 23 June 2014

"Superfast Broadband"

Business Wales have sent around an email making local businesses aware that "superfast broadband" is being rolled out across Wales and pointing out various benefits that businesses should look into.  They seem to be talking about FTTC and FTTP connections (they don't mention how available FTTP is - I think "not very" may well be accurate).

Making businesses aware of what's going on is all quite sensible, since the less technical businesses may well not be aware of this stuff.  However, the material they've sent out looks a bit questionable to me.  For starters, they've linked to a savings calculator which works out how much you could save by shifting some of your internal equipment out to "the cloud".  One of the provisos they give is "If your organisation hosts more than 1 server it will use it purely for file, print or authentication purposes", but they don't mention anything about what services you're using if you only have a single server - I'm pretty sure their pricing doesn't cover a lot of the services we run on our servers - e.g. one of our servers runs an Asterisk phone exchange, Subversion/Trac revision control, a support ticketing system, a name server, an email service, etc.  We also have several other servers doing different things, but in principle we could consolidate a lot more onto a single server if we wanted to.

But that proviso also listed some slightly insane stuff - why would you want a print server to be in "the cloud"?  Print jobs can be *huge*, so sending them up over the internet to a print server, only to be pulled down again to the printer seems nuts to me.  Also, your printer is going to need to be connected to the network in order to talk to the print server, so is there actually much benefit in having the workstations talk to a separate print server instead of just sending jobs directly to the printer?

The other thing that caught my eye was the claim that servers hosted in the cloud are free.  They are suitably vague about where these free services come from - Google Apps, for example, is £33/user/year: certainly not free.

They do talk about the SLA provided by the cloud services, quoting figures like 99.9% uptime, but what about the reliability of the internet connection itself?  This certainly shouldn't stop people from considering cloud services, but you do need to consider that if your business is wholly reliant on a fast internet connection for everything, you're going to have serious problems if that connection is down for a long time.  BT have been known to take several days to repair faults, and whilst you can pay for an SLA that "guarantees" a fast fix, the fact remains that BT frequently miss the guaranteed fix times and the compensation is a pittance (they quote £25 compensation if they don't meet the SLA).  A small office can get by on a 3G connection for a few days if they are just using the internet for some web surfing and email, but this isn't going to work if you're expecting to shift many gigabytes of files and print jobs between your network and a cloud service.

There's also some blurb about how to choose an ISP, but it seems to muddle lots of bits and pieces together.  They talk about the ISP hosting your domain, email and website, but don't (in my opinion) make it clear that these things might already be hosted elsewhere.  There is no discussion about why hosting these things with the ISP may, in fact, not be a great idea (e.g. tying your web hosting into your internet service may make changing either one of them in the future quite tricky).

Finally, there's absolutely no mention about IPv6 connectivity.  This is going to become a big deal - the only part of the world that has IPv4 addresses left is Africa, and they are forecast to run out in around 5 years; for everyone else, there are no spare IPv4 addresses - any time an address is needed it has to be repurposed from another service.  There are a few ways of dealing with this problem, but fundamentally everyone needs to migrate onto IPv6.

Pretty much every ISP is going to need to implement either CGNAT or NAT64 as a stop-gap measure to ensure IPv4-only services remain accessible.  However, these technologies are bodges, each with their own list of problems and the only real solution is going to be to move everything to IPv6.

Some ISPs, such as A&A and EntaNet have been providing IPv6 connectivity for years and if you're on one of these and have a properly configured router you don't need to worry about it at all.

Some other ISPs have said they won't be rolling out IPv6 any time soon - for example, PlusNet have said they are going to use CGNAT instead.  As mentioned above, CGNAT isn't really a solution, it's a stop-gap measure with a whole host of problems, so prolonging the use of the stop-gap by not rolling out IPv6 seems extremely questionable.

Most ISPs have remained completely quiet on the issue, and anyone using these ISPs should certainly be concerned since they have provided no indication about how well they have been planning for the future.  To be clear: all ISPs have had almost 20 years to plan and roll out IPv6 infrastructure, so there isn't really any excuse for this not to be well under way.

Friday, 13 June 2014

IPv4 running out

Well, it was bound to happen sooner or later - Microsoft say they have run out of US IPv4 addresses for their Azure service and are now starting to assign non-US addresses to US services.

This follows LACNIC (the regional registry responsible for Latin America and the Caribbean) announcing that their Gradual Exhaustion and New Entrants policies had come into effect on Monday as they hit their last /10 (around 4 million IPv4 addresses).  This essentially means that the remaining addresses are going to be allocated on an extremely strict policy, which to all intents and purposes means there are no more addresses to be had from LACNIC.

APNIC (Asia Pacific) and RIPE (Europe) have been running under their respective end-game policies for some time now, and ARIN (North America) are also operating under their "phase 4" policy, although this policy seems very vague compared to those of the other RIRs.  So the only region where IPv4 addresses are still being handed out is AFRINIC (Africa).

AFRINIC have a relatively large number of addresses left and don't really seem to burn through them very fast, probably owing to relatively slow technological development within the region - they have about 3.14 /8s (about 53 million addresses) and are projected to run out some time around the middle of 2019 at their current rate of consumption.  It will be interesting to see if the AFRINIC consumption now increases as global businesses, such as Microsoft, start requesting assignments from AFRINIC to be used in other regions.  Indeed, I wonder if the history of the applicant will count against them - will AFRINIC refuse to hand addresses to Microsoft, for example, because they now have a history of using large numbers of non-US addresses on US services?
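For anyone sanity-checking those figures, a /8 holds 2^24 addresses, so 3.14 of them works out as:

```python
# A /8 covers 32 - 8 = 24 host bits, i.e. 2**24 addresses
per_slash8 = 2 ** 24
print(per_slash8)                    # 16777216

remaining = 3.14 * per_slash8        # AFRINIC's remaining pool
print(round(remaining / 1e6, 1))     # 52.7 - i.e. "about 53 million"
```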

I note that The Register has made a few silly comments in their article on this latest news that probably need some clarification:

Firstly they ask "why Azure is relying on Ipv4".  Microsoft's blog post didn't mention anything about the internals of Azure - it didn't indicate that the actual infrastructure was affected at all.  Indeed, the internals are probably running on IPv6, or private IPv4 addresses, so wouldn't be affected by the IPv4 address shortage.  However, The Register has failed to figure out that the people hosting services on Azure want the general public to be able to access them, and with the majority of ISPs still dragging their feet to roll out IPv6 to their customers there's no alternative but to use IPv4 on the server end too.

The other question they pose is "how Redmond let itself run out of IPv4 addresses" when they were "buying IPv4".  Now to be clear: IPv4 addresses have never been a purchasable commodity.  Addresses were assigned by the regional registries according to strict criteria - at no point could Microsoft have gone to a registry and said "can we have enough addresses to last us 10 years please"; the registry would've told them to get lost (RIPE's policy was for LIRs to only get 6 months' worth at a time, for example).  Now, MS did, in fact, buy a bunch of IP addresses from the bankrupt Nortel in 2011, and there was some surprise that the registry didn't just demand that they be returned, since strictly speaking this kind of sale doesn't seem to be allowed - the terms of most registries are that you return addresses to the registry when you stop using them.  Arguably a company that is selling off thousands of addresses is no longer using them, so they should probably have been returned rather than sold.  In any case, in the general case, IPv4 addresses have never been for sale, so it's hard to fault Microsoft for not buying up the world's supply years ago.

What is clear, however, is that everyone, especially ISPs, seriously needs to pull their finger out and roll out IPv6 soon.  It's long past the point where we can expect a slow, trouble-free migration; we're well into a period where there's going to be some major disruption as people who have been dragging their feet suddenly spot the sky falling in on them.

Monday, 9 June 2014

Npower, yet again

So after Npower's previous screwup, they've done it again.

To recap: in January 2013 I switched my gas and electricity to Npower.  They set up the direct debit and everything, but sent me no paperwork at all until seven months later, when I got my first bill... and it was enormous.  Because, as it turned out, they had put me on completely the wrong tariff and hadn't actually bothered to charge the direct debit (I hadn't noticed).  It was merry hell to sort out and took a considerable amount of my time, with them trying to deflect the blame back onto me at pretty much every step (e.g. yes, they had forgotten to charge the direct debit, but apparently that was my fault, because I'm apparently responsible for checking up on them all the time to ensure they've done their job.  I do wonder what the advantage of direct debit is supposed to be over just paying manually every month, if they think I have to check it every month anyway).

Anyway, it eventually got sorted out and I paid off the hundreds of pounds of debt that had amassed on my account.

In January this year, I got a letter from them saying I was overpaying and that they were reducing my DD payments by about half.  All well and good.  However, I now have a letter telling me I am hundreds of pounds in debt to them (again) and that I must pay this within 9 days of the letter date (actually, I must pay it by 26th May, but of course I didn't get the letter in time because I was away from home).  It turns out that rather than reducing the DD payments, they just plain stopped them altogether (again).

So clearly it's their fault that I've ended up with a load of debt against my account again (although I'm sure they will insist that it's my fault), but even if it wasn't their fault, how is it right that the first thing they've done to let me know is give me 9 days' notice to pay up?  Surely if the account is set to be paid by DD but isn't actually being paid by DD, they should tell the customer immediately, rather than waiting until the account is a few hundred quid in debt?

Needless to say, I've now switched energy supplier and am waiting for Npower to call me back to reach a resolution for this...  Never before have I dealt with such an utterly incompetent company.

(Edit: Having just got off the phone with them, they have applied the £88 direct debit discount that they "forgot" to deduct from my account (despite me querying it in January already) and have let me pay off the balance so I can switch supplier.  I await the revised bill that they are sending through so I can check it's actually accurate this time.  I do wonder how much money they make off "forgetting" things like DD discounts and hoping that people don't notice.  Maybe I'm just cynical, but I feel like there isn't much incentive for them to fix their systems when the billing errors are making them money.)

Wednesday, 26 March 2014


Most of the customers who have us build websites and web applications for them also get us to host it all.  Although we have to maintain the servers, this is usually a bit less faffy and more reliable than dealing with a third party.  However, sometimes customers already have hosting and want to carry on using it, which is fine too.

Today, I'm slightly confused by a relatively large third party hosting company who shall remain nameless.  One of our customers reported that their web application had stopped working and was reporting database errors.  Some investigation shows that their hosting provider has been doing upgrades (in the middle of the working day) and have shut down the PostgreSQL server... and it's been down for hours.

The control panel shows "This server is currently receiving security and feature updates. If you experience any problems, please try again later when this message has been removed. Thank you for your patience." - that's it, no ETA or anything. It also seems likely that they never informed their customers about this (but I can't actually guarantee that since I'm not their customer).

Is this really the expected behaviour of a reputable hosting company? I know we are very careful to minimise disruption and wouldn't dream of taking whole services offline for hours on end...

Tuesday, 25 March 2014

User guides

I've been spending some time writing user guides for the new Iceni 2 system.  Yes, I know it might come as a shock that we're actually supplying documentation! :)

It's surprising how time consuming it is to write this stuff, put together screenshots, etc., even though this user guide isn't especially large: the aim is not to provide a 2000 page definitive guide that no one will ever read; instead we're writing reasonably short documentation that shouldn't take too long for someone to read through to gain a basic overview of how everything works.  Eventually we'll also augment it with a knowledgebase of how-to documents explaining how to achieve things that people frequently ask for help with.

Although Iceni 2 has had a year's worth of development work put into it, many of the concepts date back to the original Iceni, which has been running on many customers' networks for years, so this documentation may even be of interest to those customers (who will, of course, eventually be migrated onto the shiny new system).  It's surprising to think that we started developing the original Iceni back in 2005, and looking back on it we seem to have got most of the fundamentals right, right from the start.

While I've been doing the docs, we've also been rolling out our first Iceni 2 servers to customers.  That seems to be going largely ok, although as always you end up finding a few niggling bugs that never showed up during testing.  So far they've all been quick to fix.

Saturday, 22 February 2014

Back from honeymoon!

It's been a busy couple of weeks!  Mel and I were married on Saturday February 1st at the King Arthur Hotel in Reynoldston.  Everything seemed to go off without a hitch and we thoroughly enjoyed the day.  Big thanks to all of our friends who could make it - pretty much everyone seemed to be up and dancing at the twmpath in the evening!

A lot of planning went into it and it certainly paid off; we can't wait to see the photos (there are loads on Facebook, and we must chase people to get the originals).  We're waiting to see what the official photographers have done, but I must say they were excellent on the day.

Flying Out

We headed off for our honeymoon in the Canadian Rockies, leaving Cardiff in the early hours of Monday 3rd.  We arrived at the airport a tad early, but better early than late!  Our flight went via Amsterdam with around a 4 hour stopover, so we figured we'd get some time to look around.  We had intended to have a look at the flower market while we were there, but that proved to be rather too far away.  After catching the wrong train, and then managing to get on the right train with the help of a friendly guard, it was simply going to take us too long to walk from the central station - we got about half way there before deciding we had to turn around and head back to the airport.  Having spoken to other people since, it seems we should've caught the tram instead of the train, as it would've been much quicker.  Never mind - it was good to get out and stretch our legs before the long flight to Calgary anyway.

I have to say I was quite impressed with KLM - they provided regular meals on both the Cityhopper to Amsterdam and the flight to Calgary...  Because of the 7 hour time difference between the UK and Calgary I have absolutely no idea which meals of the day they provided, but we got about four of them in all! :)  Usually I've flown on cheap European charter flights where you get no food or drink unless you pay through the nose for it, so it was quite refreshing to see this handed out to everyone as a matter of course.

Driving to Banff

When we got to Calgary, we went to pick up our hire car from Thrifty Car Rentals.  We had known we might be driving in pretty poor weather conditions and were a bit nervous about this, since neither of us had driven a left-hand-drive car or an automatic before, and this would be our first time driving in Canada.  We didn't expect it to be a big problem though: we had booked the car hire for 14:30, and with an under-two-hour drive to Banff we should have arrived at the hotel a good 90 minutes or more before dark, meaning we'd be driving in daylight.  Over the course of the holiday we were also going to be driving along some roads for which winter tyres were legally mandatory, so when we booked the car hire through Car Del Mar we double-checked that winter tyres would be available; they confirmed that Thrifty said they would be, so all was good.

Unfortunately it seems that Thrifty aren't too good at keeping their promises.  Initially they didn't have a car for us at all, so we had to wait around.  They then found one, but said it didn't have winter tyres - we reiterated that we needed them because some of the roads we were going to be driving on legally required them, so they set about trying to find us another car.  Eventually we were told they had a bigger car with winter tyres; they handed over the keys and charged us the daily winter tyre fee - so we're off!  Except we're not... a quick check of the car showed no winter tyres, so back to the office - they now said that as it was a slightly bigger car, that should be enough and we didn't need the winter tyres.  Apparently the roads we intended to drive are "not that bad"...  So not only had they told us to ignore the law and drive without the right equipment, but they had charged us for equipment that they weren't providing, and then lied to say they were providing it!  We were starting to get pretty unhappy by this point, and time was ticking away.  Eventually they found us yet another car (a Toyota Camry).  The tyres on this carried M+S (mud & snow) markings, which was rather disappointing - true winter tyres carry a "snowflake and mountain" symbol, and although M+S tyres are designed to deal with snow, they aren't truly winter tyres and don't necessarily cope so well with such low temperatures.  However, as time was now truly ticking on, we decided to cut our losses and accept what they had given us.  It was now 16:30 - 2 hours after we were supposed to have picked up the car - and it was rush hour in Calgary.  Another thing we weren't happy about was that Thrifty insisted that, even though we had booked and paid for an "insurance excess waiver" with Car Del Mar, we had to pay again directly to Thrifty and claim the fee back from Car Del Mar.  This isn't at all how it was explained to me by Car Del Mar - they had originally said that they would simply refund any excess we had to pay - but as Thrifty were insisting, we ended up going along with it.  So a job I still have to do now we're home is to write to Car Del Mar and ask them to refund this fee.

With the additional rush hour traffic, we didn't even get out of the city until after dark.  Driving conditions weren't great - there was drifting snow, which made driving very disorientating as it looked like you were driving over a constantly moving road surface.  Intermittently the snow covered all the road markings, and because everything had a covering of snow, it was impossible to distinguish where the edge of the road was.  Suddenly I realised I had no idea what my road position was, and started slowing down and trying to figure things out.  Somehow I decided I was too far to the left, so moved right slightly... straight off the side of the highway and into deep snow!  Because the ground sloped away from the highway slightly and the snow was quite deep, we couldn't drive the car back onto the road - with my snow chains on it would've been no problem, but the M+S tyres they had put on the car were completely hopeless.  So there we were, stuck in below -30°C temperatures, in a blizzard, off the side of the highway... great - not what we wanted for the start of our honeymoon.

Helpfully, another motorist stopped.  He didn't have a tow rope, but offered to give us a lift to Canmore where we could get a tow.  While we were talking with him, another motorist stopped who did have a tow rope, but we then discovered that the Toyota Camry didn't appear to have any kind of front tow eye, and the manual didn't mention towing at all.  Just then a police car stopped and called a tow truck for us, which turned up within half an hour and quickly pulled us back onto the road, charging us $380 CAD for the privilege.  Expensive, but at least we were on our way again.

We finally rolled into Banff and found our hotel a bit after 21:00 - exhausted, we just rolled straight into bed for a good night's sleep.

Another couple of comments about the car while I'm on the subject.  Firstly, it had a foot-operated parking brake about where you'd expect to find the clutch on a manual: bad news!  The release was also impossible to see in the dark of the footwell, so those of us not used to it were forever peering down there with torches trying to find the thing - what's wrong with a good old fashioned handbrake?!  Secondly, I've not driven an automatic before, but this one seemed to like sitting in the highest gear possible.  Fine for cruising, but when you're pulling out, spot someone heading towards you and want to get out of the way quickly, putting your foot to the floor and feeling nothing at all happen for about 3 seconds until the car decides it needs to change down is... unnerving, to say the least!  Give me a car with a proper gearbox any day.

A Trip up the Icefields Parkway

We started our day with a simple breakfast - toast, cereal, coffee.  Since we were only spending a single night in Banff, we had largely picked our hotel for its cheapness - we had a private room in The Banff Y Mountain Lodge, a hotel / hostel for travellers run by the YWCA, with the profits going to support the charity.  We had a standard double room with an en-suite bathroom and it was a pretty reasonable affair - I guess I'm not a hotel snob, but I couldn't see anything to be unhappy about, all for $69 CAD for the night, breakfast included.

After discovering that an apple we'd accidentally left in the car was frozen completely solid, we had a short wander around Banff itself - a lovely, very picturesque town - and then set off up towards Lake Louise and onto the Icefields Parkway.  The Parkway winds up through the Rockies from Lake Louise to Jasper and was actually a much better road than we expected - although there were patches of ice and snow on it, it was ploughed and quite wide.  The speed limit was around 90 km/h on most of the road and we found the driving pretty relaxed.

There were fairly regular stopping places for viewpoints, walks, etc.  We didn't think we had enough time for a proper walk, as we wanted to reach our destination in daylight, but we did stop quite a few times for a short wander round.  Some of the stopping places are off-road carparks, which were too deeply covered in snow to drive into (although we could usually park in the turning into the carpark, keeping us off the highway so as not to be a hazard), but there were also plenty of large lay-by style parking spaces at the side of the road, which were obviously well used and only covered by a few centimetres of packed snow, so no problem to drive onto and park.

There was fairly minimal traffic - we did large stretches of the road without another car in sight, but there were enough other cars around that we were happy that if we got into trouble we wouldn't have to wait long for someone to pass by.

We especially liked seeing a sign pointing at Honeymoon Lake, so obviously had to stop and have a look around! :)

No wildlife spotted on the drive, other than a few ravens, but plenty of mountains.

Stopping off in Jasper to pick up supplies, we headed on to our accommodation for the first four nights - the aptly named Gingerbread Cabin.  It was a really nice cabin, a few minutes outside the national park boundary in Folding Mountain Village - basically a single road with about 25 cabins of all shapes and sizes along it.  The cabin was really well provisioned and a good size - you could happily take a family there, with three double beds (one in its own room on the ground floor with an en-suite bathroom, and two in the mezzanine-style loft) and a large living room with a wood burning stove (though there was central heating too).  Full cooking facilities - hobs, ovens, microwave, coffee pot, kettle, etc. - and we were really very comfortable.  Through the front windows were nice views of the mountains to the south, and the north side looked out into a wooded area.  There weren't really any footpaths near the cabin itself as far as we could tell, so we drove into the national park each day.

We got a quick phone call from the cabin owners to check we had arrived safely almost as soon as we arrived, which was nice.

Maligne Canyon

We tended to have fairly lazy mornings, heading out just before lunch on most days and doing a bit of low level walking in the afternoon.

We awoke on Wednesday 5th February to see the thermometer outside our bedroom window showing -30°C.  A quick wander up to the top of the road and back confirmed that yes: it was indeed pretty nippy, and we went back indoors to put on some extra clothes.  Once properly dressed for the conditions, we found the temperatures pretty much fine - in many respects the dry -30° seemed nicer than the usual damp 8° we've been having in the UK.

The first challenge was figuring out how to lock the door!  The front and back doors were equipped with doorknobs with locks in the centre of the knob itself.  This appears to be pretty common in North America, but the cabin owners acknowledged that Europeans struggle with them.  In the end we gave up and left the cabin unlocked, expecting the crime rate to be low given that we were pretty much in the middle of nowhere.  Later on we discovered that you need to push and turn the inside handle and then pull the door closed.

Having stopped to photograph some deer along the roadside, we drove to the car park next to the first bridge across Maligne Canyon, where there's a cafe (closed in winter), and had sandwiches and coffee in the car.  Maligne Canyon is a deep slot canyon through which the Maligne River flows, including a series of waterfalls.  We were surprised to see that it was recommended to pay a guide, but actually the walking is really easy going on a well-trodden, signposted path, so we just made our own way.  The estimated times on the signposts seemed somewhat slow, and in the end we went a lot further than we had expected to.  Since this was all well made paths on fairly flat terrain, we hadn't bothered to take crampons and walking axes, although we did make use of our Petzl Spikeys since it was quite slippery in places.

In the summer, there would be raging waterfalls flowing through the canyon, but for us they were beautifully frozen and we saw a couple of ice climbers having a bash at one of them, leaving us wishing we'd got our ice climbing gear with us (which we were unable to take on holiday due to weight limits).

At a particularly low part of the canyon wall, we climbed through the railings and had a careful walk up the frozen river - crampons would have definitely been a good idea at this point, and as if to prove it, Mel slid over and landed quite heavily, leaving a large bruise.  After going as far as we thought was sensible, we turned around and headed back onto the footpath.

In the end we went as far as the fifth bridge, where the water was liquid and flowing freely, before heading back along a higher level path, which as it turned out was much quicker and we were back at the car in no time, having spotted and said hello to a red squirrel along the way.

Patricia Lake

There are a number of lakes on both the east and west sides of Jasper, so on Thursday we had lunch next to Pyramid Lake and then went for a walk next to Patricia Lake.  It was considerably warmer and we both found we'd taken far too much clothing for the -16° temperature.  Both lakes are to the northwest of Jasper and we parked next to some horse riding stables to the east of the lake.  The walk was very pleasant and we extended it a bit towards Riley Lake - there are a lot of footpaths criss-crossing the whole area, so it's quite easy to change your route as you go along.

About half way around, we started spotting some kind of black "fur" on a lot of the trees - I think this was probably some kind of lichen, but by the end of the walk Mel was getting quite paranoid that we were about to stumble upon a bear.  Of course, this didn't stop her later making an "it's so cute" remark at a photo of a bear!

Skating on Mildred Lake

An especially lazy day on Friday 7th - although we had thought about heading up Whistler Mountain, in the end we spent much of the day indoors lazing around doing a jigsaw puzzle.  However, we did make it out late afternoon and went over to Mildred Lake, to the east of Jasper.  The Fairmont hotel maintains a skating rink on Mildred Lake and hires out skates for about $11 CAD (if you've got your own skates you can use the lake for free).  There's a circuit cut around the outside of the lake, probably about half a kilometre in circumference, and then four hockey rinks in the middle.

Having not skated on a lake before, I was surprised to see how bumpy it was - lots of cracks, which make skating a bit like rollerblading on the pavement - you've got to be ready for one of your skates to hit a crack and stop.  The cracks going across the track weren't too bad but the ones going along in the same direction as the track were perfect for your skate dropping down unexpectedly - especially problematic after dark when you can't actually see the cracks!  The cracking sound as you skate around was also slightly disconcerting, but the ice was transparent enough to see that it was at least half a metre thick so no chance of falling through.

Leaving the Cabin

Friday night was our last night at the cabin, and on Saturday we left and headed back down the Icefields Parkway to Lake Louise.  We'd had a really good time in Jasper, having seen a squirrel and a rabbit behind our cabin, and deer, elk (or caribou?) and a wolf as we drove around.  As we drove towards Jasper, we were able to stop near a herd of goats grazing at the side of the road.

Much like the drive up the Icefields Parkway on Tuesday, we had a pretty relaxed drive in good weather, stopping off a few times on the way.  One of the more interesting stops was at Athabasca Falls.  The Athabasca River flows over a large chunk of Canada and the water eventually discharges into the Arctic Ocean.  The falls looked quite impressive in their frozen state.

We arrived at Lake Louise at a reasonable time, checked into the Deer Lodge, and Mel rented her skis from the Fairmont Chateau, whose prices seemed as good as any - ski rental seems quite expensive compared to Europe, though they were a little cheaper than we expected.  Then down to Lake Louise Village to get some grub: a wholesome fish & chips for me and a "mountain burger" for Mel, which disappointingly wasn't an actual mountain in a bun.


A stereotypical Canadian breakfast had to be done - very nice pancakes with blueberries and maple syrup!  The Deer Lodge ran a shuttle bus between the hotel and the ski resort, which made things easy for us - we didn't have to bother about driving.  We quickly found that they were a bit short on snow - whilst the snow report claimed a base of about 1.5 metres, there were definitely thin patches on the pistes, especially the steeper ones, with quite a few rocks and mud patches showing through.  Skiing was largely good on the greens and blues, although you had to keep your eyes peeled for rocks sticking through.  I ventured on a black almost immediately and discovered it was sheet ice with moguls - I picked up a few bruises and decided to stick to greens and blues for the time being.  By the end of the day we were back to doing a few blacks though.

There was a temperature inversion, which is apparently very common in Lake Louise, making it -28°C at the base and around -15°C towards the top.  Consequently we were getting quite warm at the top and then suffering from slightly cold fingers and toes at the bottom.

In the afternoon we made use of the "Ski Friends" service - a bunch of volunteers who do free ski tours of the mountain.  We picked the "blue" group, who kept up a reasonable pace which we were both quite happy with and it proved useful to show us around the mountain a bit.  We both ended up with quite cold fingers though and discovered that our air activated chemical hand warmers... errm, didn't.  I'm not sure if they were just old or something, but we opened several over the course of the week and none of them got more than very mildly warm.

Back to the hotel after a good day of skiing, and Mel discovered she was starting to get the very early stages of frostnip - very very dark blue toes, which worried us a bit.  However, after rewarming them in cool water, they returned to normal colour and apparently no ill effects except for a mild loss of sensation on one of the big toes - something for us to keep an eye on for the rest of the week though!

Before dinner we felt we had to give the hot tub a go - the hotel had one in the open air on the roof.  Care had to be taken when entering it because there was a lot of sheet ice on the floor, but once immersed it was lovely.  In fact, a tad too hot, and we intermittently had to sit on the side out of the water to cool off, which is an interesting concept in -30° temperatures - my swimming trunks kept freezing to the ground!  Very relaxing though, and amusing conversation with the other hot-tubbers.

Quite a nice dinner, although I might say a bit too "fancy" for my tastes - I mostly like reasonably simple grub in large portions.

More Skiing!

Monday morning and after eggs benedict for breakfast for me, and granola and yogurt for Mel, back out onto the slopes and we were getting comfortable with the pistes and doing a mix of greens, blues and blacks.  I was eyeing up the double-blacks, but they were looking a bit too gnarly with the lack of snow - very hard packed, icy and with rocks and bare patches in places, so we steered clear.

We broke early for hot chocolate and cookies to check on Mel's toes - we didn't want a repeat of the previous day's blue toes.  Then back on the slopes for a bit, a late lunch, then more skiing.  We were both finding all the runs quite acceptable, if a little icy in places, and we were both keeping up a reasonable speed, even on the steeper stuff...  There was talk of possibly some new snow heading our way too.

We kept a similar pace on Tuesday, exploring the Lake Louise ski area more and ticking off lots of runs.  It started to snow in the afternoon, which was only going to be a good thing, and continued until the evening, stopping while I was still sitting in the hot tub.


We had decided to head to the Norquay ski resort at Banff on Wednesday.  Leaving the hotel, we realised there had been quite a fall of snow overnight; once we had driven to Norquay, though, it was apparent that although there was more snow than at Lake Louise, they hadn't had nearly as much as we had seen at our hotel, and we questioned whether it had been such a good decision.  However, the visibility wasn't great at altitude, so we concluded that Norquay may well have offered better skiing conditions than the higher altitude Lake Louise.  Anyway, we were there and we were going to enjoy it!

Before lunch time, we were eyeing up Memorial Bowl - a black run on the edge of the resort - slightly challenging but it was a lovely run with largely untouched powder over reasonably gentle moguls.  At the bottom it split into a run through deep powder and trees and an easy track across to the bottom section of an adjacent black.  Mel opted to take the track and I discovered that you get a lovely fresh pine scent when you're face first in a tree in knee deep powder trying to figure out how to reach the bindings of your skis to remove them and dig yourself out!

After lunch we did an adjacent black, and then rather than walking around to the other lifts we decided to go up again and try our luck on "The Lone Pine", a double black.  It was steep, heavily mogulled, but the snow was good and we didn't have too much trouble.

A few easier runs for the rest of the afternoon, although I did discover a giant slalom course set up on the shallower section of another black run, so had to give it a go.

With it still snowing, as it had been all day, we headed to the upper hot springs in Banff for some relaxation.  Somewhat cooler than the hotel hot tub - in fact, a rather more pleasant temperature, at around 39°C I think they said.  The pool is fed from hot geothermal springs, and open to the air so you can relax, floating in the water with the snow drifting down around you, watching the mountains all around - amazing.  We noted a sign saying "maximum recommended time in the pool: 10 minutes" and promptly ignored it, much like everyone else seemed to be doing, and had a good hour or so to ease the aches and pains.

I had been having pizza cravings, so we went to find an Italian restaurant in the town before heading back to the hotel.  On the way back, Mel noticed a quite severe vibration through the whole car, but after getting off the highway at Sunshine and examining the car by torchlight, we couldn't find anything wrong, so we carefully continued driving.  The vibration was still evident later in the week, although it had vanished by the time we drove to Calgary on Saturday - the weather had become much warmer by that point, so I suspect a large lump of ice had become attached to one of the wheels or something similar, and had presumably melted off by Saturday.

Back at Lake Louise

It continued to snow through Thursday and Friday and the snow conditions were greatly improved on the ground - we were now happy to tackle the double blacks which were dotted all around Lake Louise, especially in the back bowls and we both became very accustomed to taking off-piste trips through the wooded areas, although this did result in me gaining a few large bruises from clipping trees when misjudging turns.

It's noticeable that Canadians have a much looser idea of what constitutes a piste than Europeans do - in Europe, if something is marked on the piste map then it's usually signposted and well marked on the ground.  Not so in Canada: many of the runs on the piste map aren't marked at all and you simply know that "in that wooded area there are 3 black runs", so you make your way into the trees and figure out your own way down.  Still, all good fun.

I should also mention that North America has a slightly different grading system to Europe - in Europe we have green (although Austria doesn't have greens), blue, red and black.  In North America there are green, blue, black and double black.  I'd say that a Canadian green is roughly equivalent to an easy European blue; a Canadian blue is something like a European hard blue or easy red; a Canadian black is somewhere around a hard European red or an easyish black and double-black is anything harder than that.  I didn't really see any slopes in Canada that would be equivalent to a European green, except for the bunny hill (beginner slopes).

Heading Back

We set off slightly late on Saturday, but did get the chance to call in at Banff for a quick bit of souvenir shopping on the way.  I set off the metal detectors at Calgary Airport and was offered a choice between a pat-down and a trip through the body scanner.  I opted for the pat-down, but the security guard then tried to persuade me to take the scan by telling me it was "only ultrasound"...  I knew this was untrue, but after a look at the scanner I saw it was only a sub-millimetre wave scanner, so I agreed (had it been a backscatter X-ray scanner I would've flatly refused).

All reasonably plain sailing after that on the flight to Amsterdam - again we were treated very well, although with not so much food as we'd had on the way out, presumably because we were flying overnight.  I did end up having a couple of free glasses of wine in the hope it might send me to sleep... it didn't.  If anyone tells you that you can sleep on a plane, don't believe them!

We got to Amsterdam about 30 minutes early, which was nice since we were only expecting a 1 hour stopover.  It's slightly odd that you have to go through security again to get on the next flight, even though you've only just got off a plane.  After boarding the plane to Cardiff and watching the exits being pointed out to us, the pilot cut the power to the engines and announced that there was a "technical problem" and that an engineer had been called.  After much banging and whirring, we were told that the problem was bigger than expected and we were moved to a different plane.

The second plane got us to Cardiff on Sunday morning without any more problems, although obviously rather later than expected since our takeoff had been delayed by an hour.  Unfortunately it seemed our checked bags hadn't made it out of Amsterdam.


Our luggage did arrive in Cardiff on Sunday evening and was couriered to us for Tuesday, although luckily we hadn't got anything we urgently needed in those bags.

All in all a very enjoyable trip!