Thursday, 1 April 2021

When a spammer ignores court action

It is unlawful for an organisation to send unsolicited marketing emails or SMS messages to personal email addresses or telephone numbers, except in some specific circumstances:

  1. If you have been a customer of the organisation that sent the marketing (or have negotiated business with them, even if you never actually bought anything in the end);
  2. If you were given a clear opportunity to opt out of marketing communications at the time your details were originally collected; and
  3. If you were given a clear opportunity to opt out of marketing communications in every such communication.

If all of the above are true, they're allowed to send you unsolicited marketing; otherwise they aren't.  The rules are different for business addresses, and I won't get into that here - this post is specifically about spam to personal addresses.  The legislation for this is Regulation 22 of The Privacy and Electronic Communications (EC Directive) Regulations 2003 (PECR), and despite having "EC Directive" in the title, these regulations still apply post-Brexit.  Ignoring the regulations is probably also a misuse of personal data under the General Data Protection Regulation (GDPR).
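
Just to make the logic explicit, here's a minimal sketch of how I read those conditions (my own simplification of the regulation, not legal advice, and the variable names are mine):

```python
def unsolicited_marketing_permitted(existing_customer_relationship: bool,
                                    opt_out_offered_when_details_collected: bool,
                                    opt_out_offered_in_every_message: bool,
                                    recipient_has_opted_out: bool) -> bool:
    # All three conditions from the list above must hold, and even then the
    # marketing has to stop once the recipient actually opts out.
    return (existing_customer_relationship
            and opt_out_offered_when_details_collected
            and opt_out_offered_in_every_message
            and not recipient_has_opted_out)
```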

Whenever I hand over personal data, I always make sure I opt out of marketing if there is the option, so in theory I shouldn't ever get any spam.  Of course, I still get quite a bit, because no one actually bothers to comply with the regulations.  And why should they?  About the worst thing that will happen to an organisation that ignores the regulations is a sternly worded letter from the Information Commissioner's Office.

Luckily, Regulation 30 of PECR and Article 79 of GDPR both allow civil action, so I've been sending legal notices to spammers, off and on, for a few years.  Usually this gets settled out of court; occasionally it goes as far as me filing proceedings with the small claims court (whereupon the spammer usually figures out that I'm serious and settles).

This is the first time that a spammer has completely ignored legal paperwork.  It was a bit of a learning curve for me, so I thought I'd document the process.

This all started when I made a purchase from bulkpowders.co.uk and they subsequently started sending me regular spam SMS messages.  I've got a standard email that I use in these situations, which I sent to their data protection officer.  Broadly, the email contains 3 parts:

  1. A "Notice Before Action", which outlines how they have broken the law, why I think I'm entitled to damages and how much I'm asking for.  I invite their comments and any offer of settlement.  They have 14 days to respond to this.
  2. A "Subject Access Request" under Article 15 of GDPR.  This is to find out what information they have about me and I specifically ask for information about which third parties they have shared my data with.  They have one month to respond to this part.
  3. A request to opt out of further communications, together with a contract.  The contract says that I'll charge for any subsequent communications and that sending any will be deemed as acceptance of the contract terms.

In this case, I did get a reply from Bulk Powders' DPO basically saying they were innocent because they had given me an opportunity to opt out.  I replied, pointing out that the only way to opt out was to read down to the 1742nd word of their privacy policy before finding the instructions, which certainly doesn't meet the requirement for a "simple" opt out.  They also responded to the Subject Access Request.  They did, however, continue to send spam SMS messages.

In both cases, they left it until the last possible day allowed by the deadlines to respond.  I queried whether they intended to settle damages and they said they did not, so that was that and I filed proceedings with the small claims court via Money Claim Online.  From what I could tell, Bulk Powders is a trading name of Sports Supplements Limited, so that's who I filed against.

Money Claim Online is a government website that makes it very simple to file small claims proceedings.  It costs £25, which you add onto the claim so that the defendant pays it if you win.

First of all, you fill in a claim with your details, the defendant's details, an explanation of the claim and the total amount being claimed (including the £25 fee).  In my case, the total included the damages, plus the cost of each subsequent SMS message that I had set out in my original email to Bulk Powders.

The court sends a notice to the defendant, who is supposed to either pay up or file a defence within 2 weeks.  Sports Supplements did neither - they completely ignored the paperwork that the court had sent them, so this was never going to end well for them.  Since they hadn't disputed the claim, I could ask the court to enter a default judgement in my favour.  This is a bit confusing because, from the claimant's perspective, the paperwork just says it is a "request for judgement" rather than saying that it is actually the judgement itself, but it's all done through the Money Claim Online web site.

So now they should just pay up, right?  Except they didn't - they ignored the court again.  So how do I collect the money I'm owed?  For larger amounts you can employ High Court Enforcement Officers, but for smaller amounts like the one I was claiming, you use bailiffs.  I was pretty unsure how to do this, and some Googling turned up lots of people saying that the bailiffs were usually slow and useless.  But it actually turned out to be dead easy and quite quick.

To send the bailiffs, I again went to the claim on the Money Claim Online web site and used the "Request Warrant" link.  This costs £77, which again gets added to the claim, so if the defendant ends up paying, the fees ultimately cost you nothing (but if the bailiffs can't extract any money from the defendant, you're now £25 + £77 = £102 out of pocket).  A week later I received a letter from the court saying they had received a cheque and that it would be forwarded to me after it cleared.  It took another couple of weeks before I had a cheque from the court.

So there we go, it took a bit over 5 months from the time of my initial complaint until payment by the court.  I'm not sure exactly why they paid up in the end - did the bailiff actually show up at their door (in the middle of a pandemic) demanding money?  I originally asked for a £200 settlement, but by repeatedly ignoring the problem, Bulk Powders / Sports Supplements ended up paying out £362 (I got £260, the court got £102).  I'm not really sure what they hoped to achieve by ignoring things, especially once they started getting paperwork from the court - were they expecting me not to risk £77 to engage the bailiffs?
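
Purely to make the arithmetic explicit (all of the figures are the ones quoted above):

```python
claim_fee = 25                            # Money Claim Online filing fee
warrant_fee = 77                          # "Request Warrant" fee to send the bailiffs
court_fees = claim_fee + warrant_fee      # 102 - added onto the claim
received_by_me = 260                      # the cheque forwarded on by the court
total_paid = received_by_me + court_fees  # 362 paid out in the end
original_ask = 200                        # the settlement I initially proposed
print(total_paid - original_ask)          # 162 - the extra cost of ignoring the problem
```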

I still have serious concerns regarding Bulk Powders' handling of personal data, and I have made a complaint to the ICO (but they usually take 4-5 months to respond to complaints).  Not only did they send unlawful marketing messages, but I know that they have passed my details on to another company, which has also used them for marketing purposes.  Bulk Powders' privacy policy explicitly says they won't do that.  (I'm taking action against the other company at the moment, so can't really comment on that until it is resolved one way or the other).

Tuesday, 9 March 2021

Making children as safe as they are offline

In a speech last week, the Information Commissioner, Elizabeth Denham, said:

The internet was not designed for children, but we know the benefits of children going online. We have protections and rules for kids in the offline world – but they haven’t been translated to the online world.

— Elizabeth Denham, Information Commissioner

Neil Brown from decoded.legal posted an insightful blogpost on the perception that the internet is unregulated and dangerous, compared to the offline world.  The main thrust of the blogpost is that the offline world is not designed to be safe for unsupervised children.

The ICO's Children's Code is intended to make the internet safer for children.  This is a laudable goal, and there are certainly some parts of the code that all companies should be following to protect everyone, children and adults alike.  For example, privacy information is often quite opaque, even to adults, so the requirement to provide clear privacy information would benefit us all.

The offline world is very rarely designed for unsupervised children.  As Neil points out, even children's play areas are usually only designed to be safe for children who are under supervision.  They only prevent unsupervised children from entering by posting a sign (with complex grammar that might not be understood by children).  Washing your hands of the safety of unsupervised children by posting a similar sign on your website or app would almost certainly not be allowed under the Children's Code.

The intent of the Children's Code appears to be to make the internet safe for unsupervised children, but we don't do this in the offline world because it is usually not proportionate.

And this is the crux of the matter: it's impossible to make the whole offline world safe for unsupervised children.  It would require banning essential tools, or placing huge financial burdens on vendors.  Councils would have to spend a disproportionate amount of money ensuring that unsupervised children cannot access dangerous roads.  So why do we expect to do so for the online world?

The key point is supervision: the internet should be safe for children, but we shouldn't be going to a disproportionate amount of effort to make it safe for unsupervised children.

But supervision is hard.  If your child is in the kitchen juggling knives, you'll probably notice, whereas they could be in the same room as you, doing unsafe things on their phone and you'll never notice.

Neil briefly points at a few technologies that can be used for child protection:

I use some of the measures which have come in for criticism recently — VPNs, and DNS over https — to maximise the scope of the filtering of Internet connections. More filtering, and more aggressive filtering, not less.

Indeed, I suspect that it is easier to prevent an unsupervised child from travelling to a particular place online than it is offline, if that's the path down which the responsible adult wishes to go.

— Neil Brown, decoded.legal

As Neil points out, by using a VPN you can direct your child's network traffic through a system which can block access to inappropriate content and allow parents to supervise their child's online activities.  The parents can install an inspection certificate on the child's device which says "your parents' filter is allowed to decrypt and supervise this device", without allowing unauthorised decryption by others.

This means that the parents, or the child's school, can have a centralised system that they use to set parental controls and supervise the children under their care, across the whole internet.
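
As a concrete (if simplified) illustration, here is roughly what a filtering rule on such a system can look like.  I'm using mitmproxy - an open source interception proxy whose CA certificate you install on the device as the "inspection certificate" - purely as an example; the hostname is hypothetical and a real deployment would be far more sophisticated:

```python
# filter.py - run with: mitmdump -s filter.py
# Assumes the mitmproxy CA certificate has been installed ("authorised") on
# the child's device, so traffic routed through the proxy can be decrypted.
from mitmproxy import http

BLOCKED_HOSTS = {"unsuitable-example.com"}   # hypothetical blocklist


class ParentalFilter:
    def request(self, flow: http.HTTPFlow) -> None:
        # Because the proxy sees the decrypted request, decisions can be made
        # on the full URL and content, not just the hostname.
        if flow.request.pretty_host in BLOCKED_HOSTS:
            flow.response = http.Response.make(
                403, b"Blocked by your parents' filter",
                {"Content-Type": "text/plain"})


addons = [ParentalFilter()]
```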

Unfortunately, the main corporations that control online platforms have unilaterally decided that parents and schools shouldn't be allowed to supervise their children.  In 2016, Google effectively pulled the plug on inspection certificates by changing Android so that, by default, apps no longer trust certificates that the user has installed.  Facebook, Twitter and others had already disabled inspection certificates in their own apps some years before, by pinning certificates within the apps.

Without inspection certificates, fine-grained filtering and supervision are off the table.  The playground has an opaque fence - you saw your child enter the playground, but you're not allowed to supervise their play.  You know the playground has a slide that is too high for a child of their age, but you're only allowed to control whether they can go into the playground, not whether or not they can go on the high slide.  Are they being bullied in the playground?  Who knows - you're not allowed to look!

There are a few technologies on the horizon which could make it even harder for parents to supervise and control their children's internet access, and historically, Google, Facebook, Twitter, et al. have imposed new privacy technologies and policies upon the public without consultation.  The problem is not the technologies themselves, but that they are unilaterally imposed on users rather than giving them the choice.  Whilst the ability to improve your own privacy is great, the decision over whether a parent can supervise their child should be made by the parents and children themselves, not by untouchable corporations.

The Children's Code does talk about parental controls and monitoring, but there is no framework or requirement to standardise them so that they can interact with a parent's centralised system.  The Children's Code's requirements will simply produce a fragmented approach.  Rather than the parent being able to set controls and supervise their child across the whole internet, they will need to log into each website and app separately.  Imagine having to log in and check separate "is my child juggling knives", "is my child playing with matches" and "is my child bullying their sibling" apps in the offline world.

Rather than demanding that all websites and apps are safe for unsupervised children, the ICO should be setting out a framework for websites and apps to interoperate with centralised systems operated by parents and schools.  They should be placing requirements on companies to consider whether their policies or technologies are detrimental to filters and supervision systems that are already in place.
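
To be clear about what I mean by that, here is a purely hypothetical sketch - nothing like this exists today, and every name in it is invented - of how a website or app might defer to a centralised policy server run by the parent or school, instead of maintaining its own silo of parental controls:

```python
# Hypothetical illustration only: the URL and the JSON fields are invented to
# show the shape of a common interface, not a real or proposed standard.
import json
from urllib.request import urlopen

POLICY_SERVER = "https://policy.familyhub.example"   # hypothetical

def fetch_policy(child_token: str) -> dict:
    # The site or app asks the family's (or school's) policy server what this
    # child is allowed to do, rather than keeping its own separate settings.
    with urlopen(f"{POLICY_SERVER}/v1/policy/{child_token}") as response:
        return json.load(response)

policy = fetch_policy("opaque-child-token")
if not policy.get("allow_direct_messages", False):
    print("Direct messages disabled for this account")
```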


Note: I am the Technical Director of Opendium, a company that specialises in network-based online safety systems for UK schools.  This subject is of importance not only to parents, but to anyone or any organisation that is in loco parentis, such as schools, foster parents, etc.


Update: 12th March 2021

Neil has posted a follow-up response to this blogpost.

Firstly I'd like to say that, although Neil and I fundamentally disagree on a lot of things, it's very healthy to be having the conversation, and it underscores the fact that there is no single "one size fits all" when it comes to safeguarding children.  Everyone in a position of responsibility over children will have a different opinion on how best to protect those children, and these are the people who should be making the decisions - not governments or corporations, but parents and carers.

Also, although I certainly see technology as a very important part of online safety, I'd never advocate it as the only, or even the primary, solution.  Neil is absolutely right that surveillance and supervision are not the same thing, and supervision requires carers to engage with the children and actually teach them how to be safe and to support them.  Indeed, gone are the days when schools just ticked their "online safety" box by installing a filter and letting it quietly run in the corner.  These days, schools are expected to support and teach children to be safe online.  Of course, some schools are very good whilst a few do just install a filter and treat it as a done job.  Thankfully, the inspectors are getting better at asking schools about their online safety policies.  I certainly think that there should be limits on how much carers invade children's privacy, but I also think that children can't expect absolute privacy - there's some balance to be had, and that balance isn't going to be the same for every situation.

My previous comments weren't intended as a rebuttal of Neil's original post - I saw them more as a reflection on something that I think(?) we agreed on (you should supervise children instead of trying to make the world safe for unsupervised children), but our idea of supervision obviously diverges somewhat.  I think this update is probably a rebuttal of Neil's follow-up post though.

Traffic decryption

So, without further ado (quotes are from Neil's blogpost):

In most implementations, your target will never know that they are not talking directly to Facebook.

This isn't really true.  Android, for example, has a persistent notification that pops up every time you boot your device, reminding you that you have authorised a third party to monitor your connection.  It's not quite as obvious on a desktop machine, but it is certainly discoverable - clicking the padlock in Firefox clearly shows a warning.  Chrome isn't quite as good, but the information is there.  The persistent Android notification could probably be made more specific, such as telling you who you have authorised to monitor your connection, rather than just that someone has been authorised.
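
For what it's worth, the same information is discoverable programmatically too.  Here's a short sketch (plain Python, arbitrary hostname) that prints who issued the certificate your connection is actually being served - if an interception certificate that you've authorised is in play, its issuer will show up here rather than the site's usual certificate authority:

```python
import socket
import ssl

def certificate_issuer(hostname: str, port: int = 443) -> dict:
    """Return the issuer fields of the certificate presented for `hostname`."""
    # The default context trusts the system store, including any CA that the
    # device's owner has chosen to install.
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port)) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # getpeercert() returns the issuer as nested tuples; flatten them.
    return {name: value for rdn in cert["issuer"] for name, value in rdn}

print(certificate_issuer("example.com"))
```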

State actors have the resources to install certificates directly in the OS's root certificate store, so there's not a lot that OS vendors can do to warn the user about that - this discussion is basically about certificates that the user has authorised themselves.

If someone has built the infrastructure to intercept and inspect your communications in this way, they can look at your communications with your bank, the content of your email (and modify it!) and so on.

Entirely true, but thankfully most 5 year olds don't have bank accounts.  I think it goes without saying that how you supervise children depends on a lot of factors.  A primary factor is, of course, the child's age, and what is appropriate for a 5 year old is not appropriate for a 15 year old and certainly not appropriate for adults.  There isn't a "one size fits all" solution, so why should corporations impose one?

Walled gardens

Neil talks about using DNS whitelisting to set up a walled garden that only allows access to specific websites.  This means you have to decide which websites to allow access to based entirely on their host name - the rest of the web address is encrypted.  Whilst a great idea in theory, and certainly a staple of school filtering 15 years ago, in the modern age this seems quite naive and doesn't really reflect the reality of the situation, for a couple of reasons:

  1. Modern websites use resources from all over the place.  As a recent example, the government's COVID testing website uses Google's reCAPTCHA, which is hosted on www.google.com, so if you wanted to allow access to the COVID testing website, you would also need to allow access to Google web search, Google Images, Google News, Google Videos, etc. (see the sketch after this list).  The same is true for most websites and online services these days.  Not only does this undermine the protection of your "walled garden", but it also makes it extremely hard to actually set up the whitelist in the first place - you can't just whitelist the host name of one website; you have to figure out what other resources it needs (and this usually can't be automated reliably).
  2. Harmful content is quite often stored alongside safe content on the same host name.  If you're allowing access to googleusercontent.com so that various Google applications work, you're also allowing access to a lot of inappropriate content.  Since the child is probably under supervision, it may not be a big concern, but we certainly shouldn't pretend that the problem doesn't exist.
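
A trivial sketch of why point 1 bites: a DNS (or SNI) based filter only ever sees the hostname, so a single allow/deny decision has to cover everything served under that name (the hostnames below are illustrative):

```python
# A hostname-only allowlist, which is all a DNS-based walled garden can be.
# The path ("/recaptcha/..." versus "/images?q=...") travels inside the
# encrypted connection, so it can't be part of the decision.
ALLOWED_HOSTS = {
    "covid-testing.example.gov.uk",   # illustrative - the site you actually want
    "www.google.com",                 # needed so that its reCAPTCHA loads...
}

def permit(hostname: str) -> bool:
    return hostname in ALLOWED_HOSTS

# ...but the same entry also opens up Google web search, Images, News, etc.
assert permit("www.google.com")
```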

That schools are discouraged from using overly restrictive blocking policies should be an indication that a walled garden approach might do more harm than good.  Parents certainly need to make a decision as to whether it's better for children to be in a very restrictive walled garden, or to be allowed to explore the internet more freely with a more dynamic system offering some protection from harmful content they might stumble across.  Again, this is a decision for the parents and carers, not for government or corporations.

Their platform, their rules

The second notion I found particularly interesting was that the private space on these companies' platforms (fixing the weaknesses in their own apps), and the operating systems they develop, should not be theirs to control, and that the decisions as to how they develop their services and products should not be theirs.

Businesses, of course, have an obligation to fix security weaknesses in their own apps or platforms (although I will dispute the idea that a user choosing to allow their communications to be decrypted by a specific party is a security weakness in the operating system).  However, where there are large sections of the population who will be negatively affected by a change, I do believe that a business has an obligation to enter into a discussion to see whether everyone can be accommodated.

Neil's opinion largely seems to be "their platform, their rules, if you don't like it go elsewhere".  But where else can users go?  There are basically two choices of mobile operating system:

Android phones start at about £45.  They have the aforementioned problems.

Pretty much the entire online safety sector has been asking Google for a dialogue for the last 5 years and has been roundly ignored.  I've seen numerous bug reports in the Android bug tracker, opened by online safety vendors and schools, and they have all been ignored or closed by the Android team without discussion.

In 2017, the IWF put me in contact with Katie O'Donovan, Google UK's head of Public Policy, to try and open a dialogue, but Google were simply not interested in discussing the matter.  People within the Home Office have expressed similar frustrations.

So lets "go elsewhere": an old model iPhone starts from about £300 (£1000 for something more up to date).  Not everyone can afford to spend that kind of money on a phone.

As well as the cost of iPhones, I have to point at an incident that happened around 2 years ago: Apple provides a mobile device management (MDM) system, which is designed to allow businesses to manage their devices.  Parental control software was also allowed to hook into the MDM system, but then Apple changed the rules so that MDM could no longer be used for parental control.  Since there was no other system that parental control software could use, there was an outcry from the software vendors.  Apple ignored the vendors' concerns and banned the parental control apps from the App Store.  Only later did they reverse this decision as a result of bad press.

So there are only two mobile platforms, and they both have a history of refusing to engage with the people their decisions affect.

If what Steve means is that it should have been left to responsible adults to decide whether or not they want encryption which is MitM'able or not, they do, of course, have that choice: they are not required to let their children send traffic to Facebook or Twitter, if they don't agree with the way they operate, nor are they required to adopt the Android operating system.

The "their platform, their rules" argument could be applied anywhere: Should Facebook be absolved of any child protection obligations, because it's their platform?  Should an outdoor activity centre be absolved of health and safety obligations because it's a private location?  No, of course not - we expect private businesses, both on and offline, to adhere to various duty of care obligations.  Why should we not expect Google, Apple, Facebook, Twitter, Microsoft, etc. to have a duty of care to their users, and to undertake a proper consultation to make sure that changes they make do not undermine their users' safety?  Especially if people have been trying to make them aware of the problems for years.

The idea that we should leave private companies to do whatever they want because parents have a choice to ban their children from those platforms is ridiculous, and at odds with the government's stance with respect to Online Harms.

Surveillance companies, unilateral decisions, and consultations

Lastly, I wonder if there is a degree of double-standards at play here, in that I cannot help but wonder if the vendors of child surveillance systems operate with this degree of transparency and co-operation.

Can these vendors show that those most affected by their software — the children they surveil — were consulted?

No, almost certainly not, but I don't think this is the smoking gun of double standards that Neil wants it to be.  Certainly, as far as Opendium goes, we do not "surveil" children - we merely provide the tools for schools to safeguard the children who are under their care.  Can a CCTV camera vendor show that their customers have complied with the various laws that surround installation and operation of CCTV cameras?  Almost certainly not - in both cases, the vendor is not the company responsible for doing these things, so there is no way for them to guarantee that they have been done.

What I can say is that we do work closely with our customers, and would always advise that they must not undertake any covert monitoring.  Data protection legislation does require schools to be transparent with the children about what monitoring is being done, etc.  I'm not sure why the Information Commissioner's Office has limited the Children's Code to only online services, since much of it is equally relevant to the offline world - schools certainly should be providing clear and understandable privacy information to children.

Do children have a consequence-free option of not being subjected to these surveillance measures?

In law, the child's parents (or the people in loco parentis) are responsible for making decisions regarding the child's safety.  Do children have a consequence-free option of not being subject to their parents' gaze while playing in the park?  Are they allowed to play in the playground without a teacher watching them?  Probably not - this is not the child's decision, because they are... a child.  It is up to their parents.

But it is certainly a discussion that a child can have with their carer.  I'm certainly aware of one case where a parent requested that their child not be monitored, and the school complied with the request (after having the parent sign a suitable waiver).  I have no idea what the legality of that situation is, given that the result might be the school failing to comply with their statutory obligations.

Anyway, that's enough for today.  As I said at the start of the update, I think these discussions are healthy and, as with politics, we're far better off having a chat about these things to try and understand the opposing point of view rather than just standing on the sidelines shouting "you're wrong". :)