In a speech last week, the Information Commissioner, Elizabeth Denham, said:
The internet was not designed for children, but we know the benefits of children going online. We have protections and rules for kids in the offline world – but they haven’t been translated to the online world.
— Elizabeth Denham, Information Commissioner
Neil Brown from decoded.legal posted an insightful blogpost on the perception that the internet is unregulated and dangerous, compared to the offline world. The main thrust of the blogpost is that the offline world is not designed to be safe for unsupervised children.
The ICO's Children's Code is intended to make the internet safer for children. This is a laudable goal, and there are certainly some parts of the code that all companies should be following to protect everyone, children and adults alike. For example, privacy information is often quite opaque, even to adults, so the requirement to provide clear privacy information would benefit us all.
The offline world is very rarely designed for unsupervised children. As Neil points out, even children's play areas are usually only designed to be safe for children who are under supervision. They only prevent unsupervised children from entering by posting a sign (with complex grammar that might not be understood by children). Washing your hands of the safety of unsupervised children by posting a similar sign on your website or app would almost certainly not be allowed under the Children's Code.
The intent of the Children's Code appears to be to make the internet safe for unsupervised children, but we don't do this in the offline world because it is usually not proportionate.
"The Internet was not designed for children. "
— Bloor (@alexbloor) March 8, 2021
What, like matches, knives, cars, soldering irons, most pets, etc.
If parents let their kids have access to all the above unsupervised, we think they are not very cautious parents. Why do we view the Internet differently? https://t.co/zBshcIzwqB
And this is the crux of the matter: it's impossible to make the whole offline world safe for unsupervised children. It would require banning essential tools, or placing huge financial burdens on vendors. Councils would have to spend a disproportionate amount of money ensuring that unsupervised children cannot access dangerous roads. So why do we expect to do so for the online world?
The key point is supervision: the internet should be safe for children, but we shouldn't be going to a disproportionate amount of effort to make it safe for unsupervised children.
But supervision is hard. If your child is in the kitchen juggling knives, you'll probably notice, whereas they could be in the same room as you, doing unsafe things on their phone and you'll never notice.
Neil briefly points at a few technologies that can be used for child protection:
I use some of the measures which have come in for criticism recently — VPNs, and DNS over https — to maximise the scope of the filtering of Internet connections. More filtering, and more aggressive filtering, not less.
Indeed, I suspect that it is easier to prevent an unsupervised child from travelling to a particular place online than it is offline, if that's the path down which the responsible adult wishes to go.
— Neil Brown, decoded.legal
As Neil points out, by using a VPN you can direct your child's network traffic through a system which can block access to inappropriate content and allow parents to supervise their child's online activities. The parents can install an inspection certificate on their child's device which says "your parents' filter is allowed to decrypt and supervise this device", without allowing unauthorised decryption by others.
This means that the parents, or the child's school, can have a centralised system that they use to set parental controls and supervise the children under their care, across the whole internet.
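To make the idea concrete, here's a minimal sketch of the kind of filtering an inspection certificate enables, using the open-source mitmproxy tool purely as an illustration (it is not Opendium's product, and the blocked hostnames are placeholders). mitmproxy generates its own certificate authority; the parent installs that certificate on the child's device so that only this proxy, and nobody else, can decrypt the traffic.

```python
# Minimal sketch of a filtering addon, assuming a recent version of mitmproxy
# (https://mitmproxy.org). Run with:  mitmproxy -s filter_addon.py
# The hostnames below are placeholders, not a real blocklist.
from mitmproxy import http

# Illustrative only - a real deployment would use categorised, curated lists.
BLOCKED_HOSTS = {"example-gambling-site.test", "example-adult-site.test"}


class ParentalFilter:
    def request(self, flow: http.HTTPFlow) -> None:
        host = flow.request.pretty_host.lower()
        # Match the host itself or any subdomain of it.
        if any(host == h or host.endswith("." + h) for h in BLOCKED_HOSTS):
            # Replace the upstream response with a simple block page.
            flow.response = http.Response.make(
                403,
                b"Blocked by your parental filter",
                {"Content-Type": "text/plain"},
            )


addons = [ParentalFilter()]
```

Because the proxy can see the decrypted request, it can make fine-grained decisions (and log activity for the carer) rather than only allowing or blocking whole hostnames.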
Unfortunately, the main corporations that control online platforms have unilaterally decided that parents and schools shouldn't be allowed to supervise their children. In 2016, Google effectively pulled the plug on inspection certificates by disabling them in all Android apps. Facebook, Twitter and others had already disabled inspection certificates in their own apps some years before.
Without inspection certificates, fine grained filtering and supervision are off the table. The playground has an opaque fence - you saw your child enter the playground, but you're not allowed to supervise their play. You know the playground has a slide that is too high for a child of their age, but you're only allowed to control whether they can go into the playground, not whether or not they can go on the high slide. Are they being bullied in the playground? Who knows - you're not allowed to look!
There are a few technologies on the horizon which could make it even harder for parents to supervise and control their children's internet access, and historically, Google, Facebook, Twitter, et al. have imposed new privacy technologies and policies upon the public without consultation. The problem is not the technologies themselves, but that they are unilaterally imposed on users rather than users being given the choice. Whilst the ability to improve your own privacy is great, the decision over whether a parent can supervise their child should be made by the parents and children themselves, not by untouchable corporations.
The Children's Code does talk about parental controls and monitoring, but there is no framework or requirement to standardise them so that they can interact with a parent's centralised system. The Children's Code's requirements will simply produce a fragmented approach. Rather than the parent being able to set controls and supervise their child across the whole internet, they will need to log into each website and app separately. Imagine having to log in and check separate "is my child juggling knives", "is my child playing with matches" and "is my child bullying their sibling" apps in the offline world.
Rather than demanding that all websites and apps are safe for unsupervised children, the ICO should be setting out a framework for websites and apps to interoperate with centralised systems operated by parents and schools. They should be placing requirements on companies to consider whether their policies or technologies are detrimental to filters and supervision systems that are already in place.
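To illustrate the sort of thing I mean, here is a purely hypothetical sketch of an interface that a website or app could expose to a carer's centralised system. None of these names, fields or types come from the Children's Code or from any real product - they are made up for illustration only.

```python
# Hypothetical sketch of a standardised supervision interface - every name
# here is invented for illustration; nothing like this currently exists.
from dataclasses import dataclass
from enum import Enum


class Restriction(Enum):
    ALLOW = "allow"
    SUPERVISED = "supervised"   # allowed, but activity is reported to the carer
    BLOCK = "block"


@dataclass
class SupervisionPolicy:
    child_account_id: str
    direct_messaging: Restriction
    public_posting: Restriction


@dataclass
class ActivityEvent:
    child_account_id: str
    timestamp: str   # ISO 8601
    category: str    # e.g. "contact-from-unknown-account" - illustrative
    summary: str


def apply_policy(policy: SupervisionPolicy) -> None:
    """A compliant service would accept a policy pushed from the carer's
    centralised system and enforce it, rather than requiring the carer to
    log in to every website and app separately."""
    ...
```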
Note: I am the Technical Director of Opendium, a company that specialises in network based online safety systems for UK schools. This subject is of importance not only to parents, but to anyone or any organisation acting in loco parentis, such as schools, foster parents, etc.
Update: 12th March 2021
Neil has posted a follow-up response to this blogpost.
Firstly I'd like to say that, although Neil and I fundamentally disagree on a lot of things, it's very healthy to be having the conversation, and it underscores the fact that there is no single "one size fits all" when it comes to safeguarding children. Everyone in a position of responsibility over children will have a different opinion on how best to protect those children, and these are the people who should be making the decisions - not governments or corporations, but parents and carers.
Also, although I certainly see technology as a very important part of online safety, I'd never advocate it as the only, or even the primary, solution. Neil is absolutely right that surveillance and supervision are not the same thing, and supervision requires carers to engage with the children, actually teach them how to be safe, and support them. Indeed, gone are the days when schools just ticked their "online safety" box by installing a filter and letting it quietly run in the corner. These days, schools are expected to support and teach children to be safe online. Of course, some schools are very good whilst a few do just install a filter and treat it as a done job. Thankfully, the inspectors are getting better at asking schools about their online safety policies. I certainly think that there should be limits on how much carers invade children's privacy, but I also think that children can't expect absolute privacy - there's some balance to be had, and that balance isn't going to be the same for every situation.
My previous comments weren't intended as a rebuttal against Neil's original post - I saw them more as a reflection on something that I think(?) we agreed on (you should supervise children instead of trying to make the world safe for unsupervised children), but our idea of supervision obviously diverges somewhat. I think this update is probably a rebuttal of Neil's follow up post though.
Traffic decryption
So, without further ado (quotes are from Neil's blogpost):
In most implementations, your target will never know that they are not talking directly to Facebook.
This isn't really true. Android, for example, shows a persistent notification every time you boot your device, reminding you that you have authorised a third party to monitor your connection. It's not quite as obvious on a desktop machine, but it is certainly discoverable - clicking the padlock in Firefox clearly shows a warning. Chrome isn't quite as good, but the information is there. The persistent Android notification could probably be made more specific, such as telling you who you authorised to monitor your connection, rather than just that someone has been authorised.
State actors have the resources to install certificates directly in the OS's root certificate store, so there's not a lot that OS vendors can do to warn the user about that - this discussion is basically about certificates that the user has authorised themselves.
If someone has built the infrastructure to intercept and inspect your communications in this way, they can look at your communications with your bank, the content of your email (and modify it!) and so on.
Entirely true, but thankfully most 5-year-olds don't have bank accounts. I think it goes without saying that how you supervise children depends on a lot of factors. A primary factor is, of course, the child's age, and what is appropriate for a 5-year-old is not appropriate for a 15-year-old and certainly not appropriate for adults. There isn't a "one size fits all" solution, so why should corporations impose one?
Walled gardens
Neil talks about using DNS whitelisting to set up a walled garden that only allows access to specific websites. This means you decide which websites to allow access to based entirely on their host name - the rest of the web address is encrypted. Whilst a great idea in theory, and certainly a staple of school filtering 15 years ago, in the modern age this seems quite naive and doesn't really reflect the reality of the situation, for a couple of reasons (there's a short sketch after the list below):
- Modern websites use resources from all over the place. As a recent example, the government's COVID testing website uses Google's reCAPTCHA, which is hosted on www.google.com, so if you wanted to allow access to the COVID testing website, you would also need to allow access to Google web search, Google Images, Google News, Google Videos, etc. The same is true for most websites and online services these days. Not only does this undermine the protection of your "walled garden", but it also makes it extremely hard to actually set up the whitelist in the first place - you can't just whitelist the host name of one website, you have to figure out what other resources it needs (this usually can't be automated reliably).
- Harmful content is quite often stored alongside safe content on the same host name. If you're allowing access to googleusercontent.com so that various Google applications work, you're also allowing access to a lot of inappropriate content. Since the child is probably under supervision, it may not be a big concern, but we certainly shouldn't pretend that the problem doesn't exist.
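Here's the sketch mentioned above: the entire allow/block decision a DNS-based walled garden can make fits in a few lines, because the only thing visible at this layer is the queried host name. The hostnames are illustrative, but they show the problem - allowing www.google.com so that reCAPTCHA works on a whitelisted site also allows everything else served from that host name.

```python
# Minimal sketch of the allow/block decision a DNS-whitelist "walled garden"
# makes. Only the host name is visible at this layer - never the path - so a
# host is either entirely allowed or entirely blocked. Hostnames illustrative.

ALLOWED_HOSTS = {
    "www.gov.uk",        # the site the parent actually wants to allow
    "www.google.com",    # needed only for reCAPTCHA, but allows far more
}


def dns_query_allowed(qname: str) -> bool:
    """Return True if the resolver should answer this query, False to refuse."""
    qname = qname.rstrip(".").lower()
    return any(qname == h or qname.endswith("." + h) for h in ALLOWED_HOSTS)


assert dns_query_allowed("www.gov.uk")
assert dns_query_allowed("www.google.com")   # unavoidably allows search, Images, etc.
assert not dns_query_allowed("example.com")
```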
That schools are discouraged from using overly restrictive blocking policies should be an indication that a walled garden approach might do more harm than good. Parents certainly need to make a decision as to whether it's better for children to be in a very restrictive walled garden, or to be allowed to explore the internet more freely with a more dynamic system offering some protection from harmful content they might stumble across. Again, this is a decision for the parents and carers, not for government or corporations.
Their platform, their rules
The second notion I found particularly interesting was that the private space on these companies' platforms (fixing the weaknesses in their own apps), and the operating systems they develop, should not be theirs to control, and that the decisions as to how they develop their services and products should not be theirs.
Businesses, of course, have an obligation to fix security weaknesses in their own apps or platforms (although I will dispute the idea that a user choosing to allow their communications to be decrypted by a specific party is a security weakness in the operating system). However, where there are large sections of the population who will be negatively affected by a change, I do believe that a business has an obligation to enter into a discussion to see whether everyone can be accommodated.
Neil's opinion largely seems to be "their platform, their rules; if you don't like it, go elsewhere". But where else can users go? There are basically two choices of mobile operating system:
Android phones start at about £45. They have the aforementioned problems.
Pretty much the entire online safety sector has been asking Google for a dialogue for the last 5 years and has been roundly ignored. I've seen numerous bug reports in the Android bug tracker, opened by online safety vendors and schools, and they have all been ignored or closed by the Android team without discussion.
In 2017, the IWF put me in contact with Katie O'Donovan, Google UK's head of Public Policy to try and open a dialogue, but Google were simply not interested in discussing the matter. People within the Home Office have expressed similar frustrations.
So lets "go elsewhere": an old model iPhone starts from about £300 (£1000 for something more up to date). Not everyone can afford to spend that kind of money on a phone.
As well as the cost of iPhones, I have to point at an incident that happened around two years ago. Apple provides a mobile device management (MDM) system, which is designed to allow businesses to manage their devices. Parental control software was also allowed to hook into the MDM system, but then Apple changed the rules so that MDM could no longer be used for parental control. Since there was no other system that parental control software could use, there was an outcry from the software vendors. Apple ignored the vendors' concerns and banned the parental control apps from the App Store. Only later did they reverse this decision, as a result of bad press.
So there are only two mobile platforms, and they both have a history of refusing to engage with the people their decisions affect.
If what Steve means is that it should have been left to responsible adults to decide whether or not they want encryption which is MitM'able or not, they do, of course, have that choice: they are not required to let their children send traffic to Facebook or Twitter, if they don't agree with the way they operate, nor are they required to adopt the Android operating system.
The "their platform, their rules" argument could be applied anywhere: Should Facebook be absolved of any child protection obligations, because it's their platform? Should an outdoor activity centre be absolved of health and safety obligations because it's a private location? No, of course not - we expect private businesses, both on and offline, to adhere to various duty of care obligations. Why should we not expect Google, Apple, Facebook, Twitter, Microsoft, etc. to have a duty of care to their users, and to undertake a proper consultation to make sure that changes they make do not undermine their users' safety? Especially if people have been trying to make them aware of the problems for years.
The idea that we should leave private companies to do whatever they want because parents have a choice to ban their children from those platforms is ridiculous, and at odds with the government's stance with respect to Online Harms.
Surveillance companies, unilateral decisions, and consultations
Lastly, I wonder if there is a degree of double-standards at play here, in that I cannot help but wonder if the vendors of child surveillance systems operate with this degree of transparency and co-operation.
Can these vendors show that those most affected by their software — the children they surveil — were consulted?
No, almost certainly not, but I don't think this is the smoking gun of double standards that Neil wants it to be. Certainly, as far as Opendium goes, we do not "surveil" children - we merely provide the tools for schools to safeguard the children who are under their care. Can a CCTV camera vendor show that their customers have complied with the various laws that surround installation and operation of CCTV cameras? Almost certainly not - in both cases, the vendor is not the company responsible for doing these things, so there is no way for them to guarantee that they have been done.
What I can say is that we do work closely with our customers, and would always advise that they must not undertake any covert monitoring. Data protection legislation does require schools to be transparent with the children about what monitoring is being done, etc. I'm not sure why the Information Commissioner's Office has limited the Children's Code to only online services, since much of it is equally relevant to the offline world - schools certainly should be providing clear and understandable privacy information to children.
Do children have a consequence-free option of not being subjected to these surveillance measures?
In law, the child's parents (or the people in loco-parentis) are responsible for making decisions regarding the child's safety. Do children have a consequence-free option of not being subject to their parent's gaze while playing in the park? Are they allowed to play in the playground without a teacher watching them? Probably not - this is not the child's decision, because they are... a child. It is up to their parents.
But it is certainly a discussion that a child can have with their carer. I'm certainly aware of one case where a parent requested that their child not be monitored, and the school complied with the request (after having the parent sign a suitable waiver). I have no idea what the legality of that situation is, given that the result might be the school failing to comply with their statutory obligations.
Anyway, that's enough for today. As I said at the start of the update, I think these discussions are healthy and, as with politics, we're far better off having a chat about these things to try and understand the opposing point of view rather than just stand at the sidelines shouting "you're wrong". :)