Three years on, should we trust Telegram?

In August 2013, two Russian developers—and brothers—Nikolai Durov and Pavel Durov released Telegram to the world, a new instant messaging platform with a simple promise: to provide the privacy and security that competing platforms of the time weren't delivering. Telegram is usable on mobile devices and desktop operating systems alike, and promotes Secret Chats as a way to exchange messages securely with end-to-end encryption. Indeed, Telegram is quite pleasant to use for the most part: messages are delivered very quickly, the available mobile and desktop clients provide a good user experience, and there's no dependency on your mobile device having an active connection in order to use Telegram from another device (as there is with WhatsApp).

Most unusual about the design of Telegram, however, was the decision to engineer a new encryption scheme called MTProto, built around symmetric encryption keys, rather than using previously tested and well-known encryption schemes. Cryptographers have expressed concern that custom-designed cryptography like this may contain flaws that compromise the security or privacy of the end user. Some experts, including researchers at Aarhus University, have questioned whether the encrypted messages are properly authenticated, leading to potential weaknesses, and MTProto has also received criticism from the Electronic Frontier Foundation (EFF). On this evidence alone, the outlook doesn't seem good.

Perhaps most daunting of all is the fact that Telegram doesn't actually perform end-to-end encryption of instant messages by default, instead reserving this functionality for "Secret Chats", which must be manually initiated by the user and can only take place between two specific devices (a Telegram user with multiple devices can only interact with a secret chat session on the device from which it was initiated or accepted). Telegram claim that this is because cloud syncing of instant messages between devices is more convenient for non-secret chats than the guaranteed security that end-to-end encryption provides. What this means in practice is that normal instant messages sent over Telegram are stored by Telegram in a format that they can decrypt themselves. Perhaps we should just hope instead that nobody raids Telegram's datacenters.

Take Apple, for example, who took a different approach with iMessage that allows them to provide end-to-end encryption between devices whilst still providing the illusion of message sync across devices. Instead of encrypting the message once for the recipient user, iMessage actually encrypts the message for each recipient device separately, as each device has its own encryption keys. In effect, if you own an iPad, an iPhone and a Mac and a friend sends you an iMessage, they are actually encrypting and sending the message three times, once for each device. Every device receives a copy of every message, so you can jump between devices without a loss of history, but no actual syncing of message history is taking place between clients and the iMessage server. Everything end-to-end, as it should be.
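To illustrate the fan-out, here is a minimal sketch in Python. This is not Apple's actual protocol (iMessage has its own key directory and cipher suite); it simply uses PyNaCl sealed boxes and hypothetical device names to show the one-ciphertext-per-device idea.

```python
# A minimal sketch of per-device encryption in the spirit of iMessage.
# Illustrative only: PyNaCl sealed boxes stand in for Apple's actual
# cipher suite, and the device names are hypothetical.
from nacl.public import PrivateKey, SealedBox

# Each of the recipient's devices holds its own key pair.
devices = {name: PrivateKey.generate() for name in ("iPhone", "iPad", "Mac")}

message = b"See you at 8?"

# The sender encrypts the message once per device public key...
ciphertexts = {
    name: SealedBox(key.public_key).encrypt(message)
    for name, key in devices.items()
}

# ...and each device decrypts its own copy with its own private key.
for name, key in devices.items():
    assert SealedBox(key).decrypt(ciphertexts[name]) == message
    print(f"{name}: decrypted OK")
```

The server in this model only ever relays ciphertexts that it cannot read; the "sync" is an illusion created by every device receiving its own copy.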

There's no doubt that the methodology used by Apple works. Huge volumes of iMessages are sent daily, and a user of iMessage never has to think about whether or not they should really be switching to a secret chat as all messages are end-to-end encrypted by default. This introduces the next significant problem for Telegram as a secure platform: human error.

Humans are typically the weakest link in any secure system, and it only takes a user typing something secret into a non-secret chat by mistake (or simply forgetting to initiate a secret chat altogether) for it to be, effectively, game over. It is hugely irresponsible of Telegram to market itself as a secure messaging platform and yet place the responsibility for security solely in the hands of the user, all whilst making the baseless assumption that the user will actually remember or recognise when a secret chat should be used instead of a regular one. In fact, it makes an even worse assumption: that all Telegram users even know that secret chats exist or how they work, something that should not be assumed of those who have simply been told to download Telegram by their friends and family without having done any further reading or research.

That's not to say that iMessage is perfect by any means. Indeed, iMessage also has weaknesses, largely in the fact that you must trust the public key infrastructure that Apple uses for iMessage-capable devices to discover each other's public keys. Specifically, you must trust that Apple will not inject additional public keys into the directory without your knowledge or consent, given that Apple devices will not notify you as a user when someone else's public keys change. This is not an unsolvable problem, however, and could be mitigated by allowing users to control which keys (or rather, devices) they trust, and by notifying them when new public keys appear for their contacts. Legitimately, this would happen if someone were to log into iMessage from a new device, but equally it may also happen if a sneaky Government were trying to obtain a copy of any messages you sent to that user from that point forward.
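A sketch of that mitigation follows, assuming a hypothetical local store of key fingerprints per contact: pin whatever is seen on first use, then warn whenever the directory returns a key that has never been seen before.

```python
# A minimal trust-on-first-use sketch of the mitigation described above.
# The contact name and fingerprints are hypothetical; a real client
# would persist this store and derive fingerprints from directory keys.
known_keys: dict[str, set[str]] = {}

def check_contact_keys(contact: str, directory_fingerprints: set[str]) -> None:
    seen = known_keys.setdefault(contact, set())
    new = directory_fingerprints - seen
    if seen and new:
        # A key we have never seen before: a new device, or an interloper.
        print(f"WARNING: new key(s) for {contact}: {new} - verify out of band!")
    seen |= directory_fingerprints

check_contact_keys("alice", {"fp:aa11"})             # first use: pinned silently
check_contact_keys("alice", {"fp:aa11", "fp:bb22"})  # triggers the warning
```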

Whilst not perfect, however, the iMessage approach is clearly superior. Treat all messages as if they're secret. Treat each of the recipient's devices as a separate entity with its own unique encryption keys. Keep the private keys in the hands of the user's device. Only store messages on the iMessage server in a format that Apple themselves can't decrypt. Don't place any of the onus on the user to be secure. Don't assume the user knows when they are and aren't being secure.

There are a lot of things that Telegram would do well to learn from iMessage.

That Telegram's developers are knowingly overlooking such critical issues and design flaws, however, makes it very difficult to recommend Telegram as a truly secure messaging solution, especially to non-technical friends and family. Whilst competitors such as WhatsApp and Facebook Messenger are already working to spread the deployment of end-to-end encryption further, Telegram seems to have stagnated, and does not appear to be interested in solving the core issues with non-secret chats, or better yet, eliminating the idea of non-secure chats altogether.

It may be prudent not to place too much trust in it after all.

Are we right to blame Tesla in the wake of autopilot accidents?

Tesla, founded in 2003, have become the benchmark in the production of viable electric vehicles. The rollout of the Autopilot feature to many Tesla cars worldwide, which essentially allows the cars to drive themselves, has resulted in multiple headlines, with automotive regulators and Governments around the world raising questions about how safe and mature the technology is. "Can a car really be trusted to drive itself?" they ask.

However, in recent months, a number of accidents involving Teslas in Autopilot mode (and even some that weren't) have also made the news, casting widespread doubt on whether this technology should even have been rolled out in the first place. Tesla describe the technology as "beta", that is, not fully complete and still evolving, and users of the system must accept this on a warning message when they first try to activate it. Critically, the message warns users that "you need to maintain control and responsibility of your vehicle while enjoying the convenience of Autosteer". The system even continuously monitors for the presence of the driver's hands on the steering wheel, and will slow down after an audible warning if the driver goes hands-free.

In essence, whilst the system may be able to function relatively autonomously in the right conditions, ultimate responsibility and control remain with the driver at all times, who is able to override and take control of the system simply by resuming normal driving. In the case of Joshua Brown, who was unfortunately killed in a car accident on 7th May 2016 whilst relying on Autopilot, the accident could have been prevented had his attention been entirely focused on the road ahead, in the same way that any other driver using standard Cruise Control would be expected to take action to remain safe and in control, and to take preventative measures before their vehicle collided with anything.

What we must remember is that Tesla's Autopilot feature is nothing more than glorified Cruise Control, and that whilst we can label the functionality as "semi-autonomous", the car is by no means entirely self-driving.

Therefore we cannot pin liability for such accidents solely on Tesla. No self-driving system that exists today is free of flaws, and many work only in the right conditions. Autopilot, for instance, will only work where the lane markings are clear and visible, and should not be used on any route with sharp turns to follow. Even Google's self-driving cars, which are fundamentally aimed at being entirely driverless, have their limitations, and have also been involved in road accidents.

It will take decades to find out whether or not self-driving or semi-autonomous vehicles are truly safer than those piloted manually by us, but in the meantime some attention should be drawn to the fact that only a very small number of Tesla vehicles have been involved in Autopilot accidents, fatal or otherwise, compared to the average of five people who die each day in road accidents in manually-controlled vehicles on UK roads.

For Your Protection

Technology has become an ever-present part of our daily lives. We depend on phones and computers and social networks and internet search and email on a scale that could not possibly have been imagined 20 years ago. We communicate with our friends and our family, sharing photographs and chit-chat and our innermost thoughts and secrets with one another; we exchange business deals and trade secrets and financial transactions, from the smallest startup to the largest multinational corporation. We exchange unprecedented quantities of data digitally, and we do so with the fair and reasonable expectation that our communications are private. We place our trust in those who supply our technology and our communications infrastructure to take adequate measures to protect our interests. Strong encryption has provided us with that guarantee.

This week, the FBI came to blows with Apple over security measures built into the hugely popular iPhone. The high-profile San Bernardino case, in which 14 people were killed and a further 22 injured in a terrorist attack, has left the FBI with a considerable problem: they believe that crucial evidence may reside on the iPhone owned by one of the terrorists. The FBI approached Apple for help in defeating the security measures built into the phone, in the hope of finding something useful on it; Apple, however, have opposed the request and declined to assist.

The measures in question are features built into the iPhone to protect the information stored within if the device is ever lost or stolen. The contents of the phone are fully encrypted, and passcode entry to unlock the phone is rate-limited - that is, you can only enter the passcode incorrectly so many times within a given period before the device wipes itself. In that event, any evidence stored on the phone would be irreversibly destroyed, and the encryption renders it infeasible to retrieve the data without the correct passcode.
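To illustrate, here is a toy model of those measures in Python; the delay schedule and the ten-attempt limit are stand-ins for the real firmware's behaviour, not Apple's actual parameters.

```python
import time

# A toy model of rate-limited passcode entry with a self-wipe. The
# delay schedule and attempt limit here are illustrative stand-ins,
# not Apple's actual parameters.
MAX_ATTEMPTS = 10
DELAYS = {5: 60, 6: 300, 7: 900, 8: 3600, 9: 3600}  # attempt no. -> seconds

def wipe_device() -> None:
    # With the encryption keys destroyed, the data is gone for good.
    print("Too many failed attempts: erasing the encryption keys.")

def try_passcode(attempt: int, guess: str, correct: str) -> bool:
    if attempt > MAX_ATTEMPTS:
        wipe_device()
        return False
    time.sleep(DELAYS.get(attempt, 0))  # escalating lockout periods
    return guess == correct
```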

At first glance, one may be tempted to side with the FBI. After all, Apple in this instance have obstructed an investigation which may reveal further evidence; nobody likes terrorism, and Governments would really rather we believe that this is actually for our own protection. However, it is just as likely that the phone contains little to no information relevant to the case, and yet unlocking it may open a window into unrelated personal matters in his own world and in other people's. There is no way to know for certain without unlocking the device.

The precedent set by Apple if they were to comply is all the more chilling: it would send a message to the hundreds of millions of smartphone users out there that companies can be forced to betray their trust if ordered to do so by a Government entity. There is no guarantee that the reasons would always be legitimate.

It is also worth mentioning that this same "For Your Protection" mindset, and the huge fear of terrorism and crime, is exactly why all reason seems to go right out of the window the moment we go anywhere near an airport. Western society has become paralysingly afraid of extremism and terrorism, and this makes it all the easier to encroach on your freedoms in the name of "fighting terror".

The FBI's proposal in this instance is that Apple should simply build a version of the iOS software, specifically for this one device, that does not implement those security measures, allowing massive numbers of passcode permutations to be entered into the device in a short period of time. Eventually they would hit upon the correct passcode and unlock the handset. Apple are the only ones with the ability both to engineer the firmware to do this and to cryptographically sign it so that it will actually run on the phone. Attempting to install modified firmware that is not correctly signed will fail, as it will be rejected by the handset; it may even render the handset altogether useless.
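The arithmetic shows why those measures matter so much. The sketch below assumes the widely cited figure of roughly 80 milliseconds of key-derivation time per passcode attempt; with the rate limits removed, an exhaustive search becomes startlingly quick.

```python
# Rough brute-force estimates assuming ~80ms of key-derivation time
# per passcode attempt (a widely cited figure for this class of
# hardware) once the rate limits and the wipe are removed.
ATTEMPT_TIME_S = 0.08

for digits in (4, 6):
    worst_case_s = (10 ** digits) * ATTEMPT_TIME_S
    print(f"{digits}-digit passcode: up to {worst_case_s / 3600:.1f} hours")

# Prints roughly:
#   4-digit passcode: up to 0.2 hours   (around 13 minutes)
#   6-digit passcode: up to 22.2 hours
```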

However, the problem is that it is incredibly difficult to perform this once only. As a friend rather colourfully described to me this morning, Apple could certainly select a team of engineers, place them in complete isolation, and then kill them and destroy all of the equipment once the work was finished. Short of that, it is near-impossible to guarantee that this exercise would not be repeatable in the future.

Worse, it would also prove the feasibility of such a backdoor and reinforce the idea that the Government can strong-arm companies into taking such action again in the future.

It is becoming increasingly evident that personal liberties are not high on the agenda for the average Government. In the UK, the Government is working to pass a law under which citizens will have their Internet browsing activity tracked for up to a year, and under which a number of organisations, including the Police, would have unrestricted access to this information. (This is worsened only by the fact that those drafting the law seem to have a complete misunderstanding of how such technology actually works.) The Snowden revelations have already proven that power structures in the US and all around the world have been engaging in massive surveillance operations, gathering Internet traffic and communications and processing them in private for a variety of reasons, often without warrant or cause.

It is for this reason that more and more services are starting to encrypt user information and communications with strong cryptography. Apple have traditionally been at the forefront of this approach, with services such as iMessage and FaceTime making use of very strong transit encryption. More and more people are turning to services such as these, and to others that also employ strong encryption (for example, Telegram Secret Chats, WhatsApp end-to-end encryption, WhisperSystems' Signal and others), in order to protect their own liberties and to take themselves away from potential unwanted spying.

Apple's Tim Cook writes in his open letter with a clear understanding of this issue:

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.
— Tim Cook

Other large companies, including Google, have since come out in support of Apple's stance to protect individual privacy, and against undermining the same defences that we all use day-to-day to protect our own interests (although admittedly, some organisations shout louder than others). It is not just Apple devices that are affected here, either: Android devices, amongst many others, employ similar encryption to protect user information.

This precedent is hugely important, and it is one that affects all of us. We must be guaranteed the ability to adequately protect our privacy and our interests, and we must be free to reject propaganda set forth by Governments that is justified to us time and time again in the name of "fighting terror". We must be free to continue to employ strong encryption, not just to protect ourselves from those abusing their power, but also from the hackers and fraudsters and others who have a vested interest in the kinds of information we so frequently encrypt.

The technology industry is moving in a clear "encrypt everything" direction, and this will thankfully continue to present a major challenge for those who wish to perform mass surveillance or to break into our private assets. In the meantime, I fully welcome and agree with Apple's resistance to this order. It's time that more of us stood up and delivered the same resounding message: that our right to privacy is no less important in the Information Age than it ever was before.


Technical Note: The attack proposed by the FBI is only possible in this instance because of the model of the phone in question: the iPhone 5C, which predates the "Secure Enclave" technology used in newer devices to store encryption keys in an even more hardened fashion. The Secure Enclave also includes a number of features to prevent the encryption keys from being stolen, and these protections are implemented altogether separately from the operating system. As the iPhone 5C does not have this built in, protective measures are instead built into iOS to provide the same effective level of protection. The same attack, if performed on a device with a Secure Enclave (the iPhone 5S and newer), would be ineffective, as the integrity of the Secure Enclave would not be affected in any way by an iOS firmware upgrade.

Porting cjdns to the Ubiquiti EdgeRouter

cjdns is experimental software that aims to produce an end-to-end encrypted IPv6 network that guarantees security and privacy. A routing algorithm loosely based on Kademlia is used to establish routes to other nodes in the network. Having recently come to own a Ubiquiti EdgeRouter X, I started to wonder how easy it might be to port cjdns to the ER-X.

The operating system on the ER-X, known as EdgeOS, is actually a fork of the Vyatta virtual router system, which is itself derived from Debian Linux. The system is built around a dual-core MIPS processor with 256MB of DDR3 RAM and a further 256MB of NAND flash storage. There are five Ethernet ports, including one supporting Power-over-Ethernet (PoE), all of which can be joined to a hardware-driven switch. Certainly more than capable of stepping up to the job.

The first step of the process was to build an environment that could be used to cross-compile the cjdns binary itself for the MIPS architecture of the ER-X. As it turned out, a Debian Jessie environment proved suitable for this, using the crossbuild toolchains. The build system packaged with cjdns already includes some cross-compilation support, so a few easy steps later I had written a Makefile that would build cjdns using the MIPS toolchain. Fairly plain sailing so far.

(As I later found out, building for the EdgeRouter X was significantly easier than building for the EdgeRouter Lite, because the ER-L uses a 64-bit MIPS architecture instead of the 32-bit one used by the ER-X. The Debian embedded crossbuild toolchains don't seem to have any support for 64-bit MIPS, so in the end a contributor on GitHub dug out an altogether different toolchain from Codescape.)

However, building the cjdns executable itself was only a minor part of the battle. Vyatta-based systems, EdgeOS included, have a command-line configuration interface (known as vyatta-cfg) which allows the configuration of the router and its various components. The cjdns package had to fit into this in order to be user-friendly; otherwise the user of the software would need to edit the cjdns configuration files manually, which is far from ideal.

The vyatta-cfg system actually draws all of its supported configuration commands from a folder structure stored on the system itself, in which every configuration node is defined with a number of options, including the types of values that should be accepted and what to do with those values once they are added to, updated in or removed from the system configuration. Not really knowing where to start with this, I figured it would be easiest to begin with an existing Vyatta package and modify its contents. I later discovered that vyatta-cfg is actually fairly well documented.

Defining the options that should be available to configure cjdns was still not enough, however. After all, the vyatta-cfg system still didn't know how to generate a configuration that cjdns would be able to parse. (For the record, the cjdns configuration file is JSON, which made it somewhat easier to manipulate.)

The final part of the puzzle was to write a script that could take a variety of inputs from the vyatta-cfg system and use them to modify the cjdns configuration file, adding, changing or removing values based on the input to the Vyatta command line. I chose to write this script in Python for two reasons: I wanted to reinforce my Python skills a little, and Python already seems to be fairly widely used within Vyatta/EdgeOS. It seemed like a logical choice.
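The shape of that script is roughly as follows. This is a simplified sketch: the real vyatta-cfg integration passes values in a different manner, and the key path and file location shown here are assumptions for illustration.

```python
#!/usr/bin/env python3
# A simplified sketch of the config-updating script: take a dotted key
# path and a value from the command line and apply them to the cjdns
# JSON configuration. The real vyatta-cfg hooks pass values differently,
# and the config location here is an assumption for illustration.
import json
import sys

CONFIG_PATH = "/etc/cjdroute.conf"  # assumed location

def set_value(config: dict, dotted_path: str, value: str) -> None:
    """Set e.g. 'interfaces.UDPInterface.bind' to the given value."""
    keys = dotted_path.split(".")
    node = config
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

def main() -> None:
    dotted_path, value = sys.argv[1], sys.argv[2]
    with open(CONFIG_PATH) as f:
        config = json.load(f)
    set_value(config, dotted_path, value)
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=2)

if __name__ == "__main__":
    main()
```

(One caveat worth noting: stock cjdns configuration files may contain comments, which a strict JSON parser will reject, so a real implementation needs to account for that.)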

Finally, all of this was pulled together into a Debian package, and the net result is a package that can be deployed to the EdgeRouter to provide cjdns functionality. At present, the functionality necessary to set up cjdns peerings is there, both over UDP and using Ethernet beacons, and so is firewall configuration. There are still some features missing, such as configuring IP Tunnel and specifying Ethernet peers by MAC address, but I plan to add these soon. There is also a decided lack of input validation at present, so entering bad values will probably just result in cjdns failing to start.

I have been running this package on my ER-X for nearly a month now with very few problems. Sometimes the cjdns executable crashes (after all, cjdns is still alpha software), but I have found that the easiest way around this in the interim is to configure a scheduled task within the CLI that checks every minute whether the application is still running, and starts it if not. Not entirely ideal, but I haven't yet had the time to write the necessary boilerplate code to "supervise" the process correctly.
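The scheduled task amounts to only a few lines. A sketch follows, assuming that the binary is installed as cjdroute and reads its configuration from standard input; the paths are illustrative rather than definitive.

```python
#!/usr/bin/env python3
# A sketch of the watchdog invoked by the scheduled task: if cjdroute
# is not running, start it again. Paths are illustrative assumptions.
import subprocess

BINARY = "/usr/bin/cjdroute"   # assumed install location
CONFIG = "/etc/cjdroute.conf"  # assumed config location

def is_running() -> bool:
    # pgrep exits non-zero when no matching process is found.
    return subprocess.run(["pgrep", "-x", "cjdroute"],
                          capture_output=True).returncode == 0

if not is_running():
    # cjdroute reads its configuration from standard input.
    with open(CONFIG) as f:
        subprocess.Popen([BINARY], stdin=f)
```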

I have open-sourced this project and it is hosted on GitHub, along with documentation on how to build it using a Debian Jessie system and how to configure it once installed on the EdgeRouter.

Disconnecting from social networking

Recently, I did something that would send a shiver of horror down the spines of teenagers and adults alike worldwide: I deleted Facebook and Twitter from my phone. I did this a week ago. The results have been rather interesting.

Social networking has essentially hyper-connected us with everyone else we know, be they good friends, family, or even just people we've become casually acquainted with once or twice. Large social networks, like Facebook and Twitter, unfortunately have absolutely no understanding of the varying dynamics of friendship. These sites don't really have any way of knowing whether the people that you interact with online are actually good friends or not, or more importantly, whether they are people you really care about.

After all, do you really care that a girl that you went to school with fifteen years ago has just moved house? When was the last time that you ever really found yourself wondering what your colleague was having for lunch? 

The result of this is that we find ourselves connected online with hundreds, if not thousands, of people who actually have very little to no impact on our lives. Several times a day, you scroll and scroll and scroll through a mine of information about people, their photographs and innermost thoughts, their political rants and horrendously outdated scientific or religious beliefs, trying to pick out the little tidbits that might actually be of some relevance to you.

At the beginning of this week, I came to the conclusion that this actually isn't good for me. I realised that:

  1. It's just plain-and-simple information overload. People post so much information onto social networks, and 99% of this content is simply uninteresting and/or has zero value. Political discussions online serve only to descend into argumentative chaos, and I think most people would agree that the typical religious or scientific comment is bound to garner an equally negative reaction.
  2. The people that you really care about get lost in the noise. In an age where you are expected to be constantly available to everyone, the people you are really interested in communicating with somehow get buried in the mix. Facebook, in particular, seems really bad at showing you posts from your actual friends. The algorithm used to decide which posts you see is pretty hopeless, and is probably motivated more by advertising revenue than by actual human connections.
  3. Knowing what everyone else is doing makes you feel worse about yourself. After all, it is human nature to share the most interesting details of our lives, not the mundane and repetitious parts. When everyone looks like they're having so much more fun than you, it's important to take a step back and apply a pinch of salt before comparing your life to theirs.
  4. Checking up on what has happened in the world becomes more and more addictive, to the point that it can take up quite a bit of time. Ultimately, it doesn't really do any good, either.

You'll note that at the beginning of this article I mentioned only having deleted Facebook and Twitter from my phone, rather than having deleted my accounts on these services altogether. Mobile is where we spend most of our time consuming social networking, after all, so this made sense as a good place to start.

In the last week, however, I have only logged into Facebook once, and that was from a computer. Beyond that, I have not even felt the compulsion to use it. A friend mentioned to me the other day that I have probably missed out on a new internet meme, a stick-man called "Bob". Somehow I don't feel terribly disappointed by this.

I have used Twitter from a computer on a couple of occasions, but again, the compulsion to sit and scroll pointlessly through the feed just because it's there has largely disappeared. At this point I have also learned that very few of the people I actually know in real life and care about even use Twitter, so largely all I have filtered out are people that I haven't met (and probably never will) and organisations with a social networking presence. That doesn't seem like a particularly great loss either.

The amazing thing about this is that such a minor change, even in just a week, has had positive results in my life. It has caused me to stop focusing on the wrong people, and to just reach out to the right ones. I feel more balanced and less ready to strangle some of my acquaintances. Overall, I just feel better.

Instant messaging and phone calls are still big winners in my world, and now that I no longer learn what's going on in the world through social osmosis, I feel like the conversations that I am having with my family and friends have become more interesting and meaningful. Even just in a week, I am starting to understand the benefits of disconnecting from social networking, even just partially. The next step may be to remove myself from these services altogether.

I have no doubt that doing so may cause me simply to be forgotten by a substantial number of people (after all, if you're not directly available to someone through social networking, you might as well be dead), but I fully expect that it will continue to have positive effects.

I think a growing number would agree, all said and done, that social networking has actually become rather anti-social.