Understanding ICMP and why you shouldn't just block it outright

ICMP, or "Internet Control Message Protocol", is a protocol designed to help computers understand when things go wrong out on a network. It's a supporting protocol - that is to say, IP does not strictly require ICMP to function, but typical networking devices such as routers and endpoints are expected to speak and understand it. You might also know ICMP thanks to "ping", a utility designed to see if a remote computer on a network is alive and connected.

You might also know about ICMP from some security guide that you read online which tells you, unwaveringly, to block ICMP traffic. "ICMP is a security risk," they chorus, "you must filter all ICMP packets from your network!"

Sadly, there are a staggering number of security professionals who actually know very little about the real working mechanisms of IP (and an even more staggering number who know nothing about Layer 2!) writing articles online and broadcasting this advice with a broken, or altogether missing, understanding of what ICMP actually does.

So what does ICMP do?

ICMP is a simple protocol. The packets are usually very small and typically carry a very limited amount of information. Each one contains a type code that describes the purpose of the packet - in essence, the message being sent - and there are a variety of type codes. ICMP is part diagnostic utility and part error-reporting mechanism: it's used to tell a network device about a problem encountered when sending an IP packet.

You're probably familiar with "ICMP ECHO", or as it's known more affectionately, "ping". There are other codes that describe "ICMP DESTINATION UNREACHABLE" (for example, a host is offline, or there is no known route to it) and "ICMP TTL EXCEEDED" (the packet went over more hops than it was allowed to - actually, this is very useful for diagnosing routing loops!). There's a code to describe "ICMP BAD IP HEADERS" for a malformed IP packet, and some more to include information about "ICMP REDIRECTS" - this packet is no good here, send it there instead.
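To make this concrete, here's a minimal sketch (in Python, standard library only) of what an "ICMP ECHO" request actually looks like on the wire: an 8-byte header - type, code, checksum, identifier, sequence number - followed by an arbitrary payload. The checksum is the standard Internet checksum described in RFC 1071.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"  # pad to an even length
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry bits back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP echo request: type 8, code 0, checksum over the whole packet."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)  # checksum field zeroed first
    checksum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, checksum, ident, seq) + payload

packet = build_echo_request(ident=0x1234, seq=1, payload=b"hello, ICMP")
# A receiver validates the packet by checksumming it; a valid packet sums to zero.
assert internet_checksum(packet) == 0
```

Note how little is mandatory here: beyond the 8-byte header, the payload is whatever the sender chooses - a point we'll come back to later.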

Network endpoints and routers should generate these ICMP messages in response to network conditions that prevent a packet from being delivered correctly to its destination. That is, if you send a packet to an IP address for which no route exists, expect a router in the path to respond to you with a "destination unreachable" ICMP message.

What are the consequences of blocking ICMP?

When you block ICMP, you are effectively filtering or dropping these warning packets from being delivered back to the sending endpoint. That IP packet that you sent off before? It never got there, but you'll never find out about it because the ICMP "destination unreachable" message was discarded before it reached you. Therefore your computer just assumes that nothing went wrong, and it will sit and wait, quite often for a long time, before giving up on expecting a response. This is known as a "timeout".

Had the "destination unreachable" packet made it back to you, you would know that there was a problem with the destination and your computer would give up on that connection immediately. You, or your application, would not be forced to wait (sometimes 60 seconds or more!) for the connection to "timeout".

Another important ICMP packet that could be discarded by the filter is an "ICMP FRAGMENTATION REQUIRED" message. When that happens, things start to go really wrong.

Fragmentation? What's that?

So far we've seen the consequences of trying to talk to an unreachable host - whilst timeouts are inconvenient in that case, they are only masking another issue. However, blocking ICMP may even stop you from being able to communicate with reachable hosts too!

The key to this is that not all network links are created equal. To understand why, a bit of Layer 2 knowledge is required. In Ethernet land, a single "frame" of data (that is, a frame carrying an IP packet) can only be so big: by default, an Ethernet frame can carry at most 1500 bytes of payload. Once you go above 1500 bytes, you have to create a new frame to carry the next 1500 bytes, and so on. To stream a large amount of information, you may send hundreds, thousands, or even hundreds of thousands of frames.

Of course, not all network links are based solely on Ethernet. Many broadband providers rely on a protocol called PPP to establish your connection across their network to the Internet, as it provides them with the extra authentication capability to identify you as a specific customer. PPP has additional headers, and when it is also wrapped in an Ethernet frame (this is known as PPPoE), there is less space available for your IP packet. Therefore 1500 bytes actually becomes 1492 bytes once you account for the 8 bytes of additional room needed to "fit" PPP onto the pipe.

This number - whether 1500, 1492 or any arbitrary number of bytes - is known as the "Maximum Transmission Unit", or MTU, and network devices must be aware of the MTU of a given link so that they do not produce frames too big for that link.
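The arithmetic is simple enough to sketch. It's also worth distinguishing the MTU (the largest IP packet a link can carry) from the TCP MSS (the largest TCP payload), which is the MTU minus the IP and TCP headers - 20 bytes each, without options:

```python
ETHERNET_MTU = 1500   # default maximum IP packet size on Ethernet
PPPOE_OVERHEAD = 8    # 6 bytes of PPPoE header + 2 bytes of PPP protocol ID
IP_HEADER = 20        # minimum IPv4 header, no options
TCP_HEADER = 20       # minimum TCP header, no options

pppoe_mtu = ETHERNET_MTU - PPPOE_OVERHEAD
print(pppoe_mtu)                              # 1492
print(ETHERNET_MTU - IP_HEADER - TCP_HEADER)  # 1460: TCP MSS on plain Ethernet
print(pppoe_mtu - IP_HEADER - TCP_HEADER)     # 1452: TCP MSS over PPPoE
```

Those last two numbers (1460 and 1452) are exactly the MSS values you'll see advertised in TCP handshakes on Ethernet and PPPoE links respectively.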

What happens when you send a frame that's 1500 bytes down a link that only supports frames of 1492 bytes? You guessed it - it won't fit. At this point, a router on the path has received a frame from a link where the MTU is 1500, and has tried to send it back out of another interface where the MTU is 1492.

Logically this is an impossible situation, so the router simply discards the frame and sends a "fragmentation required" ICMP message back to the sender to say "this packet is too big for where you're trying to send it, so please make it smaller". (Strictly, this happens when the packet's "don't fragment" flag is set - as it is for most modern traffic.) The sending computer can then break the data down into smaller packets and resend them so that they'll now fit down the link.

The sending computer will also "learn" from this ICMP message, temporarily remembering this condition for that given destination address, so the next packets that get sent will not exceed the given MTU size, avoiding the problem and allowing seamless communication back and forth. 
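This learning process can be sketched as a simple loop - try the packet, and whenever a hop reports "fragmentation required", shrink to the next-hop MTU that hop advertised and try again. (The hop MTUs below are illustrative.)

```python
def path_mtu(link_mtus, initial_size):
    """Simulate Path MTU Discovery: shrink the packet size each time a
    hop reports 'fragmentation required' with its smaller next-hop MTU."""
    size = initial_size
    while True:
        # Find the first hop whose link can't carry a packet of this size.
        bottleneck = next((mtu for mtu in link_mtus if mtu < size), None)
        if bottleneck is None:
            return size  # fits every link; the sender caches this for the destination
        size = bottleneck  # the ICMP message tells us the next-hop MTU to use

# An Ethernet sender (1500) reaching a destination across a PPPoE link (1492):
assert path_mtu([1500, 1492, 1500], initial_size=1500) == 1492
```

If the "fragmentation required" messages are filtered, the loop above never runs: the sender keeps emitting 1500-byte packets that silently die at the PPPoE hop.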

So if the sender never gets the "fragmentation required" packet...

... then the router on the path will discard the packet that's too big and send an ICMP message back asking you to send smaller packets instead. That ICMP message will be blocked by your firewall, you will never get the memo, and your computer will assume that everything was fine.

In reality, it isn't fine: your packet was discarded by an upstream router, so it simply never got there, and you never found out why because the warning ICMP packet was discarded before it reached you. You sit and wait yet again for the "timeout", with absolutely no clue as to whether the data you sent actually arrived.

This process is known as "Path MTU Discovery", and it is a core feature of IP, ensuring that larger packets can be sent across different types of network link without too much trouble. By blocking ICMP, you are completely removing the computer's ability to learn about these conditions, creating unstable connectivity to that destination.

Okay, so why do people claim it's such a security problem?

There are some legitimate reasons for believing that ICMP is a security issue, and there are plenty of outright myths.

One real problem is that "ICMP ECHO" packets (those used for "ping") can carry an arbitrary payload of pretty much any kind, which makes them useful for tunnelling other traffic inside "ICMP ECHOs" to bypass network boundary filtering, using specialised software built for the purpose. In reality, most people are completely unaware that this is even a possibility, and some more intelligent firewalls can identify when it is happening.

Another is that "ICMP ECHO" actually reveals the existence of a device on the end of a given IP address very easily. The reality is that there are actually plenty of other ways to determine this, therefore this is a bit of a non-issue. Knowing that a machine exists doesn't really help you all that much - ICMP doesn't provide you with a "backdoor". You would still need some other route in by means of other open ports, and those open ports are just as likely to reveal the existence of the machine as "ping" is. 

Some are concerned that the "ICMP TTL EXCEEDED" packets may actually reveal the existence of routers on a path between two given hosts. This is actually the basis for how "traceroute" functions - send multiple packets with deliberately incremental TTL values, allow them to expire in transit and capture which routers report back with the "ICMP TTL EXCEEDED" warning.

Overly security-conscious individuals would prefer not to reveal the existence of things because it provides a kind of "security through obscurity", and therefore will just block all ICMP, not understanding that it has many functions outside of "ICMP ECHO". This is sadly a real world knowledge gap for a lot of IT professionals.

But I really really don't want people to be able to ping devices in my network.

Okay. In which case, what you need is to configure your firewall properly so that rather than blocking all ICMP traffic, you block only the ICMP types used by "ping". For your information, those are "ICMP ECHO REPLY" (type 0) and "ICMP ECHO REQUEST" (type 8).

That way, other diagnostic traffic, such as "ICMP DESTINATION UNREACHABLE" (type 3) or "ICMP TTL EXCEEDED" (type 11), will still be allowed through, and your computers will be able to learn about network problems properly. Hooray!
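As a concrete sketch - assuming a Linux machine firewalled with iptables (these are the standard iptables ICMP type names; adapt the syntax to your own firewall) - that policy looks something like this:

```shell
# Block ping specifically: echo-request (type 8) in, echo-reply (type 0) out
iptables -A INPUT  -p icmp --icmp-type echo-request -j DROP
iptables -A OUTPUT -p icmp --icmp-type echo-reply   -j DROP

# Explicitly allow the diagnostic messages that Path MTU Discovery
# and fast connection failures depend on
iptables -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
iptables -A INPUT -p icmp --icmp-type time-exceeded           -j ACCEPT
```

The point is the shape of the rules, not the exact syntax: match on the ICMP type, drop only the echo types, and leave the error-reporting types alone.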

If you were particularly concerned about revealing the existence of routers by means of "ICMP TTL EXCEEDED" (type 11), then this specific message type could be filtered without too much ill-effect, at the expense of creating timeouts in genuine circumstances - for example, when fragment reassembly times out, or when a routing loop drives the TTL down to zero.

What else should I know about ICMP?

The problem is that even blocking "ping" can be bad in some scenarios - some software may use this mechanism to see if a host is available before trying to speak to it.

An extremely widely used example of this is Active Directory for domain-joined Windows clients, where ICMP is used to perform "slow link detection" before downloading and applying Group Policy Objects (GPOs). For more information on the consequences of blocking ICMP for Active Directory, take a look at TechNet. Seeing problems with GPOs applying at logon? This might be related.

The other thing to be aware of is that ICMP is typically classed as "low priority" traffic by many routers and firewalls. That is, these devices should not prioritise ICMP over normal IP traffic, so ICMP should have little-to-no negative impact on the throughput of your network. If you are concerned that your network could be flooded with ICMP traffic, you can safely rate-limit it - so long as you do not throttle it so aggressively that legitimate ICMP packets end up being dropped regardless.

In conclusion...

... there's a lot more to ICMP than initially meets the eye, and there are very real cases where ICMP is needed. Don't take the decision to block it outright lightly.

The "Year of the Linux Desktop" is a myth

The approach to New Year isn't complete without developers across the world asking whether this coming year is going to be the year that Linux conquers the desktop. The year that Windows and macOS are dethroned, the year that open source wins and distributors of proprietary software cower in fear. This isn't a new discussion - it has been taking place for years, usually with the same optimism about its chances of success.

But this isn't going to be the year that Linux wins at home, and it isn't going to be the year to throw out the proprietary operating systems of the world. To be frank, next year won't be, either. Or the year after that.

So what's the issue? Linux is already sitting at the core of servers worldwide, Android devices, home wireless routers and a whole range of other commodity items. Linux is, no doubt, wildly successful. Doesn't it stand to reason that it can be just as successful on your standard home or office PC?

In fact there are a number of issues that prevent Linux from enjoying the same success in these places.

The distribution model is user-hostile

Linux distributions are operating systems made up of a Linux kernel, a collection of software utilities and often a package management system. Many of these distributions are free to obtain and advertise themselves for a variety of purposes: some are for embedded systems or for a specific purpose, but there are a whole host of general-purpose distributions, such as Debian, Ubuntu, Mint, Gentoo and CentOS. 

For power users and developers, choosing a distribution might be second-nature. It might be that you prefer a source-driven package management system, like emerge on Gentoo, or perhaps you would prefer to avoid systemd like the plague. Perhaps you would like to stick with a distribution that claims to be "pure" and doesn't contain closed-source binary-only drivers. Maybe you are using an obscure computer architecture that is supported by some distributions and not others. 

However, for regular users at home, choosing a distribution is a daunting and often confusing task. Often it is not clear whether one distribution will provide any real benefit for a given user over another.

The diversity between distributions can cause headaches for inexperienced users and software developers alike. The creators of different distributions often pick different system libraries, or even different versions of the same library, when building their systems. This means there is absolutely no guarantee of binary compatibility between Linux distributions. There's no "write once" or "compile once", because the system that you built the application on probably doesn't look anything like the system that your users will run it on. You don't even have a guarantee that the correct prerequisites are on your user's system. Which leads us on to a phenomenon known as "dependency hell".

The fires are still burning hot in Dependency Hell

Let's imagine that you have a library on your system that takes an MP3 file and plays it, or a library that takes a JPEG photograph and renders it. You want to write an application that takes advantage of functionality provided by these libraries, so you set off writing your application. 

You then take your newly written application to a friend's machine and try to run it. It fails to launch. What went wrong? It turns out that your friend is probably running a different distribution, or a different release of the same distribution, or maybe they've just not installed any patches in the last six months. In any case, the library you leveraged in your application is a different version on the target machine, and the developers of that library were not careful enough to preserve binary compatibility between versions.

Is it possible to avoid this issue?

You can perhaps build the library into your application directly (static linking). This way, you do not depend on the target computer having the correct version of the library. This sounds good in theory, but it has unintended side effects, namely that your application bloats in size, especially if the library in question is large or complex. It also assumes that the library itself has no dependencies of its own. Many do, so this falls over quickly.

Alternatively, you can package your application such that it will only install through a package management system if specific dependencies are met. This is the more commonly used approach: the package manager resolves the dependencies itself when installing your application, making sure the prerequisites are present. The issue is that those specific versions of those specific libraries also need to be available through the package manager, and many aren't. It's not common practice for package repositories to maintain an entire back-catalogue of every version of every library or utility, and there's no guarantee that the specific version you need will be available at all from a given repository. You may even end up with multiple versions of the same library installed on the same system.
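To make the failure mode concrete, here's a toy sketch of the version-constraint checking a package manager performs at install time (the library names and version scheme are hypothetical, and real dependency resolvers are far more involved):

```python
def satisfies(installed: tuple, minimum: tuple, below: tuple) -> bool:
    """Check an installed version against a 'minimum <= v < below' constraint."""
    return minimum <= installed < below

# What's on the target machine vs. what our application was built against
installed = {"libmp3": (2, 1, 0), "libjpeg": (8, 0, 0)}
requirements = {"libmp3": ((2, 0, 0), (3, 0, 0)),
                "libjpeg": ((9, 0, 0), (10, 0, 0))}

missing = [name for name, (lo, hi) in requirements.items()
           if not satisfies(installed.get(name, (0, 0, 0)), lo, hi)]
print(missing)  # ['libjpeg'] - installation fails unless the repo can supply libjpeg 9.x
```

If the repository doesn't carry a libjpeg in the 9.x range, the user is stuck - and that, in miniature, is dependency hell.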

Neither of these approaches is solid. The only reason this is not such a problem with commercially-developed Windows or macOS is that those systems are much more tightly controlled to preserve compatibility between releases, and there are not multiple distributions of them to consider in the way that there are in the Linux world.

Can a normal user really be expected to understand what is taking place when installing packages or resolving dependencies?

There's still not much vendor support for hardware

You've just bought a new printer. You bring it home, open the box and plug it in. Nothing happens. Oh, wait - we haven't installed the driver software. Ordinarily you'd get the CD out of the box (or download the drivers from the web), install them, and done! The printer now prints exactly as advertised.

The problem is that many hardware manufacturers simply don't produce hardware drivers for Linux. Many of the drivers available in the Linux kernel have been developed by the open-source community to fill a gap, but these drivers are often imperfect too, covering only basic functionality or remaining incomplete due to a lack of proprietary knowledge about that particular product. How do you know that the peripheral you've just bought will actually work on your Linux computer at home?

In many cases, the scene is not as dire as it once was. For example, nVIDIA and AMD are now fairly good at providing drivers for their graphics cards and chipsets. On the other hand, try finding drivers for some Intel kit: there are still no proper Linux drivers for some Intel graphics adapters (notably the PowerVR-based ones found in certain Atom platforms), nor is there an Intel RAID driver for Rapid Storage Technology. Hell, some whole Intel Atom CPUs are simply not accounted for in the Linux kernel. How does a user at home even know that when they install Linux on their PC, all of their hardware will be fully supported?

Inexperienced computer users simply don't have the knowledge either to recompile the kernel or to load additional kernel modules when drivers are needed. The process of handling and managing drivers in the Linux world has never been streamlined nor simplified. 

There still isn't much vendor support for software, either

More often than not, big-name software doesn't appear on Linux desktops either. Perhaps the most famous example is Microsoft Office, which is near-universally relied upon. Other common applications, like iTunes, also have no Linux support. Steam offers a number of games on Linux, but they are very few compared to those available on Windows, or even on Mac.

Many open-source alternatives are available, but often they are lacking in features or usability. It's not reasonable to suggest that OpenOffice is really a suitable replacement for Microsoft Office, nor that GIMP is a suitable replacement for the Adobe Creative Suite. This is not helped by the fact that common day-to-day utilities can change dramatically even between different desktop environments, of which there is no shortage. Just ask the average crowd of Linux users about their favourite text editor, let alone anything more complicated. Can we really expect, at this stage, that the open-source community will produce a whole desktop that works for the majority?

The future of Linux probably isn't on the desktop anyway

If you want to look at some major Linux success stories, look no further than Android, Google's originally-mobile-now-everywhere operating system. It's largely successful because a huge amount of effort was put into the Android runtime to follow the "write once, run everywhere" model. It's also really not very Linux-y at all. Core Android kernel patches have since been upstreamed into the main Linux kernel source tree, but on most Android devices, even the user-space utilities beneath the "pretty" user interface vary dramatically.

And you know what? It doesn't matter, because nobody who writes Android applications needs to worry about which user-space Linux utilities will or won't be present on the system, or even, to a certain extent, which system libraries are present, as their needs are largely met by the Android runtime. This is much closer to the model that Microsoft and Apple use: providing a common and stable API.

The open-source community simply lacks the cohesion to maintain a unified vision of a product in the way the software giants do. This is why there are so many different desktop environments available on Linux-based distributions, most of them completely unable to agree on even common design or usability principles. Often the technically-brilliant individuals of the open-source community do not understand normal, real-world users, and don't have the funds, the time or the capability to research what really works for everyone else out there. (At this point it feels only appropriate to mention Richard Stallman: no doubt a genius, but one with large and frequent completely-not-of-this-earth moments.)

So in the meantime, we'll continue to see the Linux kernel at the heart of other products, like Android. Linux-based desktop distributions won't disappear either, remaining largely reserved for the technically capable or the particularly willing. Manufacturers might even provide Linux as an alternative operating system, like we saw five or six years ago with the great netbook explosion (which, perhaps understandably, failed).

But the Year of the Linux Desktop? The year where you step into John Lewis or Currys and pick from swathes of Linux-powered computers? It's just not going to happen.

Three years on, should we trust Telegram?

In August 2013, two Russian developers—and brothers—Nikolai Durov and Pavel Durov released Telegram to the world, a new instant messaging platform with a simple promise: to provide the privacy and security that competing platforms of the time weren't delivering. Telegram is usable on mobile devices and desktop operating systems alike, and promotes Secret Chats as a way to securely exchange messages with end-to-end encryption. Indeed, Telegram is quite pleasant to use for the most part: messages are delivered very quickly, the available mobile and desktop clients provide a polished user experience, and there's no dependency on your mobile device having an active connection to use Telegram from another device (unlike WhatsApp).

Most unusual about the design of Telegram, however, was the decision to engineer a new encryption scheme called MTProto, built around symmetric encryption keys, rather than adopting well-tested, well-known encryption schemes. Cryptographers have expressed doubt, warning that custom-designed cryptography is more likely to contain flaws that compromise the security or privacy of the end user. Some experts, including researchers at Aarhus University, have expressed concern about whether the encrypted messages are properly authenticated, leading to potential weaknesses, and MTProto has received criticism from the Electronic Frontier Foundation (EFF). On this evidence alone, the outlook doesn't seem good.

Perhaps most troubling of all is the fact that Telegram doesn't actually perform end-to-end encryption of instant messages by default, instead reserving this functionality for "Secret Chats", which must be manually initiated by the user and can only take place between two specific devices (a Telegram user with multiple devices will only be able to interact with that secret chat session on the device it was initiated from or accepted at). Telegram claim that this is because cloud syncing of instant messages between devices is more convenient for non-secret chats than the guaranteed security that end-to-end encryption provides. What this means in practice is that normal instant messages sent over Telegram are stored by Telegram in a format that they can decrypt themselves. Perhaps we should just hope that nobody raids Telegram's datacenters.

Take Apple, for example, who took a different approach with iMessage that allows them to provide end-to-end encryption between devices whilst still providing the illusion of message sync across devices. Instead of encrypting the message once for the recipient user, iMessage actually encrypts the message for each recipient device separately, as each device has its own encryption keys. In effect, if you own an iPad, an iPhone and a Mac and a friend sends you an iMessage, they are actually encrypting and sending the message three times, once for each device. Every device receives a copy of every message, so you can jump between devices without losing history, but no actual syncing of message history takes place between clients and the iMessage server. Everything is end-to-end, as it should be.
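The fan-out idea is easy to sketch. The toy cipher below (a SHA-256-derived keystream, for illustration only - this is not real cryptography, and it is nothing like Apple's actual implementation) shows the key point: one logical message, encrypted separately under each recipient device's key.

```python
import hashlib

def keystream_xor(key: bytes, message: bytes) -> bytes:
    """Toy stream cipher: XOR the message with a SHA-256-derived keystream.
    Illustration only - do NOT use this for real cryptography."""
    stream = b""
    counter = 0
    while len(stream) < len(message):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(message, stream))

# One logical message, encrypted once per recipient device key
device_keys = {"iPhone": b"key-A", "iPad": b"key-B", "Mac": b"key-C"}
message = b"meet at noon"
ciphertexts = {dev: keystream_xor(key, message) for dev, key in device_keys.items()}

# Every device can decrypt its own copy; the server only ever sees ciphertexts
assert all(keystream_xor(device_keys[dev], ct) == message
           for dev, ct in ciphertexts.items())
```

The cost is that the sender does the encryption work once per device rather than once per recipient, but in exchange the server never needs a key of its own - which is exactly the property Telegram's non-secret chats give up.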

There's no doubt that the methodology used by Apple works. Huge volumes of iMessages are sent daily, and a user of iMessage never has to think about whether or not they should really be switching to a secret chat as all messages are end-to-end encrypted by default. This introduces the next significant problem for Telegram as a secure platform: human error.

Humans are typically the weakest link in any secure system, and it only takes one user typing something secret into a non-secret chat by mistake (or just forgetting to initiate a secret chat altogether) for it to be, effectively, game over. It is hugely irresponsible of Telegram to market itself as a secure messaging platform and yet place the responsibility for security solely in the hands of the user, all whilst making the baseless assumption that the user will actually remember or recognise when a secret chat should be used instead of a regular one. In fact, it makes an even worse assumption: that all Telegram users even know that secret chats exist or how they work - something we should not assume of those who have simply been told to download Telegram by their friends and family without having performed any further reading or research.

That's not to say that iMessage is perfect by any means. Indeed, iMessage has its own weaknesses, largely in the fact that you must trust the public key infrastructure that Apple uses for iMessage-capable devices to discover each other's public keys. Specifically, you must trust that Apple will not inject additional public keys into the directory without your knowledge or consent, given that Apple devices will not notify you when someone else's public keys change. This is not an unsolvable problem, however: it could easily be mitigated by allowing users to control which keys (or rather, devices) they trust, and notifying them when new public keys appear for their contacts. Legitimately, new keys would appear if someone logged into iMessage from a new device, but equally they might appear if a sneaky Government were trying to obtain a copy of any messages you sent to that user from that point forward.

Whilst not perfect, however, the iMessage approach is clearly superior. Treat all messages as if they're secret. Treat each of the recipient's devices as a separate entity with its own unique encryption keys. Keep the private keys in the hands of the user's device. Only store messages on the iMessage server in a format that Apple themselves can't decrypt. Don't place any of the onus on the user to be secure. Don't assume the user knows when they are and aren't being secure.

There are a lot of things that Telegram would do well to learn from iMessage.

That Telegram's developers are knowingly overlooking such critical design flaws, however, makes it very difficult to recommend Telegram as a truly secure messaging solution, especially to non-technical friends and family. Whilst competitors such as WhatsApp and Facebook Messenger are already working to spread the deployment of end-to-end encryption further, Telegram seems to have stagnated, and does not appear to be interested in solving the core issues with non-secret chats - or better yet, eliminating non-secure chats altogether.

It may be prudent to not place too much trust in it after all.