The "Year of the Linux Desktop" is a myth

The approach of each New Year isn't complete without developers across the world asking whether the coming year will be the one in which Linux conquers the desktop. The year that Windows and macOS will be dethroned, the year that open source will win and distributors of proprietary software will cower in fear. This isn't a new discussion - it has been taking place for years, usually with the same optimism about its chances of success.

But this isn't going to be the year that Linux wins at home, and it isn't going to be the year to throw out the proprietary operating systems of the world. To be frank, next year won't be, either. Or the year after that.

So what's the issue? Linux is already sitting at the core of servers worldwide, Android devices, home wireless routers and a whole range of other commodity items. Linux is, no doubt, wildly successful. Doesn't it stand to reason that it can be just as successful on your standard home or office PC?

In fact, there are a number of issues that prevent Linux from enjoying the same success on the desktop.

The distribution model is user-hostile

Linux distributions are operating systems made up of a Linux kernel, a collection of software utilities and often a package management system. Many of these distributions are free to obtain and advertise themselves for a variety of purposes: some are for embedded systems or for a specific purpose, but there are a whole host of general-purpose distributions, such as Debian, Ubuntu, Mint, Gentoo and CentOS. 

For power users and developers, choosing a distribution might be second-nature. It might be that you prefer a source-driven package management system, like emerge on Gentoo, or perhaps you would prefer to avoid systemd like the plague. Perhaps you would like to stick with a distribution that claims to be "pure" and doesn't contain closed-source binary-only drivers. Maybe you are using an obscure computer architecture that is supported by some distributions and not others. 

However, for regular users at home, choosing a distribution is a daunting and often confusing task. It is rarely clear whether one distribution will provide any real benefit over another for a given user.

The diversity between distributions causes headaches not just for inexperienced users, but for software developers too. The creators of different distributions often pick different system libraries, or different versions of the same library, when building their systems. This means there is absolutely no guarantee of binary compatibility between Linux distributions. There's no "write once" or "compile once", because the system that you built the application on probably doesn't look anything like the system that your users will run it on. You don't even have a guarantee that the correct prerequisites are present on your user's system. Which leads us onto a phenomenon known as "dependency hell".

The fires are still burning hot in Dependency Hell

Let's imagine that you have a library on your system that takes an MP3 file and plays it, or a library that takes a JPEG photograph and renders it. You want to write an application that takes advantage of functionality provided by these libraries, so you set off writing your application. 

You then take your newly written application to a friend's machine and try to run it. It fails to launch. What went wrong? It turns out that your friend is probably running either a different distribution, or a different release of the same distribution, or maybe they've just not installed any patches in the last six months. In any case, the library you leveraged in your application is a different version on the target machine, and the developers of that library were not careful enough to perfectly preserve API compatibility. 

Is it possible to avoid this issue?

You could build the library into your application directly. That way, you have no dependency on the target computer having the correct version of the library that you need. This sounds good in theory, but has some unintended side effects, namely that your application bloats in size, especially if the library in question is large or complex. It also assumes that the library itself has no dependencies of its own. Many do, so this approach falls over quickly.

Alternatively, you can package your application such that it will only install through a package management system if specific dependencies are met. This is the more commonly used approach. The package manager resolves the dependencies itself when installing your application, making sure that the prerequisites are present. The issue is that those specific versions of those specific libraries also need to be made available through the package manager, and many aren't. It's not common practice for package repositories to maintain an entire back-catalogue of every version of every library or utility, and there's no guarantee that the specific versions you need will be available at all from a given repository. You may even end up with multiple versions of the same library installed on the same system.
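
The version-pinning problem described above can be sketched in a few lines of Python. Everything here is hypothetical - the package names, versions and the single-version repository - but it illustrates why an application pinned to an older library release fails to resolve against a repository that only carries the latest one:

```python
# Toy illustration of "dependency hell": two applications pin different,
# incompatible versions of the same library. All names and versions are
# made up for the sake of the example.

# What each application declares it needs: {library: exact version}
app_requirements = {
    "photo-viewer": {"libjpeg": "8.0", "libpng": "1.6"},
    "music-player": {"libjpeg": "9.0", "libmad": "0.15"},
}

# What the distribution's repository actually carries: one version of each
# library, with no back-catalogue of older releases.
repository = {"libjpeg": "9.0", "libpng": "1.6", "libmad": "0.15"}

def resolve(app):
    """Return a list of unsatisfied dependencies for an application."""
    missing = []
    for lib, wanted in app_requirements[app].items():
        if repository.get(lib) != wanted:
            missing.append(f"{lib} {wanted} (repository has {repository.get(lib)})")
    return missing

print(resolve("music-player"))  # [] -- all dependencies satisfied
print(resolve("photo-viewer"))  # ['libjpeg 8.0 (repository has 9.0)']
```

The only "fixes" available are the two flawed approaches above: either the repository carries every historical version, or the application bundles the exact library it was built against.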

Neither of these approaches is solid. The only reason this is less of a problem on the commercially-developed Windows or macOS is that those systems are much more tightly controlled to preserve compatibility between releases, and there are not multiple distributions of them to consider in the way that there are in the Linux world.

Can a normal user really be expected to understand what is taking place when installing packages or resolving dependencies?

There's still not much vendor support for hardware

You've just bought a new printer. You bring it home, open the box and plug it in. Nothing happens. Oh, wait. We haven't installed the driver software. Ordinarily you'd get the CD out of the box (or download the drivers from the web), install them, and you're done: the printer now prints exactly as advertised.

The problem is that many hardware manufacturers simply don't produce drivers for Linux. Many of the drivers available in the Linux kernel have been developed by the open-source community to fill a gap, but these drivers are often not perfect either, covering only basic functionality or remaining incomplete due to a lack of proprietary knowledge about the particular product. How do you know that the peripheral you've just bought will actually work on your Linux computer at home?

In many cases, the scene is not as dire as it once was. For example, nVIDIA and AMD are now fairly good at providing drivers for their graphics cards and chipsets. On the other hand, try finding drivers for much Intel kit. There are still no Intel graphics drivers for many of their graphics adapters on Linux, nor is there an Intel RAID driver for Rapid Storage Technology. Hell, entire Intel Atom CPUs are simply not accounted for in the Linux kernel. How does a user even know, when they install Linux on their PC at home, that all of their hardware will be fully supported?

Inexperienced computer users simply don't have the knowledge either to recompile the kernel or to load additional kernel modules when drivers are needed. The process of handling and managing drivers in the Linux world has never been streamlined or simplified.

There still isn't much vendor support for software, either

More often than not, big-name software doesn't appear on Linux desktops either. Perhaps the most famous example is Microsoft Office, which is near-universally relied upon. Other common applications, like iTunes, also have no Linux support. Steam offers a number of games on Linux, but they are few compared to those available on Windows, or even on the Mac.

Many open-source alternatives are available, but they are often lacking in features or in usability. It's not reasonable to suggest that OpenOffice is really a suitable replacement for Microsoft Office, nor that GIMP is really a suitable replacement for the Adobe Creative Suite. This is not helped by the fact that common day-to-day utilities can change dramatically even just between different desktop environments, of which there is no shortage. Just ask the average crowd of Linux users about their favourite text editor, let alone anything more complicated than that. Can we really expect at this stage that the open-source community is going to be able to produce a whole desktop that works for the majority?

The future of Linux probably isn't on the desktop anyway

If you want to look at some major Linux success stories, look no further than Android, Google's originally-mobile-now-everywhere operating system. It's largely successful because a huge amount of effort was put into the Android runtime to follow the "write once, run anywhere" model. It's also really not very Linux-y at all. Core Android kernel patches have since been upstreamed into the mainline Linux kernel source tree, but on most Android devices, even the user-space utilities beneath the "pretty" user interface vary dramatically.

And you know what? It doesn't matter, because nobody who writes Android applications needs to worry about which user-space Linux utilities will or won't be present on the system, or even, to a certain extent, which system libraries are present, as their needs will largely be met by the Android runtime. This is much closer to the kind of model that Microsoft and Apple use: providing a common and unchanging API.

The open-source community simply lacks the cohesion to maintain a unified vision of its product in the way that the software giants do. This is why there are so many different desktop environments available on Linux-based distributions, most of them unable to agree on even common design or usability principles. Often the technically-brilliant individuals of the open-source community do not understand normal, real-world users, and don't have the funds, the time or the capability to properly research what really works for everyone else out there. (At this point it feels only appropriate to mention Richard Stallman: no doubt a genius, but also one given to large and frequent completely-not-of-this-earth moments.)

So in the meantime, we'll continue to see the Linux kernel appearing at the heart of other products, like Android. Linux-based desktop distributions won't disappear either, remaining largely reserved for the technically capable or the particularly willing. Manufacturers might even provide Linux as an alternative operating system, like we saw five or six years ago with the great Netbook explosion (which, perhaps understandably, failed).

But the Year of the Linux Desktop? The year where you step into John Lewis or Currys and pick from swathes of Linux-powered computers? It's just not going to happen.

Three years on, should we trust Telegram?

In August 2013, two Russian developers - and brothers - Nikolai Durov and Pavel Durov released Telegram to the world, a new instant messaging platform with a simple promise: to provide the privacy and security that competing platforms of the time weren't delivering. Telegram is usable on mobile devices and desktop operating systems alike, and promotes Secret Chats as a way to securely exchange messages with end-to-end encryption. Indeed, Telegram is quite pleasant to use for the most part. Messages are delivered very quickly, the available mobile and desktop clients provide a fairly pleasant user experience, and there's no dependency on your mobile device having an active connection in order to use Telegram from another device (unlike WhatsApp).

Most unusual about the design of Telegram, however, was the decision to engineer a new encryption scheme called MTProto, built around symmetric encryption keys, rather than using previously tested and well-known encryption schemes. Cryptographers have expressed concern that custom-designed cryptography of this kind may contain flaws that compromise the security or privacy of the end user. Some experts, including researchers at Aarhus University, have questioned whether the encrypted messages are properly authenticated, leading to potential weaknesses, and MTProto has received criticism from the Electronic Frontier Foundation (EFF). From this alone, the outlook doesn't seem good.

Perhaps most worrying of all is the fact that Telegram doesn't actually perform end-to-end encryption of instant messages by default, reserving this functionality for "Secret Chats", which must be manually initiated by the user and can only take place between two specific devices (a Telegram user with multiple devices will only be able to interact with a secret chat session on the device from which it was initiated or accepted). Telegram claims this is because cloud syncing of instant messages between devices is more convenient for non-secret chats, at the cost of the guaranteed security that end-to-end encryption provides. What this means in practice is that normal instant messages sent over Telegram are stored by Telegram in a format that they can decrypt themselves. Perhaps we should just hope that nobody raids Telegram's datacentres.

Take Apple, for example, who took a different approach with iMessage that allows them to provide end-to-end encryption between devices whilst still providing the illusion of message sync across devices. Instead of encrypting a message once for the recipient user, iMessage encrypts it separately for each of the recipient's devices, as each device has its own encryption keys. In effect, if you own an iPad, an iPhone and a Mac and a friend sends you an iMessage, they are actually encrypting and sending the message three times, once for each device. Every device receives a copy of every message, so you can jump between devices without losing history, but no actual syncing of message history takes place between clients and the iMessage server. Everything is end-to-end, as it should be.
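
A toy sketch of this fan-out in Python. The XOR "cipher" below is emphatically not real cryptography, and the device names and keys are made up; the point is purely structural: one logical message becomes one ciphertext per device, each decryptable only with that device's own key, so the server never needs a plaintext copy to give the appearance of sync.

```python
import hashlib

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Derive a keystream from the key and XOR it with the message.
    # Illustrative only -- do not use this as actual encryption.
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(plaintext):
        stream += hashlib.sha256(stream).digest()
    return bytes(p ^ s for p, s in zip(plaintext, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

# The recipient owns three devices, each with its own (hypothetical) key.
recipient_devices = {
    "iphone": b"key-A",
    "ipad":   b"key-B",
    "mac":    b"key-C",
}

message = b"See you at 8?"

# The sender encrypts the same message once per recipient device.
ciphertexts = {name: toy_encrypt(key, message)
               for name, key in recipient_devices.items()}

# Each device recovers the message using only its own key.
for name, key in recipient_devices.items():
    assert toy_decrypt(key, ciphertexts[name]) == message
print("every device decrypted its own copy")
```

The cost of this design is that sending grows linearly with the recipient's device count, which Apple evidently considers a fair trade for keeping every message end-to-end encrypted.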

There's no doubt that the methodology used by Apple works. Huge volumes of iMessages are sent daily, and a user of iMessage never has to think about whether or not they should really be switching to a secret chat as all messages are end-to-end encrypted by default. This introduces the next significant problem for Telegram as a secure platform: human error.

Humans are typically the weakest link in any secure system, and it only takes a user typing something secret into a non-secret chat by mistake (or simply forgetting to initiate a secret chat altogether) for it to be game over. It is hugely irresponsible of Telegram to market itself as a secure messaging platform and yet place the responsibility for security solely in the hands of the user, all whilst making the baseless assumption that the user will remember or recognise when a secret chat should be used instead of a regular one. In fact, it makes an even worse assumption: that all Telegram users even know that secret chats exist or how they work - something that we should not assume to be true of those who have simply been told to download Telegram by their friends and family without having done any further reading or research.

That's not to say that iMessage is perfect by any means. iMessage has weaknesses of its own, largely in the fact that you must trust the public key infrastructure that Apple uses for iMessage-capable devices to discover each other's public keys. Specifically, you must trust that Apple will not inject additional public keys into the directory without your knowledge or consent, given that Apple devices will not notify you when someone else's public keys change. This is not an unsolvable problem, however, and could easily be mitigated by allowing users to control which keys (or rather, devices) they should trust, and by notifying them when new public keys appear for their contacts. Legitimately, this would happen when someone logs into iMessage from a new device, but equally it might happen if a sneaky Government were trying to obtain a copy of every message you send to that person from that point forward.
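
The mitigation described here can be sketched as a simple trust-on-first-use key store. The contact names and key identifiers below are hypothetical; a real client would persist the store and involve the user in the trust decision rather than silently remembering new keys:

```python
# Hypothetical sketch: pin the set of device keys already seen for each
# contact, and surface any previously unseen key so the user can be warned.

known_keys = {}  # contact -> set of key identifiers we have already seen

def check_keys(contact, advertised_keys):
    """Return the keys not seen before for this contact (empty = all known)."""
    seen = known_keys.setdefault(contact, set())
    new = set(advertised_keys) - seen
    seen.update(new)  # trust-on-first-use: remember them for next time
    return new

# First contact: everything is new, so there is nothing to compare against.
check_keys("alice", {"key-iphone", "key-mac"})

# Later, the directory suddenly advertises an extra key for Alice...
suspicious = check_keys("alice", {"key-iphone", "key-mac", "key-unknown"})
if suspicious:
    print(f"warning: new key(s) for alice: {sorted(suspicious)}")
```

Whether the new key belongs to a genuinely new device or to an eavesdropper is exactly the judgment the user should be given the chance to make.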

Whilst not perfect, the iMessage approach is clearly superior. Treat all messages as if they're secret. Treat each of the recipient's devices as a separate entity with its own unique encryption keys. Keep the private keys in the hands of the user's devices. Only store messages on the iMessage server in a format that Apple themselves can't decrypt. Don't place any of the onus on the user to be secure. Don't assume the user knows when they are and aren't being secure.

There are a lot of things that Telegram would do well to learn from iMessage.

That Telegram's developers are knowingly overlooking such critical issues and design flaws makes it very difficult to recommend Telegram as a truly secure messaging solution, especially to non-technical friends and family. Whilst competitors such as WhatsApp and Facebook Messenger are already working to spread the deployment of end-to-end encryption further, Telegram seems to have stagnated, and does not appear to be interested in solving the core issues with non-secret chats, or better yet, eliminating the idea of non-secure chats altogether.

It may be prudent to not place too much trust in it after all.

Are we right to blame Tesla in the wake of autopilot accidents?

Tesla, founded in 2003, have become the benchmark in the production of viable electric vehicles. The rollout of the Autopilot feature, which essentially allows the cars to drive themselves, to many Tesla cars worldwide has resulted in multiple headlines, with questions raised by automotive regulators and Governments worldwide about how safe and mature the technology is. "Can a car really be trusted to drive itself?" they ask.

However, in recent months, a number of accidents involving Teslas in Autopilot mode (and even some that weren't) have also made the news, casting widespread doubt on whether this technology should have been rolled out in the first place. Tesla describe the technology as "beta" - that is, not fully complete and still evolving - and users must accept this in a warning message when they first try to activate the system. Critically, the message warns users that "you need to maintain control and responsibility of your vehicle while enjoying the convenience of Autosteer". The system even continuously monitors for the presence of the driver's hands on the steering wheel, and will slow down after an audible warning if the driver leaves the steering wheel hands-free.

In essence, whilst the system may be able to function relatively autonomously in the right conditions, ultimately responsibility and control remain with the driver at all times, who is able to override the system simply by resuming normal driving. Joshua Brown, who on the 7th May 2016 was tragically killed in a car accident whilst relying on Autopilot, would likely have been able to prevent the accident had his attention been entirely focused on the road ahead, in the same way that any other driver using standard Cruise Control would be expected to take action to remain safe and in control, and to take preventative measures before their vehicle collided with anything.

What we must remember is that Tesla's Autopilot feature is nothing more than glorified Cruise Control, and that whilst we can label the functionality as "semi-autonomous", the car is by no means entirely self-driving.

We therefore cannot pin liability for such accidents solely on Tesla. No self-driving system that exists today is free of flaws, and many only work in the right conditions. Autopilot, for instance, will only work where the lane markings are clear and visible, and should not be used on any route with sharp turns to follow. Even Google's self-driving cars, which are fundamentally aimed at being entirely driverless, have limitations, and have also been involved in road accidents.

It will take decades to find out whether or not self-driving or semi-autonomous vehicles are truly safer than those piloted manually by us, but in the meantime some attention should be drawn to the fact that only a very small number of Tesla vehicles have been involved in Autopilot accidents, fatal or otherwise, compared with the average of five people who die each day in road accidents in manually-controlled vehicles on UK roads.

For Your Protection

Technology has become an ever-present factor in our daily lives. We depend on phones and computers and social networks and internet search and email on a scale that could not possibly have been imagined 20 years ago. We communicate with our friends and our family, sharing photographs and chit-chat and our inner-most thoughts and secrets with one another; we exchange business deals and trade secrets and financial transactions, ranging from the smallest startup to the largest multinational corporations. We exchange unprecedented quantities of data digitally, and we do so with the fair and reasonable expectation that our communications are private. We place our trust in those who supply our technology and our communications infrastructure to take adequate measures to protect our interests. Strong encryption has provided us with that guarantee.

This week, the FBI came to blows with Apple over security measures built into the hugely popular iPhone. The high-profile San Bernardino case, in which 14 people were killed and a further 22 injured in a terrorist attack, has left the FBI with a considerable problem: they feel that crucial evidence may reside on the iPhone owned by one of the terrorists. The FBI approached Apple for help in defeating the security measures built into the phone, in the hope that they may find something useful on it; however, Apple have opposed the request and declined to assist.

The measures in question are features built into the iPhone to protect the information stored within it if the device is ever lost or stolen. The contents of the phone are fully encrypted, and passcode entry to unlock the phone is rate-limited - that is, you can only enter the passcode incorrectly so many times within a given period before the device will wipe itself. Were that to happen, any evidence stored on the phone would be irreversibly destroyed, and the encryption renders it infeasible to retrieve the data without the correct passcode.

At first glance, one might be tempted to side with the FBI. After all, Apple have in this instance obstructed an investigation which may reveal further evidence; nobody likes terrorism, and Governments would really rather we believe that this is actually for our own protection. However, it is just as likely that the phone contains little to no information relevant to the case, and unlocking it may instead expose unrelated personal matters in the terrorist's own life and in the lives of others. There is no way to know for certain without unlocking the device.

The precedent that would be set if Apple were to comply is all the more chilling: it sends a message to the hundreds of millions of smartphone users out there that companies can be forced to betray their trust if ordered to do so by a Government entity. There is no guarantee that the reasons would always be legitimate.

It is also worth mentioning that this same "For Your Protection" mindset, and the huge fear of terrorism and crime, is exactly why all rationality seems to go straight out of the window the moment we go anywhere near an airport. Western society has become paralysingly afraid of extremism and terrorism, and this makes it all the easier to encroach on our freedoms in the name of "fighting terror".

The FBI's proposal in this instance is that Apple should build a version of the iOS software, specifically for this one device, that does not implement those security measures, allowing massive numbers of passcode permutations to be entered into the device in a short period of time. Eventually they would hit upon the correct passcode and unlock the handset. Apple are the only ones with the ability both to engineer the firmware to do this and to cryptographically sign it so that it will actually run on the phone. Attempting to install modified firmware that is not correctly signed will fail, as it will be rejected by the handset; it may even render the handset altogether useless.
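
Some back-of-the-envelope arithmetic shows why removing the rate limit matters so much. The attempt rate below is purely an assumption for illustration - the real figure would depend on the hardware and on how the passcode guesses are fed into the device:

```python
# Rough worst-case brute-force times once rate-limiting and the wipe
# counter are removed. The 12.5 attempts/second rate is an assumption
# chosen for illustration, not a measured figure.

attempts_per_second = 12.5

def worst_case_hours(digits: int) -> float:
    """Hours to exhaust every numeric passcode of the given length."""
    combinations = 10 ** digits  # each digit has 10 possibilities
    return combinations / attempts_per_second / 3600

print(f"4-digit passcode: {worst_case_hours(4):.2f} hours")  # ~0.22 hours
print(f"6-digit passcode: {worst_case_hours(6):.1f} hours")  # ~22.2 hours
```

With the security measures intact, by contrast, a handful of wrong guesses triggers escalating delays and eventually the wipe, which is precisely why the FBI needs Apple's signed firmware rather than patience.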

However, the problem is that it is incredibly difficult to do this once only. As a friend rather colourfully described to me this morning, Apple could certainly select a team of engineers, place them in complete isolation, and then kill them and destroy all of the equipment once the work was finished. Short of that, it is incredibly difficult to guarantee that this exercise would not be repeatable in the future.

Worse, it would also prove the feasibility of such a backdoor and reinforce the idea that the Government can strong-arm companies into taking such action again in the future.

It is becoming increasingly evident that personal liberties are not high on the agenda of the average Government. In the UK, the Government is working to pass a law under which citizens will have their Internet browsing activity tracked for up to a year, and a number of organisations, including the Police, will have unrestricted access to this information. (This is worsened only by the fact that those drafting the law seem to have a complete misunderstanding of how the technology actually works.) The Snowden revelations have already proven that power structures in the US and around the world have been engaging in massive surveillance operations, gathering Internet traffic and communications and processing them in private for a variety of reasons, often without warrant or cause.

It is for this reason that more and more services are starting to protect user information and communications with strong cryptography. Apple has traditionally been at the forefront of this approach, with services such as iMessage and FaceTime making use of very strong transit encryption. More and more people are turning to these and other services that employ strong encryption (for example, Telegram Secret Chats, WhatsApp end-to-end encryption, and WhisperSystems' Signal) in order to protect their own liberties and to shield themselves from potential unwanted spying.

In his open letter, Apple's Tim Cook shows a clear understanding of this issue:

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.
— Tim Cook

Other large companies, including Google, have since come out in support of Apple's stance to protect individual privacy and against undermining the same defences that we all use day-to-day to protect our own interests (although, admittedly, some organisations shout louder than others). It is not just Apple devices that are affected here, either: Android devices, amongst many others, employ similar encryption to protect user information.

This precedent is hugely important, and it affects all of us. We must be guaranteed the ability to adequately protect our privacy and our interests, and we must be free to reject the propaganda set forth by Governments, justified to us time and time again in the name of "fighting terror". We must be free to continue to employ strong encryption, not just to protect ourselves from those abusing their power, but also from hackers and fraudsters and others who have a vested interest in the kinds of information we so frequently encrypt.

The technology industry is moving in a clear "encrypt everything" direction, and this will thankfully continue to present a major challenge for those who wish to perform mass surveillance or to break into our private assets. In the meantime, I fully welcome and agree with Apple's resistance to this order. It's time more of us stood up and delivered the same resounding message: that our right to privacy is no less important in the Information Age than it was before.

Technical Note: the proposed attack is only possible in this instance because of the model of the phone in question: the iPhone 5C, which predates the "Secure Enclave" technology used in newer Apple devices to store encryption keys in an even more hardened fashion. The Secure Enclave also includes a number of features to prevent the encryption keys from being stolen, and these protections are implemented entirely separately from the operating system. As the iPhone 5C does not have a Secure Enclave, protective measures are instead built into iOS to provide the same effective level of protection. The same attack, performed on an iPhone 6 or newer, would be ineffective, as the integrity of the Secure Enclave would not be affected in any way by an iOS firmware upgrade.

Porting cjdns to the Ubiquiti EdgeRouter

cjdns is experimental software that aims to produce an end-to-end encrypted IPv6 network that guarantees security and privacy. A routing algorithm loosely based on Kademlia is used to establish routes to other nodes in the network. Having recently come to own an Ubiquiti EdgeRouter X, I started to wonder how easy it might be to port cjdns to the ER-X.

The operating system on the ER-X, known as EdgeOS, is a fork of the Vyatta virtual router system, which is itself derived from Debian Linux. The system is built around a dual-core MIPS processor with 256MB of DDR3 RAM and a further 256MB of NAND flash storage. There are five Ethernet ports, including one supporting Power-over-Ethernet (PoE), all of which can be joined to a hardware-driven switch. It's certainly more than capable of stepping up to the job.

The first step of the process was to build an environment that could be used to cross-compile the cjdns binary itself for the MIPS architecture of the ER-X. As it turned out, a Debian Jessie environment proved suitable for this, using the crossbuild toolchains. The build system packaged with cjdns already includes some cross-compilation support, so a few easy steps later, I had a Makefile that would build cjdns using the MIPS toolchain. Fairly plain sailing so far.

(As I later found out, building for the EdgeRouter X was significantly easier than building for the EdgeRouter Lite, because the ER-L uses a 64-bit MIPS architecture instead of the 32-bit one used by the ER-X. The Debian embedded crossbuild toolchains don't seem to have any support for 64-bit MIPS, so in the end a contributor on GitHub dug out an altogether different toolchain from Codescape.)

However, building the cjdns executable itself was only a minor part of the battle. Vyatta-based systems, EdgeOS included, have a command-line configuration interface (known as vyatta-cfg) which allows the configuration of the router and its various components. The cjdns package had to fit into this in order to be user-friendly - otherwise the user would need to manually edit the cjdns configuration files, which is not ideal.

The vyatta-cfg system draws all of its supported configuration commands from a folder structure stored on the system itself, in which every configuration node is defined with a number of options, including the types of values that should be accepted and what to do with those values once they are added to, updated in or removed from the system configuration. Not really knowing where to start with this, I figured it would be easiest to begin with an existing Vyatta package and modify its contents. I later discovered that vyatta-cfg is actually fairly well documented.

Defining the options that should be available to configure cjdns was still not enough. After all, the vyatta-cfg system still didn't know how to generate a configuration that cjdns could parse. (For the record, the cjdns configuration file is a JSON file, which made it somewhat easier to manipulate.)

The final part of the puzzle was to write a script that could take a variety of inputs from the vyatta-cfg system and use them to modify the cjdns configuration file, adding, changing or removing values based on the input to the Vyatta command line. I chose to write this script in Python for two reasons: I wanted to reinforce my Python skills a little, and Python is already fairly widely used within Vyatta/EdgeOS. It seemed like a logical choice.
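
A minimal sketch of that idea, assuming the cjdns-style JSON layout (an "interfaces" / "UDPInterface" / "connectTo" structure) and made-up peer details. The real script handles many more node types, as well as updates and removals, but the core operation is just folding a vyatta-cfg value into the JSON document:

```python
import json

# Sketch: merge a UDP peer handed over by vyatta-cfg into a cjdns-style
# JSON configuration. The peer address, password and key are illustrative.

def set_udp_peer(config_text: str, address: str, details: dict) -> str:
    """Return the config with the given UDP peer added or replaced."""
    config = json.loads(config_text)
    peers = (config.setdefault("interfaces", {})
                   .setdefault("UDPInterface", [{}])[0]
                   .setdefault("connectTo", {}))
    peers[address] = details
    return json.dumps(config, indent=2)

original = '{"interfaces": {}}'
updated = set_udp_peer(original, "192.0.2.1:12345",
                       {"password": "example", "publicKey": "abcd.k"})
print(updated)
```

Working on the parsed JSON rather than the raw text is what makes this robust: the script never has to care where in the file a value lives, only where it belongs in the structure.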

Finally, all of this was pulled together into a Debian package, and the net result is a package that can be deployed to the EdgeRouter to provide cjdns functionality. At present, the functionality necessary to set up cjdns peerings is there, both over UDP and using Ethernet beacons, as is firewall configuration. Some features are still missing, such as configuring IP Tunnel and specifying Ethernet peers by MAC address, but I plan to add these soon. There is also a decided lack of input validation at present, so entering bad values will probably just result in cjdns failing to start.

I have been running this package on my ER-X for nearly a month now with very few problems. Occasionally the cjdns executable crashes (after all, cjdns is still alpha software), but I have found that the easiest way around this in the interim is to configure a scheduled task within the CLI that checks every minute whether the application is still running, and starts it if not. Not entirely ideal, but I haven't yet had the time to write the necessary boilerplate code to "supervise" the process correctly.

I have open-sourced this project and it is hosted on GitHub, along with documentation on how to build it using a Debian Jessie system and how to configure it once installed on the EdgeRouter: