

AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable, easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, and managed USB drive and hard disk encryption services.
  • NIST Guy Who Came Up With Hair-Tearing Password Requirements Says He's Sorry

    The "NIST midlevel manager" who came up with the hair-tearing password requirements – well, technically, recommendations (you know: must include special characters, uppercase and lowercase letters, and numbers) – says that he's sorry and that "much of what [he] did [he] now regret[s]."
    As the Wall Street Journal explains, Bill Burr was a manager at NIST – not a security researcher – who was under a deadline to produce a document on password security. In addition to not being a security researcher, he was also hampered by a lack of access to data. In the end, he based his guide on an outdated white paper.
    And ever since, people all over the world have been struggling with passwords.  

    It Doesn't Work (But Not For Lack of Trying)

    Burr should give himself a break. The reason his requirements don't work is that people are quite tenacious when it comes to abusing loopholes in the digital realm. That, and the inexorable progress in the speed of computing hardware.
    The NIST document made its debut in 2003. We're living in 2017. When you consider that Moore's Law – the one regarding computer processing power, that it doubles every two years or so – is still valid as of right now, it means that today's processors are 128 times faster than those of 2003; password lengths, though, have barely budged from between 8 and 12 characters long.
    In addition, in the realm of brute-forcing passwords, pure CPU processing power has been surpassed by other approaches: GPUs have left CPUs in the dust, as have distributed and parallel processing. In the face of tremendous brute-force processing power, there's only a handful of ways to ensure that a password retains its integrity under attack:
    1. Make the password longer,
    2. Increase the number of values for each character (e.g., lowercase alphabet is 26 values; upper and lowercase is 52 values; the addition of numbers to that is 62 values; etc.),
    3. Change your password frequently, or
    4. Slow down how quickly a password is processed (e.g., even if hardware can run through a gazillion passwords per second, the system is designed so that it can check one password per second).
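    As a back-of-the-envelope sketch of factors #1, #2, and #4, here is a short Python calculation. The guessing rates are illustrative assumptions (a fast offline cracking rig versus a throttled online login), not measurements:

```python
def keyspace(charset_size: int, length: int) -> int:
    """Total number of candidate passwords for a given alphabet and length."""
    return charset_size ** length

def crack_time_seconds(charset_size: int, length: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust the keyspace at a given guessing rate."""
    return keyspace(charset_size, length) / guesses_per_second

FAST_RIG = 10_000_000_000  # assumed 10 billion guesses/second (offline attack)

# Factor 1: length. 8 vs 12 lowercase characters.
print(crack_time_seconds(26, 8, FAST_RIG))           # ~21 seconds
print(crack_time_seconds(26, 12, FAST_RIG) / 86400)  # ~110 days

# Factor 2: a bigger alphabet. Same 8 characters, but 62 symbols.
print(crack_time_seconds(62, 8, FAST_RIG) / 3600)    # ~6 hours

# Factor 4: throttle the verifier to 1,000 checks/second.
print(crack_time_seconds(26, 8, 1_000) / 86400 / 365)  # ~6.6 years
```

    Note that factor #4 only helps while the attacker must go through the system; it does nothing against offline guessing.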
    Data breaches the world over have shown that certain passwords are used over and over. Regardless of how long or crazily complicated a password is, if a sizable sample of the population uses the same passwords, #1 through #3 become meaningless.
    And #4 becomes meaningless when you have data breaches the world over: once the password hashes leak, attackers can guess offline, unthrottled by the system's rate limits.
    People may complain that frequent password changes, complex passwords, etc. "don't work" but what's the option? Never change passwords? Make passwords as simple as possible?  

    Regarding That XKCD Comic…

    And, of course, the WSJ made a reference to the classic XKCD strip regarding "correcthorsebatterystaple" as a password.
    The problem with creating passwords using this approach is that, when enough people in the population start using it, it will become the weak link of passwords.
    As noted in the comic strip (which is a bit dated, from 2011), correcthorsebatterystaple has 44 bits of entropy, which is based on 4 words randomly chosen from a list of 2048 common words. It notes that it would take hundreds of years to break.
    A roughly comparable way of looking at this is that it offers about the same level of protection as a password that is 8 characters long, each character chosen from a pool made of the lower- and uppercase alphabet, the digits 0 through 9, and four special characters (a 66-symbol pool, which works out to about 48 bits – slightly more than the passphrase's 44).
    Here's the thing: researchers have shown that they can brute-force passwords of 10 characters or fewer within a couple of weeks. Indeed, passwords have to be about 22 characters long to pass muster.
    So hitting on correcthorsebatterystaple wouldn't take hundreds of years; I doubt it would take a week – using an iPhone, no less. Could people use words from a bigger, thicker dictionary? Sure. But they won't. Mesothelioma will show up – its spelling correctly recollected from memory – about as often as Tr0ub4dor&3. (There is the advantage, though, that mesothelioma can be looked up in a dictionary.)
    Of course, you could also use the same 2048 words but make the password longer (more than 4 random words)…but the equivalent to the 22 characters I mentioned above would be 12 randomly picked words. All of a sudden, it's not so easy to remember anymore.
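    The entropy arithmetic behind these comparisons is easy to check in Python; the 66-symbol pool below mirrors the character set described above:

```python
import math

def entropy_bits(pool_size: int, picks: int) -> float:
    """Entropy of `picks` independent, uniform choices from a pool."""
    return picks * math.log2(pool_size)

# The XKCD scheme: 4 words drawn from a 2048-word list.
print(entropy_bits(2048, 4))           # 44.0 bits, as in the comic

# 8 characters from a 66-symbol pool (upper + lower + digits + 4 specials):
print(round(entropy_bits(66, 8), 1))   # ~48.4 bits, the same neighborhood

# Matching a 22-character random password (~133 bits) takes about 12 words:
print(entropy_bits(2048, 12))          # 132.0 bits
print(round(entropy_bits(66, 22), 1))  # ~133.0 bits
```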
    Take a bow, Mr. Burr. It's not that your guidelines don't work; it's just that technology razes everything in its path, and most humans are terrible at remembering anything that is unfamiliar and beyond a certain length.



  • Customs and Border Protection Admits They Cannot Search Remote Data

    Earlier this week, the US Customs and Border Protection (CBP) responded to Senator Ron Wyden's inquiries regarding electronic device searches at US borders (more specifically, airports). As numerous media outlets have relayed, CBP "admitted" that they do not have the authority to search data that is "solely" in the cloud – data that is not on the device itself but could easily be accessed via a smartphone.
    The implication, it appears, is that CBP does not want to risk accessing information that could exist in servers located on proper US or foreign soil – that is, outside of their own jurisdiction – and which could require a proper warrant.
    But aside from that, CBP reiterated that they have the right to conduct searches on data storage devices. The inclusion of the word "solely" in the response, experts surmise, means that emails, text messages, and other information that exists both in the cloud and on a device is fair game.
    In addition, CBP apparently admitted:
    that travelers can refuse to unlock their devices or hand over their passwords, but if they do so, CBP officials have the right to detain the device.

    A Couple of Things of Note

    As interesting as the above may be, a look at the actual letter (PDF) reveals plenty of surprising things that weren't covered elsewhere.
    To begin with, it appears that CBP can search your belongings for absolutely no reason ("do not require a warrant or suspicion") – it wasn't "just a feeling" that they were doing it; it's actual policy. In addition, they limit searches of a device's contents based on geographic location. In a footnote, the following can be found:
    Border searches of electronic devices do not require a warrant or suspicion, except that following the decision in Cotterman v. United States, 709 F.3d 952 (9th Cir.2013), certain searches undertaken in the Ninth Circuit require reasonable suspicion of activities in violation of a law enforced or administered by CBP.
    The implication here is that, somehow, entering the US via the west coast guarantees a little more rule of law than entering the US from elsewhere (the Ninth Circuit comprises Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington, as well as Guam and the Northern Mariana Islands).
    In addition, the letter pointed out that searches of devices are "exceedingly rare… less than one-hundredth of one percent of travelers arriving" to the US.
    This means device searches affect fewer than one in 10,000 travelers (0.0001, or 0.01%); the phrasing also implies that the true figure is somewhere close to that ceiling. That does seem rare indeed. Except, let's put that in context, shall we?
    According to the US's own government data (PDF), 77.51 million international visitors traveled to the US in 2015. For Americans going abroad, it was 32.78 million; one assumes most of them will return. Applying that 1 in 10,000 figure above, it translates to approximately 11,000 devices searched. It might be relatively small compared to the number of people entering the US, but it's a pretty big number in its own right. I mean, can you imagine 11,000 phones laid side by side? Where do they even store all this stuff?
    For an everyday comparison, take the instance of car crashes. According to this site, over 37,000 people die in road crashes each year. There are about 323 million people in the US. That means 1.15 in 10,000 people die in car crashes every year in the US. Those figures are pretty close to the number of devices searched by CBP.
    Now, ask yourself, does it feel to you as if car crash deaths are exceedingly rare in the US?  
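    The arithmetic above is easy to verify in a few lines of Python, using the article's own (approximate) figures:

```python
# All figures are the article's own approximate numbers.
visitors = 77_510_000            # international visitors to the US, 2015
returning = 32_780_000           # Americans traveling abroad (and back)
search_rate = 1 / 10_000         # "less than one-hundredth of one percent"

devices_searched = (visitors + returning) * search_rate
print(round(devices_searched))   # ~11,029 devices per year

road_deaths = 37_000
population = 323_000_000
crash_rate_per_10k = road_deaths / population * 10_000
print(round(crash_rate_per_10k, 2))  # ~1.15 deaths per 10,000 people
```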

    One Final Thing

    In a question, the Senator asked whether (my emphasis) "CBP is required to first inform the traveler that he or she has the right to refuse to disclose a social media or email account password or device PIN or password"?
    The CBP's answer, while long, does not address the issue. It would appear that the answer is "no, there is no such requirement."
    Not sure why you'd perform verbal jujitsu instead of coming right out and saying it. It wouldn't be unexpected of people who can perform "border searches of electronic devices [that] do not require a warrant or suspicion."
  • Australia Looking To Compel Electronic Message Decryption

    Last week, Reuters and other sources reported that the Australian government has proposed laws that would compel companies to provide access to encrypted information. Obviously, asking for such data is conditional upon taking all the proper legal steps.  

    A Growing Demand

    Governments the world over have been clamoring for access to encrypted data for years now, for decades if you want to be technical about it: ever since the "encryption wars" of the 1990s. But, demands have particularly intensified in recent years, usually under the banner of "we can't tell what terrorists are doing," an argument maintained by upstanding democracies as well as disreputable nation states.

    Some propose the inclusion of backdoors to encryption, which is universally rejected by security professionals: in an era when even unwitting mistakes can be leveraged to break into encryption, a purposefully built-in cryptographic defect will undoubtedly create bigger problems – especially if it's known, without a doubt, that there is one to be exploited.

    (On a societal level, backdoors present a different problem to governments supposedly accountable to the people they represent: your government officials proposing to force companies to secretly build a way to spy on people. Even if the government were to follow all protocol when doing the spying, it would be hard to shake off comparisons to ***, fascists, Cold War-era Eastern Europe, North Korea, etc. because legal protocols were followed in these cases, too).  

    Not a Back Door, But Security Suffers Nonetheless

    Australia's proposed legislation is heavily based on the United Kingdom's "Snooper's Charter." However, it differs in its decryption requirement, with the UK's version not having such a clause.

    Thankfully, back doors are not required to meet the rule of law. For example, a copy of each message ever sent via an encrypted app could be kept stored somewhere. The encryption engine used in the apps could remain intact, ensuring that messages that are intercepted cannot be cracked. This means, however, that the app provider would have to act as a middle man, and that the message would be encrypted twice: once when being sent to the app provider and again when the latter sends the message to the recipient.

    If the provider is subpoenaed, it's a matter of producing the decrypted message to the authorities. (One assumes that these messages will be stored in an encrypted state until necessary).
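    As a toy illustration of that two-hop relay – using a deliberately insecure XOR-keystream cipher and made-up keys purely for demonstration, not anything a real app would ship – the flow might look like this:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Deterministic keystream derived from a key (toy construction, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def toy_encrypt(key: bytes, message: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(message, _keystream(key, len(message))))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

# Hop 1: sender -> provider, under the sender-provider key.
sender_provider_key = b"key-shared-with-provider"
ciphertext_in = toy_encrypt(sender_provider_key, b"meet at noon")

# The provider decrypts, keeps a copy re-encrypted under its own storage
# key (for lawful requests), then re-encrypts for the recipient.
plaintext = toy_decrypt(sender_provider_key, ciphertext_in)
escrow_key = b"provider-storage-key"
escrow_copy = toy_encrypt(escrow_key, plaintext)

# Hop 2: provider -> recipient, under a second key.
provider_recipient_key = b"key-shared-with-recipient"
ciphertext_out = toy_encrypt(provider_recipient_key, plaintext)

print(toy_decrypt(provider_recipient_key, ciphertext_out))  # b'meet at noon'
print(toy_decrypt(escrow_key, escrow_copy))                 # b'meet at noon'
```

    A real implementation would use authenticated encryption and proper key exchange; the point here is only that the provider sits in the middle holding a decryptable copy – which is exactly the new attack surface the next paragraph describes.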

    This, however, introduces a second problem: knowing that perfect security is impossible, the provider is charged with the unachievable duty of protecting every single one of those messages from hackers, be they internal, external, or government-sanctioned (foreign or otherwise).

    Furthermore, it does nothing to address the original problem that started this entire "encrypt everything" mindset: finding out that governments, Australia included, have been collecting, processing, mining, and generally spying on people. What's even more suspicious is that, when these practices were challenged or about to be challenged, the programs were shut down.

    Sounds like something out of the Bourne Trilogy.  

    Short-lived Gains

    Looking at the long-term consequences, it looks like the law will be for naught. The argument for favoring the law is that governments are having a hard time finding, tracking, and neutralizing terrorist activities; and that it's not the government's intention to pry into private lives on a massive, automated scale.

    Fair enough.

    But, aren't terrorists using these encrypted apps because they're encrypted? It's not as if they chose a communication app for its choice of emoticons and stickers and, as a happy coincidence, it also features end-to-end encryption.

    Consequently, if governments the world over enact laws similar to Australia's proposal, wouldn't it force terrorists to use something else? Perhaps create an encryption app of their own that can be side-loaded onto a smartphone? It wouldn't necessarily be easy, but with enough resources, it's not impossible. For example, Mexican drug cartels have been known to build their own cellular networks; creating a private communications app would be less daunting and cheaper… and its operation would be harder to detect.

    So, the end result would still be the same – governments having a hard time finding, tracking, and neutralizing terrorist activities – but with increased costs to providers and increased security risks for law-abiding people all around.

    This, however, does not seem to faze the Australian government. Assuming, of course, that they've even considered it in depth. Australian Prime Minister Turnbull had this to say regarding math and cryptography:

    "Well the laws of Australia prevail in Australia, I can assure you of that. The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia," he said.
    This was later clarified:
    "I'm not a cryptographer, but what we are seeking to do is to secure [technology companies'] assistance. They have to face up to their responsibility. They can't just wash their hands of it and say it's got nothing to do with them."
    Of course, tech companies are not saying that, that "it's got nothing to do with them." If anything, tech companies are stating the obvious. By way of analogy, they're saying that they cannot create a gun that only shoots the bad guys and refuses to fire when trained on the good guys. Encryption is the same: the moment you start making compromises on how it works, it's bound to create other problems. With a gun, the fallout would be limited and local; with encryption, the effects could be global.

    Two Moves Ahead?

    However, perhaps the end game to the legislation is forcing criminals off well-protected, well-financed apps and devices.

    Some of the brightest minds in the world are working on or contributing to the security efforts of companies whose lifeblood is the internet. Criminals' technical knowhow and financial resources are microscopic compared to what these companies can marshal for data security. Piggybacking on their efforts, gratis to boot, is a windfall for the bad guys.

    Understandably, governments are incensed. It would be preferable that those in the shadows spend their (comparatively paltry) resources on creating "top-notch" security – thus shifting funds away from, say, weapons – than get it for free.

    And, while creating a communications app might be relatively easy, creating a secure means of communication with perfect privacy is not. If the best minds in the world are not involved, there's a very good chance that easily exploitable flaws will be found… and that's familiar ground for intelligence organizations.

    So, even if criminals and terrorists do veer off towards uncompromised encrypted communications because of the law, the end result could very well not be the same.

    The only problem? You can't legislate math and science; in the long run, they win. The problems pointed out by mathematicians, scientists, cryptologists, data security professionals, and others will rear their ugly heads sooner or later.

    The question is whether it will have been worth it.



  • Schools In EU Could Face Heavy Fines For Data Breaches

    Beginning in May 2018, schools in EU member countries (including the UK, despite Brexit) must comply with the new General Data Protection Regulation (GDPR). Not doing so could make them subject to fines of up to 4% of their turnover, a figure that created quite a buzz when it was announced for businesses earlier this year (some news sites debated the implications for notorious privacy stragglers like Facebook and Google).  

    Walk, Don't Run

    Poring over data breaches of the past ten years, it can be readily surmised that the education sector is not very good at protecting itself from data security incidents, be it hacking intrusions or lost data storage devices. It must be pointed out, however, that it's usually because they lack the resources to do so: schools trying to stretch their shrinking budgets cannot afford the latest and greatest in technology (leading to banks of old computers running long-abandoned software that is virtually impossible to secure), much less a proper IT security staff.
    In the short term, it looks like GDPR could cause more problems than it solves, especially because schools are unaware of their responsibilities. In the UK, the Information Commissioner's Office (ICO) has provided some tips on what must be done – but as is usually the case, this is not to be taken as a checklist where you can cross off items and declare yourself compliant.
    There is much to do, according to one expert:
    • Replace out-of-date IT equipment and ensure warranties exist for current equipment.
    • Designate a data protection officer.
    • Ensure a formal contract exists with data processors, which need to meet industry standards.
    • Document where data goes and how it is used.
    There are other experts, however, who advise not rushing into things, as there are some issues that still need to be resolved. Even they, though, advise starting on "preparatory tasks."  

    Other Side of the Pond

    Meanwhile, in the US, a school district in Maryland has opted to stop collecting Social Security numbers for students:
    Director of Technology Infrastructure Edward Gardner, who oversaw the development of the new data policy, said the school system would not collect student Social Security numbers "unless explicitly necessary," and he could not think of a reason it would be.

    This may very well be the best approach to data security: if you don't need it, don't collect it. Far too many organizations take the approach of "collect it first and deal with it later." The problem is, of course, that it never gets dealt with at all. At least, not in the sense of securing the data – be it encrypting, scrubbing, deleting, etc.

    Unsurprisingly, a data breach ensues somewhere.


  • UK ICO to SMEs: Data Protection Laws Apply to You

    The United Kingdom's Information Commissioner's Office (ICO) has slapped Boomerang Video Ltd. (BV), a company that rents out video games, with a £60,000 fine. The monetary penalty is the result of a 2014 data breach in which personal details of 26,000 people were stolen.
    The fine deserves another look because BV's data breach was the result of an attack; it is not an instance of the "breachee" having a hand in the data breach, e.g., never changing the default password or using software that was out of date. Nothing that foolish.
    At the same time, BV certainly could have done much, much better to secure their online presence.

    SQL Injection Attack

    As the ICO notes, the breach took place via a SQL injection attack. This in turn allowed hackers to guess a password "based on the company's name," allowing access to the company's servers. Of course, once inside, all sorts of shenanigans took place.
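    For the curious, the classic SQL injection failure – and its standard fix, parameterized queries – can be demonstrated in a few lines of Python with sqlite3. The table and credentials here are invented for illustration; BV's actual stack is not described in the ICO's notice:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('admin', 's3cret')")

attacker_input = "' OR '1'='1"

# Vulnerable: user input spliced straight into the SQL string. The WHERE
# clause becomes: password = '' OR '1'='1' -- always true.
unsafe = "SELECT * FROM users WHERE password = '%s'" % attacker_input
print(conn.execute(unsafe).fetchall())   # the admin row comes back: filter bypassed

# Safe: a parameterized query treats the input as a value, never as SQL.
safe = "SELECT * FROM users WHERE password = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # [] -- no match
```

    Every mainstream database driver offers placeholders like the `?` above; using them consistently closes off this entire class of attack.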
    The hacker (or hackers) was aided by certain practices that BV engaged in, among them:
    • Boomerang Video failed to carry out regular penetration testing on its website that should have detected errors.
    • The firm failed to ensure the password for the account on the WordPress section of its website was sufficiently complex.
    • Boomerang Video had some information stored unencrypted and that which was encrypted could be accessed because it failed to keep the decryption key secure.
    • Encrypted cardholder details and CVV numbers were held on the web server for longer than necessary.
    The above is not a full list (for example, they also stored cards' security codes, which are prohibited once payment is processed). But, it already paints quite a picture.

    Surprising Requirements?

    What may be surprising to most Britons is the level of security awareness a business must have, even if it happens to be a small or medium-sized enterprise. SQL injection attacks, password complexity, penetration testing, securing encryption keys… these are not terms one is generally familiar with. You may hear them here and there once in a while, maybe even have a passing knowledge of what they entail.
    But actually doing it? Some of the listed practices lie firmly in the realm of professionals who charge a lot of money for their services. Unsurprisingly, businesses that are not exactly raking it in do not seek or engage the help required to protect their clients (and to meet the law's standards).
    On the other hand, BV's website debuted in 2005, and "remedial action" to secure the site was taken in 2015. That's a long time to go without checking whether things are secure, especially considering what the internet has morphed into: among other things, a fast-moving place where data crimes grow more severe by the day.
    The lesson to take away in this instance comes not from the insight you can glean from the nature of BV's digital sins and the fine it was levied, but from the ICO enforcement manager's own words:
    "Regardless of your size, if you are a business that handles personal information then data protection laws apply to you.

    "If a company is subject to a cyber attack and we find they haven't taken steps to protect people's personal information in line with the law, they could face a fine from the ICO. And under the new General Data Protection Legislation (GDPR) coming into force next year, those fines could be a lot higher."
    The government is sending a signal, loud and clear, and in oh-so-many ways. Are businesses listening?  
  • EU Proposes End-to-End Encryption and Other Security Measures

    Last week, the European Parliament's Committee on Civil Liberties, Justice, and Home Affairs released a draft proposal that would require the use of end-to-end encryption. It would also strike down legal attempts to force backdoors into encryption software or to weaken the security of services offered by communications providers.
    Amendment 36
    Service providers who offer electronic communications services should process electronic communications data in such a way as to prevent unauthorised access, disclosure or alteration, ensure that such unauthorised access, disclosure or alteration is capable of being ascertained, and also ensure that such electronic communications data are protected by using specific types of software and encryption technologies.
    Amendment 116
    The providers of electronic communications services shall ensure that there is sufficient protection in place against unauthorised access or alterations to the electronic communications data, and that the confidentiality and safety of the transmission are also guaranteed by the nature of the means of transmission used or by state-of-the-art end-to-end encryption of the electronic communications data. Furthermore, when encryption of electronic communications data is used, decryption, reverse engineering or monitoring of such communications shall be prohibited. Member States shall not impose any obligations on electronic communications service providers that would result in the weakening of the security and encryption of their networks and services.

    Many of the proposals, but especially the two above, run counter to certain governments' recent actions that would cripple encryption and other security measures for everyone in the name of fighting terrorism and other crimes.

    It is a welcome breath of sanity for a world that increasingly appears to be regressing to an imagined time of stability.  

    Does This Mean the Bad Guys Are Protected?

    Of course, there will be those that, in typical knee-jerk fashion, will cry that we're giving the bad guys an upper hand. Nothing could be further from the truth.

    Laws protecting privacy always make an exception for illegal activities, and the EU provides exceptions for those who seek to abuse the system. For example, while Amendment 116 would make it impossible to decrypt any text messages that are stored in a smartphone (the smartphone itself is not protected, it seems, since the law specifically mentions "electronic communications data" – that is, information that is exchanged between two or more people), an exception would kick in if the messages were part of an investigation.

    What the new amendments will do is further cement the protections long afforded to law abiding citizens, and prevent those who would slowly decimate the same under one pretext or another.  

    Read the Fine Print, Though

    Some media outlets covering the amendments mention that this means the EU is recommending against (some even go as far as saying banning) backdoors. This claim needs a little clarification, since it seems overly broad.

    In each of the amendments referenced, the terms "electronic communications data" and "electronic communications provider" appear. Thus, it would seem that while backdoors are being given a red light, the prohibition is limited to encryption for data in motion. There is nothing here to suggest that the same extends to data-at-rest encryption, the type of cryptography generally used to secure, for example, the entire contents of a laptop.


