AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Global Malware Emergency Shows Why Backdoors Are Dangerous

    The big news in data security this week is, of course, the WannaCry ransomware situation that reared its head last Friday, continued to grow over the weekend, and threatened to become something far worse had it not been for a stroke of serendipity: a kill-switch, possibly a mistake, baked into the malware.
    Many organizations and traditional news outlets have covered the situation from every angle possible. Of all the different reports and thought-pieces, though, Microsoft's response might be the most controversial.
    One could be excused for thinking that Microsoft is just engaging in PR, passing the buck, refusing to shoulder responsibility, etc. After all, the ransomware hijacks a flaw that's been present in every single Microsoft operating system since Windows XP. That makes it a 15-year-old flaw… and it means that Microsoft had 15 years to identify it and fix it. It certainly paints a bad picture for Redmond. However, blame cannot fall on Microsoft alone: for Act II, the same or different hackers could decide to target a flaw that's found in Apple's operating system or any of the various flavors of Linux. What's to stop them? After all, this latest attack, global in nature, was enabled by the leak of NSA hacking tools. And if rumors are true, the agency must have methods for exploiting weaknesses besides those found in Windows.
    It's not a secret that intelligence agencies make use of flaws (in fact, some have accused them of "hoarding" flaws). And now, unsanctioned hackers are getting into the game as well, in a big way, thanks to those same hoarded flaws being leaked online by groups like the Shadow Brokers.
    An argument could be made that the flaws couldn't have been exploited if, instead of keeping a tight lid on them, the government had alerted companies to the flaws so they could be fixed – or if the government had actually managed to keep that tight lid on things. But then again, agencies like the NSA are not in the business of identifying flaws and having them patched up. That's not their raison d'être. In fact, disgusting as it may be, it feels a little silly to criticize them for doing exactly what they were chartered to do. It'd be like criticizing a dominatrix for inflicting pain and humiliation.
    Another reason why you can't put all the blame on the government: there is some heft to the observation that Microsoft worked to fan the flames by coming up with a patch back in March, but allowing only paying customers to access it. One could argue that, NSA hoarding flaws or not, the situation would have been dampened if Microsoft hadn't tried to monetize the patch.
    Regardless of who you think is "responsible" for what happened, the disaster shows exactly why a security backdoor is a bad idea.

    Flaws => Backdoors

    Last year, the FBI took Apple to task for refusing to cooperate with a certain demand: that the company somehow provide a backdoor to encrypted iPhones. Of course, the FBI never said outright that they wanted a backdoor. But, in the end, it was exactly what they were arguing for.
    The counterargument from Apple and the rest of the technology sector was that backdoors cannot be completely controlled and thus will never be safe, a tune that data security professionals have been singing since the early 1990s. The tech sector's argument went so far as to imply that nothing less than the security of the free world was at stake. Many called such arguments spurious and melodramatic – an overhyped situation just like the Y2K bug scare (although some argue that the Y2K threat never materialized precisely because a scared-witless world poured billions into fixing the problem in time).
    As long as the situation remained theoretical, such criticism had a leg to stand on. But, with the temporary shutdown of an entire country's healthcare network (the UK's National Health Service) in our rearview mirror, it's hard to imagine that the tech sector's arguments will still fall on deaf ears. When it comes to data security, unintended flaws and purposefully placed backdoors are essentially the same because they lead to the same situation: at some point, someone who shouldn't know about it is bound to find it and exploit it.
    The many data security scares of the past, for all their coverage in the media, scarcely managed to turn firmly held opinions on the "need" for a backdoor. Some went as far as stating that they were sure the brilliant minds behind encryption would find a way to create a secure backdoor that is inaccessible to the bad guys (despite the fact that those same brilliant minds were emphatic that it couldn't be done).
    One wonders how they could have been (and, possibly, currently are) so deluded. Perhaps it was because there wasn't a visceral-enough crisis to jolt their thoughts on what could be. Or, perhaps they thought that what could be wasn't really going to happen, at least not in their lifetime. This latest development will hopefully work to dampen their misguided but well-intentioned enthusiasm for hamstringing security, at least for the time being.



  • Sextortion Case Treads A Well-Worn Path: Are Passwords Protected Under the Fifth?

    A case of "sextortion" – blackmailing someone over naked footage (digital footage, more specifically, to reflect the times we live in) – between Instagram celebs has again dredged up a decidedly non-trivial legal quagmire that courts have repeatedly visited since at least 2009: Is forcing a defendant to spit out his or her password a violation of the Fifth Amendment?
    In the latest case, the answer appears to be "no"…for now. As is usually the case, the decision is going to be appealed.  

    Providing Passwords is Self-Incrimination, No?

    According to reports, one Instagram celebrity tried to extort $18,000 from another Instagram celebrity. Long story short, the extorter and her boyfriend were arrested. The authorities have the incriminating text messages but apparently want to "search for more evidence," and asked a court to compel the two defendants to produce their smartphones' passwords. (It wasn't specified what that extra evidence is.)
    The judge in charge OK'ed the request:
    The ruling was based on a recent decision in the Florida Court of Appeals that ordered a man suspected of taking illicit photos up women's skirts to give up his four-digit passcode to authorities.
    The odd thing, though, is that decisions to the contrary, as described in this Washington Post opinion piece, can be found as well. The link's content, it should be pointed out, argues why that particular decision was incorrect. However, the author also reversed himself the next day.
    Needless to say, the situation surrounding the production of passwords is fraught with problems, constitutional and otherwise.
    Like the many cases before the sextortion one, it's obvious that the details of this case need to be weighed carefully (it goes without saying that, ideally, that should always be the case). What's important is not that the Florida Court of Appeals ordered a man to reveal his passcode; rather, the focus should be on why the appellate court came to that decision. For example, in past cases involving the forced revelation of passwords or encrypted data, a significant factor was whether the "foregone conclusion" principle applied. For an excellent layman's distillation of this principle, read the Washington Post piece mentioned above.
    In this extortion case, it seems that the government has a winning hand: not only do they know that the encrypted phones belong to the defendants, I imagine they also know what they're looking for – namely, the naughty pics and videos that were used in the extortion. So, it's not a fishing expedition; the foregone conclusion doctrine applies, and as long as the warrant is written out correctly, there shouldn't be any problems. Of course, that last part is the crux of the matter, isn't it?


  • HIPAA/HITECH Doesn't Require You To Be Perfect, But It Does Expect You To Follow The Rules

    A couple of recent Department of Health and Human Services (HHS) legal settlements emphasize paperwork over security, showing that a healthcare entity's approach to safeguarding data must be holistic: yes, you need to use encryption, and lock doors, and hide screens from potential medical data peeping-toms…but you also need to make sure that you've followed protocols regarding the creation of policies and other actions deemed obligatory by the HHS.
    Not doing so "will cost you."  

    $31,000 For Not Producing a Business Associate Agreement

    According to the HHS, the Center for Children’s Digestive Health (CCDH), an Illinois-based pediatric center, was fined more than $30,000 for being unable to produce a business associate (BA) agreement. The document is supposed to contractually guarantee that the BA will properly guard patient data, among other things.
    Per my reading of the HHS's resolution agreement, not having this document effectively means that the HIPAA covered-entity (CCDH, in this instance) illegally disclosed sensitive patient info to a third party.
    What prompted the HHS to see if the BA agreement existed? The BA in question, FileFax, Inc., was caught discarding hundreds of medical files in a dumpster. Unsurprisingly, this prompted everyone, from the HHS to the Attorney General, to see if FileFax was storing any other sensitive info (and, undoubtedly, whether it was properly secured).

    $400,000 For Lack of a Risk Assessment

    Similarly to CCDH, the Metro Community Provider Network (MCPN) in Denver, Colorado settled with the HHS over what feels like paperwork – more specifically in this case, for not conducting a risk assessment.

    Apparently, a hacker obtained thousands of patients' PHI (protected health information) in 2012 via phishing, the con where a person sends email pretending to be someone the victim knows and trusts. It looks like the phishing attack succeeded by giving the hacker access to MCPN employees' email accounts.

    The government has gone after MCPN purportedly for the lack of a risk assessment. Again, a risk assessment is not something that one traditionally files under the banner of "data security." And it is dubious whether a risk assessment would have revealed the vulnerability used by the phisher. But its importance is not unjustified: after all, if you don't know where your weaknesses lie, how are you going to defend against them?

    HIPAA / HITECH has always stressed that a security risk assessment and other "non-active security procedures" are an important part of securing a covered entity's patient data. And the HHS is backing that up with a message that many can understand.

    One wonders when everyone will get it. (When one reads of cases like these, the answer appears to be "not soon.")




  • Tennessee Updates Law That Required Notification For Encrypted Personal Data Loss

    In 2016, Tennessee created something of a legal furor when it became the first state to require data breach notifications (DBN) even if the lost or stolen data was protected with encryption. Earlier this month, a new law took effect that "clarifies [this] confusion" for companies: they are not required to send DBNs if the data was encrypted – assuming the encryption was not compromised as well (for example, if the encryption key was also breached).

    Cognitive Dissonance? Or Merely Not Understanding What Encryption Does?

    When Tennessee's amendment to its breach notification law was passed last year, it came as something of a shock to many. There were many milestones in 2016 – as there are every single year, admittedly – and among them was encryption. Specifically, the strength of encryption: last year was when Apple and the FBI went to court over encryption, due to the latter's demand that Apple compromise the strength of the cryptographic protections on iPhones. The demand was a result of the FBI's inability to get into the San Bernardino shooter's smartphone (as well as others, as it turned out).
    The FBI stopped their lawsuit at the last minute, saying that they had found a way into the phone after all; some claimed that the FBI folded strategically, since it looked like Apple would win and create a precedent-setting case.
    Despite the lack of a solid conclusion, it was a milestone regardless: the media covered the situation with unprecedented detail; more people than ever tuned in and learned about encryption and its impact on modern society's digital world; and, perhaps most importantly, politicians who loudly clamored for Apple to bow to the FBI's demands started backpedaling after finding out why encryption has to be as strong as it can possibly be.
    The case was a culmination of many encryption-related episodes, such as the global adoption of encrypted internet connections by the top social media sites and communications app-makers making changes to software code so even they can't access a client's private communications.
    So, finding out that Tennessee wouldn't consider encrypted data to be secured came like a bolt out of the blue. Especially when:
    The 2016 amended law, however, still mentioned in another section that encryption was a positive means of protecting data. This created confusion for companies...
    Of course, if one thinks about it, this is not necessarily contradictory. A strongbox is also a positive means of protecting data: think of a dossier placed inside a bank vault. If that dossier is stolen out of the vault, the protection is gone by definition – and the theft should be a reportable data breach.
    And, because of how encryption works, that's where this analogy breaks down: if you will, under encryption, the dossier is the bank vault. Heck, each sheet of paper in the dossier can be the bank vault. In other words, if encrypted data is stolen, the thief still has to find a way to break into this particular vault called "encryption."
    Chances are that 99.999% of the time when data is stolen or lost, encrypted content can be accessed only if the thief also has a key (or a password, which is essentially a proxy for the encryption key). Based on this year's amendment, it looks like Tennessee's governing body was trying to address this inherent "weakness" in encryption when it passed its law last year: if the thief has a key, he has access.  
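    A minimal sketch of that proxy relationship, using Python's standard-library PBKDF2 implementation (the password and parameters here are illustrative, not any particular product's):

```python
import hashlib
import os

# Full disk encryption products typically run the user's password
# through a key-derivation function (here, PBKDF2-HMAC-SHA256) to
# produce the actual encryption key.
password = b"correct horse battery staple"
salt = os.urandom(16)  # random, but stored alongside the ciphertext

key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Anyone holding the password (plus the stored salt) can re-derive the
# key -- which is why a stolen password is as good as a stolen key.
assert hashlib.pbkdf2_hmac("sha256", password, salt, 600_000) == key
print(len(key))  # 32 bytes, i.e. a 256-bit key
```

    The many iterations are the point: they make each guess expensive for an attacker, but a thief who already has the password pays that cost exactly once.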

    Perfectly Valid Concern

    As any security professional – and now, most lay people in the US – will tell you, encryption is one of the best ways to protect data. It's not the only way, and it's not infallible, but it is one of the best. Some may even say it is the best way. But again, it doesn't mean it's not infallible. There are ways to get past encryption:
    • Guess the encryption key or the password to the encrypted content.
    • Steal the encryption key or the password.
    • Physically threaten a person for the encryption key or the password.
    • Carry out said threat on a person (but make sure he's conscious so you can get the key or password once he cries uncle).
    • Plant malware on a computer so that you don't have to do any guessing, stealing, or threatening. Technology at work.
    • Do an analysis of the encryption used to see if there are any inherent weaknesses that can be exploited (not for the average person; can be difficult even for government agencies awash with black ops slush funds). Especially if someone leaks said weaknesses on the internet.

    As you can see, there aren't too many ways, but, with the exception of that last one, it is relatively easy to get past encryption… assuming you can fulfill certain conditions – conditions that are simple but potentially difficult to carry out. (Or not difficult at all, which is why, when you're going to fire someone, you should rescind their access to your company's resources before letting them know they're being let go.)

    Yet, it seems that most data breach notification laws were passed without taking into consideration things like the above. If stolen data was already encrypted, it was given safe harbor from DBNs.

    In fact, in certain cases, breached data was given safe harbor from DBNs even when encryption was not used, because the law had defined encryption too broadly. So, despite violating the spirit of the law, ROT-13 "encryption" would have met the conditions for excluding oneself from DBNs – despite not being encryption in any meaningful sense of the word.
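    To see just how hollow that protection is, consider this sketch (the "record" is made up for the example; ROT-13 ships with Python's standard library as a text transform):

```python
import codecs

# ROT-13 rotates each letter 13 places; applying it twice is the
# identity operation, so "decrypting" requires no key whatsoever.
record = "Patient SSN: 123-45-6789"
scrambled = codecs.encode(record, "rot13")

print(scrambled)                          # Cngvrag FFA: 123-45-6789
print(codecs.decode(scrambled, "rot13"))  # Patient SSN: 123-45-6789
```

    Note that the digits pass through untouched – anyone who glances at the "protected" record can still read the number outright.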

    Tennessee's foul-up may have caused confusion and consternation for many over the past year, but it should be applauded for what it was: a law that further empowers constituents of that state.

  • Israel Introducing Data Breach Notification Law

    It was reported last week that Israel introduced mandatory data security and breach notification requirements into its law books. The law is expected to go into full effect next year.

    Businesses of all types – be they global, multinational companies or the barber shop down the street – will be affected by the new regulations. But not equally.

    One expert notes that there will be four "security level" categories, which appear to be divided either by the number of people who can access the information or by the nature of the business itself. For example, the aforementioned barbershop's data security requirements would be different from a data broker's (and even data brokers are subdivided by the number of records that are stored).

    Encryption Required?

    Of the four security levels, the lowest one (that is, the least onerous one to a business) is the sub-basic level:
    up to 3 persons with access permission –mild requirements, including a database description document, annual review of redundant data, basic physical security, reasonable means to prevent unauthorized access, keep records of data breaches, appropriate measures with portable devices (e.g. encryption) and secured internet communications. (my emphasis)

    The higher security levels build on top of this. And while encryption is given as an example (not as a requirement) pertaining to "appropriate" security measures for portable devices, it's pretty obvious that it doesn't stray too far from being a requirement.

    Indeed, on the internet, it actually is a requirement. The law stipulates that "secured internet communications" must be used, and the only way to secure the to-and-fro of data on the internet is via an encrypted connection – or, if an encrypted connection is not possible or available, by encrypting the data before it's sent out (e.g., cryptographically securing an attachment before sending it via email).
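    As a rough sketch of that last approach – encrypting a file before it leaves your machine – here is what it might look like with the widely used third-party Python `cryptography` package (an assumption on our part; the law doesn't mandate any particular tool, and the file contents below are invented):

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# Encrypt an attachment before emailing it. The key must travel
# out-of-band -- never in the same email as the ciphertext.
key = Fernet(Fernet.generate_key())

attachment = b"quarterly-client-list.csv contents..."
token = key.encrypt(attachment)  # safe to send over an untrusted channel

# Only a holder of the key can reverse it (Fernet also authenticates,
# so tampering with the token is detected on decryption).
assert key.decrypt(token) == attachment
```

    The design point is that the email channel can stay insecure; the data itself carries its own vault.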

    Breach Notifications Where Appropriate

    Data breach notifications to the government will be mandatory, but only for businesses in the mid- or high-security levels. And even then, the former only need to report "substantial breaches," whereas the latter will need to report every breach they encounter.

    The government may force a business to get in touch with clients who were affected by the data breach, if it is deemed necessary and appropriate.

    Overall, it's a little different from what people are used to in the US when it comes to data privacy and breach notification laws. However, if you're doing a lot of business with Israeli companies, you will have to follow it.

    Which is not a particularly bad proposition, since it will possibly allow you to meet EU requirements as well: reportedly, the passing of the Israeli law coincides with the European Union's own privacy laws that go into effect in 2018.



  • New Mexico Now Has A Data Breach Notification Bill

    New Mexico will be the latest US state to add a data breach notification law to its books. Once the bill officially becomes law, only two states – Alabama and South Dakota – will remain holdouts on the crazy idea that people should be notified if their personal data is hacked.

    You can read the bill in all its glory (it's a PDF file); the introduction to it gives you a good idea of what's up.


    Possibly Problematic

    There is a potential problem, though. One of the definitions (my emphasis for the below) for the purposes of the bill:
    "personal identifying information": (1) means an individual's first name or first initial and last name in combination with one or more of the following data elements that relate to the individual, when the data elements are not protected through encryption or redaction or otherwise rendered unreadable or unusable: [redacted]

    In the above, an effort is being made to carve out what does not count as personal information. For example, an SSN that was encrypted is not "personal identifying information," and so its loss would be excluded from the data breach notification requirements.

    The problem lies in the passage "otherwise rendered unreadable or unusable," which could very well work against the spirit of the law. For example, the process of hashing data with a known one-way function renders information unreadable in a very technical sense. However, data transformed in this fashion is not considered secure because extracting usable information can be quite easy.

    You're probably very aware that there have been many data breaches in the last ten years or so. In most cases where stolen passwords were involved, the "security" behind said passwords was a hash – and, with the exception of a handful of instances, security professionals agreed that people needed to change their passwords ASAP, especially if the password was re-used at other sites.

    Why? Because hashing, unlike encryption or redaction (read: deleting stuff), can be defeated with enough trial and error. And computers are great at trial and error.
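    A toy illustration of that trial and error, using nothing but Python's standard library (the "leaked" hash and the wordlist are invented for the example):

```python
import hashlib

# An unsalted SHA-256 password hash, as it might appear in a breached
# database dump. Hashing is one-way, but it is also deterministic...
leaked_hash = hashlib.sha256(b"letmein").hexdigest()

# ...so an attacker simply hashes candidate passwords until one matches.
# Real attacks run wordlists with billions of entries on GPUs; the
# principle is exactly this.
cracked = None
for guess in [b"password", b"123456", b"qwerty", b"letmein", b"dragon"]:
    if hashlib.sha256(guess).hexdigest() == leaked_hash:
        cracked = guess
        break

print(cracked)  # b'letmein'
```

    Encryption, by contrast, can't be attacked this way without the key – there is nothing to compare a guess against.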

    The fact that the controversial passage is attached to the definition of personal identifying information, as opposed to the definition of encryption, doesn't change the situation because it leads to the same problem: since personal data that is "otherwise…unreadable" is not legally personal identifying information, it can be argued that hashed personal info (just like encrypted personal info) can be excluded from the purview of this law.  


    At Least They Got Encryption Right

    Writing self-defeating language like this into the books is disappointing, especially when the drafters of the bill went through the trouble of defining encryption correctly:
    "encrypted" means rendered unusable, unreadable or indecipherable to an unauthorized person through a security technology or methodology generally accepted in the field of information security

    When data breach notification laws were passed in the past, there were instances where encryption was defined in such a way that equated it with hashing. Doing so is a security faux pas because companies could argue that their hashed data was "encrypted" per the legal definition, and thus be excluded from notifying customers.

    It bears repeating: hashing is not considered a proper security mechanism in the event of a data breach – it isn't "a security technology or methodology generally accepted in the field of information security."

    As time went by and lawmakers gained more experience and knowledge, the laws correctly began to reflect what was and wasn't proper data security.

    It looks like we need to do better, however.


