AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.

May 2017 - Posts

  • Target Settles With 47 Attorneys General Over 2013 Hack

    One of the biggest hacks in history was the Target credit card hack of winter 2013, which affected approximately 60 million people. Four years later, Target is finally putting the situation behind it, settling legal action brought against it by 47 states. The amount: $18.5 million.
    This does not include the many millions the Minnesota-based retailer paid to credit card company Visa, victims, banks, and others, which pushes the total amount of legal fines and settlements to well over $100 million. (It also doesn't include such intangibles as the hit to its brand's goodwill, the money Target spent investigating how it was attacked, the money spent fixing its security issues, etc.)
    Data breaches are expensive to deal with, as the Target and other incidents reveal. So far, so normal: the news about Target's settlement is non-news.  

    Raising Eyebrows

    Except there is a twist. Under a section of the settlement termed "Specific Safeguards," Target agrees to specific data security protocols.
    Granted, you can find similar language in other agreements signed by many companies: the company agrees to use encryption wherever sensitive data is stored, it promises to do a better job training employees, and so on. But what Target is agreeing to is much more specific. For example:
    • "TARGET's Cardholder Data Environment shall be segmented from the rest of the TARGET computer network," or,
    • "TARGET shall deploy and maintain controls, such as, for example, an application whitelisting solution, designed to detect and/or prevent the execution of unauthorized applications…"
    There's more where that came from.
    Siloing data? Whitelisting? This is the type of language you expect from IT, not from a group of people who spent their time trying to pass the bar. It's not inconceivable that IT experts were hired as part of the settlement drafting process, or that the AGs (or their underlings) know their way around digital data security, and that the settlement language reflects that.
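    For readers unfamiliar with the jargon: an application whitelisting solution keeps a list of approved programs and refuses to run anything else, often by comparing a cryptographic hash of each executable against that list. Here is a minimal illustrative sketch of the idea in Python; the function name and allowlist are hypothetical, purely to show the mechanism:

```python
import hashlib

def is_authorized(executable_bytes, allowlist):
    """Allow execution only if the program's SHA-256 digest is on the approved list."""
    digest = hashlib.sha256(executable_bytes).hexdigest()
    return digest in allowlist

# A hypothetical allowlist holding the digest of one approved program.
APPROVED = {hashlib.sha256(b"hello").hexdigest()}

is_authorized(b"hello", APPROVED)     # approved program: True
is_authorized(b"tampered", APPROVED)  # anything else, including a modified copy: False
```

    Because the check keys on the file's hash rather than its name, even a legitimate-looking program that has been tampered with fails the check, which is exactly why the settlement calls for this class of control.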
    However, such specific details were nonexistent in the past. It almost seems as if, realizing that the past 10 years of suing companies over data breaches has changed nothing, the government is now taking charge and spelling out the basic data security specs that companies should follow, minimizing the wiggle room and loopholes that vague wording allows.
    There will be detractors of this approach: codifying specific technologies into law causes problems if the law doesn't keep up with progress. Conceivably, you could run into a situation where an offending party is protected by law despite not implementing adequate security. An example: a law passes requiring that a certain encryption algorithm be used, but an unfixable vulnerability in that algorithm comes to light soon after. Most companies switch to a different type of encryption, but not all do. When these stragglers are subsequently hacked via that vulnerability, they are legally protected because the law wasn't updated in time.
    The good news in the Target case is that a settlement, while having legal effect, is not law. So, no unintended consequences there.
    In addition, it sends a signal to other companies about what is acceptable and what isn't. If Attorneys General across the country slammed a Fortune 500 company for, say, not siloing data, it's not inconceivable that they'll do the same when they encounter a similar situation again. Pointing out specific practices and technologies in settlements should also provide ammunition to IT executives who try to implement them in the enterprise but find themselves hamstrung by higher-ups.
    Related Articles and Sites:
  • Global Malware Emergency Shows Why Backdoors Are Dangerous

    The big data security news this week is, of course, the WannaCry ransomware outbreak that reared its head last Friday, continued to grow over the weekend, and threatened to really become something had it not been for a stroke of serendipity: a kill switch, possibly included by mistake, baked into the malware.
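    The kill switch itself was remarkably simple. By most accounts, the malware tried to resolve a nonsense domain name and stood down if the lookup succeeded, so a researcher registering that domain was enough to halt the spread. A simplified Python sketch of the reported mechanism follows; this is an illustration, not the malware's actual code, and the domain shown is a placeholder:

```python
import socket

def kill_switch_tripped(domain, resolver=socket.gethostbyname):
    """Return True if the domain resolves, i.e., someone has registered it."""
    try:
        resolver(domain)
        return True
    except OSError:  # socket.gaierror (a subclass of OSError): lookup failed
        return False

# The malware reportedly performed a check like this before doing anything else:
# if kill_switch_tripped("some-gibberish-killswitch-domain.example"):
#     raise SystemExit  # domain registered: stand down
```

    The irony is that a mechanism presumably meant to help the malware evade analysis ended up being the lever that stopped it.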
    Many organizations and traditional news outlets have covered the situation from every angle possible, including:
    Of all the different reports and thought-pieces, though, this one might be the most controversial:
    One could be excused for thinking that Microsoft is just engaging in PR, passing the buck, refusing to shoulder responsibility, etc. After all, the ransomware exploits a flaw that's been present in every Microsoft operating system since Windows XP. That makes it a 15-year-old flaw… and it means Microsoft had 15 years to identify and fix it. It certainly paints a bad picture for Redmond. However, the blame cannot fall on Microsoft alone: for Act II, the same or different hackers could decide to target a flaw in Apple's operating system or in any of the various flavors of Linux. What's to stop them? After all, this latest attack, global in nature, was enabled by the leak of NSA hacking tools, and if rumors are true, the agency must have methods for exploiting weaknesses besides those found in Windows.
    It's no secret that intelligence agencies make use of flaws (in fact, some have accused them of "hoarding" flaws). And now, unsanctioned hackers are getting into the game as well, in a big way, thanks to those same hoarded flaws being leaked by the Shadow Brokers group.
    An argument could be made that the flaws couldn't have been exploited if, instead of keeping a tight lid on them, the government had alerted companies so the flaws could be fixed, or if the government had actually managed to keep that tight lid on. But then again, agencies like the NSA are not in the business of identifying flaws and having them patched up. That's not their raison d'être. In fact, distasteful as it may be, it feels a little silly to criticize them for doing exactly what they were chartered to do. It'd be like criticizing a dominatrix for inflicting pain and humiliation.
    Another reason you can't put all the blame on the government: there is some heft to the observation that Microsoft fanned the flames by releasing a patch back in March but making it available to users of older, unsupported systems only through paid custom support. One could argue that, NSA hoarding or not, the damage would have been dampened if Microsoft hadn't tried to monetize the patch.
    Regardless of who you think is "responsible" for what happened, the disaster shows exactly why a security backdoor is a bad idea.

    Flaws => Backdoors

    Last year, the FBI took Apple to court over its refusal to comply with a certain demand: that the company somehow provide a backdoor to encrypted iPhones. Of course, the FBI never said outright that it wanted a backdoor. But, in the end, that is exactly what it was arguing for.
    The counterargument from Apple and the rest of the technology sector was that backdoors cannot be completely controlled and thus will never be safe, a tune that data security professionals have been singing since the early 1990s. The tech sector's argument went so far as to imply that nothing less than the security of the free world was at stake. Many called such arguments spurious and melodramatic: an overhyped situation just like the Y2K bug scare (although some argue that the Y2K threat never materialized precisely because a scared-witless world poured billions into fixing the problem in time).
    As long as the situation remained theoretical, such criticism had a leg to stand on. But with the temporary disruption of an entire country's healthcare network (the UK's National Health Service) in our rearview mirror, it's hard to imagine the tech sector's arguments still falling on deaf ears. When it comes to data security, unintended flaws and purposefully placed backdoors are essentially the same because they lead to the same outcome: at some point, someone who shouldn't know about the weakness is bound to find it and exploit it.
    The many data security scares of the past, for all their coverage in the media, scarcely managed to turn firmly held opinions on the "need" for a backdoor. Some went so far as to state that they were sure the brilliant minds behind encryption would find a way to create a secure backdoor inaccessible to the bad guys (despite the fact that those same brilliant minds were emphatic that it couldn't be done).
    One wonders how they could have been (and possibly still are) so deluded. Perhaps there wasn't a visceral-enough crisis to jolt their thinking about what could happen. Or perhaps they thought that what could happen wasn't really going to, at least not in their lifetime. This latest development will hopefully dampen their misguided but well-intentioned enthusiasm for hamstringing security, at least for the time being.


    Related Articles and Sites:

  • Sextortion Case Treads A Well-Worn Path: Are Passwords Protected Under the Fifth?

    A case of "sextortion" – blackmailing someone over naked footage (digital footage, more specifically, to reflect the times we live in) – between Instagram celebrities has again dredged up a decidedly consequential legal quagmire that courts have repeatedly visited since at least 2009: Is forcing a defendant to give up his or her password a violation of the Fifth Amendment?
    In the latest case, the answer appears to be "no"…for now. As is usually the case, the decision is going to be appealed.  

    Providing Passwords is Self-Incrimination, No?

    According to reports, one Instagram celebrity tried to extort $18,000 from another. Long story short, the extorter and her boyfriend were arrested. The authorities have the incriminating text messages but apparently want to "search for more evidence," and they asked a court to compel the two defendants to produce their smartphones' passwords. (It wasn't specified what that extra evidence might be.)
    The judge in charge OK'ed the request:
    The ruling was based on a recent decision in the Florida Court of Appeals that ordered a man suspected of taking illicit photos up women's skirts to give up his four-digit passcode to authorities.
    The odd thing, though, is that decisions to the contrary, such as the one described in this Washington Post opinion piece, can be found as well. The piece, it should be pointed out, argues why that particular decision was incorrect. However, the author also reversed himself the next day.
    Needless to say, the situation surrounding the compelled production of passwords is fraught with problems, constitutional and otherwise.
    Like the many cases before the sextortion one, it's obvious that the details of this case need to be weighed carefully (it goes without saying that, ideally, that should always be the case). What's important is not that the Florida Court of Appeals ordered a man to reveal his passcode; rather, the focus should be on why the appellate court came to that decision. For example, in past cases involving the forced revelation of passwords or encrypted data, a significant factor was whether the "foregone conclusion" principle applied. For an excellent layman's distillation of this principle, read the Washington Post piece mentioned above.
    In this extortion case, it seems the government has a winning hand: not only do they know that the encrypted phones belong to the defendants, I imagine they also know what they're looking for, namely, the naughty pics and videos used in the extortion. So it's not a fishing expedition; the foregone conclusion doctrine applies, and as long as the warrant is written correctly, there shouldn't be any problems. Of course, that last part is the crux of the matter, isn't it?

    Related Articles and Sites: