AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.


AlertBoot Endpoint Security

  • WikiLeaks Shows That Encryption Works, Even Against Spooks

    Last week, the world saw another bombshell announcement from WikiLeaks. Per their tweets and the resulting confidential data dump, it was readily apparent that the CIA had amassed techniques for breaking into nearly every kind of digital device imaginable: smartphones and computers, yes, but also other things connected to the internet, like smart TVs (perhaps they've looked into hacking internet-connected refrigerators as well).
    But, unlike the initial announcements that were passed around like crazy, it looks like the CIA does not have easy access to the encrypted data. If anything, one could say that a large reason why Langley has so many hacking tools is that they need to get around encryption somehow. Seeing how encryption is nearly impossible to break – it is possible, apparently, but a number of conditions must be met, including the physical presence of the device – it becomes easier to hit cryptography where it is weakest: before the data is encrypted (or once it's decrypted).

    The Inherent "Weakness" in Encryption

    There is an inherent "weakness" in cryptography: encrypted data cannot be read by human beings (or machines, for that matter) in encrypted form. The information has to be decrypted at some point if it's to be useful to someone; that is, so it can be read, copied and pasted, processed, etc.
    And that's where the CIA is targeting its efforts: since the encrypted data has to be decrypted at some point, the trick is to read it then – but no sooner. Mind you, other intelligence agencies around the world are probably doing the same; it can't only be the guys in Langley, especially when you consider that the CIA is one of the best-financed and best-equipped SIGINT bodies in the world, and even it has problems breaking encryption.
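    The dynamic described above can be sketched in a few lines. This is a toy illustration only – a repeating-key XOR transform standing in for a real algorithm like AES, and not actual cryptography – but the structural point holds: the ciphertext is unreadable, and the plaintext must exist somewhere at the moment of use, which is exactly where an attacker strikes.

```python
# Toy illustration (NOT real cryptography): a repeating-key XOR transform
# stands in for a proper cipher. Ciphertext is safe to lose; the data only
# becomes readable again at the point of use -- the weak spot in question.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy transform: applying it twice restores the original."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"not-a-real-key"
secret = b"the sensitive plaintext"

ciphertext = xor_cipher(secret, key)     # unreadable at rest
assert ciphertext != secret

plaintext = xor_cipher(ciphertext, key)  # must happen before use...
assert plaintext == secret               # ...and is exposed right here
```

    An agency that cannot break the cipher itself simply waits for, or instruments, that last step.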
    This is good news and bad news. It's good news because you know that encryption works. Your encrypted laptop was stolen at the airport, and the device contained sensitive information? You can rest easy, knowing the odds of a data breach are at the nanoscale end of things.

    Lack of Transparency is a Net Negative for All

    At the same time, it's bad news because, for the CIA to do their job, they can't reveal the software weaknesses they're exploiting; doing so would lead to companies patching up these problems.
    Considering that exploiting these weaknesses is technically the easier way to spy on someone's communications, logic dictates that others must be taking advantage of this weakness as well; people tend to go for the low-hanging fruit.
    This, in an indirect manner, makes the CIA complicit in weakening security for all Americans because, undoubtedly, the same weaknesses it is hiding from the public are being used by others to spy on Americans – more specifically, Americans in power. (Sure, officials at the highest echelons of government are supposed to have their devices vetted, but the past couple of years have shown this is not how it actually works in real life, a fact that reached a fever pitch during this past US election.)

    This is Why Backdoors are a Bad Idea

    The silver lining in all of this may be that the government is nailing the coffin shut on a contentious issue: encryption backdoors.
    The CIA's actions prove what academics have argued for nearly three decades, if not more: a security weakness is an invitation to be exploited. The hardware and software industry did not tell the CIA about the security defects in its products; rather, the agency just knew there must be some (because there's always a weakness somewhere) and found them on its own.
    The above is your proverbial search for a needle in a haystack (or needles in haystacks), except that you don't know whether said needles exist. You make an assumption that they do and go from there. If after some time you don't find any needles, you move to the next haystack.
    Now imagine what would happen if the US government had succeeded in requiring, by law, a backdoor in encryption. You know that backdoor is somewhere. If you will, the US required that needle to be placed in the haystack. It's a matter of time until you find it; time spent looking for it is not time wasted.
    Thankfully, the nonsense regarding backdoors was quickly laid to rest. Now, if only the CIA would agree that keeping known vulnerabilities secret is just as bad an idea as mandating encryption backdoors.
  • Michaud Case In Playpen Hack Gets Dropped By Feds

    One of the most controversial US legal actions in the past couple of years, arguably, is the FBI's approach in arresting hundreds of child pornographers who were frequenting a site in the Dark Web. Because surfing the nether regions of the internet requires the use of a special, secure browser called Tor, the FBI exploited a weakness in the Tor browser to identify the site's frequenters.
    The security flaw remains a closely guarded secret, unknown outside specific law enforcement circles. While the FBI is loath to classify it as such, making use of the flaw has the underpinnings of a malware installation. The authorities prefer to call it a "network investigative technique" (NIT). Needless to say, there's a debate on the legality of exploiting the security loophole.
    More than 100 people are being prosecuted as part of the FBI's sting operation. One of them was Jay Michaud, whose case is possibly the first to have spotlighted the situation. Yesterday, the Department of Justice announced that they are dropping their case against Michaud, whose lawyers strongly contested the legality of the FBI's tactics. Not surprisingly, this latest development is attracting controversy as well.  

    Drop Charges vs. Reveal Exploit

    Why did the DOJ drop its case against Michaud? As a federal prosecutor explained, it was to avoid disclosing how the NIT works:
    "The government must now choose between disclosure of classified information and dismissal of its indictment," Annette Hayes, a federal prosecutor, wrote in a court filing on Friday. "Disclosure is not currently an option. Dismissal without prejudice leaves open the possibility that the government could bring new charges should there come a time within the statute of limitations when and if the government be in a position to provide the requested discovery."
    In May 2016, the judge overseeing the case ordered the government to reveal the source code behind the NIT.
    This week, the government has decided that letting a "suspected" child pornographer go – possibly temporarily; dismissing a case "without prejudice" means that the same case can be brought back in front of a judge – is a better deal than revealing how the FBI is doing what they do. This decision has caused a lot of talk and speculation, including:
    • The NIT is used extensively in everything, including foreign spying. They're letting the small fish go so they can keep pursuing the really big fish.
    • The FBI and DOJ don't know what they're doing: what is the use of a tool that lets criminals walk because of the same tool that led to their identification and arrest?
    • The guys behind the Tor browser will find the security flaw before the statute of limitations runs out. The government can then reveal the flaw they exploited and once again pursue their cases.
    They all sound plausible, but that last one sounds like a pretty good strategy because, when it comes to child porn, there is no federal statute of limitations (and supposedly the majority of such cases are prosecuted at the federal level). In other words, the government could wait as long as necessary before bringing Michaud to court, be it a month or a decade.
    They could even pursue other cases, possibly set precedent that benefits the DOJ, and come back for Michaud.
    In the meantime, over 100 people have pleaded guilty to charges related to child porn regardless of the legal status of the NIT – and, the thinking probably goes, even more will do so down the line, as long as the NIT can be used. In which case, it's obvious that the DOJ knows exactly what it is doing.
  • Data Breach Results In Loss Of $350 Million in Yahoo-Verizon Deal

    Last week, Verizon finally decided to go forward with the acquisition of Yahoo, the perennial would-be comeback internet search and media company.
    The deal, announced last year, saw an unusual delay when Yahoo revealed that it had been hacked – at the time, the largest data breach in history. This was followed a couple of months later by the discovery and announcement of another intrusion and data theft at Yahoo. And thus, the internet's Purple One is now the dubious record holder of two of the biggest data breaches in US history.
    Yahoo's revelation of the hack has been accompanied by rumors that top brass had known about the intrusion for years and declined to announce it in an effort to maintain the company's public image.

    Due Diligence or Lax Security Management?

    The timing of the breach announcement certainly was conspicuous. It could be read as (a) Yahoo happening upon the data breach as it carried out day-to-day operations, (b) Yahoo or Verizon finding out about it while doing due diligence for the acquisition, or (c) Yahoo finally being forced to give up the ghost on the situation.
    While nothing has been proven yet, there are twenty-plus lawsuits, plus an SEC investigation currently under way.
    In addition, one cannot forget that the US does not have a federal law governing data breaches and the notification thereof. Instead, such laws exist at the state level, in approximately 45 states. The FTC has also been known to pursue internet-related data breaches with a certain level of enthusiasm. And let's not forget the state attorneys general, who have taken a great interest in such issues as well.
    Basically, if the allegations that Yahoo sat on the data breach and did nothing for years are revealed to be true, there is no lack of potential ways for Yahoo (or its reincarnation) to be in trouble.

    $350 Million Discount

    Which appears to have worried Verizon greatly. For months now, there were rumors that the communications company would nix the deal. Last week, though, Verizon announced they'd go ahead with the acquisition provided that they get a steep discount ($350 million) and that most of the lawsuits' burden is shouldered by what remains of Yahoo when the business is concluded.
    For investors, this is better than nothing, of course. As a business concern, Yahoo has been treading water for years, and investors have been anxious for a way to unload their shares (other than selling them in the open market, that is). But Verizon's discount represents close to a 10% haircut. That's a huge number in financial circles. Undoubtedly, there are many angry investors with deep pockets who will be looking to recover their perceived losses one way or another. If the allegations about management's collusion are revealed to be true… well, let's just say that it's not going to be pretty.

    Another First

    This is one of the few cases where a data breach has directly affected a company's financial well-being. That is, it's not due to a civil lawsuit, or the authorities bringing an action against a company, or a situation where a company announces a data breach and people don't care – like Target, which hasn't really been affected by its "greatest hack in history" data breach; the same customers continue to shop there.
    Nope, this is a clear-cut case where a company has a data breach (two, to be exact) and is directly penalized for it.
    You can expect to hear more from this fiasco.
  • Horizon BCBSNJ HIPAA Charge Over Two Laptops Settled For $1.1 Million

    Horizon Blue Cross Blue Shield of New Jersey (Horizon BCBSNJ) has settled charges over a data breach that affected approximately 690,000 New Jersey residents. The breach was noted on this blog not too long ago: in January, the Third Circuit Appellate Court declared that a lawsuit against the insurer could proceed because the "improper disclosure" of personal data is a violation of FCRA.
    That is, if unauthorized people gain access to personal information via a data breach, it is a violation of federal regulations concerning credit reports (which businesses use to vet people, be they employees, potential clients, etc.).
    As it turns out, concurrent with the above legal battle, Horizon BCBSNJ was being investigated by the Department of Health and Human Services. In the end, the company decided to settle for $1.1 million (that's $550,000 per stolen computer). And the report accompanying the settlement uncovered more details on what happened.

    Two Macs Stolen, Their Security Cables Cut

    The data breach occurred when two Macintosh laptops were stolen from the eighth floor of Horizon BCBSNJ's offices. These computers were password-protected but not encrypted. They were tied to their desks using security cables. There was no additional security.
    Which is odd, because all Macs since the early 2000s have come with FileVault, Apple's free, built-in disk encryption software. Why was this not used? After all, it's free, and turning it on has a negligible impact on a Mac's performance.

    Oops. Uh-oh. Oops, Again

    It turns out that the insurer's IT department didn't know these two computers were out there. In fact, these two and 100+ more laptops did not show up on the IT department's radar because they "had been obtained outside of the company's normal procurement process."
    This is understandable. Not excusable, mind you, but understandable. Keeping track of inventory is one of those impossible quests. Even the military fails at it, and they're trying to account for dangerous stuff. Like warheads. That laptops are not accurately inventoried should not come as a surprise.
    Of course, it's because of reasons like these that IT departments generally tend to secure a device before it's released into the wild. In the earlier half of this decade, people were still arguing that one should conduct an investigation into how a machine would be used, who would be using it, what kind of information would be stored in it, etc., and then decide what type of security to install on it, if any.
    Others pointed out that that particular approach is a pipe dream because nothing ever happens the way it should (aka, a variation of Murphy's Law). Time has shown again and again that the cynical outlook is the correct one when dealing with the real world.
    Which brings us to Oops #2: it turns out that the laptops at the center of the data breach belonged to employees who were not supposed to be handling protected health information (PHI). And yet, these laptops contained PHI. Murphy strikes again.  

    Doing the Best You Can

    You've got to feel for Horizon BCBSNJ: they had implemented encryption across all machines after the company experienced a data breach involving a stolen laptop with sensitive information. They announced the completion of that particular security project in May 2008, having taken the time to encrypt both laptops and desktop computers.
    And, five years later, the company fell victim to what is essentially the same problem: a data breach borne from laptop theft.
    But this is the wrong way to look at the situation. Not being privy to Horizon BCBSNJ's internal data, the following can only be speculation, but they've probably averted a great number of similar data breaches since 2008. After all, laptops are lost and stolen quite frequently, and the company has over 5,000 employees. And the bigger the organization, the greater the probability (some might say certainty) that you'll be missing a laptop each year. That the insurer did not have to report a data breach stemming from a missing computer until five years later is, in a weird way, something to be congratulated on.
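    The "bigger organization, greater probability" point is easy to make concrete. The sketch below uses a made-up per-employee annual loss rate (0.5% is an assumption for illustration, not a measured figure); the shape of the curve is what matters.

```python
# Back-of-the-envelope: if each employee independently has probability p of
# losing a laptop in a given year, the chance that a firm of n employees
# loses at least one laptop that year is 1 - (1 - p)**n.

def p_at_least_one_loss(n_employees: int, p_per_employee: float) -> float:
    return 1 - (1 - p_per_employee) ** n_employees

# With an assumed 0.5% per-employee rate, the risk climbs quickly with size:
for n in (50, 500, 5000):
    print(f"{n:>5} employees: {p_at_least_one_loss(n, 0.005):.3f}")
```

    At 5,000 employees the probability of at least one lost laptop per year is, for practical purposes, a certainty – which is why going five years without a reportable incident is notable.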
    As security professionals say, there is no perfect security. You can only minimize, as much as you can, the chances of being affected by a data breach. Horizon BCBSNJ could have done better, obviously. But knowing now what we do about the incident, and considering what we've seen in the data security field in the past 10 years, it could be argued that the insurer did a pretty good job (with room for improvement).
  • Australia Finally Gets A Data Breach Notification Law

    The Land Down Under is finally getting a data breach notification law. This should come as a surprise to many since (a) many would have assumed that Australia already had one and (b) it's 2017 – unless you're a war-ravaged country, chances are you have a breach notification law. Because that's how bad things are on the internet.

    And despite the country taking its time to formulate a notification law it can live with, one has to wonder if it has thought things through.

    Applies to Entities Covered by the Privacy Act

    If you will, the new data breach notification law is an extension of Australia's Privacy Act, because the new legislation applies only to those entities that are governed by it. That is, persons – natural or legal – that are NOT:
    • Doing less than AUD $3 million in sales p.a.,
    • A political party,
    • Part of the government.

    If not one of the above, the new law applies to you.
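    The coverage test above can be sketched as a simple predicate. This is a deliberate simplification of the three exclusions listed in the post – the field names are hypothetical, and real Privacy Act coverage has more nuance than this:

```python
# Simplified sketch of the coverage rule: an entity is exempt if it is a
# small business (turnover under AUD $3M), a political party, or part of
# the government; everyone else must notify.

from dataclasses import dataclass

TURNOVER_THRESHOLD_AUD = 3_000_000

@dataclass
class Entity:
    annual_turnover_aud: float
    is_political_party: bool = False
    is_government_body: bool = False

def must_notify(entity: Entity) -> bool:
    """True if the new notification law applies to this entity."""
    exempt = (
        entity.annual_turnover_aud < TURNOVER_THRESHOLD_AUD
        or entity.is_political_party
        or entity.is_government_body
    )
    return not exempt

# A high-turnover private company is covered; a zero-revenue startup with
# millions of users slips through on the turnover test alone.
assert must_notify(Entity(annual_turnover_aud=10_000_000))
assert not must_notify(Entity(annual_turnover_aud=0))
```

    Note that the predicate never looks at how many people an entity holds data on – the gap the turnover threshold creates.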

    Now, a government excusing itself from the legal obligations it places on others is nothing new. Plenty of countries do it, although not all: in the UK, for example, the government also has to reveal its data security shortcomings, be it the National Health Service, members of the judiciary, etc.

    The USA has also done the same. The Department of Veterans Affairs is constantly embroiled in hacks and other breaches. Likewise, other US state and federal departments have gone public with data breaches.

    But then again, not all countries follow the same level of transparency. So, Australia can be excused if it feels like not leading by example. It will be in excellent company either way.  

    Turnover of $3 Million

    However, one has to take exception to not covering small businesses that make less than $3 million in a year. Hard-working, financially-pressed mom-and-pop stores should be given a break; anyone knows that, when hacked, doing right by a data breach law can be expensive and time-consuming. Even Fortune 500 companies have problems with it, and they have money and personnel to spare for such things. (Well, not really – but they easily find the money and personnel to take care of it).

    But by excluding small businesses, there is the tacit implication that they couldn't be embroiled in a huge data breach, especially if they're not making much in the way of sales. What if you're a "successful" internet startup financing your venture on borrowed money? In that case, sales figures could be $0. The employee count could be less than twenty, which coincides with the definition of a small business. But your customer base is a gazillion.

    A breach of this business's customer database would be tremendous. (For example, Instagram had 13 employees when it was acquired by Facebook and, if memory serves, had zero dollars in sales because it was still funding itself via venture capital. Monetization didn't come until later).

    Under the circumstances, the Privacy Act would not apply to would-be Australian Instagrams (Instagrammers? Instagrammies?). Shouldn't an exception be made for such a small business?

    It seems that a clause making notification depend on the number of people affected by a breach should have been included (or kept) before the law was approved.


  • Children's Medical Center of Dallas Pays $3.2 Million To Settle HIPAA Violations

    The Children's Medical Center of Dallas (Children's) recently settled with the US Department of Health and Human Services (HHS) over multiple failures to encrypt sensitive data on mobile devices. The settlement – $3.2 million – is quite the figure, as is the timeline involved: it looks like an investigation could have been started as early as July 5, 2013, and a final resolution was not reached until February 1, 2017.

    Multiple Failures Over the Years

    As the HHS complaint shows, Children's had a number of data security breaches over the years.
    • Nov 2009 – loss of a BlackBerry. 3,800 individuals affected.
    • Dec 2010 – loss of an iPod. 22 individuals affected.
    • Apr 2013 – loss of a laptop. 2,462 individuals affected.
    But it's not the number of data breaches Children's has had over the years that HHS takes exception to. Rather, it's the fact that Children's knew it had a ticking bomb on its hands and did nothing to rectify the situation… even as the bombs went off time and again over the years. The need for better security was brought to Children's attention numerous times:
    • Strategic Management Systems Gap Analysis and Assessment, February 2007
    • PwC Analysis, August 2008.
    • Office of the Inspector General, September 2012.
    You'd imagine that a major hospital repeatedly advised to secure its devices (and, more specifically, the data on them) would do so as soon as possible. Instead, it waited until "at least April 9, 2013." Incidentally, that's a little after the HHS's final Omnibus Rule became effective on March 26, 2013. As far as I can tell, Children's never had a problem after April 2013.

    Interim Rules are Rules, Too

    Data security has always mattered under HIPAA. That almost no one really paid attention to it for nearly twenty years just goes to show how important HITECH was in forcing hospitals, clinics, and other medical practices to take it seriously.
    What really made people sit up and take notice was the 2011 fine of Massachusetts General Hospital. MGH paid $1 million to settle with the HHS over paperwork left on the subway – paperwork affecting fewer than 200 patients. And while all of this took place well before the Final Rule came into effect, monetary penalties had quite recently made it into the Interim Rules. MGH served as a preview of things to come: the HHS meant business.
    And it worked. So many covered entities started looking into encryption and other data security technologies that it was like Christmas had come early for IT companies that specialized in the medical sector.
    I imagine that penalty was on the minds of Children's managers when they suddenly decided to start encrypting their data in 2013; the clock was ticking, and they didn't exactly have a stellar record when it came to not losing stuff. For its dallying, the hospital earned the fifth-largest monetary penalty since HHS started fining people.

    Security Issues Still Going Strong

    If I were a betting man, I would say that Children's will have plenty of company going forward. Unencrypted electronic devices that store protected health information are still getting lost today. With so many options for safeguarding patient data, it boggles the mind that this is still an issue.  


