
AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Leading Self-Encrypting Drives Compromised, Patched

    Earlier this week, security researchers revealed that certain SEDs (self-encrypting drives) sold by some of the leading brands in the consumer data storage industry had flaws in their full disk encryption.

    Bad Implementation

    One of the easiest ways to protect one's data is to use full disk encryption (FDE). As the name implies, FDE encrypts the entire content of a disk. This approach to protecting files ensures that nothing is overlooked: temp files, cached files, files erroneously saved to a folder that was not designated to be encrypted, etc.
    There is a downside to full disk encryption: it can slow down the read and write speeds of a disk drive, be it a traditional hard-disk drive or the faster solid-state drive (SSD). In order for a computer user to work with the encrypted data, it must be decrypted first. This extra step can represent a slowdown of 10% to 20%. Not the best news if you invested in SSDs for the bump up in read/write speeds.
    The downside mentioned above, however, mostly applies when software-based FDE is used; that is, when a software program, such as Microsoft's BitLocker, encrypts the disk. For SEDs, the "self-encrypting" portion of their name comes from the fact that an independent chip for encrypting and decrypting data is built into the storage device. That means there is no performance impact when reading and writing data. It does mean, however, that you've got a new point of failure when it comes to data security. If the chip is not secure enough, it could lead to a data breach.
    The researchers were able to extract the encrypted information by modifying how these chips behave. It was hard, time-consuming work, but they figured out how to bypass the encryption entirely. In certain instances, they found that the data wasn't encrypted at all due to a misconfiguration. You can read the details in the researchers' paper.
    If you read the paper, you'll notice that the data hack is not for the faint of heart. Certain security professionals have decried the incompetence in how the SEDs' encryption was implemented – and truth be told, they are right; some of these workarounds are very Wile E. Coyote – but finding these flaws would have been nearly impossible for mere mortals, non-professionals, and amateur hackers.
    Indeed, it's quite telling that it took academic researchers to shine the light on the issue.

    BitLocker "Affected" As Well

    Oddly enough, BitLocker, arguably the most deployed full disk encryption program in the world today, was affected by the SED snafu. How, you may ask, seeing that BitLocker is software-based while the security issue affects hardware-based encryption?
    By default, when disk encryption is turned on for a drive that identifies itself as an SED, BitLocker hands the reins over to the drive's built-in encryption. On the surface, deferring to the SED encryption makes sense. People don't care how their data is encrypted as long as it is encrypted, and foregoing software-based encryption means there is no performance hit. It appears to be win-win.
    (There is a group policy setting – "Configure use of hardware-based encryption" under the BitLocker administrative templates – to override this behavior and force software-based encryption. Security professionals recommend that this setting be used going forward. Being security professionals, it makes sense they'd place more weight on security than performance.)
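    For admins who want to check which volumes fell into this default, below is a minimal Python sketch (Windows-only) that shells out to the built-in manage-bde tool and flags volumes reporting a hardware-based encryption method. The parsed strings ("Volume", "Encryption Method", "Hardware") are assumptions that may need adjusting per Windows version and locale, and the tool must be run from an elevated prompt.

        # Minimal sketch: list BitLocker volumes that defer to the drive's own
        # hardware encryption (the SED scenario described above).
        # Assumes Windows' built-in manage-bde CLI; the matched output strings
        # are assumptions and may differ by OS version/locale. Run elevated.
        import subprocess

        def hardware_encrypted_volumes():
            output = subprocess.run(
                ["manage-bde", "-status"],
                capture_output=True, text=True, check=True,
            ).stdout

            flagged, current = [], None
            for raw in output.splitlines():
                line = raw.strip()
                if line.startswith("Volume "):
                    current = line.split()[1]  # e.g. "C:"
                elif line.startswith("Encryption Method") and "Hardware" in line:
                    flagged.append(current)
            return flagged

        if __name__ == "__main__":
            for vol in hardware_encrypted_volumes():
                print(f"{vol} relies on the SED's encryption; consider re-encrypting in software")

    If a volume shows up here, decrypting it and re-encrypting with the group policy above applied forces BitLocker to use its software-based encryption instead.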

    Trade-Off: Speed vs. Transparency

    Relying on hardware-based encryption, however, means that you're relying on Samsung, Crucial, and other hardware manufacturers to implement encryption correctly. Have they? There isn't an easy way to know because they're not transparent about the design and implementation. The revealed vulnerabilities could be all there is to it… or could represent the tip of the iceberg.
    Hence the recommendation by the pros that software-based encryption be used: any solution that is worth its salt will ask NIST to validate it. Sure, the process is long and expensive; however, the ensuing uptick in business more than makes up for it. While NIST's stamp of approval does not guarantee perfect security (possibly not even adequate security), it does rule out terrible implementations like the ones witnessed this week. And even when a solution is not validated, vendors that are transparent about their design allow for outside examination. If something is glaringly wrong, it will be found and noted by researchers.
    All of this being said, zdnet.com confirms that the affected companies have either released firmware patches for the vulnerabilities in question or are working on them. Apply those as soon as possible, and rest easy (or easier) knowing that your data will be safer for it.
     
    Related Articles and Sites:
    https://www.zdnet.com/article/flaws-in-self-encrypting-ssds-let-attackers-bypass-disk-encryption/
     
  • Anthem, Yahoo To Shell Out Additional Money Over Data Breaches

    This week saw additional headaches for two US companies involved in major data breaches (we're talking top ten in US history to date). Yahoo, now a part of Verizon, has agreed to settle a lawsuit for $50 million. In addition, Anthem, Inc. – the Indiana-based BlueCross BlueShield insurance company – has agreed to settle HIPAA violations by paying a $16 million monetary penalty to the US Department of Health and Human Services (HHS).
    Earlier this year, Yahoo's "other arm" – now known as Altaba, a separate entity from Verizon – settled with the SEC for $35 million. Likewise, just a couple of months ago, Anthem settled a lawsuit for $115 million.
    The final tally so far for the Yahoo breach: $85 million in settlements, over $30 million in lawyer fees (for the plaintiffs), and a $350 million haircut when Verizon acquired the company. That's a total of $465+ million.
    For Anthem: a total of $131 million ($115 million plus $16 million).
    And let's not forget that these figures do not include what each company paid for their own defense (the numbers certainly must be in the millions).
    Conclusion: data breaches now suck not just for the individuals whose data is exposed, but for the breached companies as well. It wasn't always like that.

    Historical Inflection Point?

    Ten years ago, a lawsuit centered on a data breach would have been tossed from court. Today, that hardly seems to be the case… although exceptions do exist, like Equifax. (Still, it's only been a little over one year since that particular data breach. Yahoo's and Anthem's travails took years to be resolved, and with Equifax's data breach being in the top five information security incidents of all time, it's still too early to tell whether the credit-reporting agency will join the two companies' dubious circle of honor.)
    It may, perhaps, be too early to declare that the days of conveniently ignoring data security – in the belief that there will be little to no blowback when a breach happens – are really over. Still, there are many signs that this is a watershed year, including:
    • People are leaving social media platforms or decreasing their use, mostly due to privacy and data security concerns.
    • Over the course of ten years, pretty much everyone has been affected by a data breach. Chances are that everyone knows someone who has been affected quite negatively. Even judges who in the past couldn't see what the big deal was. Nothing like hitting close to home to understand what's what.
    • Greater and greater fines are being imposed for data breaches, a direct result of continuing and ever-expanding information security incidents.
    • The EU this year enacted some of the strongest privacy laws yet (the GDPR).

     

    Related Articles and Sites:
    https://www.databreaches.net/anthem-pays-ocr-16-million-in-record-hipaa-settlement-following-largest-u-s-health-data-breach-in-history/
    https://www.independent.ie/world-news/yahoo-agrees-to-pay-50m-dollars-for-massive-security-breach-37451711.html

     
  • Google and Google+ : Data Breach or Not?

    This week's revelation that Google covered up a potential data breach connected to Google+, the little-used Facebook competitor, has spilled a lot of digital ink. Unsurprisingly, most of it is unsympathetic to Google. One exception was an article at theverge.com, which noted that "the breach that killed Google+ wasn't a breach at all."
    And, on the face of it, it's true. As far as Google knows, the "data breach" (in reality, a bug that could have allowed a data breach) was never exploited. Its logs show nothing. And, in order to make use of this exploit, a person had to request permission for access to an API (Application Programming Interface), which only 432 people did. In the end, Google estimates that 500,000 people could have been affected… if there had been an actual data breach. We're talking theoretical potential here, not an after-the-fact reality.
    And the most damning indication that this is not a data breach is the fact that the data that could have been exposed by the API bug wouldn't have triggered a breach notification anyway. If you look through the list of data that could have been compromised, you'll see that it wouldn't qualify as sensitive or personal information under US data breach notification laws. Full names, email addresses, a profile pic? Unauthorized access to these does not merit a notification under any of the 50 US state laws dealing with data breaches.
    Again, on the face of it, it looks like no big deal. If you dig into the details, however, you'll see some problems. First off, Google can't really know what happened because they only keep two weeks' worth of logs; the bug, on the other hand, went unfixed for over two years. Who knows what happened prior to the patch, outside of the two-week period?
    Second, the fact that fewer than 450 people applied for the API is little comfort when you realize that the Facebook Cambridge Analytica situation required only one renegade API user. (Perhaps we could take some comfort in the fact that "only" 500,000 people could have been affected, but we don't really know where that figure came from. Is it based on the severely curtailed log data? Or the total connections that the API requesters currently have? What if people dropped connections over the years, thus depressing the figure?)
    Still, despite the above, it looks like this data breach is not really a data breach. Facebook said the same self-serving thing about the Cambridge Analytica situation… but, in that case, data actually was exploited in an unauthorized manner.

    EU is not US

    As noted above, the data that was accessible via the bug is not covered under data breach laws, at least not in the US. The US does not have an all-encompassing federal law; it's all done at the state level. And it was only earlier this year that the 50th and final state succumbed to the times and passed a data breach notification law (thank you, Alabama). Under these laws, what counts as a "reportable" data breach is strictly defined. When you look at the data at the center of this incident, it's obvious that it doesn't really pass the "personal information" test: without more "substantial" information like SSNs, driver's license numbers, financial data, etc., Google is in the clear.
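    To make the test concrete, here is a toy Python sketch of the triage logic a typical state statute implies: notification is generally triggered only when a name is exposed together with a sensitive element. The field names and the rule are illustrative simplifications of our own, not drawn from any particular statute – real statutes vary state by state.

        # Toy sketch of the "personal information" test in a typical US state
        # breach-notification statute. Field names are illustrative; actual
        # definitions vary state by state.
        SENSITIVE_ELEMENTS = {"ssn", "drivers_license", "financial_account"}

        def notification_required(exposed_fields):
            # Most statutes require a name PLUS a sensitive element.
            return "full_name" in exposed_fields and bool(set(exposed_fields) & SENSITIVE_ELEMENTS)

        # The Google+ bug exposed roughly this kind of data -- no trigger:
        print(notification_required({"full_name", "email", "profile_photo"}))  # False
        # A name plus an SSN, on the other hand, would trigger notification:
        print(notification_required({"full_name", "ssn"}))                     # True

    Run against the Google+ field list, the test comes up empty, which is the whole reason Google had no US notification duty.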
    In comparison, the EU has stronger privacy laws, but the bug was found before those laws were recently strengthened by the GDPR. Still, a case could be made that the potential breach required public notification within the framework of the older European laws. For example, the UK (before Brexit) gave this example of what constitutes "personal data" under the EU's Data Protection Directive:
    Information may be recorded about the operation of a piece of machinery (say, a biscuit-making machine). If the information is recorded to monitor the efficiency of the machine, it is unlikely to be personal data…. However, if the information is recorded to monitor the productivity of the employee who operates the machine (and his annual bonus depends on achieving a certain level of productivity), the information about the operation of the machine will be personal data about the individual employee who operates it. [section 7.2, personal_data_flowchart_v1_with_preface001.pdf]
    As you can see, within the EU, there is a gray area as to what personal data is. It could be that Google is not out of the woods yet, legally speaking.

    Bugs are Identified and Fixed All the Time

    As the theverge.com article notes:
    There is a real case against disclosing this kind of bug, although it’s not quite as convincing in retrospect. All systems have vulnerabilities, so the only good security strategy is to be constantly finding and fixing them. As a result, the most secure software will be the one that’s discovering and patching the most bugs, even if that might seem counterintuitive from the outside. Requiring companies to publicly report each bug could be a perverse incentive, punishing the products that do the most to protect their users.
    Quite an accurate point. In addition, it should be noted that this literally is a computer bug and nothing more, because it was discovered in-house. If a third party had found the security oversight and reported it to Google, it would have been a data breach: that person, as an unauthorized party, would have had to illegally access the data to identify the bug as such.
    In this particular case… well, you tasked someone with finding bugs and that person did find it. That's not a data breach. That's a company doing things right.  

    Hush, Hush. Sub rosa. Mum's the Word

    But then, why the secrecy surrounding it? Supposedly, there was an internal debate over whether to go public with the bug, a debate that included references to Cambridge Analytica. If Google had been in the clear, the discussion would have been unnecessary.
    Or would it? With Facebook's Cambridge Analytica fiasco dominating the headlines at the time, Google couldn't have relished the idea of announcing an incident that, in theory, closely mirrors Facebook's – but led to a different data security outcome. Thus, it is unsurprising that the bug, and the debate surrounding it, was kept quiet. (It was a cover-up, some say. But again, Google didn't technically have a data breach. It truly was the company's prerogative whether to go public.)
    Now that the world knows of it, though, it has led to the same outcome: a global scandal; governments in the EU and US looking into the situation; another tech giant's propriety and priorities questioned (arguably, a tech giant that was held in higher esteem than FB); the alienation and angering of one's user base.
    Going public with the bug could be seen as a "damned if you do, damned if you don't" sort of situation. But, when considering what Google's been up to lately – a censored search engine project for China, quietly dropping "don't be evil" from its code of conduct (see the Related Sites below) – you've got to wonder what is really driving the company in Mountain View.
     
    Related Sites:
    https://developers.google.com/+/web/api/rest/latest/people
    https://www.theverge.com/2018/10/9/17957312/google-plus-vulnerability-privacy-breach-law
    https://www.theguardian.com/technology/2018/oct/08/google-plus-security-breach-wall-street-journal
    https://www.zdnet.com/article/senators-demand-google-hand-over-internal-memo-urging-google-cover-up/
    https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-china-search-engine-censorship-leak-project-dragonfly-state-government-a8577241.html
    https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393
     
  • Equifax Already Had a Data Breach Before It Was Hacked In 2017

    According to wsj.com (paywalled), Equifax had already suffered a data breach before the data breach that made the company famous around the world. In 2015, two years before the hack that started with a bang and ended with less than a whimper, "Chinese spies" made off with "thousands of pages of proprietary information" that included code, HR files, and manuals.
    For many, the use of the word spy in this context will set off visions of Chinese Matt Damons pulling a The Departed (or as they say in that neck of the woods, "Dee Dee-paaaah-ted"). In actuality, the breach appears to be unremarkably mundane: people being bribed with jobs and salary increases to walk out with proprietary information. It's the kind of thing that happens all the time. For example, that's Google's beef with Uber.  

    Why Are We Hearing About It Now?

    The US has a fractured mishmash of laws and regulations when it comes to data breaches, information security, and data privacy, instead of one comprehensive law. What this means is that Equifax could legally keep its 2015 breach out of the public eye because it didn't involve personal information – at least, not in the way we usually think of it.
    HR files must, by definition, include personal info. However, these would be employee records, not consumer records… and the laws and regulations that have been passed so far, for the most part, involve consumer records or a variation thereof. It's the reason why, for example, HIPAA kicks in when patient data is put at risk but not when nurse and doctor info is stolen.
    As mentioned before, the breach was not made public earlier. This does not mean, however, that Equifax just sat on it. They did contact the FBI and they did carry out an investigation. That the company decided not to go public is understandable and entirely within their legal right. It should also be noted that going public in this instance wouldn't have helped out anyone: the message would essentially be "your employees could steal from you!!" Everyone knows this already. It might have mattered more if, for example, the message was "change your default passwords immediately!"
    But, in light of the hack that occurred two years later, it does raise questions.  

    Lessons Not Learned

    Earlier this month, the US Government Accountability Office released a report on the 2017 Equifax data breach, aka The Big One. Per fortune.com, the report:
    summarizes an array of errors inside the company, largely relating to a failure to use well-known security best practices and a lack of internal controls and routine security reviews.
    "Lack of internal controls and routine security reviews." You'd think that a company that suffered a guy walking off with the company's secret sauce to a potential competitor would have done something regarding internal controls and routine security reviews. That these were lacking in the two years bookmarked by the two data breaches speaks volumes of what Equifax thought was important.
    Thankfully, it looks like perhaps the credit reporting agency is finally taking data security seriously. But then, with everyone looking and keeping track of what they're doing, it'd be a bad idea not to.
     
    Related Articles and Sites:
    https://www.wsj.com/articles/before-it-was-hacked-equifax-had-a-different-fear-chinese-spying-1536768305
    http://fortune.com/2018/09/07/equifax-data-breach-one-year-anniversary/
     
  • Anthem Data Breach Settled for $115M, Despite Having "Reasonable" Security

    Last week, a federal judge approved a settlement – the largest to date when it comes to data breaches – that is historic and yet falls flat: Anthem, the Indianapolis-based insurer, has agreed to pay a total of $115 million to settle all charges related to its 2015 data breach.

    The breach, strongly believed to have been perpetrated by actors with ties to the Chinese government, began with a phishing attack. By the time the electronic dust settled, the information of 79 million people (including 12 million minors) had been stolen, including names, birth dates, medical IDs and/or Social Security numbers, street addresses, and email addresses.

    Needless to say, this information can be used to perpetrate all types of fraud.

    And while the judge overseeing the case has found the settlement to be "fair, adequate, and reasonable," critics have noted that the victims only get $51 million of the total settlement, which amounts to 65 cents per person. The rest goes to lawyers and consultants.

    What's surprising about this story is not that the victims are getting shafted; or that the lawyers are getting an ethically dubious portion of the settlement; or even that Anthem settled out of court, a once unthinkable action. Then again, courts are warming up to the idea that victims of a data breach have suffered an injury that is redressable by law. (Chances are that if this lawsuit had been filed ten years ago, the defending corporation would have successfully argued to have it tossed from court.)

    Reasonable Security

    What is surprising is that all of this happened despite Anthem having had what experts called "reasonable" security measures at the time of the breach.

    What exactly is "reasonable" security? Is it tantamount to "good" security? Or perhaps it doesn't reach the level of good, but it's better than "bad" security, which in turn is better than no security? And its converse, unreasonable security – what would that look like?

    What constitutes "reasonable" security is not fleshed out, anywhere, in detail. But we do know this: per the settlement, Anthem has to triple its data security budget. Which is weird because (a) if you have to treble your security budget, maybe it wasn't reasonable to begin with? And (b) the flashpoint of the data breach – clicking on a phishing email that surreptitiously installed malware, which may or may not have been flagged by antivirus software – can hardly be prevented by spending more money.

    But even weirder is this:

    "The [California Department of Insurance examination] team noted Anthem's exploitable vulnerabilities, worked with Anthem to develop a plan to address those vulnerabilities, and conducted a penetration test exercise to validate the strength of Anthem's corrective measures," the department said in its statement. "As a result, the team found Anthem's improvements to its cybersecurity protocols and planned improvements were reasonable." [healthitsecurity.com]

    There's that "reasonable" word again. The company had reasonable security, got hacked, corrective measures were taken, and now the improvements are reasonable?

    If you're being hacked by what could potentially be the intelligence arm of a foreign state, perhaps you'd like something that's more than reasonable. Hopefully, the choice of words used to describe what was implemented does not accurately reflect the effort, planning, and technical expertise that actually went into it.

    At the same time, it's hard to ignore the fact that data breaches like this are the perfect moral hazard:

    • The information that is stolen is tied to individuals. Any misuse of the data will affect these people, not the company.
    • A rotating cast of executives means that you don't necessarily plan for the long term. Especially if you're paid very well for being fired because of a data breach.
    • Financial penalties become meaningless if (a) they can be used to offset taxes, (b) happen to be a drop in the bucket (Anthem's 2017 revenue was $90 billion), and (c) the cost can be passed on to customers.

     

    Related Articles and Sites:
    https://healthitsecurity.com/news/judge-gives-final-ok-to-115m-anthem-data-breach-settlement
    https://www.govinfosecurity.com/interviews/analysis-anthem-data-breach-settlement-i-4083
    https://www.ibj.com/articles/70144-anthem-data-breach-judge-oks-huge-fee-award-but-not-as-much-as-attorneys-wanted
    https://biglawbusiness.com/anthem-115-million-data-breach-settlement-approved-by-judge/

     
  • Survey Says Data Breaches Result In Long-Term Negative Impact

    According to darkreading.com, a recent survey commissioned by CA Technologies has shown that there can be serious repercussions for companies that fall victim to data breaches. If the survey's conclusions are to be believed, about half of the organizations that were involved in a data breach see "long-term negative effects on both consumer trust (50%) and business results (47%)." Which is surprising, since the general feeling is that businesses involved in a data breach are not penalized at an appropriate level.
    For example, Equifax revealed a history-making data breach almost one year ago. Its stock price took a nose-dive, people were fired, financial penalties were proclaimed, people complained, lawsuits were filed, etc. Today, the stock price has recovered quite a bit from its one-year lows. Lawsuits are being battled in court, with the very real possibility of a summary dismissal; if not, the company will probably settle for an amount that will be a drop in the bucket for a company its size. The proclaimed penalties were withdrawn in exchange for Equifax upping its security. People don't complain so much as grumble sotto voce. Year-over-year revenue is up at Equifax.
    All in all, it looks like Equifax has weathered this storm quite nicely. Such has been the basic pattern for major companies involved in data breaches since at least ten years ago.
    Only once in a blue moon will you hear of a company that was so adversely impacted by a data breach that it made other companies sit up and take notice. Such instances are certainly few and far between.

    Survey Says…

    According to ca.com, among other things:
    • 48% - Consumers who stopped using the services of at least one organization due to a data breach.
    • 59% - Businesses that reported moderate to strong long-term negative impact to business results after a breach.
    • 86% - Consumers that prefer security over convenience.
    These figures are curious, especially the last one. It's known that people don't necessarily tell the truth on surveys, but the real issue in this instance is that a survey is but a snapshot in time. One need not doubt that nearly half the people surveyed stopped being a customer of a breached entity; however, it would be more informative to know how long they've been boycotting a company – one day, one week, one month, one year? – and whether they're still doing so when followed up some time later. (It should be noted that the survey did not define the length of "long-term" but one assumes it's longer than one year, in keeping with accounting terminology).
    Likewise for the figure on businesses negatively affected by a data breach. Equifax, for example, would have claimed that they were seriously affected if surveyed three months after their public outing; however, their answer would have been different one year later. And five years from now? Who knows?
    And then you have that counterintuitive 86% figure: a clear majority of people prefer security over convenience? That certainly is news, especially considering that people's actions have not supported such a conclusion over the past decade.  

    Strong Laws and Enforcement

    The gist of the survey's concluding remarks is that companies need to improve their data security. (And, also, that companies in the business of transacting personal information need to be more transparent about it. This was, after all, the year of the Cambridge Analytica scandal.) Will companies improve their data security? Can they? The answer is yes.
    But not because of consumer demand.
    Consumers of goods and services have been raising hell over data breaches for a long time now. Data breach-related lawsuits that have been filed worldwide probably number in the thousands. Public spankings and shamings exceed that number. All of it to no effect. The only thing that's been shown to encourage attention to security is the passage and enforcement of laws.
    Because each sovereign state approaches data breach ramifications in its own way, the world has become a living laboratory, one that reveals what does and doesn't work when it comes to increasing data security and curbing data abuses.
    Simply put, companies respond to financial penalties, as can be witnessed in Silicon Valley's behavior toward China and Europe, or in how the US healthcare sector significantly increased its data security only after regulators started hitting it with million-dollar fines.
     
    Related Articles and Sites:
    https://www.darkreading.com/risk/48--of-customers-avoid-services-post-data-breach/d/d-id/1332452
    https://www.ca.com/us/company/newsroom/press-releases/2018/ca-technologies-study-reveals-significant-differences-in-perceptions-on-state-of-digital-trust.html
    https://www.ca.com/us/collateral/white-papers/the-global-state-of-online-digital-trust.html
     