AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.

April 2014 - Posts

  • Data Breaches: UK ICO Declines To Investigate Supposed Santander Email Breach

    The Information Commissioner's Office in the United Kingdom has declined to investigate Santander, the Spanish banking group, for a purported data breach.  According to theregister.co.uk, people who've set up email addresses used strictly for correspondence with Santander are being spammed with junk mail, lending credence to the theory that the bank's database was breached.

    The ICO, however, notes that there isn't enough evidence of a data breach.  Santander, for their part, have only stated that they are conducting an investigation into the allegations but have not uncovered a data breach to date.  That statement, however, was made back in December 2013.

    The Evidence

    It wasn't only Santander that was affected.  According to theregister.co.uk, email addresses registered with the UK Government Gateway and NatWest FastPay were also affected.

    Some of the spam emails include the recipient's last name – information that is not present in the email address itself – indicating that a database tying email addresses to personal information must have been breached.  (The other, unsubstantiated accusation is that the information was sold by the bank to third parties.)

    The Counterargument

    The problem is that, of course, none of this is necessarily smoking-gun evidence.  It's not unusual for people to set up a free email address intending to use it for one thing but to end up using it for something else.  I do it myself: I don't appreciate the spam that comes from legitimate businesses I sign up with, and I'd rather keep my personal inbox uncluttered without having to set up filters and whatnot.

    Then, there is the possibility that a third party was breached.  For example, Santander may not have sold the information to a third party, but EULAs tend to contain a clause allowing information to be shared with partners.  What if a partner was breached?  Of course, under most legal statutes, Santander would be on the hook, but still...Santander is not really the one that was breached.  If you're looking for a remedy, you won't find it by quizzing and prodding Santander.

    Last but not least, there is always the chance that the email provider was hacked.  Of course, this scenario is less likely under these specific circumstances seeing how all the complaints have one thing in common: Santander.

    Is the ICO Capitulating?

    Has the Commissioner decided to give Santander a break...or worse, is it bowing to pressure?  I don't think so.  The evidence – a unique email address combined with a last name – is quite tenuous.  If that were enough to identify the "breachee," then an argument could be made that an IP address is enough to identify an internet user; we all know the latter is not quite right.  Neither is the former.

    Related Articles and Sites:
    http://www.theregister.co.uk/2014/03/21/santander_email_spam_mystery/
    http://www.databreaches.net/uk-ico-decides-against-probe-of-santander-email-spam-scammers/

  • HIPAA Desktop Encryption: Sutherland Healthcare Solutions Breach Affects 340 K, Reward Offered

    Sutherland Healthcare Solutions (SHS), a billing contractor for Los Angeles County, has offered a reward of $25,000 for the return of computers stolen from its offices.  The data breach was initially reported as affecting approximately 170,000 people; the number has since been revised to 338,700.  All of this because HIPAA desktop encryption was not used to properly protect PHI.

    Eight Desktop Computers Stolen.  What About HIPAA?

    Previous reports on the SHS breach were vague on the details.  Further reporting two months down the line shows that the computers stolen from SHS offices are "computer towers," more specifically HP Pro 3400s.  According to the specs, this particular computer measures 368 x 165 x 389 mm (14.5 x 6.5 x 15.3 inches) and weighs a little under 16 pounds.  In other words, it's the size of a big encyclopedia volume.

    Installing HIPAA data encryption software is a cinch.  And, the use of data encryption provides safe harbor from HIPAA's Breach Notification Rule.  So, why were these computers not protected?

    The argument is often made that desktop computers do not need encryption because (a) HIPAA technically doesn't require the use of encryption and (b) desktop computers are not easily stolen.  Furthermore, such a theft would supposedly be so easy to spot that it could be stopped while it was happening.

    Except that that is not how it usually unfolds.  The article covering the breach at latimes.com shows a man who's suspected of stealing the computers.  In the individual frames of the surveillance footage that were made available, he's holding a black bag that was undoubtedly used to carry the desktops out, one by one.

    He probably made at least eight trips – earlier reports noted that computer monitors were also stolen – meaning there were at least eight individual instances where, in theory, he could have been stopped.  Anecdote may not be proof, but instances where desktop computers are stolen from offices are so common that the myth that "desktop computers cannot be easily stolen" should die a fiery death.

    Is Encryption Really "Not Required"?

    Now that we've covered aspect (b) of the argument, let's turn our eyes to aspect (a) of the "desktop computers do not need encryption" argument.

    Is encryption really not required under HIPAA rules?  Technically, no.  Under the HIPAA Security Rule, the use of encryption is an "addressable" specification, not a required one.  However, "addressable" doesn't mean what a layperson might assume.  Under HIPAA, "addressable" really means "it is required unless you can prove that something else will work just as well."

    Consider this answer found at hhs.gov on whether encryption is mandatory under the Security Rule (my emphases):
    No. The final Security Rule made the use of encryption an addressable implementation specification...and must therefore be implemented if, after a risk assessment, the entity has determined that the specification is a reasonable and appropriate safeguard in its risk management of the confidentiality, integrity and availability of e-PHI. If the entity decides that the addressable implementation specification is not reasonable and appropriate, it must document that determination and implement an equivalent alternative measure, presuming that the alternative is reasonable and appropriate. If the standard can otherwise be met, the covered entity may choose to not implement the implementation specification or any equivalent alternative measure and document the rationale for this decision.
    As you can see from the above, encryption is not required...but then you need to implement an "equivalent alternative measure" to secure the data.  People confuse "encryption is not mandatory" with "data security is not mandatory."  The latter is required, the former not...but, then again, encryption is effectively required if one wants to take advantage of the safe harbor under the Breach Notification Rule, since only encryption and data destruction qualify.
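    To spell out the decision flow the Rule describes, here is a toy sketch in code; the function and argument names are my own shorthand for the quoted language, not anything from HHS.

    # Toy sketch of the "addressable" decision flow quoted above; the
    # names are my own shorthand, not HHS terminology.
    def addressable_encryption_decision(encryption_reasonable: bool,
                                        alternative_reasonable: bool,
                                        standard_otherwise_met: bool) -> str:
        if encryption_reasonable:
            # The risk assessment found encryption reasonable and
            # appropriate, so it must be implemented.
            return "implement encryption"
        if alternative_reasonable:
            # Document the determination and implement an equivalent
            # alternative measure.
            return "document decision + implement equivalent alternative"
        if standard_otherwise_met:
            # May implement neither, but must document the rationale.
            return "document rationale for implementing neither"
        return "not compliant: the underlying standard is not met"

    Note that there is no branch that allows a covered entity to simply skip encryption without documenting anything; that is the part of "addressable" that tends to get lost.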

    Related Articles and Sites:
    http://www.latimes.com/local/lanow/la-me-ln-sutherland-data-breach-
    http://losangeles.cbslocal.com/2014/04/03/25k-reward-for-stolen-computers-containing-patients-medical-records/
  • Kentucky Data Breach Law Signed

    The number of US states that haven't enacted a data protection law has dropped to three.  According to pogowasright.org, Kentucky is the latest state to sign into law a bill aimed at protecting the personal data of Kentuckians.  Like many similar state laws, it provides safe harbor from reporting data breaches to consumers when data encryption is used.

    Safe Harbor, Personal Information Defined

    Like many state laws concerning data security and data privacy, the law makes exceptions for information protected with encryption software.  First, a "breach of the security system" is defined as:
    unauthorized acquisition of unencrypted and unredacted computerized data that compromises the security, confidentiality, or integrity of personally identifiable information maintained by the information holder as part of a database regarding multiple individuals that actually causes, or leads the information holder to reasonably believe has caused or will cause, identity theft or fraud against any resident of the Commonwealth of Kentucky
    The one twist I can immediately make out is that the law requires the breach to be directly linked to ID theft or fraud, or to lead the "information holder to reasonably believe" it will happen.  I can understand the need to set limits – after all, most data breaches fizzle out with nothing happening – but the latter requirement literally puts the fox in charge of the hen house.  Wouldn't it be in most information holders' interest to believe that ID theft is not in the cards when data is lost or stolen?

    Second, the law clearly states that a breach of unencrypted data must be followed by notification "in the most expedient time possible and without unreasonable delay."  The logical conclusion is that encrypted information does not require a data breach notification (which is only natural, given how a "breach of the security system" is defined).

    Student Data Also Protected

    Being at the tail-end of the breach legislation game has its own rewards.  The Kentucky legislature has made it a point to ensure that student data is protected.  Among other things, it is now illegal to "process student data for any purpose other than providing, improving, developing, or maintaining the integrity of its cloud computing service."

    This is no doubt directed at certain services that acknowledge data-mining student information for profit, financial or otherwise.

    No Breach Law, More Expensive Insurance Policies

    An interesting factoid I learned while reading about Kentucky's data breach law, courtesy of whas11.com:
    insurance companies were charging Kentuckians more for cyber-security policies in the absence of any state laws requiring such notification after incidents such as the Target and Neiman Marcus data breaches.
    I cannot even begin to fathom why this would be so, but apparently it's a thing.  Assuming there is a causal link with legislation, I guess this is another reason why the US should have a federal data breach law.

    Related Articles and Sites:
    http://www.pogowasright.org/ky-governor-beshear-signs-data-protection-bill-into-law/
    http://www.whas11.com/news/politics/Beshear-signs-data-protection-bill-into-law-254797181.html
    http://www.lrc.ky.gov/record/14RS/hb232.htm
  • Canada Digital Privacy Act: $100,000 Fine For Not Reporting Data Breaches

    Canada is set to introduce a new bill that would make it illegal not to report data breaches to the people affected by them, or to fail to report said breaches to the Privacy Commissioner.  The new law, called the Digital Privacy Act, will be a much needed amendment to PIPEDA (the Personal Information Protection and Electronic Documents Act), which already provides guidelines on things like the use of data encryption for protecting personal data.

    The big element in this news is that fines of up to $100,000 could be handed out by the Privacy Commissioner's Office.  Where there is a stick, however, you should look for the carrot.

    Encryption Software and Other Security Tools Get Boost

    Data encryption has long been pointed out as a means of protecting personal information from data breaches under PIPEDA.

    Among other things, the Digital Privacy Act appears to reinforce the need to use such tools.  (Does this come as a surprise, though?  We are living in the digital age, after all.)  For example, the following definition is being added to PIPEDA:
    "breach of security safeguards" means the loss of, unauthorized access to or unauthorized disclosure of personal information resulting from a breach of an organization's security safeguards that are referred to in clause 4.7 of Schedule 1 or from a failure to establish those safeguards.
    Why would this be a thumbs-up for full disk encryption, mobile device management, and other forms of data security?  Because these tools are expressly designed to prevent the loss, unauthorized access, or unauthorized disclosure of personal information.  Encryption is a security safeguard (something the original PIPEDA legislation makes abundantly clear).  And because an encrypted computer stays encrypted, the security safeguard remains in place even if, say, a laptop is lost.  Result: not a breach of security safeguards.

    This ties in to the following requirement that is being introduced by the new bill (my emphasis):
    An organization shall report to the Commissioner any breach of security safeguards involving personal information under its control if it is reasonable in the circumstances to believe that the breach creates a real risk of significant harm to an individual.
    With "significant harm" being defined as (my emphasis):
    For the purpose of this section, "significant harm" includes bodily harm, humiliation, damage to reputation or relationships, loss of employment, business or professional opportunities, financial loss, identity theft, negative effects on the credit record and damage to or loss of property.
    The one avant-garde aspect of this Act is that it classifies identity theft and negative effects on a credit record as forms of "harm".  As far as I know, this is the only case in the world so far where this is so.  In the U.S., for example, the effects of a data breach on a person's credit record are not exactly viewed as a cognizable harm.

    Questionable Loophole

    It's not all milk and honey when it comes to the Act, though.  Many people who have experience with "harm threshold" clauses are probably not going to be too crazy about the "creates a real risk of significant harm" portion.  After all, who determines what constitutes a "real risk"?  However, the Canadians have thought of everything:
    The factors that are relevant to determining whether a breach of security safeguards creates a real risk of significant harm to the individual include

    (a) the sensitivity of the personal information involved in the breach;
    (b) the probability that the personal information has been, is being or will be misused; and
    (c) any other prescribed factor.
    The use of encryption software virtually eliminates the real risk of significant harm; if encryption wasn't used, I guess a company could argue its case to the Commissioner.

    Related Articles and Sites:
    http://www.parl.gc.ca/HousePublications/Publication.aspx?Language=E&Mode=1&DocId=6524312&File=24#1
    http://www.itbusiness.ca/news/businesses-could-face-fines-of-100000-per-individual-digital-privacy-act/47931
  • HIPAA Security: Michigan Department of Community Health Data Breach Affects 2,595

    One of the most unfortunate types of data breach cases I come across is the one where PHI encryption for laptops was used but a HIPAA data breach resulted anyway, like the following one at the Michigan Department of Community Health.

    According to the entry at phiprivacy.net, an employee of the State Long Term Care Ombudsman's Office experienced a burglary.  The thief took a laptop computer and a flash drive.  The former was protected with encryption software, as many covered entities have done in light of the final Omnibus Rule.

    The flash drive, however, was not encrypted.  A total of 2,595 people, living and deceased, were affected by this latest PHI breach.

    Personal Data Stolen, Laptop and Drive Not Recovered

    The stolen information included people's names, addresses, dates of birth, SSNs, or Medicaid IDs (not everyone was affected by all of these).  The burglary occurred around January 30, with MDCH learning about it on February 3.  As of April 3, the date of the press release announcing the data breach, the devices had not been recovered.

    And chances are that they won't be.  For the flash drive, the odds of recovery are close to nil.  For the laptop, assuming some other type of laptop security software was employed in addition to the encryption – such as a tracker – the odds of recovery are higher, but can still be pretty low.

    For example, the use of Absolute Software trackers could lead to a 75% recovery rate, if you believe the manufacturer's claims.  The caveats here are that (a) we have absolutely no idea how long it takes to recover a device (is it a day, a week, a month?); (b) it's not foolproof – a Faraday cage-like enclosure, or just taking the laptop into a basement, may be enough to defeat the technology; and (c) recovery falls outside of the safe harbor provisions under HIPAA, which require the use of encryption (or data destruction) to go into effect.

    Still, the technology is much more impressive than conventional tracker software, which generally "tracks" only the last known IP address – laptops don't come with a GPS module – and is not as accurate at pinpointing a device's location.

    Full Disk Encryption Has Blind Spots

    With HHS (and its OCR branch) heavily promoting the use of HIPAA encryption, it's kind of hard for the layperson to understand what went wrong here.  Encryption appears to have been used, but a data breach still occurred.  Yes, the flash drive was at fault, but...wasn't the data encrypted when it was transferred from the computer?

    In order to make sense of what's going on here, one needs to understand the basic concepts of the underlying technology.  There are many different types of encryption.  There is full disk encryption (usually abbreviated FDE), which was probably used to protect the laptop's contents.  As the name implies, full disk encryption encrypts the entire contents of a hard drive.  To be more specific, it encrypts the hard drive itself; because the data is stored on the hard drive, it ends up protected, too.

    This distinction is very important, as it explains why the flash drive's contents were not protected.  Since it's the hard drive that's encrypted and not the data itself, when you copy the data over to another storage device, such as a flash drive, that copy is not encrypted anymore.

    Is there a way to encrypt the data itself?  Absolutely.  There are technologies that do exactly that, such as file encryption.  Generally, it's a less optimal way of protecting an entire device's data, so the two tend to work in tandem: FDE for the laptop, file encryption for any files making it off of the laptop.  This way, one of the top weaknesses of FDE is shored up, as the sketch below illustrates.
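    As a concrete (if simplified) illustration of that tandem, here is a minimal sketch of file encryption for data leaving an FDE-protected machine, using the Python cryptography library; the file names are placeholders, and key handling is deliberately hand-waved.

    # Minimal sketch: encrypt a file's contents before it leaves the
    # FDE-protected disk for a flash drive.  File names are placeholders;
    # a real deployment would keep the key in a managed key store, not
    # sitting next to the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # must be stored somewhere safe
    cipher = Fernet(key)

    with open("patient_records.csv", "rb") as f:
        ciphertext = cipher.encrypt(f.read())

    # Only the encrypted blob ever touches the removable drive.
    with open("records.csv.enc", "wb") as f:
        f.write(ciphertext)

    Had something along these lines been in place, the flash drive would have carried only ciphertext, and the theft would have fallen under HIPAA's safe harbor.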

    Related Articles and Sites:
    http://www.phiprivacy.net/michigan-department-of-community-health-notifying-2595-individuals-of-breach/

  • FTC And Data Breaches: Fandango, Credit Karma Settle Charges

    The site databreaches.net notes in one posting that two companies, Fandango and Credit Karma, have agreed to settle charges with the Federal Trade Commission (FTC).  Both companies were brought to task for misleading consumers – more specifically, for promising to protect customer data and then failing to do so.  Both cases revolved around the use of mobile data encryption (technically, the implementation of Secure Sockets Layer, SSL); however, how they ended up breaching their customers' trust is a bit different.

    Fandango

    Fandango, the company that makes it easy for you to reserve movie tickets via the internet, was accused of "deceptive representations" because its mobile app's privacy policy promised that "[a client's] information is securely stored on [the client's] device and transferred with [the client's] approval during each transaction."

    As it turns out, the mobile app had a problem: it wasn't validating SSL certificates.  This is a bad thing because it allows "man-in-the-middle" (MITM) attacks – for instance, when a user connects to the internet via a hotspot provided by a third party.  That third party could be a good Samaritan providing internet access...or he could be recording your data as it passes through his equipment, including credit card numbers, passwords, etc.
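    For the curious, here is roughly what the difference looks like in code.  This is a minimal sketch using Python's standard ssl module, with a placeholder hostname; the broken apps were mobile, but the principle is identical.

    # Minimal sketch of certificate validation; "example.com" is a
    # placeholder host.
    import socket
    import ssl

    hostname = "example.com"

    # A correct client: the default context verifies the certificate
    # chain and the hostname, so a MITM presenting a bogus certificate
    # is rejected.
    ctx = ssl.create_default_context()

    # What the apps effectively did (do NOT do this in production):
    # ctx.check_hostname = False
    # ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            print("verified peer:", tls.getpeercert()["subject"])

    With the two commented-out lines enabled, the client would accept any certificate at all – which is exactly the state Fandango's app shipped in.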

    The FTC made the argument that Fandango should have known better than to release the app in this state, seeing how there are free and cheap solutions that seek out these kinds of vulnerabilities.

    Furthermore, Fandango didn't have an adequate method for receiving vulnerability reports.  Apparently, a security researcher had attempted to contact the company about the security lapse, but his email was flagged because it contained the word "password": it was logged as a password reset request, triggering an automated password-reset reply.  So, even if someone wanted to help the company out, there was no way to do so.

    (I get the feeling that the security researcher must have informed the FTC of the vulnerability, because...how else would the FTC have found out?  The FTC complaint makes it clear that they were the ones that ultimately informed Fandango of the issue.)
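    To illustrate the failure mode described above, here is a toy sketch of a keyword-based inbox rule; the function and strings are hypothetical, not Fandango's actual system.

    # Toy sketch (hypothetical, not Fandango's actual system): a naive
    # keyword rule that routes any mail mentioning "password" into the
    # password-reset pipeline, so a vulnerability report never reaches
    # a human being.
    def route_inbound_email(subject: str, body: str) -> str:
        if "password" in (subject + " " + body).lower():
            return "auto-reply: password reset instructions"
        return "deliver to support queue"

    # A security researcher's warning gets swallowed:
    print(route_inbound_email(
        "SSL flaw in your mobile app",
        "Your app accepts any certificate; attackers can steal passwords."))
    # -> auto-reply: password reset instructions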

    Credit Karma

    Credit Karma also ran into the same problem as Fandango: SSL certificates were not being validated.  Thankfully, Credit Karma had the appropriate channels for receiving news of this security vulnerability: a user alerted the company to the problem.

    It turns out that this company had outsourced the development of their Apple iOS mobile app, and during the process of building it, had authorized the contractor to,
    use code that disabled SSL certificate validation "in testing only," but failed to ensure this code’s removal from the production version of the application.
    When Credit Karma released their Android app, they ran into the same problem again.
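    As a sketch of how this kind of regression can be prevented: a test-only escape hatch can be gated behind a build flag and checked at release time.  The flag names below are hypothetical, my own invention rather than anything from Credit Karma's codebase.

    # Hypothetical guard (flag names are my own, not Credit Karma's):
    # a test-only switch for accepting self-signed certificates, plus a
    # startup check that refuses to ship with the switch left on.
    import os

    IS_RELEASE_BUILD = os.environ.get("BUILD_TYPE", "release") == "release"
    DISABLE_TLS_VERIFY = os.environ.get("DISABLE_TLS_VERIFY") == "1"

    if IS_RELEASE_BUILD and DISABLE_TLS_VERIFY:
        # Fail loudly instead of silently shipping the escape hatch.
        raise RuntimeError("TLS verification must not be disabled in a release build")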

    One may feel inclined to blame the outside developers.  The thing is, in both instances, "in-house security engineers" looked into the issue and released an update for the app.  In other words, Credit Karma had the manpower to actually test things (which it probably did, at least in the areas the company deemed important).

    Missing it the first time around is understandable.  But the second time?  Fool me once, shame on you; fool me twice....

    Then there is the further issue concerning authentication tokens: the gist of this leg of the story is that a security review by the in-house engineers uncovered the problem.  Why didn't they uncover it earlier?  Possibly because there was no review prior to the apps' original release.

    What If....?

    The two cases raise interesting questions.  First, would both companies have been better served by not promising to protect client data?  For example, by not having a privacy policy at all, or by having one that doesn't make explicit promises such as "we use AES encryption to protect your data" – which is the promise Credit Karma made.

    Second, would Credit Karma have been protected if it didn't have any in-house software engineers who could review the code when contacted about the security failings?  After all, it's not unusual to find an app developer who has outsourced all of the coding and is a "developer" in name only.

    The answer to both is probably "no."  Even if Credit Karma didn't have in-house security engineers, the responsibility would ultimately rest with the company.  The FTC would have gone after Credit Karma regardless and, as is usually the case in such matters, Credit Karma would have had the option of going after its outsourced developers.  It's all about kicking the can down Responsibility Lane.

    Indeed, who's to say they haven't?  After all, the contractor should have known better, and Credit Karma authorized the code only temporarily, for testing purposes.

    As to the first question, it appears that there aren't any legal requirements at the federal level that mandate the creation of a company privacy policy, but there are fifty states and a handful of US territories out there, and who knows what one might unearth.

    Related Articles and Sites:
    http://www.databreaches.net/fandango-credit-karma-settle-ftc-charges-that-they-deceived-consumers-by-failing-to-securely-transmit-sensitive-personal-information/