AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable and easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, and USB drive and hard disk encryption managed services.

September 2009 - Posts

  • Data Encryption Software: Ohio Policyholder Information Data Breach Notification Rules - Ohio Department of Insurance

    Update (28 SEPT 2009): A copy of Bulletin 2009-12 can be found at

    According to a newly released press release by the Ohio Department of Insurance, insurance companies doing business in Ohio will have to contact the Department of Insurance if they discover a data breach.  This new requirement will go into effect on November 2, 2009.  As far as I can tell, there is no safe harbor for using data protection software like data encryption to protect the data.

    What's "Personal Information" Under OH's Rules?

    Under the new rules, an insurance company must notify the Ohio Department of Insurance (ODI) within 15 days of discovering that personal information belonging to policyholders has been stolen or lost.

    The department seems to have taken a page from the data breach laws passed by most of the states in the US.  According to the ODI's definition, personal information consists of a person's first and last name (or, first initial and last name), combined with the following:

    • Social security number, or
    • Driver’s license number or state identification number, or
    • Bank/credit/debit card or account number
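    The definition above can be sketched as a simple check: a name (full, or first initial plus last name) paired with at least one sensitive identifier. This is a rough illustration only; the function and field names are mine, not from the ODI's bulletin.

```python
# Hypothetical sketch of Ohio's "personal information" test: a name
# combined with at least one sensitive identifier. Field names are
# illustrative, not taken from Bulletin 2009-12.
SENSITIVE_FIELDS = {"ssn", "drivers_license", "state_id", "account_number"}

def is_personal_information(record: dict) -> bool:
    """Return True if the record pairs a name with a sensitive identifier."""
    has_name = bool(record.get("last_name")) and (
        bool(record.get("first_name")) or bool(record.get("first_initial"))
    )
    has_sensitive = any(record.get(field) for field in SENSITIVE_FIELDS)
    return has_name and has_sensitive
```

    Note that a name alone, or a Social Security number alone, would not trigger the definition; it's the combination that counts.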

    No Safe Harbor Or Permitted Delays?

    Supposedly, the announcement is detailed in "Bulletin 2009-12," but I am unable to find a copy of it anywhere. (Google, thoust hast failst me-th!)

    From the "personal information" definition above, it's quite apparent to me that the ODI has referenced the different data breach notification laws that have sprung up over the last couple of years.  I was expecting to see safe harbor and delay clauses for cases where encryption software was used to protect policyholder data, or where law enforcement asked for a delay in announcing the breach while an investigation is ongoing.

    However, I was unable to get my hands on the actual rules, so I'm not sure if the above were omitted from the press release, or actually not present in the rules.

    Differs From Ohio State Law

    The general (state?) data breach notification rules for the state of Ohio allow an exemption from announcing a breach if the information is protected via encryption.

    Furthermore, they have a little more leeway, permitting the disclosure to be delayed by 45 days.  (Well, not really...what the law actually forbids is taking more than 45 days to notify Ohioans about a data breach; but, you know that's not how most companies will interpret it.)

    So, as an insurance company, should you use full disk encryption like AlertBoot to protect your agents' laptops?  Even if it turns out that the ODI requires notification of the loss or theft of encrypted data?

    I'd say "yes." First off, there's no guarantee that ODI will force an insurance company to go public about a data breach.  If that's the case--and I'm no lawyer, but common sense seems to dictate it--then Ohio state law provides safe harbor.

    More importantly, though, if and when a data breach does occur, an insurance company's policyholders will be happier knowing that their data was encrypted than not protected at all.

    Related Articles and Sites:

  • Cost Of Data Security Breaches: It Takes Longer Than Expected To Notify The Affected

    Okay, so you've had a data breach because a) an employee somehow lost a computer and b) you didn't use laptop encryption software to secure that computer's data.  It's time to announce a data breach and contact your affected clients.

    But wait!  It's not as straightforward as you might believe.  I was reviewing some of the data breaches I've covered in the past couple of years and wanted to point out something that's not readily mentioned when it comes to the cost of data breaches: the logistics behind contacting affected clients.

    Revisiting GE Money

    Nearly two years ago, GE Money found that a backup tape with sensitive information was missing from storage.   It just disappeared--poof!--from a secure facility being supervised by Iron Mountain (their facilities are extremely well protected, from what I understand.  They also have that Tolkien/Lord of the Rings thing going on with that name, perhaps inviting trouble...).

    According to MSNBC's coverage, it took GE two months to figure out who was affected.  Understandable, seeing how 650,000 people were affected (including the SSNs of 150,000 of them).

    Now, this is where it gets interesting: according to MSNBC, "since December [of 2007], the company has been notifying consumers in batches of several thousand and telling them to phone a call center set up to deal with the breach. The notification is expected to be completed next week [end of January 2008]."

    This is in addition to the letters GE had to send out to affected people.  (Otherwise, how would they know where to call?)

    Doing The Math

    Notifying 650,000 customers in batches of several thousand?  At roughly 3,000 per batch, that works out to 217 batches.  If "several thousand" weighs towards the tail end (i.e., 3,999), it still works out to 163 batches.  Either way, that's roughly 14,500 notifications being sent out per day if the process took two months (assuming only weekdays were involved, that's 45 days).

    If only 20% of letter recipients call in, that's approximately 3,000 people per day (I'm assuming that it takes the same amount of time for the mail to be delivered across the US--I know, an unrealistic assumption).

    Furthermore, if these people spend on average 10 minutes per call...that translates to a total of 500 hours.  Assume that call center workers are on the job for 8 hours with no breaks, and you'll need 63 people to handle all daily calls.

    63 people, 8 hours a day, 45 days, $10/hour to pay the 63 people...that's $226,800.  And, it's for the people answering phones only.  Obviously there are other costs involved (and a company would be crazy to set up an operation like this, instead of outsourcing it, so costs are going to be even higher).  That can't be cheap.
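    The back-of-the-envelope math above can be laid out step by step.  All the inputs are this post's assumptions, not reported figures; computing without the intermediate rounding (3,000 calls, 500 hours) lands a bit below the $226,800 quoted above.

```python
import math

# Back-of-the-envelope call-center cost from the GE Money example.
# Every input below is an assumption from the post, not a reported figure.
affected = 650_000
business_days = 45                      # "two months" of weekdays
letters_per_day = affected / business_days
call_rate = 0.20                        # assume 20% of recipients call in
calls_per_day = letters_per_day * call_rate
minutes_per_call = 10                   # assumed average call length
daily_call_hours = calls_per_day * minutes_per_call / 60
agents_needed = math.ceil(daily_call_hours / 8)   # 8-hour shifts, no breaks
payroll = agents_needed * 8 * business_days * 10  # $10/hour per agent

print(round(letters_per_day))  # 14444 letters per day
print(round(calls_per_day))    # 2889 calls per day
print(agents_needed)           # 61 agents
print(payroll)                 # 219600 dollars
```

    Rounding the call volume up to 3,000 per day, as the post does, yields 63 agents and $226,800; either way it's a six-figure bill just for phone answerers.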

    And it's truly a sunk cost--I mean, once you've paid for it and used it, that's it; it's not going to prevent future breaches, nor does it mean you'll never pay again for an emergency call center the next time you've got a data breach.

    Which is why alternatives, such as preventing a breach from happening by using full disk encryption like AlertBoot, don't seem so bad when you consider what you have to deal with in the aftermath of a breach.

    Related Articles and Sites:

  • File Encryption Software: My Two Cents On Rocky Mountain Bank Data Breach

    There is a story going around about a Wyoming bank (Rocky Mountain Bank, RMB) that has sued Google in order to identify a Gmail account holder.  The story is small potatoes, but it has taken on a life of its own (the Streisand effect), and I thought I'd make a light comment on it (the first being: a bank should use data encryption software like AlertBoot to protect any sensitive info).

    Wrong Attachment To Wrong E-mail Address

    The gist of the story is: a bank employee sends an e-mail with the wrong attachment to the wrong Gmail address.  The employee tries recalling the e-mail to rectify the situation, and when that doesn't work, e-mails the recipient, asking the person to delete the e-mail and to contact the bank to discuss the situation.  Not unexpectedly, the recipient does not reply back.  Bank sues Google to find the identity of the account holder.

    Now, I would have assigned a huge probability that nothing would come of the above actions:

    • I have my bank's e-mails go straight to the junk-mail folder myself (they never send anything of importance, really), and never bother to check them

    • Rocky Mountain Bank sounds like a made-up phisher's scam name.  I wouldn't open an e-mail from them, especially because I don't bank with them

    • I've got waaaay too many Gmail accounts that I've set up.  And, just like other people that have done the same, I don't access them, ever; don't remember their passwords; don't even remember the username!  These were set up exactly because I don't want to hear from banks, merchants, and other companies that I do business with.  Just because I "agreed" to hearing from them doesn't mean they're not sending me spam.

    However, unlike many of the blogs commenting on the story, I do not find the bank employee's actions downright stupid (except for attaching the wrong file to the e-mail).

    Granted, recalling e-mails usually doesn't work.  And, even if the recipient were to claim that he deleted the file, it would be just that--a claim.  Where's the guarantee that he's not selling the information?  So far, all of the employee's actions sound...well, stupid.

    On the other hand, if you're the bank and you've decided to ensure that there is no further data breach, what else can you do but go after the recipient?  And, you certainly can't go ahead and sue Google for the account holder's ID without trying to contact the recipient first.

    I mean, the PR nightmare from that would have been greater than the above.  The above makes the bank look like it's run by amateurs, but not contacting the recipient first would have made them look like evil amateurs.

    For what it's worth, I'd say the bank did the best they could under the circumstances, although I certainly would have preferred to have read that they had used file encryption on the attachment.

    (What I really would have preferred is to have read nothing--i.e., there was no wrong e-mail address and no wrong attachment involved.  Personally, though, it's a waste of time to wish a systemic risk will not be there.  And the risk that an e-mail will be sent to the wrong person is always there...)

    Related Articles and Sites:

  • Laptop Encryption Software Not Used, Madoff Victims At Risk (Even More Than Before)

    A letter has been made public that spells more potential problems for investors who were defrauded in the Bernie L. Madoff saga.  The letter from AlixPartners, the court-appointed claims agent that's liquidating Madoff's company, is directed to the New Hampshire Attorney General, and explains that a laptop computer with personal information was stolen from an employee.  Laptop encryption was not used to secure the contents, although password-protection was used.

    Information From 1995 And Earlier

    If you'll remember, Madoff ran a Ponzi scheme for decades.  No wonder, then, that the lost laptop contained information "from 1995 and earlier...including the individuals' names, addresses, Social Security numbers, and/or BLMIS account numbers (which are now defunct)."

    (BLMIS stands for Bernard L. Madoff Investment Securities, and at this point, it's highly doubtful that those accounts would serve any purpose.  Except, maybe, for a good laugh twenty years from now, when memories start to fade and time puts a golden sheen on most remembrances...)

    That Darn Employee's Vehicle Again!

    The laptop was stolen from an employee's locked vehicle.  It appears that the theft itself was not targeted because other cars in the area were burglarized that day as well.

    Of course, this depends on what that "area" happened to be: if it was at the local watering hole for lawyers, it could very well be that the thefts were, indeed, targeted.  While a particular lawyer may not have been targeted--or the actual information that they deal with--thieves could have been after sensitive and/or private data all the same, knowing that they increase their chances of getting quality data by targeting cars in that particular "area."

    (It's not a secret that, for example, spies will hang around towns surrounding military bases.  Plenty of soldiers will visit the local bars, and OpSec may not be on one's mind once they're hammered.  If you need data, you go where the data is.)

    Managed Disk Encryption - Not So Hard, Even With 900 Employees

    The presence of password-protection is a mixed blessing at best.  It could prevent thieves from accessing data--if these thieves were morons.

    The problem with the times is that anyone and everyone now knows that there is money to be made in selling private and personal data.  My guess is that the thieves who stole AlixPartners' laptop know this.

    If they're bent on getting information because they managed to filch a laptop, password-protection will merely serve as a small obstacle to getting to the data.  If the thieves were targeting laptops for the data, then it will not be an obstacle at all (in that they already know what to do.  The "obstacle" is taking the time to find out how to bypass password-protection, which is not too hard once you know how to use Google.)

    Seeing how AlixPartners seems to deal with sensitive data all the time, it's surprising that they don't have better data protection programs in place.  In the above case, for example, the use of whole disk encryption would have been preferable (perhaps even prudent).

    And while their company size of 900 employees would have posed somewhat of a challenge to getting all of their computers encrypted, it would not have been an insurmountable one.

    Centrally managed encryption software, like AlertBoot, allows companies to deploy and manage computers' encryption from a central location.  For example, AlertBoot's encryption reports could have let the IT department know that the above laptop was not protected, and prod IT personnel to do something about it.

    Related Articles and Sites:

  • Application Control Software: A Reminder Of Its Importance, Curious George, And The Lovesick

    • Lovesick malware-spreader hits hospital
    • Curious George is a bad monkey.  He's spreading malware

    Hard disk encryption is one of the best methods for protecting your data.  However, it's not the "be-all, end-all" of computer data security.  And here are a couple of notable cases that show you why.

    Ohio Hospital A Casualty

    A man was arrested after he inadvertently infected a hospital's network with spyware.  This man, apparently a jealous and obsessive guy, had originally meant to spy on his ex.  He sent her a spyware program via e-mail, and somehow convinced her to install it.

    However, the ex did this while at work, and the program got installed on a computer belonging to the hospital.  The lovelorn man started to get screenshots of the computer's screen, including "medical procedures, diagnostic notes and other confidential information relating to 62 hospital patients.  He was also able to obtain e-mail and financial records of four other hospital employees as well," according to news reports.

    Curious George Is Curious About Your Computer

    Earlier this week, the Curious George website hosted by PBS was spreading malware.  A pop-up message appeared when visiting the site, asking for a username and password.

    This pop-up, however, was a conduit for infecting the visitor's computer with malware: reports note that the malware was targeting unpatched software applications (Adobe Acrobat Reader, AOL Radio AmpX, AOL SuperBuddy, and Apple QuickTime were named).

    Software Application Program Control

    Disk encryption is a great resource when it comes to preventing data theft that results from the loss of a computer or other device.  But, it cannot help against the above two instances.

    (Perhaps the use of file encryption software, where individual files are protected, would work.  But that, too, cannot prevent a screenshot from being taken while you're accessing your bank account online.)

    And, as we can see from the above instances, "trusting" the computer user doesn't work--either because they're ignoring the rules or because they don't realize that something untoward is happening.  Especially in the Curious George case...aren't we asking a little too much from our tykes and tots "to be aware of our computing environment," advice that is frequently given out to websurfers?

    In instances like these, the only method of combating an information breach is preventing malware from installing in the first place.  Microsoft, for example, has the User Account Control (UAC) mode in the Vista operating system that can prevent a silent install from taking place.

    There are other solutions out there as well, most of them geared towards corporate and SMBs users.  These solutions can prevent non-administrators from running certain applications via whitelists or blacklists, including malware installers.  They're especially useful because they allow these "lists" to be managed easily, from a central location (a concept that AlertBoot also uses for its centrally managed encryption software).
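    The core idea behind whitelist-based application control is simple: identify each executable (typically by a cryptographic hash, so a renamed malware installer can't masquerade as an approved program) and allow it to run only if it's on the approved list.  A minimal sketch of that check, with illustrative names and an empty example whitelist:

```python
import hashlib

# Hashes of executables IT has approved; centrally managed in a real
# deployment, hard-coded here only for illustration.
APPROVED_HASHES = set()

def sha256_of(path: str) -> str:
    """Fingerprint an executable so it can be checked against the whitelist."""
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        # Read in chunks so large binaries don't need to fit in memory.
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def may_execute(path: str) -> bool:
    """Allow a program to run only if its hash is on the approved list."""
    return sha256_of(path) in APPROVED_HASHES
```

    A blacklist works the same way with the logic inverted, but whitelists are the stronger default for this scenario: a silent malware install fails simply because nobody ever approved it.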

    Related Articles and Sites:

  • Data Encryption: HIPAA Data Breach Notification In Effect SEPT 23, 2009

    The Health Insurance Portability and Accountability Act (HIPAA) has a new notification requirement that goes into effect next week, on September 23.  This requirement was part of the HITECH Act, which in turn was part of the American Recovery and Reinvestment Act of 2009.  Safe harbor is provided for healthcare providers and other covered entities if sensitive data is protected, such as with the use of hard drive encryption.

    Some Regulation Details Of Interest

    • Affected people must be notified of a breach "as soon as reasonably possible," generally no later than 60 calendar days from the discovery, unless law enforcement requests otherwise

    • If 500 or more people are affected by the data breach, the Department of Health and Human Services (HHS) and the media must be notified as well

    • If a business associate to the covered entity discovers a breach, the covered entity must be notified (I assume it means the covered entity has to deal with the consequences)

    • Health information that was secured via encryption software or that was destroyed does not require a notification if there is a breach (kinda hard for one to exist for destroyed stuff...)
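    The notification logic in the bullets above can be sketched as a small decision function.  This is a simplification for illustration only--the actual rule involves a risk assessment and more conditions than these:

```python
# Simplified sketch of the HITECH breach notification rules listed above.
# Real determinations involve a harm-threshold risk assessment and other
# conditions; this captures only the bullets in this post.
def required_notifications(affected_count: int, data_encrypted: bool,
                           data_destroyed: bool) -> list[str]:
    """Return who must be notified after a breach, per the rules above."""
    if data_encrypted or data_destroyed:
        return []                      # safe harbor: no notification required
    notify = ["affected individuals"]  # within 60 calendar days of discovery
    if affected_count >= 500:
        notify += ["HHS", "media"]     # large breaches need wider notice
    return notify
```

    The encryption branch is the one to notice: protecting the data up front short-circuits the whole notification question.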


    There are people who are unhappy about the new regulations: they think the rules are not strong enough or don't go far enough.

    For example, under the new rules, a covered entity determines whether a data breach meets the HHS's harm threshold: a risk assessment is conducted by the covered entity that experienced the data breach to see if,

    ...there is a significant risk of harm to the individual as a result of the impermissible use or disclosure. In performing the risk assessment, covered entities and business associates may need to consider a number or combination of factors...[my emphasis.  p.42744, Federal Register / Vol. 74, No. 162]

    Of course, the above is letting a fox guard the chicken coop: if a covered entity decides that there was no harm, there is no need to notify anyone--and, it's usually in the interest of a covered entity to find that a breach results in no harm (even if that's not the case).  There seem to be other loopholes, in my view, that would allow a healthcare provider to abstain from notifying people of a data breach.

    On the other hand, I'm reading through these regulations, and I'm starting to get a headache trying to follow everything.  Maybe that's the point.  At some point, one's gotta realize that it's cheaper and more efficient to encrypt their data, and take advantage of the encryption safe harbor, than to hire a lawyer to see whether a breach notification is necessary or not.

    Related Articles and Sites:
