



AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.


AlertBoot Endpoint Security


April 2013 - Posts

  • BYOD: How Stickers On Smartphones (Do Not) Secure Intellectual Property

    Forrester Research sent out a press release a couple of weeks back, noting how MDM is a "heavy-handed approach" to BYOD security.  They also noted that IT professionals will move away from MDM because "they don't want to manage employee-owned devices" and will be looking to mobile virtualization.

    I have a different opinion on this.

    Security Stickers to the Rescue?

    I was talking to a buddy of mine who works for a semiconductor firm (which will go unnamed).  He showed me a sticker that was covering his smartphone's camera and mentioned how BYOD was making inroads into the workplace.  Overall, he was happy about using his personal device at his job, but also noted how the security in place was for show only.

    I inquired whether they were using a mobile device management solution or something along those lines.  He said that they weren't (he's not with IT, so there's always the possibility that there is a data loss prevention solution in the backend coordinating information security).  The only things that this particular company was using as "protection," he said, were stickers that were placed over the smartphone's camera to prevent information from leaking via snapshots.

    Now, the thing about these stickers is that they can only be used once.  They're relatively hard to peel off and, once off, they won't stick to anything else.  The way it works: before he goes into a zone where cameras are forbidden, a guard physically checks to make sure there is a sticker over the camera; if there isn't one, a sticker is affixed.  The guard also checks cameras when people are leaving.  It's up to the employee to peel off the sticker; some keep it in place until they need to use the camera.

    Why this fails as a security measure:
    (1) Peel off the stickers enough times and enough gluey residue remains on the smartphone to let someone re-attach a used sticker (or any other thin material, like paper) at will.  Since guards only make sure that a sticker is in place (and don't pull on it to test its adhesion), the measure is not as effective as it looks.

    (2) My particular friend uses one of those smartphone cases that come with a flap (like a book).  The guards always check the rear-facing camera but don't seem to realize that there is a front-facing camera underneath the cover.  There is nothing that prevents him from using this camera.

    Virtualization/Containerization Good but Something Needs to Back It

    And this is why I don't think that MDM will be pushed aside even as companies opt for mobile virtualization or containerization.  Containerization has its share of problems, certainly; I've heard plenty of stories about one particular containerization solution that doesn't live up to its billing.  From a purely logical standpoint, containerization is an elegant solution to the problem of mixing personal and corporate information on a person's device.

    However, the truth of the matter is that, no matter how elegant the solution, at one point hands will have to get dirty.  Even if virtualization or containerization is used, how does one prevent data leaks like the one at my friend's company?  The only way is to go in there and disable the employee's camera, at least while on the job.

  • Smartphone And Tablet BYOD Security: Because Physical Attacks Cannot Be Discounted

    Many websites reported earlier in the week that Vudu, a video-streaming company that's owned by Walmart, reported a data breach.  Furthermore, Vudu recommended that users of the service reset their passwords, especially if their passwords are reused on other online sites.  These are usually the words of a company that was hacked online, such as with a SQL injection attack.

    With Vudu, however, it's different: burglars broke into the Santa Clara, California-based company on March 24, 2013 and stole computer hard drives.  Vudu limited the damage by otherwise practicing adequate security, but the hard drives were not protected with the likes of full disk encryption such as AlertBoot.  This goes to show the need for proper data security on all devices, including smartphones, tablets, and laptops.  The threat is not just virtual.

    Customer Data Compromised

    Vudu revealed that the stolen drives contained the following information: customer names, email addresses, physical mailing addresses, Vudu account activity, dates of birth, the last four digits of credit cards, and "encrypted passwords."

    Despite all the things that Vudu did correctly, it fell flat in one area: it didn't notify clients until two weeks after the break-in.  In their FAQ, Vudu clarifies that they needed to "reconstruct the information" and that "law enforcement requested that [Vudu] delay notification."

    I put "encrypted passwords" in quotes because they're probably not encrypted so much as hashed.

    What's the difference, you may ask?

    Encrypted Passwords Generally Not Encrypted

    Generally, "encrypted passwords" are not really encrypted.  If they were, they wouldn't be easy to guess or figure out.  Indeed, it's the reason why devices like iPhones, iPads, and Android smartphones all use disk encryption.  The use of encryption makes it virtually impossible to gain unauthorized access to the data in the devices (and, thus, is one of the core aspects of AlertBoot's mobile device management and security solution, although users of AlertBoot can manage many different aspects associated with mobile security to suit their needs).

    The implication here, with Vudu strongly urging password changes, is that the passwords could be guessed, meaning that the passwords were hashed.  A "hash" is what you get when a password is passed through an algorithm and comes out looking nothing like its input.  Sounds like encryption, except for two things:
    • You can't convert a hash back to its original input (with encryption, you can).
    • There's a 1-to-1 correlation between the input and the hashed output.  So, if the password is "blue" and the hashed output is "920jf3no23nfoiwjfc9sjvasjd293r2," then the hashed output will always be "920jf3no23nfoiwjfc9sjvasjd293r2" for "blue" with no exceptions.
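    Both properties are easy to demonstrate.  Here's a minimal Python sketch; SHA-256 is used as an illustrative stand-in, since the actual algorithm Vudu used was never disclosed:

```python
import hashlib

def hash_password(password: str) -> str:
    """Hash a password with SHA-256 (illustration only; real systems
    should use a deliberately slow scheme like bcrypt or PBKDF2)."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

# Property 2: the same input always yields the same output, no exceptions.
assert hash_password("blue") == hash_password("blue")

# Property 1: there is no function that turns the digest back into "blue";
# the only way "back" is to guess inputs and compare the outputs.
print(hash_password("blue"))
```

    Note that the output looks random but isn't: run it twice and you get the identical digest, which is exactly what makes precomputed lookup lists possible.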

    You don't need a ridiculous amount of foresight to see how this could be an Achilles' heel: all an attacker needs to crack the security is a precomputed list of inputs and outputs to compare hashed passwords against.  This is why, if you're hashing passwords, you also need to salt them: include random characters so that the output comes out different.

    For example, salting "blue" with "1" versus "11" (yielding "blue1" and "blue11") leads to extremely different outputs.  Make your salt unique and keep it a secret and, the theory goes, your passwords will be safe.  Not a bad theory, but the real world has a way of throwing a wrench in the works.
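    A short sketch of what salting buys you, again in Python (the scheme is illustrative, not Vudu's):

```python
import hashlib
import os

def hash_with_salt(password: str, salt: bytes) -> str:
    """Mix a random salt into the hash so that identical passwords
    no longer produce identical stored values."""
    return hashlib.sha256(salt + password.encode("utf-8")).hexdigest()

# Two users pick the same weak password...
salt_a, salt_b = os.urandom(16), os.urandom(16)
record_a = hash_with_salt("love", salt_a)
record_b = hash_with_salt("love", salt_b)

# ...but their stored hashes don't match, so a precomputed lookup
# table built without the salts is useless.
assert record_a != record_b
```

    The same password plus the same salt still hashes identically, which is how a login check works: re-hash the submitted password with the stored salt and compare.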

    The problem is that different users often use the same password.  You've seen the lists of words that shouldn't be employed as passwords because they're so commonly used: "password," "God," "12345," and "love," among others.  Not only can you count on these popular passwords to show up in hashed password lists; tally them up and they tend to land in the top 20.

    For example, let's say that you're trying to identify two hashed passwords, 8nuv89ybt7rc32rp9824 and AF23o9fasDSf0sjwfe.  You know one of them is "love" and the other is "theQu1ck8" but you don't know which one is which.  But, 8nuv89ybt7rc32rp9824 shows up 500 times and AF23o9fasDSf0sjwfe shows up once.  Obviously the former corresponds to "love."
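    That deduction is nothing more than frequency counting.  Here's a toy Python version using the made-up hash strings from the example above:

```python
from collections import Counter

# A hypothetical dump of unsalted password hashes: one value shows up
# 500 times, the other only once (values are made up for illustration).
leaked = ["8nuv89ybt7rc32rp9824"] * 500 + ["AF23o9fasDSf0sjwfe"]

hash_counts = Counter(leaked)
most_common, count = hash_counts.most_common(1)[0]

# Nothing was reversed or decrypted; sheer popularity gives it away.
assert most_common == "8nuv89ybt7rc32rp9824"
print(f"{most_common} appears {count} times; almost certainly 'love'")
```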

    Encryption: Nothing Compares

    Unlike hashes, encryption uses unique "encryption keys" to convert data.  What are the odds of two encryption keys being identical?  Lower than the odds of your body spontaneously combusting right now.  The only way to "guess" an encryption key is to brute-force it; that is, go through every single one until you find it.  According to some calculations, the universe will be a cold, homogeneous mush at maximum entropy before that happens.
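    A back-of-the-envelope calculation shows why.  The guess rate below is an assumption, and a wildly generous one:

```python
# How long would it take to try every 256-bit key?
total_keys = 2 ** 256
guesses_per_second = 10 ** 12          # assume a trillion guesses per second
seconds_per_year = 60 * 60 * 24 * 365

years_to_exhaust = total_keys / (guesses_per_second * seconds_per_year)
print(f"about {years_to_exhaust:.1e} years to try every key")
```

    That works out to roughly 10^57 years.  On average you'd hit the right key in half that time, which helps not at all.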

    That's some pretty powerful stuff.  You don't want to be caught without backups of individual encryption keys, then, or find out that you can't locate the right one to unlock a device.  Encryption key management is one of the most harrowing aspects of ensuring good data security (and is made infinitely easier via the use of AlertBoot).

  • Hospital BYOD: 89% Of Healthcare Workers Use Personal Smartphones for Work

    BYOD and the medical sector are a match made in heaven...and in hell.  Consider the possibilities: the elimination (or more realistically, the minimization) of paperwork; the real-time synchronization of patient data; the savings in time, money, and complexity.  No wonder 89% of healthcare workers reported using their personal devices at work in a Cisco survey.

    But then, consider the consequences: the potential increase in HIPAA/HITECH breaches; the potential loss of reputation from privacy breaches; the increased risk of lawsuits...  If an organization in the medical sector is not using MDM and other BYOD solutions but is engaging – either officially or otherwise – in BYOD, it's exposing itself to a lot of unnecessary risk.

    Cisco Survey Reveals Worrisome Stats

    Answers to a Cisco survey of healthcare workers revealed the following (mind you, it doesn't look like it's a "rogue" situation where employees are bringing in their own devices against an organization's policies; these are work environments where BYOD is embraced to one degree or another):
    • 89% use personal smartphones for work purposes.
    • 41% don't have a password on their personal device.
    • 53% access unsecured Wi-Fi on personal smartphones.
    • 86% of smartphones are not set up for remote wipe.

    It is further observed that,

    Considering how easily smartphones can be used to receive and transmit large volumes of electronic protected health information (ePHI) and how often personal smartphones are lost or stolen, healthcare organizations that utilize BYOD programs without adopting appropriate security measures could be creating a serious privacy risk.
    Hear, hear!  You'll recall that one of the biggest fines the OCR/HHS has ever levied involved patients' documents left behind on the Boston T.  That incident compromised the data privacy of just 66 patients.  Considering how much more information can be stored, carried, and lost on a smartphone, a HIPAA covered-entity (and its business associates, which account for 20% of all HIPAA data breaches) should really be looking into MDM and other BYOD security solutions for smartphones, tablets, external drives, and laptops.

    Cloud-based MDM: An Additional Point of Failure?

    I have come across situations where HIPAA covered-entities don't view the use of cloud-based solutions like AlertBoot as a palatable answer to their BYOD problems.  Why?  Increased risk.

    More specifically, the risk of a HIPAA breach stemming from the cloud.  It's understandable: one's using BYOD and MDM software to lower the risks of a PHI loss.  The use of the cloud, however, tends to increase the risk of a breach because the cloud is really a bunch of servers "out there somewhere."  What could be worse for ePHI security than your patient data "out there somewhere"?

    But this is only the case if the cloud solution requires the transfer of PHI.  With AlertBoot's MDM – which is cloud-based and completely transparent in terms of cost: i.e., you won't find any surprise expenditures like having to buy a control server – PHI never leaves a device.  Unlike the cloud when used for backup purposes (where PHI must be copied), an MDM solution like AlertBoot's would never touch the data that's on a user's device...unless it has to be wiped remotely.

    So, AlertBoot's cloud-based MDM and BYOD security actually represents a tremendous value coupled with no additional data breach risks.

  • Apple iMessage Security: Can The Government Really Not Access It Or Is It Bull?

    Last week, it was reported that the US government's surveillance was being hampered by Apple's iMessage.  Today, I see reports accusing the government of engaging in disinformation.  Personally, I think this is a case of people seeing a conspiracy where there is none.  Why?  Because I wasn't under the impression that the government couldn't access iMessage chats after reading Cnet's article.

    Not a Secret: iMessages Sync Across Apple iDevices

    One report thoughtfully summarized Cnet's article:
    CNET had a story revealing a "leaked" Drug Enforcement Agency (DEA) memo suggesting that messages sent via Apple's own iMessage system were untappable and were "frustrating" law enforcement.
    And followed it by revealing that:
    In reading over this, however, a number of people quickly called bullshit. While Apple boasts of "end-to-end encryption" it's pretty clear that Apple itself holds the key -- because if you boot up a brand new iOS device, you automatically get access to your old messages.
    That's right.  You're able to see your old iMessages in a new device.  Indeed, if you have more than one device from Apple, you'll see that the chats are synchronized: you can start an iMessage chat on your iPhone, continue it on your iPad, then check up on it on your iPad mini or iPod Touch, and return to it via your iPhone.  This feature, if I'm not wrong, was highlighted in one of Apple's commercials.

    Apple's iMessage: Secure from Government Poking?  Maybe... But It Isn't Meant to be

    Now, the ability to synchronize your iMessages across the board obviously indicates that Apple is able to get to it, and we can infer from this that the government can force the company to hand over the information via a warrant or otherwise.

    And yet, it's not unfair to say that government could feel stymied by iMessage, at least for the time being.  Consider the following:
    • iMessage has end-to-end encryption.  The celebrated BlackBerry messages also feature end-to-end encryption.  The difference between BlackBerry and Apple, though, is that BlackBerry (the company) does not know which encryption key is used (the end user's administrator sets it).  Apple has made no such promise.
    • iMessage uses TLS.  Transport Layer Security is the successor to Secure Sockets Layer (SSL).  Simply put, it's the crypto that ensures your online banking sessions are secure and that your credit card numbers aren't hijacked while you're buying stuff online.  This same encryption also secures your iMessages.  While researchers constantly tease out potential weaknesses, TLS is powerful crypto by design.  You can bet that iMessages are anything but easy to crack.
    • Apple is not a telco.

    That last one probably contributes most to the DEA's problems.  As the original reporting noted,

    telecommunications providers [are required] to build in backdoors for easier surveillance, but [this] does not apply to Internet companies, which are required to provide technical assistance instead.
    Think about it: there's no substantial difference between a text message and an iMessage.  The major difference is that the former is delivered by a telco and the latter by Apple (well, technically iMessages also go through telcos.  After all, they own the fiber that makes intercontinental communications possible.  But the encryption for iMessages is handled by Apple, making the telcos' presence in the mix a moot point).

    They're Just Like Us (But with Guns)

    When you consider how much information flows in, out, and within the US, you can bet the government's surveillance operations are automated as much as possible, not just in terms of data analysis but also in terms of acquisition.  What do you imagine the DEA's complaint would sound like if they had to veer away from their tried and true (and, in their minds, easy) way of monitoring communications?  They'd probably sound like me when I have to manually fill in my tax forms as opposed to using Turbo Tax. (Impossible!  Frustrating!  Will be the downfall of civilization as we know it!)

    Likewise, let's say you're in "middle management" at the DEA.  You find that your agents aren't doing as good a job as they could because they failed to notice the gaps in the "text messages" they acquired from telcos.  What do you do?  Well, you write a memo and distribute it, with helpful pointers and comments on where the challenges lie.

    That's what I see when I read the DEA's leaked memo.  I don't particularly think it was written for a nefarious or subversive purpose – albeit, it is perhaps worth considering why it was leaked.

    On the other hand, was it leaked?  Look at what's written at the very top of the DEA's leaked message: Unclassified.



  • iPhone Data Security: Does The "Data Wipe" Functionality Not Work?

    Wired Magazine claims in its headlines that one should "Break Out a Hammer" because "You'll Never Believe the Data 'Wiped' Smartphones Store."  The implication is that information remains behind on smartphones when they are wiped.  While there is nothing factually wrong with the article, the headline is misleading.  You might as well proclaim, "Dinosaurs dangerous to mammals the size of humans."  Nothing factually wrong with that observation, either....

    No, you don't need to break out a hammer.  Or a gun.  Or your drill.  You just have to be smart about your smartphones.  Not only does this mean securing them with a passcode; configuring your settings so that things don't run automatically; using antivirus (if available); and potentially using MDM software to manage smartphones and tablets, it also means knowing what your phone can and cannot do.

    What Do You Mean by "Wiping Data?"

    Why is Wired recommending that people trash their smartphones as opposed to wiping them?  Because, just like laptop computers, wiping data doesn't necessarily mean "wiping data."  You know, just like the word "theory" has a different connotation when used in science vs. everyday use.

    As it turns out, and this shouldn't be news to readers of this blog, sometimes trace data (or more.  Much more) is left behind when a device's data is "wiped."  If by wiped, you mean you've deleted files... well, you haven't really wiped anything.  And, we already know that if a laptop computer is reformatted, then most of the old data is still in place: the process of reformatting just preps the device for a new installation, and has nothing to do with data security, which requires overwriting of data.  This is, of course, true for smartdevices as well (and why wouldn't it be?).

    Plus, it turns out that flash-based storage cannot have its data easily overwritten like its magnetic-platter counterparts, meaning that wiping data on a solid state drive is something other than truly wiping it, regardless of which method you're referring to: file deletion, formatting, or data overwrites.
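    On a traditional magnetic drive, "really" wiping a file means overwriting its contents before deleting it.  Here's a minimal Python sketch of that idea; note the hedge in the docstring, which is exactly the article's point about flash storage:

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file with random bytes, then delete it.

    Reasonable on magnetic platters; on flash/SSD storage, wear-leveling
    may redirect writes to fresh cells, leaving old data recoverable.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))  # replace contents with random noise
            f.flush()
            os.fsync(f.fileno())       # push the write out of OS caches
    os.remove(path)
```

    Contrast this with plain `os.remove`, which only unlinks the directory entry and leaves the file's bytes sitting on disk until something happens to overwrite them.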

    In order to penetrate the enterprise market, companies like Apple and Google have designed their devices to ensure that data is wiped when, well, when you wipe it.  For example, it's not uncommon knowledge that Apple's devices make use of AES-256 hardware encryption.  Lose the encryption key – which is what happens when you wipe your iPhone – and your data is gone.  There's a caveat, though: it only applies to relatively modern iterations of the iPhone.  To be more specific, the iPhone 3GS and onward (something Wired was forced to acknowledge after going live with the article):
    Update 04/01/13 13:22: Story updated to note iPhone 3GS and newer models use a hardware encryption key.
    Same goes for Android OS.  Older devices didn't have full disk encryption, but FDE has been a standard feature since Ice Cream Sandwich (i.e., Android 4.0.  It was actually available in Android 3.0, but potatoes poh-tah-tohs).

    Old Phones: They're Old

    The phones that Wired tested were the following:
    • iPhone 3GS (2008 to 2010)
    • LG Dare (2008 to 2010)
    • LG Optimus (2010 to present)
    • Motorola Droid (2009 to present)
    The article only noted that the phones were "old," so I had to look up when they first became available and when they were discontinued.  As you can see, these are old phones, especially when you consider that smartphones are really computers in disguise and their hardware hasn't quite kept up with software upgrades.  For example, I know of no one that uses a 3GS because newer versions of iOS slow the phone to a crawl.

    If you're going to comment on the state of security for smartphones...well, why only test old phones?  I mean, it's not as if they didn't have newer phones.  Wired went out of their way to get old phones (my emphasis):
    We rounded up every old phone we could scrounge up from around the office and asked the owners to wipe them.
    What's the verdict on new or newer phones' security?  We can't tell from the article because they were excluded.  Why?  Did the "new phone" lobby get to them?

    It's Not Even a Phone Issue

    Then there is the fact that one of the security issues is not a phone issue per se.
    Take the two Motorola devices. Both were wiped, and neither had much to speak of stored in their built-in memory, just some application data with no personally identifiable fingerprints.

    But one user left his micro SD card in the phone. Although the contents of the card were deleted, the card had not been formatted. This, apparently, meant the files were recoverable.
    So, this old-ish phone's wipe functionality worked correctly, but the micro SD storage was overlooked.  Uh, hey... how about not leaving your micro SD card in the phone when you get rid of it?  How exactly is this a phone problem?

    (Plus, what's this thing about formatted SD cards being "wiped"?  Researchers have found that, apart from applying full encryption and then losing the encryption key, there's no real way to completely wipe data from flash-based storage.)

    Don't get me wrong.  Sensitive information on external media is a big issue, especially for companies that are embracing BYOD.  It's the reason why AlertBoot Mobile Security offers the ability to force encryption on Android phones where SD cards are present.  But at the end of the day, the end user also has to take an active hand in securing data.  If they're going to forget their SD card in the phone, are they going to remember to wipe the phone before selling it or giving it away?

  • UK BYOD Security: Should You Report A Security Incident To The Information Commissioner's Office?

    As Bring Your Own Device programs make their transition from "hot trend" to "accepted business practice" across the world, one cannot escape the feeling that, at some point, companies will feel a pricking in their thumbs and find that "something wicked this way comes" – that is, if they engage in BYOD without the right MDM protection for smartphones and tablets, like AlertBoot, and end up with a data breach on their hands.

    When the time comes, should one report the incident to the appropriate agencies?  In the UK, for example, should an organization voluntarily report a data breach to the Information Commissioner's Office (ICO)?  The following finding may discourage you from doing so.

    84% of ICO Fines are for Self-Reported Incidents

    Eight out of the ten monetary penalties issued by the ICO in 2012 involved data breaches that the violator itself reported.  If anyone was under the impression that the agency charged with enforcing the Data Protection Act of 1998 is soft on organizations that forthrightly come clean, they're sadly mistaken.

    Field Fisher Waterhouse, a law firm that did the analysis, noted that,
    84% of fines were for incidents that the organisations themselves had reported, demonstrating that self-reporters "are not given immunity from enforcement"
    and expressed concern that "this may deter organisations from owning up to data breaches."  A partner with the firm emailed the website and pointed out that "many controllers will be deterred from coming forward due to fear of fines and the absence of positive incentives" and, indeed, "that businesses [do] not feel obliged to report incidents themselves."

    And while the person quoted above works for a law firm and I don't, if I may put in my two cents: not only do they not feel obliged, they aren't even obligated – there's no legal requirement to do so for most.  The last time I checked, under the law, it's only a service provider that needs to notify the ICO, with "service provider" defined as:
    a provider of any electronic communications service that is provided so as to be available for use by members of the public. This definition will cover, but is not necessarily limited to, telecommunications and internet service providers.
    Also included in the above are the NHS Trusts, which is why they often show up on the news section of the ICO's website and bear the brunt of the monetary penalties.

    So, Do You Report Yourself?

    If a company or organization is legally required to do so, the answer is a loud, unequivocal "yes."  But what if you're not?  The answer is still yes.

    The key question is, I guess: how many of the data breaches that the ICO came across in 2012 were self-reported?  If the answer is 84%, then an 84% penalty rate for self-reporting organizations is par for the course.

    What the above report by Field Fisher Waterhouse does not take into account is the number of instances where one self-reported a breach and didn't get penalized financially.  It's a matter of statistics: we know that 84% of fines in 2012 went to self-reporting entities.  We also know that only a handful of the total are assessed with a penalty.  But is that unnaturally high when you consider the entire pool of data breaches in 2012?

    If self-reporting companies represent a mere 50% of the entire pool, then an 84% rate is certainly high.  If they represent 95% of the pool, then 84% is low.  On the other hand, if a total of 15 companies were fined while over 700 breaches came across the ICO's radar, the percentages would appear meaningless regardless of whether they're representative of the total pool or not.
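    To make the base-rate point concrete, here's the arithmetic with toy numbers (invented for illustration; these are not ICO figures):

```python
# Suppose 700 breaches crossed the ICO's desk, 15 drew fines,
# and 84% of those fines went to self-reporting organizations.
total_breaches = 700
total_fines = 15
self_reported_fines = round(total_fines * 0.84)   # about 13 of the 15 fines

# Whether "84% of fines" is alarming depends entirely on what share
# of ALL breaches were self-reported in the first place.
for share in (0.50, 0.95):
    self_reported_breaches = total_breaches * share
    fine_rate = self_reported_fines / self_reported_breaches
    print(f"if {share:.0%} self-reported: {fine_rate:.1%} of them were fined")
```

    Either way, only a few percent of self-reporters end up fined; the scary-sounding 84% says more about who does the reporting than about who gets punished.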

    Other considerations: were the group of self-reporting companies penalized at a higher or lower rate than the group of companies that didn't do the reporting?

    Remember: there are lies, damned lies, and statistics.


