AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.

January 2012 - Posts

  • Laptop Encryption Software: Waterloo Region District School Board Computers Stolen

    The Waterloo Region District School Board in Ontario, Canada has announced a data breach.  It's its third breach in less than a year.  The announcement, however, leaves much to be desired.  They imply that something like disk encryption software was used, but don't come out and say it outright.

    Look, Waterloo Region District School Board: I don't know what they've been telling you, but using encryption software like AlertBoot to protect your data is not a crime.  And its efficacy is not diminished by admitting that you used it.

    December 1 Break-In, 9 Laptops Stolen

    According to numerous sources, the Waterloo Region District School Board (WRDSB) has issued a press release stating that there was a break-in at the WRDSB head office on December 1, 2011.  A thief or thieves smashed a window and stole nine laptop computers used by the board's staff.

    The board declined to make public what type of information was stolen or how many were affected, although they have indicated that it involves students' personal information.

    Most of the coverage mentions that the laptops had a "security system that would require inside knowledge to bypass" (video, swo.ctv.ca) and that "it's a layered process" (therecord.com).  However, it's never really specified what this is, exactly.  Both computer encryption software and password-protection fit the description of such a security system, but professionals don't consider password-protection a "security system," even though laypeople often do -- which is a mistake.  I've already noted before why password-protection is not security.

    I did find one site, cambridgetimes.ca, where it's claimed that the board released a statement that "these computers use industry-standard encryption."  I have yet to find corroborating sources.

    What's the Hush-Hush Surrounding Encryption About?

    I'm not sure what to make of cambridgetimes.ca's coverage.  If encryption was used, why is it not mentioned by all the other sites that have covered the story?  It seems to me that pointing out that "the laptops were encrypted" would be a far better description than "security system that would require inside knowledge to bypass," which is confusing because it could refer to so many things.

    Could it be that cambridgetimes.ca jumped to conclusions and assumed that such a "security system" meant "encryption"?  That doesn't seem to make sense, either.

    But what really doesn't make sense is people's penchant for declining to mention the use of encryption.  I might be biased because of who I work for, but it appears to me that the use of full computer encryption software ought to be trumpeted from the rooftops by companies that use it and are subsequently involved in a data breach.

    After all, break-ins, burglaries, thievery, hold-ups, carjackings, car thefts, and any number of crimes where laptops and computer equipment are stolen will not stop in the foreseeable future.  In such cases, only the use of encryption guarantees* that the thieves won't access the data.  What could better reassure people than letting everyone know that their information is impossible to get at?

    (* I must include a caveat here: assuming something stupid wasn't done, like somehow attaching the encryption password to the stolen laptop.)


    Related Articles and Sites:
    http://datalossdb.org/organizations/4513-waterloo-region-district-school-board
    http://swo.ctv.ca/servlet/an/local/CTVNews/20120106/wrdsb-school-board-computers-stolen-120106/20120106/?hub=SWOHome
    http://www.cambridgetimes.ca/news/local/article/1277308--computers-stolen-from-school-board
    http://www.therecord.com/news/local/article/650845--computers-with-personal-info-stolen-from-school-board

     
  • AlertBoot's Stance On SOPA And PIPA: We're Against Them

    SOPA and PIPA.  In Spanish, one is the word for soup.  The other means pipe (or refers to sunflower seeds).  Pretty tame stuff.  Combine them together, and you have sunflower seed soup served in a pipe.

    In English, though, they're the reason why a good portion of the Internet has blacked out today in protest.  SOPA is an abbreviation for "Stop Online Piracy Act" and PIPA stands for "Protect IP Act," with the IP referring to "intellectual property."

    As the names imply, both of these bills have the purported goal of stopping online piracy.  This, though, is not why the bills are being protested.  After all, most people, including myself, are in agreement that piracy is a problem and something must be done about it, be it the illegal sale of movies or the illegal manufacture of pharmaceutical drugs.  (The Motion Picture Association of America supports the bills as does Pfizer.  Politicol News provides a list of the bills' supporters.)

    You have your detractors, but it's my opinion that most people support copyright holders' rights.

    SOPA and PIPA, though, are not the way to go, at least not in their current state.  These bills are so badly drafted that their effects reach beyond copyright and piracy issues.  The chilling effects SOPA and PIPA would have on free speech, on due process, and even on international law have already been noted.

    In fact, the blackout has attracted so much attention and media coverage that I don't feel that I could add to the general subject.

    Instead, let me illustrate how the two bills could affect a company such as AlertBoot, a managed encryption services company that is growing as an SMB and that still retains its entrepreneurial spirit.

    Freezing Out Entrepreneurship

    It's hard to believe that a cloud-based encryption software provider like AlertBoot would ever fall within the scope of SOPA and PIPA.  After all, we don't provide content.  And we don't link to content.  We protect it.

    As a cloud-based data security solution company, we are also working on offering other services that complement encryption: secure on-line backups -- because, let's face it, backups are an integral part of good data security practices -- and secure document sharing and collaboration, which is one way of ensuring that sensitive data do not fall into the wrong hands.

    These up-and-coming offerings are, as I mentioned already, complementary to what we already do.  They're also an outgrowth of what we already do.  Had our business not revolved around using the cloud to deploy disk encryption software for laptops, desktops, external drives, etc., there's a good chance we wouldn't be exploring and building out services in this space.

    What does this have to do with SOPA and PIPA?  Well, for one, as a data security company, we know the value of encryption, and apply it liberally to everything we do.

    We also know that a big, if not the big, reason why a client would choose to migrate to the cloud is linked to privacy: as the service provider, we've got to guarantee that no one, including us, can access our clients' data without their authorization.  You could say we learned from the Dropbox controversy.

    So, we encrypt everything to ensure that only authorized people can access the content in the cloud.  This runs counter to SOPA and PIPA because, if the bills pass, AlertBoot will need to police the content of our customers.  And yet, we are locked out by design.

    I guess one way around it is not to start the project to begin with.

    Leading to a Less Secure Environment

    The other solution to this legal conundrum, then, lies in not locking ourselves out.  Creating a backdoor of some sort, if you will.  Problems abound with such an approach.

    First, what company would store its sensitive data with another that, by law, has to police the former's information?  Patrolling the clouds for copyrighted content means reading through each and every file.  A company like salesforce.com -- an online customer relationship management software provider -- couldn't possibly exist in its current state under SOPA and PIPA.  In fact, I doubt such a company could have launched to begin with: the contents of customer databases are jealously guarded secrets.  The mere hint that some outsider would go through one's database would be enough to kill the project.

    Google's free email service stands, perhaps, as the antithesis to my argument above.  After all, ads are displayed based on the content of the email you have received (in Google Apps for Business, the default setting is to not serve ads; this doesn't mean there isn't an engine running in the background analyzing content, though), and many businesses have elected to sign up despite the implications of content monitoring.

    But many in the same position have decided not to use it for precisely that reason.  Plus, Google has to continuously assure its clients that no human ever reads the content of emails.  My understanding is that, as far as user privacy is concerned, there is no monitoring going on.  Google wouldn't be able to make such a claim if the current SOPA and PIPA bills pass as they are.  I wouldn't be surprised if there were an exodus, planned or instant, from Google Apps for Business if the bills were to pass.

    Second, the problem with a backdoor is that the same backdoor that allows us to gain access to clients' content has the potential to become the vector for a data breach.  As the past year has shown, the last thing you want in your web presence is a weakness that hackers can exploit, be they criminals with financial gain in mind or members of an online activist collective.

    There were incidents like Sony's data breach, where the weakness was traced to fixes and patches that had been available for a while, meaning Sony was in the wrong even if it was the victim.  But there were also incidents that required more than script-kiddie skills, such as the HBGary breach, where traditional hacking and social engineering were the foundation of the successful data breach.

    A backdoor is a very serious potential risk no matter what form it takes.  It's one of the reasons why the US government was finally dissuaded from passing a law requiring crypto vendors to install backdoors into their algorithms: there was no way of knowing who would eventually exploit it.  (The issue pops up during times of imminent danger, but so far, sanity has prevailed.)

    While SOPA and PIPA do not directly advocate a less secure computing environment, their language forces the industry down a road toward either a less dynamic, less vibrant environment or one that is inherently hazardous.  This, in an era when people are clamoring for a safer computing environment.

    Ripe for Abuse

    Of course, such arguments are countered with the usual "legitimate sites and businesses need not fear; the bills target those that are actively profiting from piracy."  If that's true, why not make it clear under the law?  Why write it so broadly that it raises all this ruckus?  And what makes you a legitimate business?

    Such palliative assertions are cold comfort for those who fall victim to the underbelly of legislation and political machinations.  And you can expect machinations.  After all, companies abusing and bending the law for their own private ends is nothing new.

    Heck, forget about bending the law; sometimes they're a law unto themselves.  Here's a case where Universal Music Group claims that it can take down whatever content it wants from YouTube, even if it's not theirs.

    Or, take as an example the link I provided earlier, about the UK student who is being extradited to the US.  His offense?  Creating a page that linked to pirated material (and profiting from visitors to his page on their way to some other site).  I'm not about to debate the merits of the case.  It's quite obvious that the student in question has a, ahem, wild streak.

    I'd merely like to point out that there are many companies that essentially do the same thing and potentially link to the same sites, but their coverage is broader than pirated content.  Yes, I'm talking about search engines, the Googles, Bings, and Yahoo!s of the world.  How come they're allowed to continue with business as usual while some guy is arrested?  Is it because the kid didn't have his papers in order, because he hadn't registered himself as a legitimate business owner?

    The fact that all of his links led to pirated content is irrelevant.  Currently, that's not a crime.  Under SOPA and PIPA, it will be.  One could say that he was rubbing the copyright holders' noses in it.  That's not a crime, either.

    In fact, I'd say that he was doing some nose-rubbing and showing copyright holders where they could find the people who were hosting pirated material.  From this point of view, the kid should have been thanked for consolidating a list of sites for content owners to go after.

    Obviously, SOPA and PIPA were not behind the student's arrest.  These are bills, after all.  But, the above stories are evidence of how broadly written laws can be abused.  And there are many such stories.

    AlertBoot: Against SOPA and PIPA

    SOPA and PIPA place an inordinate amount of power in the hands of a minority that has shown less-than-admirable qualities from time to time, the issue of who's in the right notwithstanding.

    While laws targeting piracy are needed, the current bills are -- in our view -- not the answer.  Allowing SOPA and PIPA to become law as-is will be a Pyrrhic victory of sorts, unable to stem the tide of piracy while dragging down budding and growing industries along with it.

    AlertBoot is against SOPA and PIPA.

    If you're in agreement, visit this page and let your voice be heard.

    Posted Jan 19 2012, 02:48 AM by tim_maliyil with no comments
     
  • Data Security: Why Salting Password Hashes Is Required But Has Limitations

    The big news in the data security arena this week is, of course, the hack at zappos.com.  Thankfully, data encryption software was used to protect credit card numbers at Zappos, so the fallout from the data breach is curtailed to what is generally considered "less sensitive" data (but, as more and more articles point out, the definition of what is deemed "sensitive data" is changing, and now may include what's traditionally considered "less sensitive" data).

    I'm not going to rehash what's been covered by literally hundreds of news outlets.  Instead, I'd like to explore passwords, hashing, and salting a bit.  I've come to realize that, so far, I've been lucky in assuming that a "call to change passwords" means "no salting."

    • Zappos asks customers to change passwords - What is hashing?
    • Salted passwords elevate security... - Why password salts are necessary
    • ...But are not the end all, be all - Compromising salted passwords

    Zappos Asks Customers to Change Passwords

    One of the things Zappos required of its customers was that they change their passwords.  The company has also asked customers to change their passwords at other sites if their Zappos password was reused elsewhere.

    Normally, such a request is due to the lack of password salting during the hashing process.  If you're new to the world of password security, hashing is a function that converts text into some other text.  But it has its particularities (a short code sketch after the list below illustrates all three):

    • One-way only.  Hashing is designed to make it hard (impossible) to figure out what the original text was.  For example, YouAreAUser123 could be converted to $12@f23fW2^1bsASFsd, and there's no way to convert it back.  Because of this, hashing is known as a one-way function or a one-way algorithm.  I've seen infographics that compare it to making sausage: once you have the end product, it's impossible to figure out which specific animal it's constituted from (yes, it's quite the disturbing analogy.  But it sticks with you and illustrates the point very well).
    • Unpredictable.  Although an algorithm is involved, meaning there's an underlying structured formula, the resulting converted text is quite unpredictable.  For example, you can't figure out what the hashed result of "ha" will be based on the hashed results for "h" and "a", or "ah", or "hb", etc.  This also means there will be a world of difference between YouAreAUser123 and YouAreAUser122.
    • Always the same.  On the other hand, a particular hash algorithm will always give the same output given the same input.  So, for example, if two users decide to use YouAreAUser123 as their actual passwords, their hashed values would be $12@f23fW2^1bsASFsd, using the example from the first bullet above.
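    To make those three properties concrete, here's a minimal Python sketch.  The choice of Python and SHA-256 is purely mine for illustration -- the post doesn't specify any particular hash algorithm, and a bare fast hash like SHA-256 isn't how you'd want to store passwords in production -- but the behavior shown is the same for any cryptographic hash:

    ```python
    import hashlib

    def hash_text(text):
        """Return the SHA-256 digest of the given text as a hex string."""
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    # Always the same: identical inputs always produce identical digests.
    print(hash_text("YouAreAUser123") == hash_text("YouAreAUser123"))  # True

    # Unpredictable: a one-character change yields a completely different digest.
    print(hash_text("YouAreAUser123"))
    print(hash_text("YouAreAUser122"))

    # One-way only: there is no reverse function; recovering the input
    # from the digest is computationally infeasible by design.
    ```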

    That last point is why the theft of unsalted passwords results in companies strongly suggesting that users change their account passwords everywhere: since the hashing algorithm is not kept secret and always produces the same output for a given input, hackers can take a list of words, hash them, and compare the results with the hashed passwords they've stolen.

    If they see a match, they can look up the unhashed password in their original list.  This is the concept behind rainbow tables, a table of pre-computed hash results.
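    Strictly speaking, a true rainbow table compresses the precomputed results with hash chains; the sketch below is the simpler lookup-table version of the same idea, again using Python and SHA-256 as stand-ins and a tiny hypothetical wordlist of my own choosing:

    ```python
    import hashlib

    def hash_text(text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    # Hypothetical wordlist; a real one would have millions of entries.
    wordlist = ["password1", "iloveyou", "trustno1", "YouAreAUser123"]

    # Precompute the table once: digest -> original word.
    lookup = {hash_text(word): word for word in wordlist}

    # "Stolen" unsalted hashes are then reversed with a simple dictionary lookup.
    stolen_hashes = [hash_text("iloveyou"), hash_text("YouAreAUser123")]
    for digest in stolen_hashes:
        print(digest[:16], "->", lookup.get(digest, "not in table"))
    ```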

    What about salted passwords, though?

    Salted Passwords Elevate Security...

    Salting is the process of adding extra characters to the password.  Because adding one extra character -- or changing one character (as in the YouAreAUser123 vs. YouAreAUser122 example I've given) -- produces a completely different hash, hackers will have a harder time figuring out the original, non-hashed password.

    For example, let's say that the salt is "a", added to the beginning of each password.  In that case, the user submits his password as YouAreAUser123.  The actual password ends up being aYouAreAUser123 and this is hashed for a completely different outcome.  The user keeps using his original password, of course.  The salting is done by the company's servers.
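    Here's what that looks like in the same illustrative Python/SHA-256 terms, using the post's single-character salt.  (A real system would use a long, random, per-user salt and a deliberately slow hash; this is just a sketch of the principle.)

    ```python
    import hashlib

    SALT = "a"  # the post's example: one character prepended by the server

    def hash_salted(password, salt=SALT):
        # The user still types YouAreAUser123; the server prepends the salt
        # before hashing, so the stored value no longer matches anything in
        # a precomputed table built from bare passwords.
        return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

    print(hashlib.sha256("YouAreAUser123".encode("utf-8")).hexdigest())  # unsalted
    print(hash_salted("YouAreAUser123"))  # salted: completely different digest
    ```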

    When hackers breach this particular password database, they won't be able to use a pre-configured table to look up the original passwords because the hashes will not match up at all.  If salting is used to secure passwords, then, conceivably, users don't need to change their passwords because of a data breach.

    However, this only works as long as the salt is kept secure.  If the salt is also exposed, it's short work for a hacker to attach the salt and generate a new list of hashed passwords.

    Certainly, the hacker will have to wait for the passwords to be generated; however, it's the computer doing the heavy lifting.  After the initial setup, a hacker can just sit back and relax.
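    In the same illustrative terms, once the salt leaks the attacker simply rebuilds the lookup table with the salt attached (again, the salt value and wordlist here are hypothetical):

    ```python
    import hashlib

    def hash_text(text):
        return hashlib.sha256(text.encode("utf-8")).hexdigest()

    leaked_salt = "a"  # suppose the breach exposed the salt along with the hashes
    wordlist = ["password1", "iloveyou", "trustno1", "YouAreAUser123"]

    # With the salt known, rebuilding the lookup table is trivial; the computer
    # does the heavy lifting while the attacker sits back and relaxes.
    salted_lookup = {hash_text(leaked_salt + word): word for word in wordlist}

    print(salted_lookup[hash_text("a" + "iloveyou")])  # -> iloveyou
    ```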

    ...But are Not The End All, Be All

    But there's still a way to figure out passwords even if the salt is NOT compromised.  It hadn't occurred to me sooner because its effectiveness has largely been eliminated in the world of encryption software, but salted passwords can be compromised via frequency analysis.

    Frequency analysis involves seeing how often something pops up, making an educated guess, testing it, and generally trying to solve a puzzle.  For example, with certain early encryption systems, one could guess the underlying message because "e" is the most recurring letter in the English language, followed by "t" and "a," respectively.  So, if "z" shows up the most in an encrypted text, followed by "b" and "a," then "z = e", "b = t", and "a = a".  It's not always as straightforward as that, but it generally worked.

    The same can be done with hashed passwords because people don't use secure passwords.  Time and time again we have seen people using passwords such as "password1", "iloveyou", "trustno1", and others that regularly show up on hacked password lists.

    So, a hacker can compile the frequency of particular hashes showing up; list the top 20 or so; and guess via trial and error as to which hash might correspond to which password (the presence of salt doesn't matter).  In Zappos's case, it's implied that 24 million passwords were compromised, so after a little testing, hackers should have figured out the passwords of (possibly) tens of thousands of people.
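    A rough sketch of the idea, with a toy "stolen" database standing in for Zappos's millions of hashes.  Python, SHA-256, and the sample passwords are again my illustrative choices, and the salt is assumed to be unknown to the attacker:

    ```python
    from collections import Counter
    import hashlib

    def hash_salted(password, salt="a"):  # the salt is hidden from the attacker
        return hashlib.sha256((salt + password).encode("utf-8")).hexdigest()

    # Toy "stolen" database: many users reusing a handful of weak passwords.
    stolen = [hash_salted(p) for p in
              ["password1"] * 5 + ["iloveyou"] * 3 + ["trustno1"] * 2 + ["x9$kQ2!fLm"]]

    # Rank digests by how often they appear; the most frequent ones very likely
    # correspond to the usual suspects, salt or no salt.
    for digest, count in Counter(stolen).most_common(3):
        print(count, digest[:16])
    ```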

    So, even if Zappos had salted their passwords, it stands to reason it would recommend that customers change their passwords.

    One could, of course, use variable (per-user) salts... but this detracts from the real issue: using strong, secure passwords.  If everyone were to use complex passwords, the need for salting would not exist (in theory, at least).


    Related Articles and Sites:
    http://en.wikipedia.org/wiki/Cryptographic_hash_function

     
  • Data Encryption Software: Yet Another Article on Yet Another Authentication Scheme

    Or, YAA On YAAS.  The site msnbc.com is carrying another article on futuristic password killer initiatives by the military.  Passwords are, of course, of great interest to those dealing with data encryption (such as yours truly, at AlertBoot) since they're usually the points of failure when it comes to information security.

    And yet, one wonders whether passwords can really be killed.  It appears to me that it's not simply a matter of developing "better" ways of authenticating people.

    Same Old Story, New Solutions, Same Old Results

    As msnbc.com notes, "today's world requires countless passwords," and goes on to say that

    the U.S. military wants to eliminate clunky passwords by creating a security system that actively recognizes individuals based on computer keystrokes, language patterns or even typing speed....focus[ing] on the behavior of each individual reflects an interest in each person's "cognitive fingerprint" left behind by how the mind processes information.

    There is nothing new about such initiatives.  In my brief time at AlertBoot, I've covered a number of them.

    I'm probably missing quite a number of other developments that have been floated and shelved over the years.  It doesn't matter what form they take; it's always passwords that most organizations end up using to authenticate people, sometimes exclusively.

    Even with solutions that offer something other than passwords for authentication (including encryption software that supports physical tokens for identification), passwords are always there, either as part of a two-factor authentication scheme or as a backup in case the user loses the token.

    Passwords are Problematic

    Passwords, though, pose many problems.

    First, passwords can be weak.  As forbes.com notes in this article, users don't necessarily choose strong passwords to begin with.

    Second, passwords can be shared.  When something can be so easily shared, it's problematic as an authentication scheme.  Plus, it cannot be easily "unshared."  For example, if the CEO gives her secretary the password to her computer because of an emergency, how do you wipe it from the secretary's brain after that one time?  You can't; the password has to be changed.  (Anyone manufacture those MIB neuralyzers yet?)

    Third, passwords can be hard to memorize.  In an effort to counter the first point, administrators can force users to create passwords that are too complex to memorize.  This is not a problem per se, but it leads to....

    Fourth, password resets can be expensive or become another point of weakness.  Resetting passwords is, despite its simplicity, kind of expensive.  One can't just reset a password for anyone -- you've got to be able to determine that people are who they claim to be.  This involves expenditures, such as hiring people or services for the express purpose of helping users with their password issues, or using self-service password resets, which pose their own problems.

    Microsoft recently published a paper on the "resilience of passwords," and how, despite professionals having predicted their demise for decades, passwords are still going strong -- and will be for the foreseeable future.

    The Power of Free?

    A number of words popped out at me during a quick skim of the Microsoft paper: "Mis-aligned incentives can cause desirable solutions to fail."  I'll go over the paper this week, but it seems to me that this is probably one of the main reasons why passwords have prevailed despite their shortcomings.

    Think about it.  Passwords aren't a physical entity, so:

    1. It costs nothing to generate them
    2. There are no transportation/delivery costs
    3. Sending and generating a replacement costs much less than physical delivery
    4. Users can't "forget" to bring it

    I realize #3 and #4 above appear contradictory to what I wrote before.  Think of it in the following manner:

    For #3, if an organization decides to go with the "butt verification" system, passwords are never sent anywhere.  One assumes that telecommuting is not really an option: you'll have to move your buns to where the computers are.  No passwords to lose, no passwords to generate.  The costs associated with the loss of passwords are nil.

    However, if some other method of verification is used instead of passwords, such as tokens, then the costs associated with replacing them are formidable compared with those of generating and delivering a new password.  Both require somehow confirming that the recipient is the intended one, but password generation doesn't involve physical delivery.  So, there are options that can be cheaper or dearer than replacing passwords.

    For #4, a user could forget a password, just as one can leave a token at home.  But this reflects the limitations of the English language.  A token can be "forgotten" in various ways: it can be forgotten (left behind at home), it can be misplaced (don't know where I left it, although I'm sure it's at home), or it can be stolen by force.

    Passwords can be forgotten, but can't be misplaced.  They can be stolen, but so can tokens.  In a sense, passwords have a leg up on tokens.

    Anyhow, this post is not meant to be a comprehensive monograph on the subject.  My point is simply that, as long as passwords remain the cheapest alternative -- in monetary terms -- they're going to be around for a very long time.


    Related Articles and Sites:
    http://www.msnbc.msn.com/id/46016329/ns/technology_and_science-innovation/#.TxS0oaVSQbc
    http://www.forbes.com/sites/davidcoursey/2011/11/21/25-worst-passwords-of-2011-revealed/
    http://research.microsoft.com/pubs/154077/Persistence-authorcopy.pdf

     
  • Drive Encryption: Data Lost In Transit Is Now #2 Reason For Data Breaches

    According to the Identity Theft Resource Center, hacking is now the leading cause of data breaches followed by data lost in transit (laptops, external storage devices, USB flash disks, etc) and insider theft (#2 and #3, respectively).  All the more reason why encryption software should be used.

    419 Publicly Disclosed Breaches in 2011

    According to informationweek.com, the Identity Theft Resource Center (ITRC) compiled the numbers on all 419 publicly disclosed breaches in 2011 and found that the number one cause of a data breach was hacking (26% of all incidents), followed by "data on the move" (18%) and insider theft (13%).

    While the report hasn't been released yet (informationweek.com got an advance copy), I think it could be slightly contentious based on one passage:

    Last year, data breaches triggered by hacking--defined by the ITRC as "a targeted intrusion into a data network," including card-skimming attacks--were at an all-time high, and responsible for 26% of all known data breach incidents. [my emphasis]

    I'd still have to wait for the report to see the details, but I'm left wondering if card-skimming is really hacking.  It certainly fulfills the condition of being "a targeted intrusion into a data network" since ATMs are the public-facing endpoints of a network (banking, that is).  And it certainly is hacking, in the most traditional sense.

    And yet, it just doesn't feel like it should be lumped in there with the likes of the Sony data breach, which I'm sure is included in that category (biggest hacking incident in 2011).

    For one, the data was breached before it was entered into a network, or as it was being entered.  That is, it's not a case where the hackers obtained customer information because a company had weak security in place.  Plus, it doesn't even have to occur at an ATM.  For example, a rogue network of restaurant waiters using tiny all-in-one card readers (such as in this demonstration) could easily cause a massive breach.

    On the other hand, a data breach is a data breach no matter how, where, and when it happened, or whose lack of security awareness was being exploited.

    Why is This a Problem?

    Why does this matter?  News organizations are bound to run with the headline, since it's the first time hacking is #1.  And since people in general don't read the nitty-gritty details, they might make the unfortunate assumption that they should invest in hacking-prevention solutions at the expense of other areas, such as encryption software for laptop computers.

    The thing is, the difference between 26% and 18% is not so vast that companies ought to prioritize investment in one area over the other.  I can't blame the ITRC for compiling its results in the manner it has, though.  Their focus is on providing "victim and consumer support as well as public education."  So, it makes sense for them to lump certain categories together and guide the conversation toward what people should be looking out for in order to protect themselves.

    I'll follow up to see if my concerns are unfounded in a future post.

    Related Articles and Sites:
    http://www.informationweek.com/news/security/attacks/232400252

     
  • Drive Encryption Software: 1/5 Of Breaches Occur At 3rd Party Recovery Services

    According to a number of sites covering the issue, the Ponemon Institute has released a survey showing that 21% of information security breaches occur while corporate data is being held by a data recovery service provider.  This is one of those areas where disk encryption software cannot help because, well, one has voluntarily allowed a third party to access the data.

    I mean, how else is one supposed to recover data?

    769 IT Practitioners Surveyed

    Ponemon surveyed 769 CIOs and CISOs, and of the 87% who responded as having experienced a breach in the past couple of years, 21% said a breach occurred "when a drive was in the possession of a third-party data recovery service provider."

    Disk encryption was created for those instances where a third party has your device: namely, when it's stolen (yeah, most people wouldn't quite call thieves a "third party," but it fits the definition).  However, in this case, the third party is doing something on behalf of the data owner.  And, generally, a third party like a data recovery service provider requires access to the hard disk.

    Otherwise, how's the service going to know whether it's actually doing its job correctly or merely worsening the problem?

    In such situations, the only solution is to use a data recovery service provider that's been vetted.  As infoworld.com points out, though, this is not always possible: what if an employee's computer breaks down while on the road?

    Plus, even if a company is vetted, it doesn't necessarily mean that the employees will act faithfully according to the company's policies.  For example, I remember reading how the geeks at Geek Squad would copy content from computers that came in for servicing.  This certainly is not company policy.

    Perhaps I'm jumping the gun here, though, as most companies are not particularly interested in security:

    About 81 percent of the respondents said the speed of recovery was the most important factor in choosing a vendor and 75 percent said the ability to successfully recover data was the most important. Security-related concerns were not a priority for these respondents, according to the survey. [eweek.com]

    Seeing how roughly one in five data breaches can be traced back to a data recovery vendor, perhaps the placement of data security on the vendor-selection totem pole ought to be thought over.


    Related Articles and Sites:
    http://www.infoworld.com/t/security/companies-prove-careless-when-enlisting-data-recovery-services-183816
    http://www.networkworld.com/news/2012/011112-data-recovery-254804.html

     