AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size who want a scalable and easy-to-deploy solution. Centrally managed through a web based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Google and Google+ : Data Breach or Not?

    This week's revelation that Google covered up a data breach connected to Google+, its little-used Facebook competitor, has spilled a lot of digital ink. Unsurprisingly, most of it is unsympathetic to Google. One exception was an article at theverge.com, which noted that "the breach that killed Google+ wasn't a breach at all."
    And, on the face of it, it's true. As far as Google knows, the "data breach" (in reality, a bug that could have allowed a data breach) was never exploited. Its logs show nothing. And, in order to make use of the exploit, a person had to request permission to access an API (Application Programming Interface), which only 432 people did. In the end, Google estimates that 500,000 people could have been affected… if there had been an actual data breach. We're talking theoretical potential here, not post facto possibility.
    And the most damning indication that this is not a data breach is the fact that the data that could have been exposed by the API bug wouldn't have actually triggered a breach notification. If you look through the list of data that could have been compromised, you'll see that none of it qualifies as sensitive or personal information under US data breach notification laws. Full names, email addresses, a profile pic? Unauthorized access to these does not merit a notification under any of the 50 US state laws dealing with data breaches.
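    To make the discussion concrete, here is a rough sketch (in Python) of the kind of Google+ People API call a third-party app would make; the endpoint comes from Google's API documentation in the Related Articles and Sites below, and the OAuth token is a hypothetical placeholder:

    ```python
    # Rough sketch of a Google+ People API request (endpoint per Google's
    # API docs, linked below). ACCESS_TOKEN is a hypothetical placeholder
    # for an OAuth token a user would have granted to a third-party app.
    import requests

    ACCESS_TOKEN = "ya29.EXAMPLE-TOKEN"

    resp = requests.get(
        "https://www.googleapis.com/plus/v1/people/me",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    profile = resp.json()

    # The kinds of fields at issue -- none of which qualify as "personal
    # information" under US state breach notification laws:
    print(profile.get("displayName"))  # full name
    print(profile.get("emails"))       # email addresses
    print(profile.get("image"))        # profile picture URL
    ```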
    Again, on the face of it, it looks like no big deal. If you dig into the details, however, you'll see some problems. First off, Google can't really know what happened because they only keep two weeks' worth of logs; the bug, on the other hand, went unfixed for over two years. Who knows what happened prior to the patch, outside of the two-week period?
    Second, the fact that fewer than 450 people applied for API access is little comfort when you realize that the Facebook Cambridge Analytica situation only required one renegade API user. (Perhaps we could take some comfort in "only" 500,000 people potentially being affected, but we don't really know where that figure came from. Is it based on the severely curtailed log data? Or the total connections that the API requesters currently have? What if those requesters dropped connections over the years, thus depressing the figure?)
    Still, despite the above, it looks like this data breach is not really a data breach. Facebook said the same self-serving thing about the Cambridge Analytica situation…but, in that case, data was exploited in an unauthorized manner.  

    EU is not US

    As noted above, the data that was accessible via the bug is not covered under data breach laws, at least not in the US. The US does not have an all-encompassing federal law; it's all done at the state level. And it was only earlier this year that the 50th and final state succumbed to the times and passed a data breach notification law (thank you, Alabama). Under these laws, what counts as a "reportable" data breach is strictly defined. When you look at the data at the center of this incident, it's obvious that it doesn't pass the "personal information" test: without more "substantial" information like SSNs, driver's license numbers, financial data, etc., Google is in the clear.
    In comparison, the EU has stronger privacy laws, but the bug was found and fixed before those laws were strengthened quite recently. Still, a case could be made that the potential breach required public notification under the framework of the older European laws. For example, the UK (before Brexit) gave this example of what constitutes "personal data" under the EU's Data Protection Directive:
    Information may be recorded about the operation of a piece of machinery (say, a biscuit-making machine). If the information is recorded to monitor the efficiency of the machine, it is unlikely to be personal data…. However, if the information is recorded to monitor the productivity of the employee who operates the machine (and his annual bonus depends on achieving a certain level of productivity), the information about the operation of the machine will be personal data about the individual employee who operates it. [section 7.2, personal_data_flowchart_v1_with_preface001.pdf]
    As you can see, within the EU, there is a gray area as to what personal data is. It could be that Google is not out of the woods yet, legally speaking.

    Bugs are Identified and Fixed All the Time

    As the theverge.com article notes:
    There is a real case against disclosing this kind of bug, although it’s not quite as convincing in retrospect. All systems have vulnerabilities, so the only good security strategy is to be constantly finding and fixing them. As a result, the most secure software will be the one that’s discovering and patching the most bugs, even if that might seem counterintuitive from the outside. Requiring companies to publicly report each bug could be a perverse incentive, punishing the products that do the most to protect their users.
    Quite an accurate point. In addition, it should be noted that this is literally a computer bug and nothing more, because it was discovered in-house. If a third party had found the security oversight and reported it to Google, it would have been a data breach: that person, as an unauthorized party, would have had to access the data illegally in order to identify the bug as such.
    In this particular case… well, you tasked someone with finding bugs and that person did find it. That's not a data breach. That's a company doing things right.  

    Hush, Hush. Sub rosa. Mum's the Word

    But then, why the secrecy surrounding it? Supposedly, there was an internal debate whether to go public with the bug, a debate that included references to Cambridge Analytica. If Google had been in the clear, the discussion would have been unnecessary.
    Or would it? With Facebook's Cambridge Analytica fiasco dominating the headlines at the time, Google couldn't have relished the idea of announcing an incident that, in theory, closely mirrors Facebook's but that led to a different data security outcome. Thus, it is unsurprising that the bug, and the debate surrounding it, were kept quiet. (It was a cover-up, some say. But again, Google didn't technically have a data breach; going public truly was its prerogative.)
    Now that the world knows of it, though, it has led to the same outcome: a global scandal; governments in the EU and US looking into the situation; another tech giant's propriety and priorities questioned (arguably, a tech giant that was held in higher esteem than FB); the alienation and angering of one's user base.
    Going public with the bug could be seen as a "damned if you do, damned if you don't" sort of situation. But when you consider what Google's been up to lately, like the censored search engine it is reportedly building for China and the quiet removal of "don't be evil" from its code of conduct, you've got to wonder what is really driving the company in Mountain View.  
     
    Related Articles and Sites:
    https://developers.google.com/+/web/api/rest/latest/people
    https://www.theverge.com/2018/10/9/17957312/google-plus-vulnerability-privacy-breach-law
    https://www.theguardian.com/technology/2018/oct/08/google-plus-security-breach-wall-street-journal
    https://www.zdnet.com/article/senators-demand-google-hand-over-internal-memo-urging-google-cover-up/
    https://www.independent.co.uk/life-style/gadgets-and-tech/news/google-china-search-engine-censorship-leak-project-dragonfly-state-government-a8577241.html
    https://gizmodo.com/google-removes-nearly-all-mentions-of-dont-be-evil-from-1826153393
     
  • Equifax Already Had a Data Breach Before It Was Hacked In 2017

    According to wsj.com (paywalled), Equifax had already suffered a data breach before the data breach that made the company famous around the world. In 2015, two years before the hack that started with a bang and ended with less than a whimper, "Chinese spies" made off with "thousands of pages of proprietary information," including code, HR files, and manuals.
    For many, the use of the word spy in this context will conjure visions of Chinese Matt Damons pulling a The Departed (or, as they say in that neck of the woods, "Dee Dee-paaaah-ted"). In actuality, the breach appears to be unremarkably mundane: people being bribed with jobs and salary increases to walk out with proprietary information. It's the kind of thing that happens all the time. It was, for example, the crux of Google's (well, Waymo's) beef with Uber.  

    Why Are We Hearing About It Now?

    The US has a fractured mishmash of laws and regulations when it comes to data breaches, information security, and data privacy, instead of one comprehensive law. What this means is that Equifax's 2015 breach could legally be kept from the public because it didn't involve personal information, at least not in the way we usually think of it.
    HR files must, by definition, include personal info. However, these would be employee records, not consumer records… and the laws and regulations that have been passed so far, for the most part, involve consumer records or a variation thereof. It's the reason why, for example, HIPAA kicks in when patient data is put at risk but not when nurse and doctor info is stolen.
    As mentioned before, the breach was not made public at the time. This does not mean, however, that Equifax just sat on it. The company did contact the FBI and did carry out an investigation. That it decided not to go public is understandable and entirely within its legal rights. It should also be noted that going public in this instance wouldn't have helped anyone: the message would essentially have been "your employees could steal from you!" Everyone knows this already. It might have mattered more if, for example, the message was "change your default passwords immediately!"
    But, in light of the hack that occurred two years later, it does raise questions.  

    Lessons Not Learned

    Earlier this month, the US Government Accountability Office (GAO) released a report on the 2017 Equifax data breach, aka The Big One. Per fortune.com, the report:
    summarizes an array of errors inside the company, largely relating to a failure to use well-known security best practices and a lack of internal controls and routine security reviews.
    "Lack of internal controls and routine security reviews." You'd think that a company that suffered a guy walking off with the company's secret sauce to a potential competitor would have done something regarding internal controls and routine security reviews. That these were lacking in the two years bookmarked by the two data breaches speaks volumes of what Equifax thought was important.
    Thankfully, it looks like perhaps the credit reporting agency is finally taking data security seriously. But then, with everyone looking and keeping track of what they're doing, it'd be a bad idea not to.
     
    Related Articles and Sites:
    https://www.wsj.com/articles/before-it-was-hacked-equifax-had-a-different-fear-chinese-spying-1536768305
    http://fortune.com/2018/09/07/equifax-data-breach-one-year-anniversary/
     
  • Anthem Data Breach Settled for $115M, Despite Having "Reasonable" Security

    Last week, a federal judge approved a settlement – the largest to date when it comes to data breaches – that is historic and yet falls flat: Anthem, the Indianapolis-based insurer, has agreed to pay a total of $115 million to settle all charges related to its 2015 data breach.

    The breach, strongly believed to have been perpetrated by actors with ties to the Chinese government, began with a phishing attack. By the time the electronic dust settled, the information of 79 million people (including 12 million minors) had been stolen, including names, birth dates, medical IDs and/or Social Security numbers, street addresses, and email addresses.

    Needless to say, this information can be used to perpetrate all types of fraud.

    And while the judge overseeing the case has found the settlement to be "fair, adequate, and reasonable," critics have noted that the victims only get $51 million of the total settlement, which amounts to 65 cents per person. The rest goes to lawyers and consultants.

    What's surprising about this story is not that the victims are getting shafted; or that the lawyers are getting an ethically dubious portion of the settlement; or even that Anthem settled out of court, a once unthinkable action. Then again, courts are warming up to the idea that victims of a data breach have suffered an injury that is redressable by law. (Chances are that if this lawsuit had been filed ten years ago, the defending corporation would have successfully argued to have it tossed from court.)  

    Reasonable Security

    What is surprising is that all of this happened despite Anthem having had what experts called "reasonable" security measures at the time of the breach.

    What exactly is "reasonable" security? Is it tantamount to "good" security? Or perhaps it doesn't reach the level of good, but is better than "bad" security, which in turn is better than no security at all? And what would its converse, unreasonable security, look like?

    What constitutes "reasonable" security is not fleshed out in detail anywhere. But we do know this: per the settlement, Anthem has to triple its data security budget. Which is weird, because (a) if you have to triple your security budget, maybe it wasn't reasonable to begin with? And (b) the flashpoint of the data breach – clicking on a phishing email that surreptitiously installed malware, which may or may not have been flagged by antivirus software – can hardly be prevented by spending more money.

    But even weirder is this:

    "The [California Department of Insurance examination] team noted Anthem's exploitable vulnerabilities, worked with Anthem to develop a plan to address those vulnerabilities, and conducted a penetration test exercise to validate the strength of Anthem's corrective measures," the department said in its statement. "As a result, the team found Anthem's improvements to its cybersecurity protocols and planned improvements were reasonable." [healthitsecurity.com]

    There's that "reasonable" word again. The company had reasonable security, got hacked, corrective measures were taken, and now the improvements are reasonable?

    If you're being hacked by what could potentially be the intelligence arm of a foreign state, perhaps you'd like something that's more than reasonable. Hopefully, the choice of words used to describe what was implemented does not accurately reflect the effort, planning, and technical expertise that actually went into it.

    At the same time, it's hard to ignore the fact that data breaches like this are the perfect moral hazard:

    • The information that is stolen is tied to individuals. Any misuse of the data will affect these people, not the company.
    • A rotating cast of executives means that you don't necessarily plan for the long term. Especially if you're paid very well for being fired because of a data breach.
    • Financial penalties become meaningless if (a) they can be used to offset taxes, (b) happen to be a drop in the bucket (Anthem's 2017 revenue was $90 billion), and (c) the cost can be passed on to customers.
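    To put (c) in perspective with some rough arithmetic: even if Anthem had footed the entire settlement with no offsets, $115 million against $90 billion in annual revenue works out to roughly 0.13%, or about half a day's worth of revenue.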

     

    Related Articles and Sites:
    https://healthitsecurity.com/news/judge-gives-final-ok-to-115m-anthem-data-breach-settlement
    https://www.govinfosecurity.com/interviews/analysis-anthem-data-breach-settlement-i-4083
    https://www.ibj.com/articles/70144-anthem-data-breach-judge-oks-huge-fee-award-but-not-as-much-as-attorneys-wanted
    https://biglawbusiness.com/anthem-115-million-data-breach-settlement-approved-by-judge/

     
  • Survey Says Data Breaches Result In Long-Term Negative Impact

    According to darkreading.com, a recent survey commissioned by CA Technologies has shown that there can be serious repercussions for companies that fall victim to data breaches. If the survey's conclusions are to be believed, about half of the organizations that were involved in a data breach see "long-term negative effects on both consumer trust (50%) and business results (47%)." Which is surprising, since the general feeling is that businesses involved in a data breach are not penalized at an appropriate level.
    For example, Equifax revealed a history-making data breach almost one year ago. Its stock price took a nose-dive, people were fired, financial penalties were proclaimed, people complained, lawsuits were filed, etc. Today, the stock price has recovered quite a bit from its one-year lows. Lawsuits are being battled in court, with the very real possibility of a summary dismissal; if not, the company will probably settle for an amount that will be a drop in the bucket for a company its size. The proclaimed penalties were withdrawn in exchange for Equifax upping its security. People don't complain so much as grumble sotto voce. Year-over-year revenue is up at Equifax.
    All in all, it looks like Equifax has weathered this storm quite nicely. Such has been the basic pattern for major companies involved in data breaches for at least the past ten years.
    Only once in a blue moon do you hear of a company so adversely impacted by a data breach that it made other companies sit up and take notice. Such instances are certainly few and far between.  

    Survey Says…

    According to ca.com, among other things:
    • 48% - Consumers who stopped using the services of at least one organization due to a data breach.
    • 59% - Businesses that reported moderate to strong long-term negative impact to business results after a breach.
    • 86% - Consumers that prefer security over convenience.
    These figures are curious, especially the last one. It's known that people don't necessarily tell the truth on surveys, but the real issue in this instance is that a survey is but a snapshot in time. One need not doubt that nearly half the people surveyed stopped being a customer of a breached entity; however, it would be more informative to know how long they've been boycotting a company – one day, one week, one month, one year? – and whether they're still doing so when followed up some time later. (It should be noted that the survey did not define the length of "long-term" but one assumes it's longer than one year, in keeping with accounting terminology).
    Likewise for the figure on businesses negatively affected by a data breach. Equifax, for example, would have claimed that they were seriously affected if surveyed three months after their public outing; however, their answer would have been different one year later. And five years from now? Who knows?
    And then you have that counterintuitive 86% figure: a clear majority of people prefer security over convenience? That certainly is news, especially considering that people's actions have not supported such a conclusion over the past decade.  

    Strong Laws and Enforcement

    The concluding remarks of the survey, in a nutshell, are that companies need to improve their data security. (And also that companies in the business of transacting personal information need to be more transparent about it. This was, after all, the year of the Cambridge Analytica scandal.) Will companies improve their data security? Can they? The answer is yes.
    But not because of consumer demand.
    Consumers of goods and services have been raising hell over data breaches for a long time now. Data breach-related lawsuits that have been filed worldwide probably number in the thousands. Public spankings and shamings exceed that number. All of it to no effect. The only thing that's been shown to encourage attention to security is the passage and enforcement of laws.
    The world, with each sovereign state approaching data breach ramifications in its own way, has become a living laboratory revealing what does and doesn't work when it comes to increasing data security and curbing data abuses.
    Simply put, companies respond to financial penalties, as can be witnessed from Silicon Valley's behavior toward China and Europe, or how the United States healthcare sector significantly increased their data security only after regulators started hitting them with million-dollar fines.
     
    Related Articles and Sites:
    https://www.darkreading.com/risk/48--of-customers-avoid-services-post-data-breach/d/d-id/1332452
    https://www.ca.com/us/company/newsroom/press-releases/2018/ca-technologies-study-reveals-significant-differences-in-perceptions-on-state-of-digital-trust.html
    https://www.ca.com/us/collateral/white-papers/the-global-state-of-online-digital-trust.html
     
  • FBI Director Says Legislation Possibly A Way Into Encrypted Devices

    Last week, FBI Director Christopher Wray said that legislation may be one option for tackling the problem of "criminals going dark," a term that refers to law enforcement's inability to access suspects' data on encrypted devices. The implication is that, in the interest of justice and national security, the FBI will press for a law that will guarantee "exceptional access" to encrypted information. This most likely will require an encryption backdoor to be built on all smartphones, possibly on all digital devices that store data.
    It should be noted that the FBI emphatically denies that they want an encryption backdoor. One hopes they have taken this position because they're aware of the security problems backdoors represent; however, it's hard to ignore the possibility that the FBI is in spin-doctor mode. Their Remote Operations Unit, charged with hacking into phones and computers of suspects, uses terms like "remote access searches" or "network investigative techniques" for what everyone else would call "hacking" and "planting malware." Mind you, their actions are legally sanctioned, so why use euphemisms if not to mask what they're doing?
    If turning to legislation smells of déjà vu to old-timers, it's because this circus has been in town before. It set up its tent about 20 years ago and skipped town a couple of years later. And while many things have changed since then, the fundamental reasons why you don't want encryption backdoors have not.  

    A Classic Example of Why You Don't Want a Backdoor

    The FBI has implied time and again that it is in talks with a number of security experts who supposedly claim the ability to build "encryption with a backdoor" that cannot be abused by the wrong people. These security experts are not, the FBI notes, charlatans. Perhaps it is because of these experts that the FBI has not desisted from pursuing backdoors, despite the security community's overwhelming consensus that it cannot be done.
    It should be noted that Wray was asked by a Senator at the beginning of this year to provide a list of cryptographers that the FBI had consulted in pushing forth their agenda. To date, such a list has not been produced.
    (As an aside, according to wired.com, Ray Ozzie, arguably one of today's greatest minds in computing, recently and independently proposed a way to securely install a backdoor without compromising the security of encryption. One of the world's leading security experts has published a take on it; the conclusion, in a nutshell, is that it's flawed and mimics unsuccessful solutions proposed in the past.)
    What is it about backdoors that makes their mere mention trigger knee-jerk reactions from the security community? The answer lies in the fact that the community has been looking into this for a long, long time. In the end, it's the unknown unknowns that are the problem: encryption solutions run into surprises (bad ones) all the time. No matter how well-designed a solution is, it's impossible to prevent something like Apple's password-hint bug, which revealed the actual password, from happening.
    In June 2017, it was reported that over 700 million iPhones were in use. Not sold; in use. It can also be assumed that there are at least an equal number of Android devices in use as well. That would be a lot of compromised devices if a backdoor was in effect and a bug was introduced.
    These issues cannot be legislated away. Furthermore, bugs represent merely one situation where a backdoor can lead to disaster. Others include the deliberate disclosure of how to access the backdoor (think Snowden or Manning, or the leak of the CIA's hacking tools); the phishing, scamming, conning, or blackmailing of the backdoor's custodians; and the possibility of someone stumbling across the backdoor. Granted, the last one is highly unlikely, even more so than the others… but so are the chances of winning the lottery, and the world has seen hundreds, maybe thousands, of lottery winners.
    The point is that the chances of the backdoor being compromised are higher than one would expect.  

    Moral Hazard = FBI's Pursuit of the Impossible?

    One has to wonder why the FBI is so insistent on pursuing the impossible dream of an encryption backdoor that doesn't compromise on security. It would be easy to dismiss it as a case of legal eggheads not knowing math and science, or not having the imagination to ponder how badly things could go wrong.
    But perhaps it's an issue of moral hazard. Basically, there is very little downside for the FBI if a backdoor is implemented. Everyone knows that, if the FBI gets what it wants, they won't have direct access to the backdoor; it wouldn't be politically feasible. For example, prior to suing Apple in 2016, they suggested that Apple develop a backdoor and guard access to it. When the FBI presents an iPhone and a warrant, Apple unlocks the device. The FBI is nowhere near the backdoor; they're by the water-cooler in the lobby.
    The arrangement sounds reasonable until you realize that the FBI doesn't take responsibility for anything while reaping the benefits. The FBI does not have to develop, test, and implement the backdoor. Once implemented, it doesn't have to secure and monitor it. If there is a flaw in the backdoor's design, the FBI dodges direct criticism: they didn't design it, don't control it, etc. Last but not least, the onus is on the tech companies to resist foreign governments' insistence on being given access to encrypted data. Which you know will happen because they know the capability is there.
    It's a classic case of heads, I win; tails, I don't lose much.
     
    Related Articles and Sites:
    https://www.cyberscoop.com/fbi-director-without-compromise-encryption-legislation-may-remedy/
    https://www.theregister.co.uk/2017/10/05/apple_patches_password_hint_bug_that_revealed_password/
    https://www.ecfr.eu/page/-/no_middle_ground_moving_on_from_the_crypto_wars.pdf
     
  • Most of the Used Memory Cards Bought Online Are Not Properly Wiped

    According to tests carried out by researchers at the University of Hertfordshire (UK), nearly two-thirds of memory cards bought used from eBay, offline auctions, and second-hand shops were improperly wiped. That is, the researchers were able to access images or footage that had once been saved to these storage devices… even when the files had been deleted.

     

    Free and Easy to Use Software

    Popular media would have you believe that extracting such information requires an advanced degree in computers as well as specialized knowledge and equipment. These would certainly help; however, the truth is that an elementary school student could do the same. The researchers used "freely available software" (that is, programs downloadable from the internet) to "see if they could recover any data," and operating such software is a matter of pointing and clicking.
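    To give a sense of how low the bar really is, below is a minimal sketch of the technique such recovery programs rely on, known as file carving: scan a raw image of the card for JPEG start and end markers and save whatever lies in between. (This is a generic illustration with assumed file names, not the researchers' actual tooling.)

    ```python
    # Minimal file-carving sketch: recover "deleted" JPEGs from a raw card
    # image. Deleting a file removes its directory entry, not its bytes, so
    # the image data is often still there to be found. "sdcard.img" is a
    # hypothetical raw dump of a card (e.g. made with `dd`).
    SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
    EOI = b"\xff\xd9"      # JPEG end-of-image marker

    with open("sdcard.img", "rb") as f:
        data = f.read()

    count, pos = 0, 0
    while (start := data.find(SOI, pos)) != -1:
        end = data.find(EOI, start)
        if end == -1:
            break
        with open(f"recovered_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])  # save candidate image
        count += 1
        pos = end + 2

    print(f"Carved {count} candidate JPEGs")
    ```

    A naive scan like this produces some false positives, but that hardly matters: point it at an unwiped card and the "deleted" photos simply fall out.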
    In this particular case, the recovered data included "intimate photos, selfies, passport copies, contact lists, navigation files, pornography, resumes, browsing history, identification numbers, and other personal documents." According to bleepingcomputer.com, of the one hundred memory cards collected:
    • 36 were not wiped at all; neither the original owner nor the seller took any steps to remove the data.
    • 29 appeared to have been formatted, but data could still be recovered "with minimal effort."
    • 2 had their data deleted, but it was easily recoverable.
    • 25 appeared to have been properly wiped using a data-erasing tool that overwrites the storage area, so nothing could be recovered.
    • 4 could not be accessed (read: were broken).
    • 4 had no data present, but the reason could not be determined.
     

    Deleting, Erasing, Wiping… Not the Same

    Thankfully, it appears that most people are not blasé about their data. They do make an effort to delete their files before putting their memory cards up for sale. The problem is, deleting files doesn't actually delete them. (This terminology morass is the doing of computer software designers. Why label an action "Delete file" when it doesn't actually do that?)

    The proper way to wipe data on any digital medium is to overwrite it. For example, if you have a hard drive filled with selfies, you can "delete" all of them by first moving the selfies to the trash/recycle bin on your desktop and then saving to the disk as many cat pictures as you can find on the internet. This is analogous to painting over a canvas that already has a picture on it, although the analogy breaks down somewhat if one delves into technical minutiae.
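    For the curious, the difference between "moving files to the trash" and actually wiping them looks something like this minimal, single-pass sketch; real wiping tools also contend with flash wear-leveling, filesystem journals, and slack space, which a toy example does not:

    ```python
    # Minimal overwrite-then-delete sketch for a single file. A plain
    # os.remove() would only drop the directory entry, leaving the bytes
    # behind for a file carver to find; overwriting destroys them first.
    import os

    def overwrite_and_delete(path: str) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            f.write(os.urandom(size))   # overwrite contents with random bytes
            f.flush()
            os.fsync(f.fileno())        # push the overwrite to the device
        os.remove(path)                 # only now remove the directory entry

    overwrite_and_delete("selfie_0001.jpg")  # hypothetical file name
    ```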

    Incidentally, this is why encryption can be used to "wipe" your drive: encryption scrambles the data, so that what is actually stored on the medium is randomized gibberish. Only when the encryption key is provided is the data descrambled into something humans can read; at rest, what sits on the medium is just the scrambled version. So, if you end up selling a memory card with encrypted data but without the encryption key, it's tantamount to selling a memory card that's been properly wiped.
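    Here is a sketch of that idea using Python's third-party cryptography package: what sits on the card is ciphertext, and discarding the key is what makes the card equivalent to a properly wiped one.

    ```python
    # Crypto-erase sketch using the `cryptography` package
    # (pip install cryptography). What is written to the card is ciphertext;
    # destroying the key is, in effect, a wipe, since the ciphertext is
    # indistinguishable from random noise without it.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()      # stays with the owner, never on the card
    cipher = Fernet(key)

    # This is what would actually be stored on the memory card:
    card_contents = cipher.encrypt(b"holiday photos, contact list, resume...")

    # Sell the card, but not the key, and the buyer gets only noise:
    del key, cipher                  # "losing" the key = crypto-erase
    # card_contents can no longer be feasibly decrypted by anyone.
    ```

    This is also how many self-encrypting drives implement "instant secure erase": nothing is overwritten; the drive simply destroys its media encryption key.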

     

    More of the Same

    This is not the first time an investigation has been conducted into data found on second-hand digital storage devices. As the bleepingcomputer.com article notes, similar research was conducted in the past:
    A study conducted in 2010 revealed that 50% of the second-hand mobile phones sold on eBay contained data from previous owners. A 2012 report from the UK's Information Commissioner's Office (ICO) revealed that one in ten second-hand hard drives still contained data from previous owners. A similar study from 2015 found that three-quarters of used hard drives contained data from previous owners.
    And these are but a small sample of the overall number of similar inquiries over the years. The world has seen more than its fair share of privacy snafus, be it a data breach or otherwise. Despite the increased awareness on data security and its importance, the fact that we're still treading water when it comes to securing data in our own devices could signify many things:
    • People don't really care, even if they say they do.
      • No surprises there.
    • We are too focused on spotlighting the problem and failing in highlighting the solution.
      • News anchor: "Yadda yadda yadda…This is how they hacked your data. Be safe out there. And now, the weather." Be safe how? What do I do to be safe?
    • People interested in preserving their privacy do not sell their data storage devices; hence, studies like the above are statistically biased to begin with.
      • Essentially, researchers are testing the inclinations of people who don't really care about privacy or don't care enough to really look into it (a quick search on the internet will show you how to properly wipe your data).
    • Devices sold were stolen or lost to begin with, so the sellers do not have any incentive to properly wipe data.

    Whatever the reasons may be for the continued presence of personal data on memory storage devices, regardless of how much more aware we are of privacy issues, one thing's for certain: It's not going away.

     

    Related Articles and Sites:
    https://www.bleepingcomputer.com/news/security/two-thirds-of-second-hand-memory-cards-contain-data-from-previous-owners/

     