AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable and easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive and hard disk encryption managed services.
  • Equifax Already Had a Data Breach Before It Was Hacked In 2017

    According to wsj.com (paywalled), Equifax had already suffered a data breach before the data breach that made the company famous around the world. In 2015, two years before the hack that started with a bang and ended with less than a whimper, "Chinese spies" made off with "thousands of pages of proprietary information" that included code, HR files, and manuals.
    For many, the use of the word spy in this context will set off visions of Chinese Matt Damons pulling a The Departed (or as they say in that neck of the woods, "Dee Dee-paaaah-ted"). In actuality, the breach appears to be unremarkably mundane: people being bribed with jobs and salary increases to walk out with proprietary information. It's the kind of thing that happens all the time. For example, that's Google's beef with Uber.  

    Why Are We Hearing About It Now?

    The US has a fractured mishmash of laws and regulations when it comes to data breaches, information security, and data privacy, instead of one comprehensive law. What this means is that Equifax could legally keep its 2015 breach from the public because it didn't involve personal information – at least, not in the way we usually think of it.
    HR files must, by definition, include personal info. However, these would be employee records, not consumer records… and the laws and regulations that have been passed so far, for the most part, involve consumer records or a variation thereof. It's the reason why, for example, HIPAA kicks in when patient data is put at risk but not when nurse and doctor info is stolen.
    As mentioned before, the breach was not made public earlier. This does not mean, however, that Equifax just sat on it. They did contact the FBI and they did carry out an investigation. That the company decided not to go public is understandable and entirely within their legal right. It should also be noted that going public in this instance wouldn't have helped out anyone: the message would essentially be "your employees could steal from you!!" Everyone knows this already. It might have mattered more if, for example, the message was "change your default passwords immediately!"
    But, in light of the hack that occurred two years later, it does raise questions.  

    Lessons Not Learned

    Earlier this month, the US Government Accountability Office released a report on the 2017 Equifax data breach, aka The Big One. Per fortune.com, the report:
    summarizes an array of errors inside the company, largely relating to a failure to use well-known security best practices and a lack of internal controls and routine security reviews.
    "Lack of internal controls and routine security reviews." You'd think that a company that suffered a guy walking off with the company's secret sauce to a potential competitor would have done something regarding internal controls and routine security reviews. That these were lacking in the two years bookmarked by the two data breaches speaks volumes of what Equifax thought was important.
    Thankfully, it looks like perhaps the credit reporting agency is finally taking data security seriously. But then, with everyone looking and keeping track of what they're doing, it'd be a bad idea not to.
     
    Related Articles and Sites:
    https://www.wsj.com/articles/before-it-was-hacked-equifax-had-a-different-fear-chinese-spying-1536768305
    http://fortune.com/2018/09/07/equifax-data-breach-one-year-anniversary/
     
  • Anthem Data Breach Settled for $115M, Despite Having "Reasonable" Security

    Last week, a federal judge approved a settlement – the largest to date when it comes to data breaches – that is historic and yet falls flat: Anthem, the Indianapolis-based insurer, has agreed to pay a total of $115 million to settle all charges related to its 2015 data breach.

    The breach, strongly believed to have been perpetrated by actors with ties to the Chinese government, began with a phishing attack. By the time the electronic dust settled, the information of 79 million people (including 12 million minors) had been stolen, including names, birth dates, medical IDs and/or Social Security numbers, street addresses, and email addresses.

    Needless to say, this information can be used to perpetrate all types of fraud.

    And while the judge overseeing the case has found the settlement to be "fair, adequate, and reasonable," critics have noted that the victims only get $51 million of the total settlement, which amounts to 65 cents per person. The rest goes to lawyers and consultants.

    What's surprising about this story is not that the victims are getting shafted; or that the lawyers are getting an ethically dubious portion of the settlement; or even that Anthem settled, a once unthinkable action. Then again, courts are warming up to the idea that victims of a data breach have suffered an injury that is redressable by law. (Chances are that if this lawsuit had been filed ten years ago, the defending corporation would have successfully argued to have it tossed from court.)

    Reasonable Security

    What is surprising is that all of this happened despite Anthem having had what experts called "reasonable" security measures at the time of the breach.

    What exactly is "reasonable" security? Is it tantamount to "good" security? Or perhaps it doesn't reach the level of good, but it's better than "bad" security, which in turn is better than no security? And what would its converse, unreasonable security, look like?

    What constitutes "reasonable" security is not fleshed out, anywhere, in detail. But we do know this: per the settlement, Anthem has to increase its data security budget threefold. Which is weird because (a) if you have to treble your security budget, maybe it wasn't reasonable to begin with, and (b) the flashpoint of the data breach – clicking on a phishing email that surreptitiously installed malware, which may or may not have been flagged by antivirus software – can hardly be prevented by spending more money.

    But even weirder is this:

    "The [California Department of Insurance examination] team noted Anthem's exploitable vulnerabilities, worked with Anthem to develop a plan to address those vulnerabilities, and conducted a penetration test exercise to validate the strength of Anthem's corrective measures," the department said in its statement. "As a result, the team found Anthem's improvements to its cybersecurity protocols and planned improvements were reasonable." [healthitsecurity.com]

    There's that "reasonable" word again. The company had reasonable security, got hacked, corrective measures were taken, and now the improvements are reasonable?

    If you're being hacked by what could potentially be the intelligence arm of a foreign state, perhaps you'd like something that's more than reasonable. Hopefully, the choice of words to describe what was implemented does not accurately reflect the effort, planning, and technical expertise that actually went into it.

    At the same time, it's hard to ignore the fact that data breaches like this are the perfect moral hazard:

    • The information that is stolen is tied to individuals. Any misuse of the data will affect these people, not the company.
    • A rotating cast of executives means that you don't necessarily plan for the long term. Especially if you're paid very well for being fired because of a data breach.
    • Financial penalties become meaningless if (a) they can be used to offset taxes, (b) happen to be a drop in the bucket (Anthem's 2017 revenue was $90 billion), and (c) the cost can be passed on to customers.

     

    Related Articles and Sites:
    https://healthitsecurity.com/news/judge-gives-final-ok-to-115m-anthem-data-breach-settlement
    https://www.govinfosecurity.com/interviews/analysis-anthem-data-breach-settlement-i-4083
    https://www.ibj.com/articles/70144-anthem-data-breach-judge-oks-huge-fee-award-but-not-as-much-as-attorneys-wanted
    https://biglawbusiness.com/anthem-115-million-data-breach-settlement-approved-by-judge/

     
  • Survey Says Data Breaches Result In Long-Term Negative Impact

    According to darkreading.com, a recent survey commissioned by CA Technologies has shown that there can be serious repercussions for companies that fall victim to data breaches. If the survey's conclusions are to be believed, about half of the organizations that were involved in a data breach see "long-term negative effects on both consumer trust (50%) and business results (47%)." Which is surprising, since the general feeling is that businesses involved in a data breach are not penalized at an appropriate level.
    For example, Equifax revealed a history-making data breach almost one year ago. Its stock price took a nose-dive, people were fired, financial penalties were proclaimed, people complained, lawsuits were filed, etc. Today, the stock price has recovered quite a bit from its one-year lows. Lawsuits are being battled in court, with the very real possibility of a summary dismissal; if not, the company will probably settle for an amount that will be a drop in the bucket for a company its size. The proclaimed penalties were withdrawn in exchange for Equifax upping its security. People don't complain so much as grumble sotto voce. Year-over-year revenue is up at Equifax.
    All in all, it looks like Equifax has weathered this storm quite nicely. Such has been the basic pattern for major companies involved in data breaches for at least the past decade.
    Only once in a blue moon do you hear of a company so adversely impacted by a data breach that it made other companies sit up and take notice. Such instances are few and far between.

    Survey Says…

    According to ca.com, among other things:
    • 48% - Consumers who stopped using the services of at least one organization due to a data breach.
    • 59% - Businesses that reported moderate to strong long-term negative impact to business results after a breach.
    • 86% - Consumers that prefer security over convenience.
    These figures are curious, especially the last one. It's known that people don't necessarily tell the truth on surveys, but the real issue in this instance is that a survey is but a snapshot in time. One need not doubt that nearly half the people surveyed stopped being a customer of a breached entity; however, it would be more informative to know how long they've been boycotting a company – one day, one week, one month, one year? – and whether they're still doing so when followed up some time later. (It should be noted that the survey did not define the length of "long-term" but one assumes it's longer than one year, in keeping with accounting terminology).
    Likewise for the figure on businesses negatively affected by a data breach. Equifax, for example, would have claimed that they were seriously affected if surveyed three months after their public outing; however, their answer would have been different one year later. And five years from now? Who knows?
    And then you have that counterintuitive 86% figure: a clear majority of people prefer security over convenience? That certainly is news, especially considering that people's actions have not supported such a conclusion over the past decade.  

    Strong Laws and Enforcement

    The gist of the survey's concluding remarks is that companies need to improve their data security. (Also, companies that are in the business of transacting in personal information need to be more transparent about it. This was, after all, the year of the Cambridge Analytica scandal.) Will companies improve their data security? Can they? The answer is yes.
    But not because of consumer demand.
    Consumers of goods and services have been raising hell over data breaches for a long time now. Data breach-related lawsuits that have been filed worldwide probably number in the thousands. Public spankings and shamings exceed that number. All of it to no effect. The only thing that's been shown to encourage attention to security is the passage and enforcement of laws.
    Because each sovereign state approaches data breach ramifications in its own way, the world has become a living laboratory revealing what does and doesn't work when it comes to increasing data security and curbing data abuses.
    Simply put, companies respond to financial penalties, as can be witnessed in Silicon Valley's behavior toward China and Europe, or in how the US healthcare sector significantly increased its data security only after regulators started hitting it with million-dollar fines.
     
    Related Articles and Sites:
    https://www.darkreading.com/risk/48--of-customers-avoid-services-post-data-breach/d/d-id/1332452
    https://www.ca.com/us/company/newsroom/press-releases/2018/ca-technologies-study-reveals-significant-differences-in-perceptions-on-state-of-digital-trust.html
    https://www.ca.com/us/collateral/white-papers/the-global-state-of-online-digital-trust.html
     
  • FBI Director Says Legislation Possibly A Way Into Encrypted Devices

    Last week, FBI Director Christopher Wray said that legislation may be one option for tackling the problem of "criminals going dark," a term that refers to law enforcement's inability to access suspects' data on encrypted devices. The implication is that, in the interest of justice and national security, the FBI will press for a law that will guarantee "exceptional access" to encrypted information. This most likely will require an encryption backdoor to be built on all smartphones, possibly on all digital devices that store data.
    It should be noted that the FBI emphatically denies that they want an encryption backdoor. One hopes they have taken this position because they're aware of the security problems backdoors represent; however, it's hard to ignore the possibility that the FBI is in spin-doctor mode. Their Remote Operations Unit, charged with hacking into phones and computers of suspects, uses terms like "remote access searches" or "network investigative techniques" for what everyone else would call "hacking" and "planting malware." Mind you, their actions are legally sanctioned, so why use euphemisms if not to mask what they're doing?
    If turning to legislation smells of déjà vu to old-timers, it's because this circus has been in town before. It set up its tent about 20 years ago and skipped town a couple of years later. And while many things have changed in that time, the fundamental reasons why you don't want encryption backdoors have not.

    A Classic Example of Why You Don't Want a Backdoor

    The FBI has implied time and again that they are in talks with a number of security experts who claim the ability to build "encryption with a backdoor" that cannot be abused by the wrong people. These security experts are not, the FBI notes, charlatans. Perhaps it is because of these experts that the FBI has not desisted from pursuing backdoors – this, despite the security community's overwhelming consensus that it cannot be done.
    It should be noted that Wray was asked by a Senator at the beginning of this year to provide a list of cryptographers that the FBI had consulted in pushing forth their agenda. To date, such a list has not been produced.
    (As an aside, according to wired.com, Ray Ozzie, arguably one of today's greatest minds in computing, recently and independently proposed a way to install a backdoor without compromising the security of encryption. One of the world's leading security experts reviewed it; the conclusion, in a nutshell, is that it's flawed and mimics unsuccessful proposals from the past.)
    What is it about backdoors that their mere mention results in knee-jerk reactions from the security community? The answer lies in the fact that the community has been looking into this for a long, long time. In the end, it's the unknown unknowns that are the problem: encryption solutions run into surprises (bad ones) all the time. No matter how well-designed a system is, it's impossible to prevent incidents like Apple's password-hint bug, which revealed actual passwords (see the related links below).
    In June 2017, it was reported that over 700 million iPhones were in use. Not sold; in use. It can be assumed that there are at least as many Android devices in use as well. That would be a lot of compromised devices if a backdoor were in effect and a bug were introduced.
    These issues cannot be legislated away. Furthermore, bugs merely represent one situation where a backdoor can lead to disaster. Others include the deliberate disclosure of how to access the backdoor (think Snowden or Manning or the leak of CIA hacking tools); the phishing, scamming, conning, or blackmailing of the custodians of the backdoor; and the possibility of someone stumbling across the backdoor. Granted, the last one is highly unlikely, even more so than the others… but so are the chances of winning the lottery, and the world has seen hundreds, maybe thousands, of lottery winners.
    The point is that the chances of the backdoor being compromised are higher than one would expect.  

    Moral Hazard = FBI's Pursuit of the Impossible?

    One has to wonder why the FBI is so insistent on pursuing the impossible dream of an encryption backdoor that doesn't compromise on security. It would be easy to dismiss it as a case of legal eggheads not knowing math and science, or not having the imagination to ponder how badly things could go wrong.
    But perhaps it's an issue of moral hazard. Basically, there is very little downside for the FBI if a backdoor is implemented. Everyone knows that, if the FBI gets what it wants, it won't have direct access to the backdoor; that wouldn't be politically feasible. For example, prior to suing Apple in 2016, the FBI suggested that Apple develop a backdoor and guard access to it: when the FBI presented an iPhone and a warrant, Apple would unlock the device. The FBI is nowhere near the backdoor; they're by the water-cooler in the lobby.
    The arrangement sounds reasonable until you realize that the FBI doesn't take responsibility for anything while reaping the benefits. The FBI does not have to develop, test, and implement the backdoor. Once implemented, it doesn't have to secure and monitor it. If there is a flaw in the backdoor's design, the FBI dodges direct criticism: they didn't design it, don't control it, etc. Last but not least, the onus is on the tech companies to resist foreign governments' insistence on being given access to encrypted data. Which you know will happen because they know the capability is there.
    It's a classic case of heads, I win; tails, I don't lose much.
     
    Related Articles and Sites:
    https://www.cyberscoop.com/fbi-director-without-compromise-encryption-legislation-may-remedy/
    https://www.theregister.co.uk/2017/10/05/apple_patches_password_hint_bug_that_revealed_password/
    https://www.ecfr.eu/page/-/no_middle_ground_moving_on_from_the_crypto_wars.pdf
     
  • Most of the Used Memory Cards Bought Online Are Not Properly Wiped

    According to tests carried out by researchers at the University of Hertfordshire (UK), nearly two-thirds of memory cards bought used from eBay, offline auctions, and second-hand shops were improperly wiped. That is, the researchers were able to access images or footage that were once saved to these storage devices… even after the files had been deleted.

     

    Free and Easy to Use Software

    Popular media would have you believe that extracting such information requires advanced degrees in computers as well as specialized knowledge and equipment. These would certainly help; however, the truth is that an elementary school student would be able to do the same. The researchers used "freely available software" (that is, programs downloadable from the internet) to "see if they could recover any data," and operating such software is a matter of pointing and clicking.
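    To see why "deleted" files come back, consider how such recovery tools work: they ignore the file system's (now empty) directory listings and scan the raw bytes of the card for known file signatures, a technique called file carving. Below is a minimal, illustrative sketch in Python; the dump file name card.img is hypothetical, and real tools handle fragmented or corrupted files far more carefully.

    # carve_jpegs.py - naive "file carving" sketch: recover JPEGs from a raw
    # dump of a memory card by scanning for the format's byte signatures,
    # never consulting the (deleted) file system entries at all.
    SOI = b"\xff\xd8\xff"  # JPEG start-of-image marker
    EOI = b"\xff\xd9"      # JPEG end-of-image marker

    with open("card.img", "rb") as f:  # hypothetical raw dump of the card
        data = f.read()                # a real tool would stream in chunks

    count = 0
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start)
        if end == -1:
            break
        # Save everything between the markers as a candidate image.
        with open(f"recovered_{count}.jpg", "wb") as out:
            out.write(data[start:end + 2])
        count += 1
        start = data.find(SOI, end + 2)

    print(f"carved {count} candidate JPEGs")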
    In this particular case, the recovered data included "intimate photos, selfies, passport copies, contact lists, navigation files, pornography, resumes, browsing history, identification numbers, and other personal documents." According to bleepingcomputer.com, of the one hundred memory cards collected:
    • 36 were not wiped at all; neither the original owner nor the seller took any steps to remove the data.
    • 29 appeared to have been formatted, but data could still be recovered "with minimal effort."
    • 2 had their data deleted, but it was easily recoverable.
    • 25 appeared to have been properly wiped using a data erasing tool that overwrites the storage area, so nothing could be recovered.
    • 4 could not be accessed (read: were broken).
    • 4 had no data present, but the reason could not be determined.
     

    Deleting, Erasing, Wiping… Not the Same

    Thankfully, it appears that most people are not being blasé about their data. They do make an effort to delete the files before putting their memory cards up for sale. The problem is, deleting files doesn't actually delete them: deleting merely removes the file system's reference to the data, while the underlying bytes stay on the medium until something overwrites them. (This terminology morass is the doing of computer software designers. Why label an action "Delete file" when it doesn't actually do that?)

    The proper way to wipe data on any digital medium is to overwrite it. For example, if you have a hard drive filled with selfies, you can "delete" all of them by moving the selfies to the trash/recycle bin on your desktop and then saving to the disk as many cat pictures as you can find on the internet. This is analogous to painting over a canvas that already has a picture on it, although the analogy breaks down somewhat if one delves into technical minutiae.
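    For a single file, the same idea looks something like the sketch below (Python; the file name is hypothetical). One caveat worth stating: on flash media such as SSDs and memory cards, wear leveling may silently redirect the overwrite to different physical cells, which is why whole-device erasure tools or encryption are the more reliable options.

    # wipe_file.py - overwrite-then-delete sketch for a single file
    import os

    def overwrite_and_delete(path):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:   # open for in-place binary writing
            f.write(os.urandom(size))  # replace the contents with random bytes
            f.flush()
            os.fsync(f.fileno())       # force the write out to the device
        os.remove(path)                # only now drop the directory entry

    overwrite_and_delete("selfie.jpg")  # hypothetical file name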

    Incidentally, this is why encryption can be used to "wipe" your drive: Encryption modifies data so that the data's natural state is scrambled / randomized. When an encryption key is provided, the data descrambles so that humans can read it. Once the computer is turned off, the data returns to its scrambled state. So, if you end up selling a memory card with encrypted data but without the encryption key, then it's tantamount to offering for sale a memory card that's been properly wiped.
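    Here is a minimal sketch of that "crypto-erase" idea, using Python's third-party cryptography package (the data and variable names are illustrative):

    # crypto_erase.py - store only ciphertext; destroying the key is the wipe
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()      # kept off the card, e.g. in a key store
    cipher = Fernet(key)

    secret = b"contact lists, photos, documents..."
    token = cipher.encrypt(secret)   # this ciphertext is what sits on the card

    # To "wipe" the card before selling it, simply destroy the key.
    del cipher, key
    # Without the key, the stored token might as well be random noise -
    # tantamount to a properly wiped card.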

     

    More of the Same

    This is not the first time an investigation has been conducted into data found on second-hand digital storage devices. As the bleepingcomputer.com article notes, similar research was conducted in the past:
    A study conducted in 2010 revealed that 50% of the second-hand mobile phones sold on eBay contained data from previous owners. A 2012 report from the UK's Information Commissioner's Office (ICO) revealed that one in ten second-hand hard drives still contained data from previous owners. A similar study from 2015 found that three-quarters of used hard drives contained data from previous owners.
    And these are but a small sample of the overall number of similar inquiries over the years. The world has seen more than its fair share of privacy snafus, be it a data breach or otherwise. Despite the increased awareness on data security and its importance, the fact that we're still treading water when it comes to securing data in our own devices could signify many things:
    • People don't really care, even if they say they do.
      • No surprises there.
    • We are too focused on spotlighting the problem and failing in highlighting the solution.
      • News anchor: "Yadda yadda yadda…This is how they hacked your data. Be safe out there. And now, the weather." Be safe how? What do I do to be safe?
    • People interested in preserving their privacy do not sell their data storage devices; hence, studies like the above are statistically biased to begin with.
      • Essentially, researchers are testing the inclinations of people who don't really care about privacy or don't care enough to really look into it (a quick search on the internet will show you how to properly wipe your data).
    • Devices sold were stolen or lost to begin with, so the sellers do not have any incentive to properly wipe data.

    Whatever the reasons may be for the continued presence of personal data on memory storage devices, regardless of how much more aware we are of privacy issues, one thing's for certain: It's not going away.

     

    Related Articles and Sites:
    https://www.bleepingcomputer.com/news/security/two-thirds-of-second-hand-memory-cards-contain-data-from-previous-owners/

     
  • SCOTUS Says Cops Need Warrant For Location Data From Network Providers

    It's hardly a secret that the proliferation of digital devices has opened up opportunities and headaches for law enforcement. In the former camp, modern communication devices are de facto tracking devices that store and generate copious amounts of data; access to it could easily make or break a case. In the latter, the use of encryption and other security measures makes it challenging, if not impossible, to access that same data. And now the courts are making it even more onerous to obtain it.  

    Get a Warrant

    According to a recent ruling, law enforcement will "generally" require a warrant to obtain a person's location data from network providers. Before this ruling, the Third Party Doctrine held that a person gives up their expectation of privacy when they share information with a third party (like banks, ISPs, phone companies, etc.). Hence, law enforcement could request data from these third parties without a warrant; they only had to show that the information they were seeking could be pertinent to an investigation. For example, police could ask a bank to see a suspect's credit card transactions, since these would pinpoint the person's location at particular times. In fact, they can still do this going forward.
    However, the Supreme Court has decided otherwise when it comes to your location as pinpointed by your cellphone, more specifically, the cellphone location data created and collected by telcos. It is hardly a surprising judgment. For example, bugging a person's vehicle with a tracker requires a court order because continuous tracking is considered a violation of privacy expectations.
    Of course, there is a difference between bugging a car and using a cell phone: be the phone dumb or smart, it's not the government doing the bugging – you're bugging yourself and paying a company every month to do so. The government could argue (and probably has) that they're merely picking up a trail that you've agreed to broadcast. It would be no different from you tossing evidence left and right as you flee from a crime scene, creating a trail to yourself as you make your getaway. There's nothing illegal in law enforcement following that trail. Indeed, they'd be remiss in not doing so.
    The thing is, though, that life for many now revolves around access to the services that telcos offer. Well over half the population uses either a dumb or smart phone, and these devices constantly report their location to nearby cell towers; otherwise, you wouldn't be able to receive calls or texts. The same goes for accessing the mobile internet.
    Furthermore, these devices are very rarely turned off, for obvious reasons. So, the data that's collected by telcos and shared with law enforcement would include information that traditionally requires a warrant anyway. The warrant requirement for bugging a vehicle was already mentioned. Even more sacrosanct is the privacy of a person in their own home, where law enforcement's incursion nearly always requires a warrant. Even pointing a thermal imaging device at a person's home without court approval is illegal – technically, that does not involve "entering" the home, but it does involve obtaining evidence from it.
    So, an "information dump" of telco data over an extensive period would already seem to be at loggerheads with such legal restrictions.
    However, the justices ruled, five to four, that a warrant is necessary when accessing telco location data because the data allows the type of surveillance from which the Fourth Amendment protects US citizens. Remember, the Fourth exists because British authorities would search for evidence of wrongdoing whenever, wherever, and against whomever they pleased.

    The Fourth Amendment

    The US Constitution provides protections from that sort of thing. Certainly, you could have committed a crime. And, certainly, evidence of said crime could be in your home. But the law can only enter your home and search for that evidence – and only that evidence – if there is probable cause, which is the grounds for issuing a warrant.
    Consider, then, the aspects of the information that law enforcement claims it should be able to access without a warrant:
    • Location data is very accurate. Not as accurate as GPS but close enough – and the technology will only get better to the point that it will be just as good as GPS or better.
    • This data now covers a sizable number of the entire US population, seeing how Americans of all stripes and colors carry a cellphone.
    • The collected data is excessive. A person's location can be pinged by cell towers multiple times every minute; one can literally tell where a person is every minute of the day (see the back-of-envelope sketch after this list).
    • The data is retroactive. The location data is stored by telcos for up to five years. Change the law, and it could be ten years. Perhaps even longer if DNA storage finally happens. ('Cause, let's face it, the only reason why telcos don't want to keep this data long term is tied to storage costs).
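    Just how excessive? A quick back-of-envelope calculation, assuming a conservative one ping per minute and the five-year retention mentioned above:

    # location_volume.py - rough scale of one person's retained location trail
    pings_per_minute = 1               # conservative; the post says "multiple"
    minutes_per_year = 60 * 24 * 365   # 525,600
    retention_years = 5

    records = pings_per_minute * minutes_per_year * retention_years
    print(f"{records:,} location records per person")  # 2,628,000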

    So, we're talking about data that's akin to what would be generated if you implanted a tracking chip in most Americans and let them go about their lives. And because the government didn't force anyone to do this, and third parties are involved, a warrant shouldn't be necessary when trying to get a hold of this data. This, in a nutshell, was the government's (very dystopian) argument.

    The courts (barely) disagreed.

    However, the ruling follows a number of others in recent years in which the courts have upheld privacy interests over law enforcement's. It seems that slowly, but surely, as the effects of technology leave an imprint upon everyone – not just the young or the hip or the tech-savvy – people are beginning to have a better understanding of what's at stake.

     

    Related Articles and Sites:
    https://gizmodo.com/cops-need-a-warrant-to-collect-your-phone-location-data-1827050891

     