AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable and easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, USB drive encryption, and hard disk encryption as managed services.
  • Indiana Supreme Court To Judge If Government Can Force People To Decrypt Their Smartphones

    The issue of forced smartphone decryption, as it pertains to constitutionally protected rights, is gathering steam. According to various sites, Indiana's Supreme Court will soon hear arguments on whether Fifth Amendment rights are violated when the government forces suspects to unlock their encrypted smartphones (the Indiana Court of Appeals ruled in 2018 that they are).
    Earlier in the year, a California judge ruled that forcibly unlocking encryption by biometric means violates a person's Fifth Amendment rights – the amendment that protects people from self-incrimination.
    In the 2018 judgment, the Indiana judges said that forcing suspects to decrypt cryptographically secured contents is in and of itself unconstitutional, regardless of how it's done. A summary of how this case came about can be found here and here.
    The arguments involve the usual cast of suspects: the Fourth and Fifth Amendments, foregone conclusions, smartphones and the amount of personal data they store, encryption (obviously), and whether the act of producing something is "testimonial" or not. These have been covered many times over in this blog and elsewhere.
    There is one surprising twist in the Indiana Court of Appeals decision, however: one of the judges argued that (my emphasis):
    "(W)e consider Seo's act of unlocking, and therefore, decrypting the contents of her phone, to be testimonial not simply because the passcode is akin to the combination to a wall safe as discussed in Doe," he wrote. "We also consider it testimonial because her act of unlocking, and thereby decrypting, her phone effectively recreates the files sought by the state." [theindianalawyer.com]

    Could Lead to Bad Logic, Terrible Results

    Analogies break down when you get into the nitty-gritty of things. And yet, analogies must be used when trying to bridge the digital to the analog, the virtual to the real.
    The layman's version of how computer encryption works is this: you provide an encryption key (generally, a very, very long string of numbers and characters) and the computer unscrambles the data. Take away the key – usually by shutting down the computer or closing the app or file – and the data reverts to its scrambled state. Because the key's length makes it practically impossible to memorize, a person usually uses a password as a go-between: if the password matches the one on file, the encryption key is made available; otherwise, nothing happens.
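    For the technically inclined, here is a minimal Python sketch of that go-between pattern, in the common variant where the key is derived directly from the password. It uses the third-party cryptography package; the passwords and parameters are purely illustrative, and no particular smartphone is claimed to work exactly this way.
    ```python
    import base64
    import os

    from cryptography.fernet import Fernet, InvalidToken
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC


    def key_from_password(password: bytes, salt: bytes) -> bytes:
        """Stretch a short, memorable password into a 256-bit encryption key."""
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                         iterations=600_000, backend=default_backend())
        return base64.urlsafe_b64encode(kdf.derive(password))


    salt = os.urandom(16)
    locked = Fernet(key_from_password(b"correct horse", salt)).encrypt(b"secret data")

    # Right password -> right key -> the data is unscrambled.
    print(Fernet(key_from_password(b"correct horse", salt)).decrypt(locked))

    # Wrong password -> wrong key -> "nothing happens."
    try:
        Fernet(key_from_password(b"wrong guess", salt)).decrypt(locked)
    except InvalidToken:
        print("wrong password: nothing happens")
    ```
    Note that the wrong password doesn't produce garbled output; the ciphertext simply stays scrambled, which is exactly the state of affairs the legal arguments above are wrestling with.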
    In non-digital encryption – usually in the form of written text on paper – the work of the computer is done by a human being: a person, in possession of the encrypted text and the encryption key, manually decodes the message. If he or she loses the decoded message, the process of unearthing the message from the encrypted text must be repeated.
    In the latter case it is obvious that a file "sought by the state" has to be recreated if one is forced to surrender an encryption key, assuming a previously deciphered document is unavailable. Someone will take that key and the encrypted message and work to produce a plaintext document.
    But in the digital version, is it still the case?
    It is true that the computer (or smartphone) must do work to show you the decrypted data. It is also true that the natural state of the data is in its encrypted form. Arguably, the process of decrypting the information – which could also be seen as restoring or reassembling it, if you will – is the same as recreating it. But then, the argument must also hold that the process of encrypting data is tantamount to destroying the information. Otherwise, why would it be created and recreated each time you unlock a smartphone?
    This last argument is troubling and terrifying. Its implication ought to give people pause. Technicalities such as this one are what get otherwise law-abiding citizens into trouble.
    Would it strain one's gray cells to imagine a situation where a suspect is accused of "destroying" information because he had it encrypted? Or because the contents were automatically re-encrypted after a specified period of time, as happens with smartphones (and which will happen while they're in the custody of the state)?
    For example, technically, when you stream audio and video, you are creating a copy of those files. It makes sense if you think about it: it's not as if the video disappeared from the server and appeared on your screen. The video is still on the server, to be accessed by another person; but it's also on your screen… because a copy of it exists in one form or another on your computer or smartphone. Now imagine that this is not taken into account when drafting and passing copyright law. Less-than-reputable companies would go around suing people for "copying" a video they had streamed legally. If memory serves, this hypothetical actually happened under the original DMCA.
    Of course, saner minds would ultimately prevail. Hopefully. But, the point is that you shouldn't give people the legal green-light to go crazy before reining them in: the damage they cause in the meantime is real.
     
    Related Articles and Sites:
    https://www.theindianalawyer.com/articles/48024-smartphone-privacy-ruling-tests-how-technology-affects-rights
    https://www.eff.org/deeplinks/2019/02/highest-court-indiana-set-decide-if-you-can-be-forced-unlock-your-phone
     
  • Judge Says Biometric Locks Protected By 5th Amendment

    The battle over privacy in the digital age ratcheted up last week. According to a California judge, the Fifth Amendment – the right not to incriminate oneself – protects people from being forced to bypass a smartphone's encryption via irises, fingerprints, facial recognition, and other similar methods. Obviously, this also means a warrant cannot be issued specifically for such purposes. The ruling represents a reversal of semi-established practice.
    Traditionally, biometric information is considered not legal testimony but physical evidence. This is why, despite Fifth Amendment protections, a blood sample can be compelled from a suspect even if it would incriminate him. Or why people can be forced to stand in a police lineup despite the potential of being identified as the criminal.
    Following this line of thought, many courts have nullified Fifth Amendment protections when it comes to bypassing biometric restrictions: judges have forced suspects, in court, to unlock their phones by placing their fingers on the devices' fingerprint scanners, or have issued warrants to do the same outside of court. However, had the suspects used alphanumeric passwords, the courts would not have even dreamed of compelling access to the devices.
    Why not?

    Fingerprints are Not Always Fingerprints

    Let's say you provided a fingerprint to the police, to be matched to another fingerprint found at a crime scene. You're not incriminating yourself because you're not acknowledging, tacitly or otherwise, that you were at the crime scene. Making that link – proving that you were at the crime scene – is the government's job, which has to discover said print at the scene and to pay someone to determine whether the two prints are a match.
    It's different for a password. The argument goes that producing the password that unlocks a smartphone essentially identifies the device as yours. Hence, the information stored there is also yours. Instead of the government proving that the device belongs to you, you've done it for them by unlocking the phone, and in doing so you've incriminated yourself. That's a Fifth Amendment no-no.
    For a long time, the courts have generally leaned towards saying that the forced provision of prints to a biometric scanner is more like the traditional "offer your prints" scenario and unlike the "give us your password" scenario.
    The judge in California is essentially pointing out that that couldn't be further from the truth. Especially in the case she ruled upon: The authorities were asking for a warrant that would allow them to search the premises where suspects were residing. The search would extend to the contents of any devices that were there. Furthermore, the authorities wanted to force the suspects to unlock these devices, if any were present, by pressing their fingers against the biometric pads. Plus, they also wanted to include anyone who was at that location at the time the warrant was being served.

    Overly Broad

    Man, talk about a fishing expedition. Anyone? Really? Including the FedEx dude who might be there? Of course, chances are that law enforcement wouldn't harass the FedEx guy… but then again, who knows? The warrant would have given them the legal authority to do so. And that's the point: it is overly broad. Warrants should be limited to whoever happens to be a suspect, not whoever happens to be within eyesight of one.
    In addition, there is the problem that you're dealing with two suspects and their devices. Let's say the warrant is served and the authorities find one phone: if they forcibly unlock the device by pressing someone's finger against it, that action identifies the device as that person's – incriminatory, and tantamount to providing a password.
    And last but not least, the judge pointed out that previous legal precedent has acknowledged that smartphones are essentially computers with built-in telephony. As such, due to the sheer amount of information that can be found in computers, there are limits to what can be authorized on a warrant when searching through such devices.
    This latest case is merely a single shot in the ongoing privacy battle that the digital era has unleashed. But it's pretty obvious that a trend is beginning to form. Slowly – almost glacially, to be honest – the courts are beginning to recognize that the old rules have to be updated for the digital age.
     
    Related Articles and Sites:
    https://www.zdnet.com/article/police-cant-force-us-citizens-to-unlock-their-phone-by-face-or-finger/
    https://www.techspot.com/news/78274-judge-rules-no-form-biometric-security-can-used.html
    https://reason.com/archives/2019/01/16/judge-rules-police-cannot-require-people
     
  • Convicted Terrorist Steals Hard Drive in Brussels Main Justice Offices

    According to the Associated Press, a man who was convicted of (and released from prison for) terrorist activities is suspected in the theft of an external hard drive from a forensic doctor's office. The story is full of puzzling questions, including why the Belgian government is not doing an adequate job of securing its data.

    Motives Unknown

    Illias Khayari was convicted in Brussels, in 2016, for participating in terrorist activities (he confessed to decapitating a person in Syria as part of said activities, but somehow that didn't factor into his sentence of five years in prison). To the layperson, this appears to be a comparatively insignificant amount of time. Even worse, he apparently served only half of it (what, how, and why!?): we know he is out because he was arrested earlier this month for (allegedly) stealing a hard drive, among other items.
    The hard drive, located in the "forensic doctor's office at the city's [Brussels] main justice offices," contained "autopsy reports about the victims of the suicide bombings in Brussels in 2016." Khayari has denied the charges. Naturally, he didn't give a motive as to why he picked that particular location to burgle. The nature of his past conviction and the hard drive's contents would seem to provide a reason; however, it could also be a huge coincidence. The city of Brussels seems pretty confident it's the latter:
    Brussels prosecutor's office spokesman Denis Goeman said the hard drive was a backup of an office computer so it contained several files relating to different cases. He said there was no label on the drive that might have identified it as containing material related to the March 22, 2016 attacks on the Brussels metro and airport that killed 32 people. No files or evidence was lost and no cases have been compromised by the Jan. 3 theft. "It is therefore premature to say that this is a theft concerning precise documents, on the contrary," Goeman said. The suspect has denied the charges.
    It most certainly is premature. But would it be erroneous? Of all the different places Khayari could have broken into, he chose that place? And stole that hard drive? The other items stolen in tandem could have been taken just to make it appear as if the drive was not the target. The lack of a label on the drive is insignificant; why would you assume that a storage device in a forensic doctor's office holds something other than forensic evidence?

    Encryption in Government Facilities

    Belgium is part of the European Union. Indeed, the EU's headquarters is located in its capital city, Brussels. As such, it would be strange if its government didn't follow the various privacy and information security directives and laws that have been passed over the years. And while the EU does not require the use of encryption, it strongly encourages it. One imagines that its use would be especially encouraged where sensitive data surrounding an ongoing case is involved. Was encryption used on the stolen hard drive, which, incidentally, is yet to be recovered? Based on the words and actions of the prosecutor's office, one would have to guess a careful "no." Those who could be potentially affected by the theft have been notified, a move that would not have been required had the information been secured via encryption.

     

    Related Articles and Sites:
    https://www.whig.com/article/20190109/AP/301099908#
    https://www.databreaches.net/convicted-terrorist-charged-with-stealing-hard-drive-with-victim-data-on-it/
    https://www.miamiherald.com/news/nation-world/article224122370.html

     
  • Scathing Government Report Concludes 2017 Equifax Breach Entirely Preventable

    This week, the US government published a report on the massive data breach Equifax experienced last year. The overall conclusion shared by the House Oversight and Government Reform Committee is that the data breach – the largest to date in US history, and likely to remain so for the foreseeable future – was entirely preventable. However, as one reads through the entire report, it becomes apparent that it wasn't preventable at all.

    Or rather, it was preventable the way cardiac arrests, skin cancer, and adult-onset diabetes (now known as type 2 diabetes) are preventable: by making sure that you're doing what needs to be done, all the time. Eating right. Exercising regularly. Applying sunscreen. The majority don't do most of the above, and millions around the world suffer the consequences every year.

    Likewise, Equifax fell well short of what it had to do to maintain a healthy and secure data environment. To say that the one incident was preventable is to give Equifax too much credit. The company had set itself up for a data breach.

    It's a wonder a massive information security incident didn't occur sooner.

     

    "Preventable"

    At the heart of Equifax's data breach was a critical Apache Struts vulnerability, which was disclosed publicly along with a security patch. Obviously, hackers could and would take advantage of this vulnerability ASAP, and as often as possible, since there was a limited window of opportunity to exploit it: once patched, the window would close permanently.

    Equifax failed to apply it.

    Not that they didn't try. Equifax gave itself a 48-hour deadline to patch the weakness. They scanned their network to see if the vulnerability was present but couldn't find it – an unsurprising development, seeing how Equifax had no idea what they had, where they had it, and possibly how they had gotten it: the result of years-long acquisition binges that created a complex and fractured computing environment.

    Of course, this leads to the question of how they ultimately did learn of the breach.  The answer lies in expired security certificates.

    Equifax had allowed 300 security certificates to expire (bad). More shockingly, they knew that these needed to be renewed and sat on them, in certain cases for over a year (terrible). Once the certificates were renewed, the company's IT department saw that something was very, very wrong. Had those certificates been active when the hackers exploited the Struts weakness, Equifax would have been aware of the breach immediately. This is undisputed.

    (Also undisputed is that the breach wouldn't have happened if they had applied the patch… but as already explained, they couldn't find the vulnerable Struts application.)
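    As an aside, catching an expiring certificate is about as cheap as monitoring gets. Equifax's lapsed certificates sat on internal traffic-inspection gear rather than public websites, but the principle is the same as in this rough Python sketch (the hostname and the 30-day threshold are placeholders, not anything from the report):
    ```python
    import datetime
    import socket
    import ssl


    def days_until_expiry(host: str, port: int = 443) -> int:
        """Fetch a server's TLS certificate and return the days until it expires."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
        expires = datetime.datetime.utcfromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]))
        return (expires - datetime.datetime.utcnow()).days


    # Hypothetical inventory; in practice, every certificate you own.
    for host in ("example.com",):
        remaining = days_until_expiry(host)
        if remaining < 30:
            print(f"RENEW NOW: {host} expires in {remaining} days")
    ```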

    So it seems that, based on this, the report's authors concluded that the incident was preventable. All it would have taken was applying a free patch to a vulnerability that nobody could account for – a vulnerability that would only have been unearthed (indirectly, via the renewed certificates) if the network had already been breached by hackers siphoning data away. Otherwise, nothing would have been flagged.

    How is that "preventable"? Under the circumstances, being breached is an active element of discovering that you can be, and have been, breached. Unless, of course, what they meant was that the hackers could have been stopped if Equifax had a nominally "normal" infrastructure with an adequate (not even "good" or "stellar") approach to data security.

    But wouldn't that be true for pretty much all data breaches we've read about in the past ten years?

     

    Plus Ça Change, Plus C'est La Même Chose

    In September, there was a Congressional hearing that looked into Equifax's data breach. Much of what's in the report echoes the hearing, although there are instances where the report further illuminates what was disclosed previously. In a number of instances, the report even seems to contradict what was said in September. For example, you'd have to really stretch the truth to describe the breach as "human error" after reading this report.

    Despite all that's been revealed, Equifax has not really been held accountable for its actions, or lack thereof.  Certainly, civil lawsuits have been filed.  And, for a short while, its stock price was hammered.  But, aside from the circus show in September, the government hasn't really done anything to the company. Of course, this does not mean that changes are not on their way, the Equifax bill being one such example and Democrats in the Senate calling for an information fiduciary law being another.

    And yet, attempts to pass such bills have a long history of dying in Congress, so don't hold your breath.

     

    Related Articles and Sites:

    https://gizmodo.com/equifax-breach-was-just-as-infuriating-and-dumb-as-you-1830996448
    https://oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf

     
  • HIPAA Notifications Are Now Due Within 30 Days Of A Breach If You're In Colorado

    According to bizjournals.com, any HIPAA-covered entities that do business in Colorado will now have 30 days to notify Coloradans (or Coloradoans, if you prefer) of a data breach involving personal information, and not the customary 60 calendar days under HIPAA. The reason? A bill on data security that went into effect in September.
    As usual, the use of encryption provides safe harbor. Indeed, the bill – HB18-1128 – goes out of its way to define data breaches as unauthorized access to "unencrypted" personal information. Furthermore, it notes that cryptographically protected information is still subject to the reporting requirements if the encryption key is compromised; that "encryption" means whatever the security community generally holds it to mean; and that a breached entity does not need to provide notification if it determines that "misuse of information… has not occurred and is not reasonably likely to occur."
    In the past, variations of that last part were heavily criticized. Naturally, it's in the breached entity's interest to declare that there won't be misuse of the hacked information, ergo no need to inform anyone about it. In 2018, however, it'd be a laughable position to take.  

    Surprising Development? Not Really

    Colorado's "encroachment" on HIPAA can take one aback, but that would be merely a knee-jerk reaction to unfamiliar news: to date, state privacy and information security laws have left HIPAA-covered entities alone. But there's absolutely no reason they must. After all, it wouldn't be the first time that a state decided to pass laws that are more stringent than federal ones.
    Furthermore, think about the purpose of notifications. Supposedly, it's so (potentially) affected individuals can get a start on protecting themselves. If the past ten years have shown us anything, it's that receiving a notification 30 days after the discovery of a data breach can already be too late. In that light, waiting 60 days could be disastrous.
    It's a wonder that HIPAA hasn't updated its rules to reflect reality. HIPAA was, arguably, a trailblazer when it came to protecting personal information, with its no-nonsense approach and enforcement of the rules. That last one was a biggie: When Massachusetts General Hospital (MGH) was fined $1 million in 2011 – the largest amount at that time for a data breach – the medical sector not only took notice, they went into action. At minimum, entities started to encrypt their laptops; those paying attention did far, far more.
    At the time, HIPAA's 60-day deadline was seen as revolutionary by some (if memory serves, existing data breach laws didn't codify a deadline for alerting people). Of course, companies being what they are, covered entities ended up doing what most people feared they would: they put off sending notifications for as long as possible, like mailing letters on the 59th day.
    Not everyone did this, and HIPAA specifically prohibited the practice. A handful of entities were fined as a result of purposefully delaying the inevitable. But waiting until the last possible moment to send notifications appears to be the ongoing behavior regardless. The same thing happens with non-HIPAA data breaches, except that most states have set a 30-day limit, so companies send them on the 29th day.

    Update Those BA Docs!

    Unsurprisingly, Colorado's law also affects the business associates (BAs) of HIPAA-covered entities. All hospitals, clinics, private practitioners, and others in the medical sector should immediately update the legal documents that establish obligations between themselves and their BAs.
    Remember, a covered entity's data breach is the covered entity's responsibility, and a BA's data breach is also the covered entity's responsibility.  

     

    Related Articles and Sites:
    https://www.bizjournals.com/denver/news/2018/11/29/amendments-to-data-breach-notification-law-in.html
    https://www.databreaches.net/amendments-to-data-breach-notification-law-in-colorado-impact-hipaa-regulated-entities/
    http://leg.colorado.gov/bills/hb18-1128

     
  • Leading Self-Encrypting Drives Compromised, Patched

    Earlier this week, security researchers revealed that certain SEDs (self-encrypting drives) sold by some of the leading brands in the consumer data storage industry had flaws in their full disk encryption.

    Bad Implementation

    One of the easiest ways to protect one's data is to use full disk encryption (FDE). As the name implies, FDE encrypts the entire content of a disk. This approach to protecting files ensures that nothing is overlooked: temp files, cached files, files erroneously saved to a folder that was not designated to be encrypted, etc.
    There is a downside to full disk encryption: it can slow down the read and write speeds of a disk drive, be it a traditional hard-disk drive or a faster solid-state drive (SSD). In order for a computer user to work with the encrypted data, it must be decrypted first. This extra step can represent a slowdown of 10% to 20% – not the best news if you invested in SSDs for the bump up in read/write speeds.
    The downside mentioned above, however, mostly applies when software-based FDE is used; that is, when a software program, like Microsoft's BitLocker, encrypts the disk. For SEDs, the "self-encrypting" portion of the name comes from the fact that an independent chip for encrypting and decrypting data is built into the storage device, which means there is no performance impact when reading and writing data. It does mean, however, that you've got a new point of failure when it comes to data security: if the chip is not secure enough, it could lead to a data breach.
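    To get a feel for the CPU tax that software-based encryption levies on every read and write – the tax an SED's dedicated chip is designed to eliminate – here is a rough Python sketch using the third-party cryptography package. (Real FDE runs in the OS kernel and typically uses XTS mode; this is only a ballpark illustration, not a disk benchmark.)
    ```python
    import os
    import time

    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    data = os.urandom(64 * 1024 * 1024)          # 64 MiB of stand-in "disk" data
    key, nonce = os.urandom(32), os.urandom(16)  # AES-256 key, CTR-mode nonce

    start = time.perf_counter()
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    ciphertext = encryptor.update(data) + encryptor.finalize()
    elapsed = time.perf_counter() - start

    # Every byte read from or written to an encrypted volume pays this CPU
    # cost; an SED's dedicated chip pays it in hardware instead.
    print(f"software AES throughput: {len(data) / elapsed / 1e6:.0f} MB/s")
    ```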
    The researchers were able to extract the encrypted information by modifying how these chips behave. It was hard, time-consuming work but they figured out how to bypass the encryption entirely. In certain instances, they found that the data wasn't encrypted at all due to a misconfiguration. You can read the details here.
    If you read the paper, you'll notice that the data hack is not for the faint of heart. Certain security professionals have decried the incompetence in how the SEDs' encryption was implemented – and truth be told, they are right; some of these workarounds are very Wile E. Coyote. Still, finding these flaws would have been nearly impossible for mere mortals, non-professionals, and amateur hackers.
    Indeed, it's quite telling that it took academic researchers to shine the light on the issue.

    BitLocker "Affected" As Well

    Oddly enough, BitLocker, arguably the most deployed full disk encryption program in the world today, was affected by the SED snafu. How, you may ask, seeing that BitLocker is software-based while the security issue affects hardware-based encryption?
    By default, when the drive being encrypted is an SED, BitLocker hands the reins over to the drive's built-in encryption. On the surface, deferring to the SED makes sense. People don't care how their data is encrypted as long as it is encrypted, and forgoing software-based encryption means there is no performance hit. It appears to be win-win.
    (There is a group policy setting to override this behavior. Security professionals recommend that this setting be used going forward. Being security professionals, it makes sense they'd place more weight on security than performance.)
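    For the curious, here is an illustrative Python sketch that reads where that policy stands on a given machine. The registry value names below are our best understanding of the BitLocker policy templates – treat them as assumptions and verify them in your own environment before relying on this:
    ```python
    import winreg  # standard library, but Windows only

    FVE_KEY = r"SOFTWARE\Policies\Microsoft\FVE"

    # Value names as understood from the BitLocker policy (ADMX) templates --
    # assumptions to verify, not gospel. A value of 0 disallows hardware-based
    # encryption, so BitLocker falls back to software AES.
    POLICY_VALUES = (
        "OSHardwareEncryption",   # operating system drives
        "FDVHardwareEncryption",  # fixed data drives
        "RDVHardwareEncryption",  # removable data drives
    )

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, FVE_KEY) as key:
            for name in POLICY_VALUES:
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    state = "software-only" if value == 0 else "hardware allowed"
                    print(f"{name} = {value} ({state})")
                except FileNotFoundError:
                    print(f"{name} not set (BitLocker may defer to the drive)")
    except FileNotFoundError:
        print("No BitLocker policy key found: defaults apply")
    ```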

    Trade-Off: Speed vs. Transparency

    Relying on hardware-based encryption, however, means that you're relying on Samsung, Crucial, and other hardware manufacturers to implement encryption correctly. Have they? There isn't an easy way to know because they're not transparent about the design and implementation. The revealed vulnerabilities could be all there is to it… or could represent the tip of the iceberg.
    Hence the recommendation by the pros that software-based encryption be used: any solution worth its salt will ask NIST to validate it. Sure, the process is long and expensive; however, the ensuing uptick in business more than makes up for it. While NIST's stamp of approval does not guarantee perfect security (possibly not even adequate security), it does remove the possibility of terrible security implementations like the ones witnessed this week. And even a solution that is not validated, if it is transparent, allows for examination: if something is glaringly wrong, it will be found and noted by researchers.
    All of this being said, zdnet.com confirms that companies have either come out with firmware patches for the vulnerabilities in question or are working on it. Apply those as soon as possible, and rest easy (or easier) that your data will be safer by doing so.
     
    Related Articles and Sites:
    https://www.zdnet.com/article/flaws-in-self-encrypting-ssds-let-attackers-bypass-disk-encryption/
     