AlertBoot Endpoint Security

AlertBoot offers a cloud-based full disk encryption and mobile device security service for companies of any size that want a scalable, easy-to-deploy solution. Centrally managed through a web-based console, AlertBoot offers mobile device management, mobile antivirus, remote wipe & lock, device auditing, and USB drive and hard disk encryption managed services.
  • SCOTUS Says Cops Need Warrant For Location Data From Network Providers

    It's hardly a secret that the proliferation of digital devices has opened up opportunities and headaches for law enforcement. In the former camp, modern communication devices are de facto tracking devices that store and generate copious amounts of data; access to it could easily make or break a case. In the latter, the use of encryption and other security measures makes it challenging, if not impossible, to access that same data. And now the courts are making it even more onerous to obtain it.  

    Get a Warrant

    According to a recent ruling, law enforcement will "generally" require a warrant to obtain a person's location data from network providers. Before this ruling, the Third Party Doctrine held that a person gives up their expectation of privacy when they share information with a third party (banks, ISPs, phone companies, etc.). Hence, law enforcement could request data from these third parties without a warrant; they only had to show that the information they were seeking could be pertinent to an investigation. For example, police could ask a bank to see a suspect's credit card transactions, since these would pinpoint the person's location at a particular time. In fact, they can still do this going forward.
    However, the Supreme Court has decided otherwise when it comes to your location as pinpointed by your cellphone, more specifically, the cellphone location data created and collected by telcos. It is hardly a surprising judgment. For example, bugging a person's vehicle with a tracker requires a court order because continuous tracking is considered a violation of privacy expectations.
    Of course, there is a difference between bugging a car and using a cell phone: be the phone dumb or smart, it's not the government doing the bugging – you're bugging yourself and paying a company every month to do so. The government could argue (and probably has) that they're merely picking up a trail that you've agreed to broadcast. It would be no different from you tossing evidence left and right as you flee from a crime scene, creating a trail to yourself as you make your getaway. There's nothing illegal in law enforcement following that trail. Indeed, they'd be remiss in not doing so.
    The thing is, though, that life for many now revolves around access to the services that telcos offer. Well over half the population uses either a dumb phone or a smartphone, and these devices need to know your location. Otherwise, you wouldn't be able to receive calls or texts. The same goes for accessing the mobile internet.
    Furthermore, these devices are very rarely turned off, for obvious reasons. So, the data that's collected by telcos and shared with law enforcement would include information that traditionally requires a warrant anyway. The warrant requirement for bugging a vehicle was already mentioned. Even more sacrosanct is the privacy of a person in one's home, and law enforcement's incursion nearly always requires a warrant. Even pointing a thermal imaging device at a person's home without court approval is illegal – technically it does not involve "entering" the home, but it does involve obtaining evidence from it.
    So, an "information dump" of telco data covering an extensive period would already be at loggerheads with such legal restrictions, it seems.
    However, the justices ruled, five to four, that a warrant is necessary when accessing telco location data because the data allows the type of surveillance from which the Fourth Amendment protects US citizens. Remember, the Fourth exists because the British authorities would look for evidence of wrongdoing whenever, wherever, and against whomever they pleased.

    The Fourth Amendment

    The US Constitution provides protections from that sort of thing. Certainly, you could have committed a crime. And, certainly, evidence of said crime could be in your home. But, the law can only enter your home and search for that evidence, and only that evidence, if they have probable cause, which is the grounds for issuing a warrant.
    Consider, then, the aspects of the information that law enforcement claims it should be able to access without a warrant:
    • Location data is very accurate. Not as accurate as GPS but close enough – and the technology will only get better to the point that it will be just as good as GPS or better.
    • This data now covers a sizable share of the entire US population, seeing how Americans of all stripes and colors carry a cellphone.
    • The collected data is excessive. A person's location can be pinged by cell towers multiple times every minute. One can literally tell where a person is every minute of the day.
    • The data is retroactive. The location data is stored by telcos for up to five years. Change the law, and it could be ten years. Perhaps even longer if DNA storage finally happens. ('Cause, let's face it, the only reason why telcos don't want to keep this data long term is tied to storage costs).
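    The scale described in the points above can be roughed out with quick arithmetic. Here is a minimal sketch in Python; the ping rate and retention period are assumptions drawn from the figures just listed, not from any carrier's actual disclosure:

```python
# Back-of-the-envelope scale of carrier location records per person.
# Both constants below are assumptions, not carrier-reported figures.
PINGS_PER_MINUTE = 2             # assumed: "multiple times every minute"
RETENTION_YEARS = 5              # retention period mentioned above
MINUTES_PER_YEAR = 365 * 24 * 60

records_per_person = PINGS_PER_MINUTE * MINUTES_PER_YEAR * RETENTION_YEARS
print(f"{records_per_person:,} stored location records per person")
# → 5,256,000 stored location records per person
```

    Even under these conservative assumptions, that is millions of retroactively searchable data points for a single subscriber.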

    So, we're talking about data that's akin to what would be generated if you implanted a tracking chip in most Americans and let them go about their lives. And because the government didn't force anyone to do this, and third parties are involved, a warrant shouldn't be necessary when trying to get a hold of this data. This, in a nutshell, was the government's (very dystopian) argument.

    The courts (barely) disagreed.

    However, it follows a number of rulings in recent years where the courts have upheld privacy interests over law enforcement's. It seems that slowly, but surely, as the effects and impact of technology begin to leave an imprint upon all – as opposed to just the young or the hip or the tech-savvy – people are beginning to have a better understanding of what's at stake.


  • Yahoo Penalized £250,000 By UK Information Commissioner's Office

    It was reported this week that the United Kingdom's Information Commissioner – the person whose office is in charge of upholding the nation's data privacy laws – has fined Yahoo! UK Services Limited £250,000.
    The penalty is in response to the global data breach Yahoo experienced, and hid, for over two years. Approximately 500,000 accounts in the UK were affected.
    Knowing what we do of the Yahoo breach, and keeping in mind that the ICO can issue a monetary penalty of up to £500,000, it sounds like a woefully inadequate amount. For example, the US's SEC, the Securities and Exchange Commission, fined Yahoo $35 million – roughly 100 times the ICO's penalty.
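    As a rough sanity check on the relative sizes of the two penalties, here is a quick calculation; the dollar-to-pound exchange rate is an assumption (roughly $1.33 per pound, typical for mid-2018):

```python
# Comparing the SEC fine to the ICO penalty in the same currency.
# The exchange rate is an assumption, not an official figure.
ico_penalty_gbp = 250_000
usd_per_gbp = 1.33               # assumed mid-2018 rate
sec_fine_usd = 35_000_000

ico_penalty_usd = ico_penalty_gbp * usd_per_gbp   # ≈ $332,500
ratio = sec_fine_usd / ico_penalty_usd
print(f"SEC fine is about {ratio:.0f}x the ICO penalty")
# → SEC fine is about 105x the ICO penalty
```

    The exact multiple shifts a bit with the exchange rate, but the two penalties differ by about two orders of magnitude.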

    Data Breach Not the Issue?

    According to reports, Yahoo UK was not fined for the data breach itself. Apparently, what the ICO views as problematic is the long delay in notifying people of the data breach (two years!).
    Which is crazy if it's true.
    There was no "delay." Yahoo didn't merely fail to alert users of the data breach "in a timely manner." The company, for all intents and purposes, appears to have actively hidden the data breach – and that is the real scandal. Data breaches involving hundreds of millions of people are not a rarity anymore, and neither is going public with the fact at the speed of molasses; not alerting affected users is a key component of hiding a breach. To fine Yahoo UK for taking longer than usual in notifying people of a data breach is bonkers.
    Thankfully, it seems that the ICO took more than the so-called delay into account:
    • Yahoo! UK Services Ltd failed to take appropriate technical and organisational (sic) measures to protect the data of 515,121 customers against exfiltration by unauthorized persons;
    • Yahoo! UK Services Ltd failed to take appropriate measures to ensure that its data processor – Yahoo! Inc – complied with the appropriate data protection standards;
    • Yahoo! UK Services Ltd failed to ensure appropriate monitoring was in place to protect the credentials of Yahoo! employees with access to Yahoo! customer data;
    • The inadequacies found had been in place for a long period of time without being discovered or addressed.
    Still, the explanation doesn't quite make sense. In the past, the ICO has issued penalties as high as £400,000 for data breaches, as well as for other violations of the Data Protection Act. Considering only instances involving data breaches, none of the companies aside from Yahoo swept the incidents under the rug. They were accused of being technically negligent (same as Yahoo); of having the financial, technical, and other means to ensure better data security (same as Yahoo); of not being aware that they were hacked when they could easily have figured that out (same as Yahoo); etc. In most cases, if not all, fewer people were affected than in the Yahoo breach.
    So why is Yahoo UK's penalty so much lower? Especially considering that the other companies do not have the dubious reputation of actively hiding the fact that they were hacked? If anything, you would think Yahoo UK's penalty would have hit a new high in the history of ICO monetary penalties to date.
  • FBI Inflated Encrypted Smartphone Count

    Over a number of years, the FBI kept making the case for an encryption backdoor to smartphones. Of course, because "encryption backdoor" is a charged term, they said that they didn't need a backdoor per se, just a (secret) reliable way to get into encrypted devices when they obtained a warrant.
    This twisting of words is risible because "a reliable way to get into encrypted devices" is pretty much the definition of a backdoor. Even the passwords set by smartphone owners are not reliable in the way the FBI wants, since people are prone to forgetting passwords: What if you went on a digital detox for a month and you actually did forget it? What if you changed it while drunk? What if you had a concussion? So, if you're looking for a method that will work 100% of the time, well… it's got to be a backdoor.
    As part of their case for not-backdoors, the FBI quoted the number of inaccessible devices that were at the center of unsolved crimes. In January 2018, the Director of the FBI, Christopher Wray, emphasized in a speech at a cyber-security conference that nearly 7,800 devices could not be accessed in 2017.
    Last week, the Washington Post wrote that the figure was inflated, which was confirmed by the FBI. The actual number of inaccessible devices has not been released as of yet, but it's believed to be between 1,000 and 2,000, a range that is more in line with the 2016 figure: 880 encrypted devices.
    Why the sudden decrease? The FBI says it made an error when compiling the data, a result of keeping the data in three separate databases instead of one central one.

    Credibility Issues

    The FBI has credibility issues. In areas other than encryption, it could be because they're victims of concerted political smear campaigns. Who knows, really. But when it comes to encryption, the Bureau keeps painting itself into a corner.
    This month, it was the revelation of overinflated figures.
    In 2016, the FBI took Apple to court, arguing that it had exhausted all avenues for accessing a terrorist's encrypted iPhone. Towards the end of the legal battle, most experts were leaning towards the opinion that the FBI would lose. Coincidentally, or not, the Bureau dropped its lawsuit at the eleventh hour, saying that it had found a third party that could crack open the phone's contents.
    Later that same year, the Office of the Inspector General reported that an internal miscommunication led the FBI to conclude that they had tried everything to crack the iPhone's encryption… but they hadn't. (So, technically somehow, the FBI wasn't lying when they said they had).
    And earlier this year, a second company announced discovering ways around iPhone encryption and began selling these techniques to law enforcement. At relatively affordable prices, one might add. So. Over the last couple of years, the FBI has essentially:
    • Misled the public and Congress, probably not on purpose;
    • Tried to force a company to redesign a key component of their profit driver under the auspices of national security, as if we were living in a Soviet-era communist nation, despite the fact that said company hadn't done anything illegal (because, otherwise, why'd they drop the case? They should have continued even if they eventually found a way into the iPhone);
    • Passive-aggressively insinuated that the entire tech community is a group that encourages and enables criminals, evidenced by its unwillingness (and not mathematical impossibility) to create an encryption backdoor that's not a backdoor, because, you know, that's not what the FBI wants. This, despite the NSA and the CIA issuing declarations that backdoors and other forms of intentionally crippling security are a bad idea.
    The above, of course, does not cover scandals involving the FBI that are not tied to encryption. It's becoming very hard not to view the FBI's actions through a cynical lens.

    Future Tools

    One has to admit that the problem of "going dark" is real. While it's anyone's guess how big a problem it currently is, it undoubtedly will grow bigger as time goes by. A solution may present itself in quantum computers.
    IBM warned earlier this year that advances in quantum computing could mean that today's ubiquitous encryption can be easily broken in five years' time. Their cost could ensure that only governments and large organizations can afford them for the foreseeable future – just like only they can afford supercomputers – satisfying the goal of not hamstringing cryptography as well as only allowing "the good guys" to break encryption when needed (and authorized).  
  • US Court Says Border Searches Require "Suspicion"

    As many travellers may know, people at US borders are subject to an altered set of laws due to the fact that… well, a border is a border. This includes "pseudo-borders" like airports that may be located well within US soil. The most obvious alteration is the seeming suspension of the Fourth Amendment, the Constitutional law that covers one's right against search and seizure.
    Last week, the media was slightly abuzz over a decision by the Fourth Circuit Court of Appeals in Virginia. The appeals court, many news sites mentioned, confirmed that US border authorities cannot search a traveller's cellphone contents without a warrant. Other sites, more law-oriented than general news sites, correctly noted that the court confirmed that phones cannot be searched without a reason (also referred to as cause or suspicion).
    The decision appears to be more nuanced than that, actually. At the border, the government is not bound to the same level of Fourth Amendment oversight as elsewhere in the US – which is why they're already able to go through your luggage for absolutely no reason whatsoever.
    Not only that, they're also able to go through your laptops, USB flash drives (the government has dogs that can sniff out electronics in your luggage), and your smartphones. And when it comes to the last one, they can poke, swipe, and press through it or even do a forensic analysis – all of it without a warrant.

    What's a Manual Search and a Forensic Search?

    The difference between a manual and forensic search of your smartphone (or your laptop) is in whether a device for rummaging through your digital files was doing the searching. If a Customs and Border Patrol (CBP) agent looks through a phone's contents, not unlike what an average guy would do if he found a smartphone just lying on the street, that's a manual search.
    A forensic search usually requires the use of a separate computer or similar device to analyze your smartphone's files: it might go through all of your pictures, videos, emails, texts, apps, GPS data, etc. and analyze file names, file sizes, the existence of hidden files, possibly run facial recognition similar to what Facebook uses for tagging photographs, etc.
    (There is also, apparently, something that is considered to be a "deep forensic examination," although it's not detailed how it differs from a regular forensic search).
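    To make the manual-versus-forensic distinction concrete, here is a minimal, purely illustrative Python sketch of the kind of metadata pass a forensic tool might automate. The function name and structure are hypothetical; real forensic suites operate on full disk images, recover deleted files, parse app databases, and do far more:

```python
import os

def metadata_sweep(root):
    """Toy illustration of a forensic metadata pass: walk a directory
    tree and record each file's path, size, and whether it is hidden
    by the Unix dot-file convention. Illustrative only -- real forensic
    tools work at the disk-image level, not via the filesystem API."""
    inventory = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip files that vanish or can't be read
            inventory.append({
                "path": path,
                "size": size,
                "hidden": name.startswith("."),
            })
    return inventory
```

    The point of the sketch is the difference in kind: a manual search is a human swiping through apps, while a forensic search is software exhaustively cataloguing and analyzing everything on the device.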
    A manual search is considered to be "routine," just like going through your luggage. A forensic search is "nonroutine." That is, you've got to have a reason for doing it. Among examples of nonroutine searches at borders, courtesy of the Appeals Court:
    overnight detention [of a suspect] for monitored bowel movement followed by rectal examination is "beyond the scope of a routine customs search" and permissible … only with reasonable suspicion [, which is the basis for nonroutine searches].
    If you're wondering why the government is detaining people to monitor their caca, it's because the case involved a person suspected of being a drug mule. Notice how the word "warrant" does not show up anywhere. That's because a warrant is not necessary at the border. Whereas a warrant would probably be required in normal circumstances to carry out what's quoted above, at US borders the government only requires "reasonable suspicion." What exactly is that, you may wonder. Per Wikipedia:
    reasonable suspicion is a legal standard of proof in United States law that is less than probable cause … but more than an "inchoate and unparticularized suspicion or 'hunch'"; it must be based on "specific and articulable facts", "taken together with rational inferences from those facts", and the suspicion must be associated with the specific individual.
    Probable cause, mentioned in the quote above, is the basis for obtaining a search warrant. At a border, you don't need probable cause; all you need is its more relaxed, less strict and chill brother – reasonable suspicion. This includes, according to the Fourth Circuit Court of Appeals, instances where the CBP wants to perform a forensic examination of your smartphone. In short, the Appeals Court confirmed that:
    • Forensic searches of phones are allowed at borders as long as authorities can validate their suspicion. This does not mean that they need a warrant. But, it does mean that they must have reason to believe that criminal activity is ongoing.
    • The Fourth Amendment gets suspended at borders.
    We already knew this. So, why all the hubbub? Because it's indicative that things could change at the border.  

    The Application of Riley

    This latest judgment is one of a handful of court decisions declaring that there is a limit to what border agents can do when searching through your devices' data. It is a reflection of Riley v. California, where the Supreme Court ruled, in 2014, that a warrant is required to go through a phone's data. There are exceptions, of course, in exigent circumstances. But otherwise, a warrant is required, even if a police officer is merely conducting a manual search (and this applies to flip phones as well as smartphones). This ruling went counter to how police had operated, viewing the search of a detained person's phone as no different from a physical pat-down.
    Since then, the courts have been trying to decide whether Riley applies at borders and, if so, to what extent. Per one summary of the ruling:
    Courts across the country have been struggling with how to apply the Fourth Amendment in this context, in an era when tens of thousands of people are subjected to searches of their electronic devices at the border each year. Today’s ruling from the Fourth Circuit joins an earlier decision from the Ninth Circuit Court of Appeals requiring at least reasonable suspicion for forensic searches of electronic devices seized at the border. In March, two judges on the Eleventh Circuit concluded that such searches should be treated the same as searches of physical luggage, which don’t require a warrant, while a third judge dissented, arguing for a warrant requirement. Earlier that month, a Fifth Circuit judge expressed strong skepticism that the traditional rationales for warrantless border searches should be extended to searches of electronic devices, but that court declined to set a rule.
    As you can see from the above summary, the issue is a contentious one. But if we were to make some projections based on what's happened so far, and what we know so far, it would appear that Riley cannot, and won't, be instituted exactly as it is at borders.
    As already noted, the law operates differently at the border just because it happens to be the border. A warrant has never been required at the border. That's over two hundred years of precedent. The counterargument goes that, well, we've never before had the ability to carry through a border what constitutes the entire private contents of your house (medical files, photo albums, correspondence, diary, etc.) and then some.
    But just like invasion of privacy is greatly suspended at the border (honestly, how many people would say searching through a smartphone is more of an invasion than having one's bowel movements monitored and followed by a cavity inspection? Mind you, it has happened and was found legal by the courts), you can expect the courts to dilute the Riley findings when it comes to transnational crossings.  
  • Yahoo (ie, Altaba) Settles Two Lawsuits Tied To Huge Data Breach

    Last week, Yahoo (now reborn as Altaba following Verizon's acquisition) announced a settlement with the SEC over misleading investors regarding the biggest data breach in known history. The crime: not revealing it in a timely manner. It is one of the many lawsuits the company is currently fighting as a result of the data breach.
    The final settlement is for $35 million.
    Before that, in March, the company also settled a lawsuit for $80 million. As some have noted, that would be the first instance of a securities fraud lawsuit tied to a data breach that was successfully won by plaintiffs.

    The Tides are Not Turning

    Over the past ten years (or possibly longer), most if not all lawsuits revolving around a data breach were tossed out of court for lack of "standing." That is, it couldn't be shown that the data breach was directly tied to a harm… if there was any harm at all.
    For example, nearly all courts ruled that having your personal information stolen was not, in and of itself, an actual harm. So, if you were suing a company merely because it was hacked and your information was stolen, forget about it. No standing.
    (Call the same information a client list, and switch a company's status from defendant to plaintiff, though, and suddenly it has value and hence standing in court. Litigation over client list theft is quite the business in legal circles. The irony).
    Returning to the topic at hand, even if you were eventually harmed per the courts' definition – due to identity theft, phishing attempts, etc. – data breach victims still couldn't see their day in court because the link between the data breach and their being victimized is tenuous. With so many companies losing personal information left and right, it's virtually impossible to show that your personal torments are tied to a particular data breach.
    So, these latest legal results seem to indicate, if certain headlines are to be believed, that companies are sensing that the courts will change their stance. But that's not the case at all.  

    $350 Million

    After learning of the data breach, Verizon knocked off $350 million from the original acquisition offer for Yahoo. This means that shareholders of Yahoo stock received, as a group, $350 million less than they could have. That's not chump change.
    As a result, it could be argued, and it has, that the data breach was material information that could affect a stock's market price, and that it was not revealed in a timely fashion.
    Not revealing pertinent information in a timely fashion is illegal for companies listed on stock exchanges. It is this illegality that the courts would have ruled on. Yahoo/Altaba, knowing it was licked, offered a settlement in both cases. So, what you're seeing here is not a watershed moment but more of the same.
    If we were to look for a silver lining, maybe it's that companies now know how bad things can get if they don't go public over a massive data breach within a reasonable amount of time. Do it fast enough and all you have to deal with is a bunch of lawsuits that won't go anywhere. Delay and hide, and you get the same plus lawsuits that will cost you big.
  • Florida Government Hard Drives Stolen For Games

    Many, if not most, data security professionals will tell you that you should run a risk assessment and develop your information security plans accordingly. Then there are others who will counsel that one should secure as much as possible: obviously protect what represents a high-risk situation, but never discount the possibility of what seems like a low- or no-risk situation blowing up beyond expectations.
    The idea is that dealing with the legal, financial, and public relations fallout of a data breach is comparable regardless of the initial risk classification.
    For example, one might recommend that disk encryption be deployed to all computers – not just laptops – because it's never guaranteed that a desktop computer won't leave a business's premises in an unapproved manner. There is history to back this up: burglaries; theft or loss of computers that have fallen into disuse and been put into "temporary" storage; computers whose information was inadequately scrubbed (or not scrubbed at all) before being retired; etc., have all been reported in the media over the years.
    The motives behind such data breaches are as varied as the data breaches themselves. Some people may be after the data. Others may want to replace their aging computer back home. Yet others may be looking to flip the hardware on craigslist. And, of course, there is the time-honored, ever-surging, never-can-kill-it "oops" situation.
    And then you have the guy who really, really wants to play Xbox.  

    Custodian Swipes Hard Drives

    According to various news sources, a Florida man was arrested for stealing hard drives from the Florida Department of Revenue. The hard drives contained taxpayer information, and their disappearance, needless to say, triggered a data breach.
    A swift investigation led to one Andru Reed, a 21-year-old who eventually admitted to the theft. Reed confessed that he had stolen four hard drives so that he could connect them to his Xbox and download video games. Law enforcement is still conducting data forensics to sniff out any conflicts in Reed's story, but they're pretty certain that the taxpayer data was never accessed.
    How was Reed able to steal these hard drives? Pretty easily. He was a custodian working the premises. So, he didn't have to "Mission Impossible" himself into the offices. He just walked in. Furthermore, the four hard drives in question were external hard drives. All he had to do was pick them up, ideally when no one was watching (which he apparently bungled: when the police started the investigation, employees in the office mentioned seeing Reed acting suspiciously).

    Encrypted or Not?

    As noted before, there are competing schools of thought when it comes to data security. The loss or theft of external hard drives can be deemed a low data risk situation if they never leave a secure area.
    That's a big if, though. Security breaches where employees or outside contractors purposefully steal sensitive data, usually to sell to legal and illegal data brokers, are not unusual. So, did the Florida Department of Revenue (FDR) encrypt these hard drives or not?
    We don't know. The March 27 statement from the FDR is pretty nebulous:
    At this time, we are taking all necessary precautions to review the established physical and digital internal security procedures to ensure uniform implementation across the Department. If after the full investigation it is found that any employee did not take the proper steps to protect taxpayer information they will be held accountable.
    And then on April 17:
    Through the details presented, we are confident that the information on the drives was not accessed. As a result of the Department of Revenue’s thorough processes and procedures to monitor and maintain equipment, we were able to rapidly identify and report the property missing.
    Florida, like most US states, has a data breach notification law. It states that notifications to individuals must be made no later than 30 days after the breach has been identified. If encryption was not used on any of the four storage devices, it will be known before the month is over. (That the drives were recovered does not negate that a breach took place).
    For the time being, and purely as speculation, all signs appear to point towards encryption not being used: the public announcement, which is required by law; the weeks-long digital forensics (it really shouldn't take that long with encryption in place); and the absence of the word "encryption" in any materials covering the case (it's usually mentioned if it was present).
    On the other hand, the words "data breach" are not linked to this situation in any form whatsoever. The fact that the theft and the recovery have been dealt with by the media without alluding to a data breach is unusual, and reason enough to wonder whether the external hard drives were secured correctly after all.  