A trio of Republican senators on Tuesday proposed legislation that requires service providers and device makers in America to help the Feds bypass encryption when presented with a court-issued warrant.
The bill [PDF] is dubbed the Lawful Access to Encrypted Data Act, which uncharacteristically cannot be condensed into a pandering acronym. This latest legislative attempt to make encryption – math – insecure on demand should not be confused with another bill up for consideration in the US Congress, the EARN IT Act, which threatens service providers with liability for supporting private, aka encrypted, communications.
LAEDA is sponsored by US Senators Lindsey Graham (R-SC), Tom Cotton (R-AR), and Marsha Blackburn (R-TN). Graham is also a sponsor of the EARN IT Act, and Blackburn pioneered the Trump administration’s rule changes that allowed ISPs to sell people’s online data.
Cotton also received attention recently for an unvetted New York Times op-ed that called for a military response to public protests over the police killing of George Floyd.
Pay to play fast and loose
The bill requires any company presented with a warrant – a “device manufacturer, an operating system provider, a provider of remote computing service, or another person” – to help authorities “access information stored on an electronic device or to access remotely stored electronic information.”
It doesn’t specify how encryption should be dealt with, just that it should be undoable when inconvenient to authorities.
The terms “device manufacturer,” “operating system provider,” and “provider of remote computing service,” apply only to firms with unit sales over one million annually or one million customers/subscribers. Any electronic devices included in the law must have 1GB of storage or more. The term “another person” is not qualified in the text.
The bill entitles those drafted for encryption-cracking duty to liability exemption and to compensation “for reasonable expenses directly incurred in complying with the order,” but not more than $300.
Service providers that handle data in motion – over a network – are also expected to help authorities access encrypted data and to bear the cost of maintaining their mandated encryption-dissolving systems.
What’s more, the bill calls for the creation of a competition, funded with no more than $50m in tax dollars, to pay out one or more prizes, awarded at the discretion of the US Attorney General, to anyone developing a system capable of undoing encryption and providing authorities with access to data. Any single winner, however, can be awarded no more than $1m.
“Tech companies’ increasing reliance on encryption has turned their platforms into a new, lawless playground of criminal activity,” said Cotton in a statement.
“Criminals from child predators to terrorists are taking full advantage. This bill will ensure law enforcement can access encrypted material with a warrant based on probable cause and help put an end to the Wild West of crime on the Internet.”
Logic kicks in
Encryption, it should be said, also prevents a fair amount of crime by keeping things like online bank accounts and web browsing reasonably secure. Mandating a backdoor – a weakness that, mathematically, anyone could eventually find and exploit – might not be the wisest move.
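To see why, here is a toy sketch of the key-escrow idea at the heart of such mandates. None of this is real cryptography – the XOR cipher and the escrow scheme are purely illustrative – but it shows the structural problem: one master key that unlocks everything is also one stolen key that unlocks everything.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "cipher" for illustration only; NOT secure cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Under a backdoor mandate, every device would also have to encrypt a
# copy of its session key to one master "escrow" key. Whoever holds,
# or steals, that single key can then read everyone's traffic.
ESCROW_KEY = secrets.token_bytes(16)  # the hypothetical mandated master key

def backdoored_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    session_key = secrets.token_bytes(16)
    ciphertext = xor_cipher(plaintext, session_key)
    escrowed_key = xor_cipher(session_key, ESCROW_KEY)  # the backdoor copy
    return ciphertext, escrowed_key

def read_with_escrow_key(ciphertext: bytes, escrowed_key: bytes) -> bytes:
    # Anyone with the escrow key, lawful or otherwise, recovers the
    # session key and then the plaintext, for every user at once.
    session_key = xor_cipher(escrowed_key, ESCROW_KEY)
    return xor_cipher(ciphertext, session_key)
```

The point of the sketch is that `read_with_escrow_key` works for every message ever escrowed, which is why the escrow key becomes the single most attractive target on the internet.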
In an effort to show that such legislation is needed, the Senators cite a case where encryption was bypassed without legally-compelled industry help. They point to the December 2019 terrorist attack at the Pensacola Naval Air Station in Pensacola, Florida, involving a member of the Royal Saudi Air Force.
The FBI, they said, recovered information from the attacker’s phone without any help from Apple after spending four months and some amount of money described as “large sums of tax dollars.” Apple denies the claim.
Privacy and civil liberties advocates predictably are aghast at the proposed legislation and are weary of having to fight the Clipper Chip battle from the 1990s over and over again.
“This is a full-frontal assault on encryption and on Americans’ privacy and security, just when the shift to living much of our lives online from home means we can least afford it,” said Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Stanford Center for Internet and Society, in an email to The Register.
“The bill unambiguously contains the long-dreaded backdoor mandate for devices and online services alike, from cloud storage to email to apps, such as end-to-end encrypted messaging apps.”
“This bill is simply blind to reality,” said EFF senior staff attorney Andrew Crocker in an email to The Register.
“It is blind to the fact that as millions of us march in the streets and shelter in place, we’ve never been more dependent on secure communications and devices. It is blind to the expert consensus that there is no way to provide access to securely encrypted data without a backdoor, something that legislating a prize for a magical solution cannot change. And it is blind to public opinion.”
“For decades, Americans have overwhelmingly rejected government attempts to require security flaws in technology, from the Clipper Chip, to the Apple San Bernardino case, up to Senator Graham’s other misguided bill, the EARN IT Act, which would allow a government task force to outlaw end-to-end encryption,” Crocker said. “We shouldn’t spend one second more debating these fictions.”
Asked about whether LAEDA, if approved, might be subject to a legal challenge under the First Amendment, Pfefferkorn said the bill would bring internet and computing devices under the CALEA [Communications Assistance for Law Enforcement Act] rules to which telcos are currently subject.
“While I am not aware of what First Amendment challenges may have been brought to CALEA when it was passed in the 1990s, I believe this up-front, design-for-decryptability mandate is a little different than the situation in the San Bernardino showdown, where Apple did bring First Amendment arguments,” she said.
“With that said, to the degree that Apple would still be forced to create code it does not want to create, and cryptographically sign – i.e., vouch for – code it does not truly stand by, then yes, the arguments Apple raised in the San Bernardino case could putatively be raised here as well,” she said. ®
Ransomware criminals claiming to have siphoned confidential docs on Nicki Minaj, Mariah Carey, and LeBron James from an American law firm are threatening to auction off the info.
The REvil ransomware gang declared it will sell off troves of the paperwork, which it said it exfiltrated from the computer systems of American showbiz lawyer Allen Grubman. Unspecified stolen data about chanteuses Nicki Minaj and Mariah Carey, along with basketball ace LeBron James, will be up for auction on July 1, with a reserve price of $600,000, according to a statement posted to the crew’s Tor-hidden blog seen by The Register.
A post advertising the auction was filled with lurid claims that it would reveal “big money and social manipulation, mud lurking behind the scenes and sexual scandals, drugs and treachery,” as well as “bribery by Democratical Party” [sic].
Infosec biz Emsisoft’s Brett Callow told El Reg an apparent delay between the initial hack and the auction announcement may have been an attempt by the gang to build “anticipation” for the sale in the criminal marketplace.
He said: “The crims likely do have at least some of the information they claim, but it may or may not be as salaciously juicy as they say. The claims of sex and political scandals could be utterly bogus and made only for the purpose of creating a bidding war.
“Let’s face it, you wouldn’t be able to ask for your money back were it to turn out that REvil had misrepresented the goods. Well, you could ask I suppose, but you probably wouldn’t have much luck.”
Should any of the three celebs not want their dealings with their lawyer made public, the gang “generously” offered to sell the whole lot back for $42m, having doubled a previous demand.
“Each lot includes full information downloaded from the office, namely – contracts, agreements, nda, confidential information, court conflicts, internal correspondence with the Firm,” said REvil, sardonically adding: “We are not responsible for the buyer’s actions.”
The auction will be followed by a second tranche on July 3 of files concerning Universal Studios, Puff Daddy’s* music label Bad Boy Records’ holding company, and MTV, it is claimed.
The Register was unable to reach Grubman for comment through his firm, Grubman, Shire, Meiselas & Sacks. Its website consists of a logo only, presumably while the lawyers fix the damage caused by REvil.
REvil is fairly indiscriminate about its targeting, having published the passports of some staff at UK electricity market middleman Elexon to menace that company into paying a ransom or as revenge for not coughing up the demand. Elexon had shrugged off the gang’s ransomware infection, rebuilding from backups and seemingly refusing to engage with the criminals. ®
* Puff Daddy was the stage name by which the US rapper Sean Combs was first known in the UK. He has since gone through a variety of monikers, lists of which can be found through your search engine of choice.
Enterprise Vulnerabilities From DHS/US-CERT’s National Vulnerability Database

CVE-2020-15005
In MediaWiki before 1.31.8, 1.32.x and 1.33.x before 1.33.4, and 1.34.x before 1.34.2, private wikis behind a caching server using the img_auth.php image authorization security feature may have had their files cached publicly, so any unauthorized user could view them.
The web interface on Supermicro X10DRH-iT motherboards with BIOS 2.0a and IPMI firmware 03.40 allows remote attackers to exploit a cgi/config_user.cgi CSRF issue to add new admin users. The fixed versions are BIOS 3.2 and firmware 03.88.
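The underlying flaw in the Supermicro issue is a classic one: a state-changing endpoint (here, adding an admin user) that accepts any request arriving with the victim's browser session. The standard mitigation, sketched below in Python purely for illustration (the actual firmware is closed code, and these function names are hypothetical), is to require a per-session token that an attacker's page cannot read:

```python
import hmac
import secrets

def new_csrf_token() -> str:
    # Issued once per session and embedded in the legitimate admin form.
    return secrets.token_urlsafe(32)

def request_is_authentic(session_token: str, submitted_token: str) -> bool:
    # A forged cross-site request cannot include the right token because
    # the attacker's page cannot read it. Constant-time comparison avoids
    # leaking the token byte by byte through timing differences.
    return hmac.compare_digest(session_token, submitted_token)
```

A vulnerable endpoint like config_user.cgi skips this check entirely, which is why a booby-trapped web page visited by a logged-in admin can silently add accounts.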
Government-mandated Internet shutdowns occur far more regularly than you might expect.
Since the death of George Floyd at the hands of Minneapolis law enforcement on May 25, millions of people worldwide have taken to the streets to protest police violence. But one government tactic often used in some countries to limit citizens’ ability to communicate and organize has been absent so far: There have been virtually no reports of state-mandated Internet shutdowns in response to the protests.
Part of the reason for that is it’s much harder to diagnose cellular connectivity problems when thousands of people flood into one neighborhood, all demanding to use mobile phone infrastructure that wasn’t designed to handle so many devices at once. One of the few instances of a US government-mandated network shutdown came in 2011, when police for the BART transit system in the San Francisco Bay Area cut cellular service for several hours during protests that followed multiple police shootings of passengers. This time around, though, Seth Schoen, senior staff technologist at the Electronic Frontier Foundation, said his colleagues haven’t been able to confirm the rumors they’ve heard about government interference with mobile networks.
“I haven’t seen any hard evidence that couldn’t also be easily explained by networks being overloaded,” Schoen said in an email exchange. But in many cases, consumers can tell whether there has been government interference with Internet access because it will “affect people on different parts of the Internet in different ways,” he added.
The number of countries that shut down access for their residents jumped from 25 in 2018 to 33 in 2019, according to the annual Keep It On report published in February by nonprofit Internet advocacy group Access Now. China, Vietnam, Egypt, Iran, Syria, and Cuba are notorious in this regard and regularly cited as countries with the least Internet freedom, according to 2019’s “Freedom on the Net” report, produced annually by the US-based democracy and human rights nonprofit Freedom House. But they’re not the only countries that use Internet shutdowns to control the flow of information and ideas.
How to Discern
Despite politically driven interference, Internet monitoring organizations point to some telltale clues that can help people determine whether their sudden inability to use the Internet is a technical glitch, such as an underwater cable cut, a distributed denial-of-service (DDoS) attack, or a government-mandated order.
Most of the government-mandated Internet shutdowns or blocks are based on interfering with the country’s Domain Name System (DNS), the protocol that maps websites to IP addresses, says Arturo Filastò, project lead at the Open Observatory of Network Interference, a nonprofit that monitors and documents Internet shutdowns. Since most of the world’s DNS queries are resolved in plain text, Internet service providers can be “convinced” by the governments they operate under to restrict access to certain sites – or even all of them, he says.
“DNS hijacking is most common in the West. It’s the first level because it’s the easiest and cheapest,” Filastò says. “Another technique under the DNS tampering umbrella is DNS spoofing, such as the Chinese Great Firewall, where they will spoof the response to a DNS query faster than the legitimate response.”
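One rough way to spot DNS tampering is to compare what your ISP's resolver says against what an independent encrypted resolver (for example a DNS-over-HTTPS service) says for the same hostname. The sketch below shows the comparison logic only; the function names are our own, the actual DoH query is omitted to keep it self-contained, and a mismatch is a hint to investigate rather than proof of censorship, since CDNs legitimately hand different clients different addresses.

```python
import socket

def system_resolver_answers(hostname: str) -> set[str]:
    # Ask the system's configured resolver, typically the ISP's DNS
    # server, which is exactly the component a government can lean on.
    return {info[4][0] for info in
            socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)}

def answers_look_tampered(isp_answers: set[str],
                          independent_answers: set[str]) -> bool:
    # If the ISP resolver's answers share no addresses at all with an
    # independent resolver's answers, the local answer may have been
    # rewritten. Treat this as a prompt for further checks, such as an
    # OONI Probe run, not as conclusive evidence.
    return bool(independent_answers) and isp_answers.isdisjoint(independent_answers)
```

In practice the independent answers would come from a resolver the ISP cannot silently rewrite, such as a DoH endpoint queried over HTTPS.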
Measuring Internet connectivity and shutdowns has grown more sophisticated over the years. The Center for Applied Internet Data Analysis at the University of California, San Diego, uses a combination of global Internet routing, active probing of IP addresses, and the background radiation from the Internet itself to evaluate the cause of a shutdown.
“Some measurements will tell you that the physical [Internet] connectivity still exists during a shutdown. One of our three methods will still see the existence of connectivity,” says Alberto Dainotti, research scientist with the Internet Outage Detection and Analysis group at CAIDA.
Determining whether a shutdown is caused by a technical snafu, a DDoS attack, or government interference can be tricky. If you can’t reach a website but others in your country can, it’s most likely a technical issue with your network. (T-Mobile users experienced this in North America last week.) If all or most websites work for you but one specific one appears to be down, it could be a targeted attack (by malicious hackers or a government order) focused on that one site. Specific websites can be checked with the service Down for Everyone or Just Me.
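That decision process can be summarized as a rough first-pass triage, sketched below. The function and its labels are hypothetical, and the triage deliberately cannot distinguish a targeted DDoS from a targeted government block, which is exactly why the measurement projects described here exist.

```python
def triage_outage(site_loads_for_me: bool,
                  site_loads_for_others: bool,
                  other_sites_load_for_me: bool) -> str:
    # Rough triage following the reasoning above: check your own
    # connectivity first, then whether the problem is specific to one
    # site, before suspecting a wider shutdown.
    if site_loads_for_me:
        return "no outage"
    if site_loads_for_others:
        return "likely a problem with your own network or device"
    if other_sites_load_for_me:
        return "likely a targeted attack or block on that one site"
    return "widespread outage: technical failure, DDoS, or shutdown"
```

The "site loads for others" input is what services like Down for Everyone or Just Me provide.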
Of course, it can also be a government shutdown. Several services can help Internet users identify when their service is being disrupted by a DDoS attack or a government shutdown. In its “Surveillance Self-Defense” guide, the EFF recommends using encrypted DNS, a virtual private network, or the Tor Browser to circumvent DNS-based network shutdowns. Filastò’s employer also offers its OONI Probe to help users test network connectivity and identify likely reasons for the shutdown they’re experiencing.
The EFF’s Schoen noted consumers should be more worried about technical problems on their devices or with their ISPs before presuming their government is blocking part or all of their Internet access – even though government-initiated Internet interference and shutdowns are on the rise.
“Governments do actively tamper with people’s devices and network connections, but less frequently than random errors and outages that aren’t intentional on anyone’s part,” he said.
Learn from industry experts in a setting that is conducive to interaction and conversation about how to prepare for that “really bad day” in cybersecurity. Click for more information and to register for this On-Demand event.
Seth is editor-in-chief and founder of The Parallax, an online cybersecurity and privacy news magazine. He has worked in online journalism since 1999, including eight years at CNET News, where he led coverage of security, privacy, and Google. He is based in San Francisco.
The activist group Distributed Denial of Secrets, perhaps better known by their shorter but clumsy moniker DDoSecrets, has been permanently banned from Twitter.
The self-declared “transparency collective”, which published leaked and hacked data it claimed was of public interest, earned its banishment from Twitter after it distributed a gigantic collection of sensitive documents related to police and law enforcement across the United States.
As we previously reported, the 270GB data dump (dubbed “BlueLeaks”) contains many years’ worth of information – including FBI reports – from over 200 US police departments and other law enforcement agencies.
As investigative journalist Brian Krebs reports, the data appears to have been exfiltrated following a security breach at web development firm Netsential.
The publication of the data appears to have been deliberately timed by DDoSecrets to coincide with “Juneteenth”, the United States’ national day of commemoration of the ending of slavery, June 19th.
Unfortunately, the group’s haste to release the data in time appears to have overtaken any desire to redact details which could put innocent parties at risk: such as images of suspects in police investigations, banking details, and other personally identifiable information (PII).
There are additionally concerns that the breach could endanger ongoing police investigations, and the lives of law enforcement officers.
And as the dumped data contains information reaching back as far as perhaps the mid-1990s, there is also the risk that some of it may be completely out of date.
Speaking to Wired, DDoSecrets founder Emma Best admitted that the group had probably failed to redact all information related to crime victims, children, and unrelated private businesses:
“Due to the size of the dataset, we probably missed things. I wish we could have done more, but I’m pleased with what we did and that we continue to learn.”
That’s a startling admission of failure. Clearly more could have been done, but from the sound of things DDoSecrets and its supporters were working to too tight a deadline.
And clearly Twitter was not impressed to see the dissemination of the hacked data, which is in conflict with its policies.
Having been criticised in the past for its tardy response in banning other hacking groups, such as The Dark Overlord, DC Leaks, and Guccifer 2.0, Twitter clearly felt it couldn’t stand silent while the BlueLeaks data leak was being so overtly disseminated on its platform.
Such a ban, however, may not silence DDoSecrets permanently. Don’t be surprised if they pop up again, in a new guise, to share stolen secrets on Twitter.
Sensitivity of customer information and time-to-detection determine financial blowback of cybersecurity breaches.
The authors of the “Trends in Cybersecurity Breach Disclosures” report from Audit Analytics reviewed 639 cybersecurity breaches at public companies since 2011 and discovered that, on average, each cyber breach costs $116 million.
The report found that in 2019, cybercriminals usually targeted customer names, addresses, and e-mail addresses (48%, 29%, and 28%, respectively). In 2018, names and credit card information were the most-sought types of information. Between 2011 and 2019, malware (34%) was the most commonly used method to obtain data, followed by phishing (25%), unauthorized access (20%), and misconfiguration (12%). However, almost half (43%) of companies that suffered a data breach kept the type of attack to themselves.
Multivector Web-Based Attacks Are Common
In 2018, British Airways became the victim of the most extensive data breach since the introduction of the EU’s General Data Protection Regulation. In that incident, criminals stole customer names, addresses, email addresses, and detailed credit card information. A web application firewall, which inspects and filters traffic to websites, might have prevented this: it is designed to detect and stop data theft, SQL injection, and cross-site scripting, which are often used to compromise websites. Apparently, the airline either lacked such a firewall or didn’t configure it properly.
Distributed denial-of-service (DDoS) attacks — which cause an abrupt spate of Internet traffic to web or application servers — can cripple a company’s online infrastructure. They are also relatively easy to launch. As a result, they’re often used to cover up a broader, more serious attack. In 2015, for example, Carphone Warehouse websites including OneStopPhoneShop.com, e2save.com, and Mobiles.co.uk were hit by a DDoS attack that diverted its IT experts’ attention from a sophisticated hack of the company’s customer database and a theft of 2.4 million customer records. The credit card information of roughly 90,000 customers was stolen, although — and fortunately for Carphone Warehouse — the data was encrypted.
Stock Market Aftershocks
Companies that expose themselves to breaches often pay penalties for allowing the attacks to happen. Besides those penalties, according to the Audit Analytics report, remediation costs and lower stock market values are the two other most significant financial impacts of a breach.
The primary cost factor for a breach is the value of the stolen information. Not surprisingly, compromised financial information is seen as the most damaging. But Audit Analytics noted that, between 2016 and 2019, Social Security numbers (SSNs) also became popular breach targets, as SSN thefts increased by more than 500% during that period. Since 2011, of the breaches of publicly traded companies that cost more than $50 million to remediate, seven compromised financial information and three compromised SSNs. Some of the largest attacks were leveled at Target in 2013 ($292 million), Home Depot in 2014 ($298 million), Equifax in 2017 ($1.7 billion), and Marriott in 2018 ($114 million).
It’s important to note that the biggest cases — like the $5 billion Facebook has spent on its breaches or the nearly $2 billion spent by Equifax — skew the average data breach cost. Note that while the Audit Analytics report pegged Equifax’s remediation costs at $1.7 billion, the company reported more remediation spending in the first quarter of 2020.
Slower Time-to-Detection Escalates Costs
The second determining factor in the cost of a data breach is the length of time it takes to disclose the breach. According to Audit Analytics, an average of 108 days passed before companies discovered a breach, and 49 more days, on average, before they reported it. The median gap between the discovery of a breach and notifying the authorities was 30 days.
For companies, the discovery-to-disclosure period isn’t trivial. An academic article citing research from Audit Analytics found that equity value declined about 0.33% in firms that immediately disclosed a data breach, but by 0.72% in those that delayed disclosure by a month. The decline was much larger when companies failed to disclose the attack and parties outside the firm later discovered it. In these cases, company stocks dropped 1.47% in the three days after the revelation of the attack and 3.56% in the month afterward.
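As a back-of-the-envelope illustration of those cited averages (the function is hypothetical and real losses vary widely from firm to firm):

```python
def expected_disclosure_drop_usd(market_cap_usd: float,
                                 delayed_a_month: bool) -> float:
    # Average equity declines cited by the Audit Analytics research:
    # 0.33% when a breach is disclosed immediately versus 0.72% when
    # disclosure is delayed by a month.
    rate = 0.0072 if delayed_a_month else 0.0033
    return market_cap_usd * rate
```

For a $10bn company, those averages imply a decline of roughly $33m in equity value for immediate disclosure versus about $72m after a month's delay, before counting the far larger hit if outsiders discover the breach first.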
The worst case of delay involves Yahoo, which knew that Russian hackers had penetrated its system in 2013 but only reported the breach at the time of the firm’s acquisition by Verizon in 2016. The hack affected more than 3 billion accounts. The Securities and Exchange Commission eventually fined Yahoo $35 million for the 1,649-day delay in reporting the breach. Another case involves a data breach at Choice Hotels International, which began in June 2015 but was not reported until 2019. Data from the chain’s online reservation portal were shared with third parties more than 88,000 times because of a coding error.
Complex Attacks Require Better Internal Controls
To be fair, some firms hire third-party investigators to look into their data breaches, which can delay reports to authorities. Nevertheless, the delays are problematic. “Cyber breaches that are not discovered quickly are concerning for both regulators and investors,” the Audit Analytics report states, referring to an SEC investigative report on the effects of cyber fraud on the internal controls of public companies. The SEC did not recommend enforcement in the nine cases highlighted in its 2018 document, but recommended that firms review their internal controls in relation to cyber threats.
“Data breaches that are not discovered quickly raise red flags about a company’s internal controls, suggesting that controls may not have been sufficient enough to detect the issues in a timely manner,” the Audit Analytics report concludes.
Depending on the nature of the information that is lost, repeated breaches can lead to extra future costs, including lawsuits filed by consumers and vendors whose financial data was compromised or company employees whose personal data were affected. Diligence by IT is crucial, especially since research and experience shows that the bad guys always come back: Audit Analytics reported that 26% of companies hit by data breaches — including Facebook, Sony, Amazon, Comcast, and T-Mobile USA — were victimized repeatedly.
Marc Wilczek is a columnist and recognized thought leader, geared toward helping organizations drive their digital agenda and achieve higher levels of innovation and productivity through technology. He has held various senior leadership roles over the past 20 years.
A top judge told a barrister for the UK Information Commissioner’s Office (ICO) today that his legal arguments against police facial-recognition technology face “a great difficulty” as he wondered whether they were even relevant to the case.
Sir Terence Etherton, the Master of the Rolls and president of the Court of Appeal, stopped Gerry Facenna QC at the beginning of his legal submissions this morning to question their relevance.
“I think that this line of submissions faces a great difficulty,” the Master of the Rolls told the ICO’s barrister. “Effectively, as I understand these submissions on behalf of the Information Commissioner, it’s not addressing the question… You’re talking about compliance with the legal framework whereas what was in issue before, and the essence of ground 1, is not compliance with it but whether the framework is sufficient.”
Facenna replied: “My primary submission now is that the legal framework which the Divisional Court set out in the annex to its judgment does not meet the law requirement under Article 8 or that requirement as it’s set out in section 35 of the Act.”
In plain English, Facenna was saying that South Wales Police’s legal justification for deploying facial-recognition tech, as detailed yesterday, didn’t comply with the Human Rights Act-guaranteed right to privacy – nor the Data Protection Act 2018 section, which states: “The processing of personal data for any of the law enforcement purposes is lawful only if and to the extent that it is based on law.”
The ICO has made no secret [PDF] that it is against routine police use of facial-recognition tech. Yet Facenna’s arguments this morning seemed to be falling on stony ground.
Despite the barrister’s efforts, the Master of the Rolls remained “in some confusion” about the legal submissions as he told the barrister: “Your Item 2 is not part of this appeal. Item 1 may be but your Item 2 is not part of Ground 1,” referring to the detailed grounds of appeal with which Liberty hopes to overturn the earlier judgment. Facenna had raised a line of argument nobody else was looking at, the judge was saying.
“It’s not part of Ground 1,” conceded Facenna, “but Ground 1 does relate to the lawfulness of the deployments that have taken place in the past. As I understand it, the purpose of this litigation has been to ascertain whether the overall legal framework under which facial recognition is continuing to be deployed is sufficient, and whether its continuing deployment is therefore lawful.”
The Master of the Rolls pushed his glasses up his nose at this point. Taking the hint, Facenna cut short his detailed exposition and said: “Why don’t I crack on?”
“Yes, thank you,” said the president of the court.
In written submissions Facenna told the court that the Information Commissioner:
Although the police force filled out a data protection impact assessment for its camera deployment, the ICO said it was not good enough, pointing out the Divisional Court found that it was a previous document that had been “revised and retitled”. The data protection regulator added in legal submissions: “Although it made passing reference to the possibility that members of the public might be affected by the measures, it contained no assessment of that impact on the protection of their personal data, nor any assessment of the risks to their rights and freedoms.”
Building on this, Facenna urged the judges to rule that facial recognition should be better regulated, saying: “In a democratic society like ours, when you have a new sophisticated technology or tool, undoubtedly of use to the state, whether in the public interest, prevention of crime, tax evasion or whatever it is, is [this type of unregulated deployment] right?”
The barrister went on: “Its use involves an interference – maybe not very large but interference nonetheless – with the fundamental rights of tens of thousands, millions of citizens, depending how it is employed. Is it consistent with the law that that can be rolled out, even on a pilot basis, by individual police forces or public authorities using to a large extent their own discretion, without there being some kind of legal framework?”
A thoughtful Master of the Rolls asked later: “Would the ICO have power to issue a bespoke code of practice that might provide a specific framework on AFR [automated facial recognition]?”
Facenna said he thought it did not but would check, adding: “My point is it can’t be left to individual police forces or deployments to develop impact assessments or policy documents.”
This afternoon the court began hearing from Jason Beer QC, barrister for South Wales Police. Tomorrow it will hear from counsel for the Home Office and the Surveillance Camera Commissioner and The Register will be reporting their arguments.
The judges are: the Master of the Rolls, Sir Terence Etherton, who is president of the Civil Division of the Court of the Appeal; Lady Justice Sharp, president of the Queen’s Bench Division of the High Court; and Lord Justice Singh, president of the controversial Investigatory Powers Tribunal. ®
Annual “Black Hat USA Attendee Survey” indicates unprecedented concern over possible compromises of enterprise networks and US critical infrastructure.
Thanks to the COVID-19 crisis, security professionals are more concerned than ever about potential breaches, according to a survey released by Black Hat this week.
Respondents – 273 top security professionals – registered record levels of concern about near-term compromises of their own IT environments, as well as US critical infrastructure. Ninety-four percent said they believe the COVID-19 crisis increases the cyberthreat to enterprise systems and data, according to the “2020 Black Hat Attendee Survey.” Twenty-four percent said the increased threat is critical and imminent. Vulnerabilities in enterprise remote access systems that support home workers were the chief concern (57%). Increased phishing and social engineering threats also ranked highly (51%).
In addition, nearly nine in ten respondents (87%) said they believe a successful cyberattack on US critical infrastructure will occur in the next two years, up from 77% in 2019 and 69% in 2018. Only 16% believe government and private industry are prepared to respond to such an attack, down from 21% in 2019.
Seventy percent of cybersecurity pros said they believe they will have to respond to a major security breach in their own organizations in the coming year, up from 59% in 2018. Thirteen percent of 2020 respondents said such a breach is a certainty. When asked whether they have sufficient security staff to defend their enterprises against current cyberthreats, 59% said no. When asked whether they had enough budget to defend their data against current threats, a majority (56%) also said no.
While breach concerns have been high for the past several years, COVID-19 has heightened them.
“Greater dependence on cloud computing and employee-controlled/owned devices and networks will lessen the visibility and control IT and security functions rely upon to manage risk,” said one survey respondent. “This is a fundamental paradigm shift that will necessitate a change in the way we manage risk, allocate already scarce resources, and deploy controls.”
While resources are a major concern for security pros, many also raised concerns about current security technologies. In the survey, only 10 of 21 categories of security products were rated as “effective” by respondents. Multifactor authentication (84%), encryption (74%), and endpoint security tools (63%) received the highest “effectiveness” rating.
The security technologies rated least effective were passwords (25%), deception/honeypots (27%), and antivirus tools (31%). Cloud security providers (41%) and cloud security tools (46%) were rated as effective by fewer than half of respondents.
The Black Hat survey also revealed frustration about some technologies that have been repeatedly promoted as “game changers” in security technology. When asked about artificial intelligence (AI) and machine learning (ML), for example, only 23% of survey respondents said they believe AI and ML will be game-changing technologies. Eighty-three percent said they believe the impact of AI and ML on security will be limited. Thirty percent said they believe AI and ML are discussed too much or overhyped; only 33% ranked them as effective.
Attitudes toward blockchain technology were even more cynical: Only 12% of Black Hat survey respondents rated the technology as game-changing, while 24% said they believe the technology is overhyped and unlikely to be of much use to their organizations.
Many security experts also expressed serious questions about the ability of corporations and consumers to protect the data and identity of individual users. In the survey, nearly half of respondents (45%) said they believe the consumer data stored by most corporations is highly vulnerable to attack, and that consumers should assume that their personal data has been breached.
Eighty-seven percent of cybersecurity pros said they believe that no matter how careful consumers are with their personal information, it’s likely that their data and/or credentials are available to criminals online right now. Only 38% of respondents believe it will be possible for individuals to protect their online identity and privacy in the future.
Many of the survey responses also indicated that, thanks to the COVID-19 crisis, cybersecurity professionals are under more pressure than ever before. And this pressure is taking its toll – not only on enterprise networks, but on IT security pros themselves.
When asked about their current level of “burnout,” in which professionals lose effectiveness because they are overstressed and oversubscribed, a majority of security professionals (53%) said they consider themselves “burned out” by their work. This figure is up significantly from 40% in 2019, suggesting that burnout is now prevalent across the industry.
Learn from industry experts in a setting that is conducive to interaction and conversation about how to prepare for that “really bad day” in cybersecurity. Click for more information and to register for this On-Demand event.
Tim Wilson is Editor in Chief and co-founder of Dark Reading.com, UBM Tech’s online community for information security professionals. He is responsible for managing the site, assigning and editing content, and writing breaking news stories.
The product has got plenty of attention, partly because people really like the look of what they’ve seen, and partly because Apple and HEY’s developers Basecamp got into a very public ding-dong about whether their iOS app was breaking the App Store’s rules or not.
It looks like that kerfuffle is now getting resolved, and – frankly – it’s probably helped drive even more interest in HEY, and encouraged more people to sign up to the waiting list to give HEY a try.
But creating an email service from scratch isn’t simple, and designing one which attempts to take a different look at how we manage our email inbox is perhaps even more complicated.
One sign of that came to light yesterday on Twitter, when HEY user Kylie Stewart, a software engineer at Formidable Labs, tweeted a link to an email thread she had exchanged with her colleague Jon Reynolds.
Hmm… didn’t realize all Hey emails can be publicly shared without the consent of both parties 🤔 https://t.co/PlqxsinP7i
Yes, you read that correctly. Kylie posted a link that allowed anybody to see her email conversation with Jon. But Jon hadn’t approved it.
HEY gave Kylie, and any other user of the new email service, an easy way of sharing a public link to an email thread.
And yes, HEY did display a clear message that sharing the link would allow anyone in the world to access it. But what it didn’t do is seek the permission of anyone else on that email thread.
Furthermore, HEY’s public link didn’t just expose all messages in the thread up to that point; any subsequent messages on the thread were publicly exposed as well.
Email should be private by default. If personal emails are going to be shared then it should be with the explicit permission of all participants.
And yes, it’s easy to screenshot an email thread or forward an email message. No-one’s denying that it’s easy to break a confidence, but HEY’s “Get a public link” functionality sits uncomfortably alongside other features which promote its desire for greater inbox privacy.
Fortunately, HEY seems to agree. Within hours of Kylie’s message on Twitter, Basecamp’s founder David Heinemeier Hansson said that the “public link” feature was being withdrawn while his team went away and thought about things a bit more.
We’ve pulled the public links feature from HEY. After Kylie and others called out the problems around consent, I first dug in, thinking that’s how forwards work, but that’s a technical framing. And HEY is here to IMPROVE email, not repeat its past mistakes ✌️❤️ https://t.co/gfcmOo451g
Labour MP Harriet Harman (pic: UK Parliament, CC BY 3.0)
Harriet Harman MP, chair of Britain’s Commons Human Rights Committee, has written to UK health secretary Matt Hancock seeking clarity on privacy aspects of the government’s latest coronavirus contact-tracing app.
“It is still crucial that people in the UK should be able to feel reassured that their data protection, privacy, and non-discrimination rights are protected in any contact tracing system,” she wrote.
Brits were told in April by Hancock and Prime Minister Boris Johnson that the NHS’s IT wing, NHSX, was working with tech providers to produce a homegrown “world-beating app” that would send data to a central repository. The Reg, back in May, was vocal in explaining why this was unwise.
Last week, the department finally admitted that it was scrapping those initial plans because the software developed didn’t work as they’d hoped.
Harman’s line of questioning touches on many points pertaining to the acquisition and retention of data, and quizzes Hancock on essential operational details, including how the app will handle data it isn’t authorised to collect, as well as ensuring it doesn’t discriminate against certain demographics.
The cross-party Human Rights Committee has been unequivocal in its insistence that any contact-tracing framework should be operated in a way that protects the public’s right to privacy. While Harman hasn’t advocated for any particular technological approach to the problem, she has nonetheless described existing data privacy law as insufficient for the scope of a nationwide contact-tracing app.
“We don’t want the system to rely on the individual integrity of any minister, or any ministerial team, or any government. That’s not the way to protect rights. The way to have protections is through law,” Harman said in a May interview with The Reg.
The former deputy Labour leader has also called for the appointment of a contact-tracing tsar, who would be responsible for the governance of any eventual app and would field complaints from the public.
The letter coincides with the release of research from identity management firm Okta showing widespread public scepticism that data collected from the UK’s contact-tracing app would be used solely for the stated purposes.
The survey, which encompassed 2,218 UK consumers, showed an overwhelming majority – 84 per cent – believe their contact-tracing data will be used for reasons other than stopping the spread of COVID-19. ®
Check out Dark Reading’s updated, exclusive news and commentary surrounding the coronavirus pandemic.
06/24/2020 Rethinking Enterprise Access, Post-COVID-19 New approaches will allow businesses to reduce risk while meeting the needs of users, employees, and third parties. Here are three issues to consider when reimagining enterprise application access.
06/11/2020 What COVID-19 Teaches Us About Social Engineering Unless we do something proactively, social engineering’s impact is expected to keep getting worse as people’s reliance on technology increases and as more of us are forced to work from home.
06/08/2020 Safeguard Your Remote Workforce DDoS attacks on VPN servers can not only bring remote work to a standstill but also cut off admins from accessing their systems. Here are three ways to stay safer.
06/05/2020 Q&A: Eugene Spafford on the Risks of Internet Voting Allowing people to cast their ballots online to circumvent coronavirus-related health concerns introduces problems that we simply don’t know how to manage, says the Purdue University professor and security leader.
GDPR Enforcement Loosens Amid Pandemic The European Union has given some organizations more breathing room to remedy violations, yet no one should think regulators are planning to abandon the privacy legislation in the face of COVID-19.
5/26/2020 How to Pay a Ransom Even prior to the COVID-19 pandemic, ransomware attacks were on the rise and becoming more expensive. Now your organization has fallen victim and is going to pay. Here’s how to handle it.
5/21/2020 The Need for Compliance in a Post-COVID-19 World With the current upheaval, business leaders may lose focus and push off implementing security measures, managing risk, and keeping up with compliance requirements. That’s a big mistake.
3/19/2020 VPN Usage Surges as More Nations Shut Down Offices As social distancing becomes the norm, interest in virtual private networks has rocketed, with some providers already seeing a doubling in users and traffic since the beginning of the year.
New approaches will allow businesses to reduce risk while meeting the needs of users, employees, and third parties. Here are three issues to consider when reimagining enterprise application access.
As we look to reopen the economy, a lot of muscle memory will have to be relearned. The old way of doing things isn’t going to make it in the post-COVID-19 world. Too much is on the line, for both employees and customers. Everything is being reconsidered, from entry procedures to foot traffic and flow, to capacity, back-end and front-end processes, online customer service, social distancing, and cleaning.
COVID-19 is an unprecedented challenge for IT departments too. Facing lockdowns and quarantines, organizations are rethinking how they rushed thousands of new users, both insiders and third parties, onto enterprise networks to access critical private applications. In many cases, enterprises also are adding new applications to facilitate online transactions and drive-by service in an effort to deliver contactless customer service. During this crisis, speed and agility were what mattered, and now safety and security are driving the decision-making.
To connect a specific user with a specific set of apps, traditional approaches transport the user all the way to the doorstep of the app with a dedicated tunnel — a VPN. VPNs are permissive, difficult to configure, complicated to manage, and extremely fragile. One slight change in location, device, or operating system and the whole tunnel must be rebuilt from scratch. With a small number of users, devices, and private apps, this is somewhat manageable. But when COVID-19 hit and countless apps, users, devices, and locations needed instant access, it became absolute madness.
How can something so vital to business operations, accessing our own apps, still be so complicated?
Whenever the health crisis of COVID-19 subsides, IT organizations should take the time to rethink how they deliver enterprisewide application access. Crises tend to reveal underlying cracks in an organization. In the case of traditional application access solutions, the pandemic has revealed operational and security issues that are clearly not aligned with digital transformation, the user experience, or the future of work.
Ease of Use Matters
Operational complexity is one of the most persistent challenges IT teams face. The sprawl of multicloud network infrastructure and applications today has led to a tool for every problem. Traditional access solutions have proven to be difficult to deploy and operate. They require new licenses to scale and time-consuming network changes to onboard new users. Post-COVID, we won’t have time for that.
What About Zero Trust?
Solutions like VPNs provide too much access, taking the opposite of a zero-trust approach. Users need to be tightly managed, monitored, and controlled. They should not be free to roam once they have gained access. But it is clear that we are largely flying blind, and need better visibility and control not only over user access but over each individual request.
Remember Risk?
The security weaknesses of traditional approaches can no longer be ignored. Why are we bringing users on to the network at all? Why are we exposing users to insecure legacy apps?
Here are three considerations for enterprise IT teams to reopen and reimagine enterprise application access, transforming vulnerable apps and networks into zero-trust resources.
Leverage the cloud to isolate the apps completely from the network, making frontal attacks virtually impossible.
Enable continuously monitored, recorded, and controlled zero-trust user access. No more binary decisions at the beginning of the session and free range thereafter. Continuously evaluate user access according to threats and user behavior. No more implicit trust. Application access should be zero trust.
Centralize the access policy and management control of all applications. Ease of use matters.
COVID-19 exposed a lot of weaknesses in the way we enable application access for employees, partners, and third parties. This pain was felt across the board, by executives who wondered about productivity and by users who worried about rationed access. This was felt by IT teams that had to deal with network changes, hardware licensing, and a host of other headaches. Applications remain the lifeblood of business, and employee and third-party access is an issue that is not going away in the new work-from-anywhere world.
Not every change to the way we do business after this crisis will be welcome or particularly helpful. That said, we have learned many lessons during this period of significant business disruption. Access to applications, the foundational tools of business, was put to the test. New approaches will allow businesses to reduce risk while meeting the needs of users, employees, and third parties. That’s a change worth making.
Dor Knafo is co-founder and CEO of Axis Security. Axis Security was founded to solve the problem of secure application access for employees, partners, and other stakeholders.
The Maze ransomware gang has threatened to publish information stolen from an American firm that overhauls airliners and installs flight control software upgrades – because its victim refused to pay a demanded ransom.
In a “press release” published on its leaks website, Maze raged against victims who refused to play its game and cough up vast sums of money to decrypt their illicitly encrypted data.
Among those recent targets was VT San Antonio Aerospace, a maintenance, repair and overhaul (MRO) company in Texas. A subsidiary of ST Engineering, VT San Antonio was said to have lost 1.5TB of data to the Maze criminals. Its MRO customers include Air Canada, Fedex and UPS Airlines.
Earlier this week the Maze gang highlighted ST Engineering for not paying the ransom, a sensible action that busts the gang’s business model.
‘Work pressure’ sees Maze ransomware gang demand payoff from wrong company
In its post the gang complained that ST Engineering’s ransom negotiator “lied” before declining to take part in “further negotiation” with them, promising: “Soon it will be the time for weapon contacts, contracts for alteration of airplanes for first persons, contracts with dictatorship countries, contacts for cybersecurity systems for government structures. We just can’t understand what cybersecurity they are talking about as they have an Australia size security hole in their security perimeter.”
Ed Onwe, veep and general manager of VT San Antonio Aerospace, told The Register that local authorities had been informed of the ransomware attack as the firm figured out how to respond to the initial infection, adding: “As part of this process, we are conducting a rigorous review of the incident and our systems to ensure that the data we are entrusted with remains safe and secure. This includes deploying advanced tools to remediate the intrusion and to restore systems.
“We are committed to responding to this incident transparently and proactively, and already have begun notifying potentially affected customers. We will be working with our customers and industry peers to share insights and any lessons learned so that they can learn from our experience.”
The Maze gang has stepped up its public-facing activities in recent weeks, not without cost to itself. Last week it sent a ransom demand to the wrong company, having mixed up two firms’ names. It has also targeted Posh Spice’s perfumers as well as celebrity lawyers, about whom El Reg will be writing more soon. Its tactics include leaking selected files publicly to apply further pressure to victims, in the hope they pay the demanded ransom, as well as – it now seems – ranting away when they refuse to play the game.
Current British government advice is never to pay a ransomware demand: it not only encourages and enriches the crooks but there’s no guarantee that they’ll delete your data as they promise. ®
Graham Cluley Security News is sponsored this week by the folks at LastPass. Thanks to the great team there for their support!
LastPass has analyzed over 47,000 businesses to bring you insights into security behavior worldwide.
The takeaway is clear: Many businesses are making significant strides in some areas of password and access security – but there is still a lot of work to be done. Use of important security measures like multifactor authentication is up, but the continued reality of poor password hygiene still hampers many businesses’ ability to achieve high standards of security.
In the report, we not only highlight key trends by company size, sector, and location, we provide analysis and recommendations to help IT and business leaders take action where it’s needed most.
Download the free report now to see the current state of password security, access, and authentication around the world – and learn what you can do today to better secure your company.
If you’re interested in sponsoring my site for a week, and reaching an IT-savvy audience that cares about computer security, you can find more information here.
A privacy pickle as the pandemic lockdown lifts in England.
The UK Government has announced that it will be easing the Coronavirus lockdown on July 4th.
Amongst other changes, restaurants, pubs, and cafes in England will be allowed to reopen provided that they follow guidelines to help prevent the spread of the Coronavirus.
According to the UK Government’s own advice, these include “keeping a temporary record of your customers and visitors for 21 days.”
The opening up of the economy following the COVID-19 outbreak is being supported by NHS Test and Trace. You should assist this service by keeping a temporary record of your customers and visitors for 21 days, in a way that is manageable for your business, and assist NHS Test and Trace with requests for that data if needed. This could help contain clusters or outbreaks. Many businesses that take bookings already have systems for recording their customers and visitors – including restaurants, hotels, and hair salons. If you do not already do this, you should do so to help fight the virus…
In other words, in just ten days thousands of restaurants, bars and pubs are expected to start collecting the details of their customers and visitors.
Wouldn’t it be nice to think that this information will be collected carefully, stored securely, and ultimately properly destroyed, in a way which doesn’t breach GDPR regulations?
And yet, for now at least, the UK Government isn’t telling businesses how on earth they should do this.
And cafes and restaurants have probably got enough on their plate already, trying to reconfigure their premises and working methods to follow social distancing guidelines, without also having to get their head around data protection and privacy challenges.
Restaurants, pubs, and cafes are also not being told what information they should be collecting from their customers.
Let me say again, just ten days.
The UK Government’s advice acknowledges that firms might need some help:
We will work with industry and relevant bodies to design this system in line with data protection legislation, and set out details shortly.
I understand that there’s a global pandemic going on, and not everything is going to be perfect.
But it’s not as though it’s a surprise to anybody that at some point the lockdown would begin to be lifted – and that restaurants, pubs, and cafes would begin to reopen slowly. Was there no plan already being worked on?
Giving so little notice to the hospitality industry puts them in a privacy pickle. Even if the UK Government does serve up advice for how this data should be collected and secured before July 4th, I doubt that many companies will be doing it properly.
Of course, security and privacy are not going to be the only challenges…
I’ve been asked several times today about apps for use by pubs in tracking attendance. Yes, confidentiality is one problem but the other, much bigger issue, is authenticity – beautifully summed up by Matt via @lilianedwards pic.twitter.com/2C2Kg9R8jI
Exploit kits are not as widespread as they used to be. In the past, they relied on the use of already patched vulnerabilities. Newer and more secure web browsers with automatic updates simply do not allow known vulnerabilities to be exploited. It was very different back in the heyday of Adobe Flash: because Flash is just a plugin for a web browser, even a user with an up-to-date browser had a non-zero chance of running a version of Flash that was still vulnerable to 1-day exploits. Now that Adobe Flash is about to reach its end-of-life date at the end of this year, it is disabled by default in all web browsers and has pretty much been replaced with open standards such as HTML5, WebGL, and WebAssembly. The decline of exploit kits can be linked to the decline of Adobe Flash, but exploit kits have not disappeared completely. They have adapted and switched to target users of Internet Explorer without the latest security updates installed.
Microsoft Edge replaced Internet Explorer as the default web browser with the release of Windows 10 in 2015, but Internet Explorer is still installed for backward compatibility on machines running Windows 10 and has remained the default web browser for Windows 7/8/8.1. The switch to Microsoft Edge development also meant that Internet Explorer would no longer be actively developed and would only receive vulnerability patches without general security improvements. Still, somehow, Internet Explorer remains a relatively popular web browser. According to NetMarketShare, as of April 2020 Internet Explorer is used on 5.45% of desktop computers (for comparison, Firefox accounts for 7.25%, Safari 3.94%, and Edge 7.76%). The security of Internet Explorer lags five years behind that of its modern counterparts, and it still supports a number of legacy script engines. CVE-2018-8174 is a vulnerability in a legacy VBScript engine that was originally discovered in the wild as an exploited zero-day. The majority of exploit kits quickly adopted it as their primary exploit.
Since the discovery of CVE-2018-8174 a few more vulnerabilities for Internet Explorer have been discovered as in-the-wild zero-days: CVE-2018-8653, CVE-2019-1367, CVE-2019-1429, and CVE-2020-0674. All of them exploited another legacy component of Internet Explorer – a JScript engine. It felt like it was just a matter of time until exploit kits adopted these new exploits.
Exploit kits still play a role in today’s threat landscape and continue to evolve. For this blogpost I studied and analyzed the evolution of one of the most sophisticated exploit kits out there – Magnitude EK – for a whole year.
This blogpost in a nutshell:
Magnitude EK continues to deliver ransomware to Asia Pacific (APAC) countries via malvertising
Study of the exploit kit’s activity over a period of 12 months shows that Magnitude EK is actively maintained and undergoes continuous development
In February this year Magnitude EK switched to an exploit for the more recent vulnerability CVE-2019-1367 in Internet Explorer (originally discovered as an exploited zero-day in the wild)
Magnitude EK uses a previously unknown elevation of privilege exploit for CVE-2018-8641 developed by a prolific exploit writer
Magnitude EK is one of the longest-standing exploit kits. It was on offer in underground forums from 2013 and later became a private exploit kit. As well as a change of actors, the exploit kit has switched its focus to deliver ransomware to users from specific Asia Pacific (APAC) countries via malvertising.
Active attacks by Magnitude EK in 2019 according to Kaspersky Security Network (KSN) (download)
Active attacks by Magnitude EK in 2020 according to Kaspersky Security Network (KSN) (download)
Our statistics show that this campaign continues to target APAC countries to this day, and during the year in question Magnitude EK always used its own ransomware as a final payload.
Like the majority of exploit kits out there, in 2019 Magnitude EK used CVE-2018-8174. However, the attackers behind Magnitude EK were one of the first to adopt the much newer vulnerability CVE-2019-1367 and they have been using it as their primary exploit since February 11, 2020. As was the case with CVE-2018-8174, they didn’t develop their own exploit for CVE-2019-1367, instead reusing the original zero-day and modifying it with their own shellcode and obfuscation.
Exploit packed with JScript.Encode technique
Unpacked exploit. Shellcode, names and some strings are obfuscated
Their shellcodes piqued my interest. They use a huge number of different shellcode encoders, from the classical Metasploit shikata_ga_nai encoder and DotNetToJScript to a variety of custom encoders and stagers.
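To illustrate the general idea behind such encoders, here is a toy sketch of a rolling-xor scheme in the spirit of shikata_ga_nai. This is not Magnitude’s or Metasploit’s actual algorithm; it only shows the principle that each byte is xored with a key updated from feedback on the ciphertext, so the same payload encodes differently under different initial keys:

```python
# Toy rolling-xor encoder/decoder (illustrative only, not real malware code).
def encode(payload: bytes, key: int) -> bytes:
    out = bytearray()
    k = key & 0xFF
    for b in payload:
        c = b ^ k
        out.append(c)
        k = (k + c) & 0xFF  # key feedback: next key depends on ciphertext so far
    return bytes(out)

def decode(encoded: bytes, key: int) -> bytes:
    out = bytearray()
    k = key & 0xFF
    for c in encoded:
        out.append(c ^ k)
        k = (k + c) & 0xFF  # same feedback rule, driven by the ciphertext
    return bytes(out)
```

Because the key schedule on both sides is driven by the ciphertext, a decoder stub only needs the initial key to recover the payload.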
It was also impossible not to notice the changes happening to their main shellcode responsible for launching the ransomware payload. The attackers are fine-tuning their arsenal on a regular basis.
Magnitude EK has existed since at least 2013, but below you can see just the changes to payload/shellcode that occurred over the period of one year (June 2019 to June 2020). During this period we observed attacks happening almost every day.
Timeline of shellcode/payload changes
Shellcode downloads a payload that’s decrypted with a custom xor-based algorithm. All strings are assembled on stack, and to change the payload URL the shellcode needs to be recompiled. The payload is a PE module whose export function name is hardcoded to “GrfeFVGRe”. The payload is executed in an Internet Explorer process. It contains an elevation of privilege exploit with support for x86 and x64 versions of Windows and an encrypted ransomware payload. After elevation of privilege it injects the ransomware payload into other processes, spawns the wuapp.exe process and injects there as well. If process creation fails, it also runs the ransomware from the current process.
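The decryption stage can be sketched as follows. The actual algorithm and key used by Magnitude are not published here, so the repeating-key xor below is a hypothetical stand-in that only shows the shape of such a routine (xor being symmetric, the same function serves for encryption and decryption):

```python
def xor_decrypt(blob: bytes, key: bytes) -> bytes:
    # Repeating-key xor, a placeholder for Magnitude's custom algorithm:
    # each payload byte is xored with the key byte at the same offset
    # modulo the key length.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))
```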
July 20, 2019
Payload module export function name is auto-generated.
November 11, 2019
Shellcode tries to inject the payload to other processes. If API function Process32First fails, it spawns the process wuapp.exe from Windows directory and injects the payload there. The injection method is WriteProcessMemory + CreateRemoteThread.
The payload is ransomware without elevation of privilege. The payload module export function name is hardcoded again, but now to “lssrcdxhg”.
November 20, 2019
Looks like they messed up the folder with shellcodes; in some attacks they use a shellcode from June, and later the same day they start to use their November shellcode with the new hardcoded export name “by5eftgdbfgsq323”.
November 23, 2019
They start to use the elevation of privilege exploit again, but now they also check the integrity level of the process. If it’s a low integrity process, then they execute the payload with the exploit in the current process; if that’s not the case, then it’s injected into other processes. The process is no longer created from shellcode, but it’s still created from the payload. The payload module export name is hardcoded to “gv65eytervsawer2”.
January 17, 2020
It looks like the attackers had a short holiday at the beginning of the year. The shellcode remains the same, but the payload module export function name is hardcoded to “i4eg65tgq3f4”. The payload changed a bit. The name of the created process is now assembled on stack. The name of the process also changed – it no longer spawns a wuapp.exe, but instead launches the calculator calc.exe and injects the ransomware payload there.
January 27, 2020
The payload is no longer a PE module but plain shellcode. The payload consists of ransomware without elevation of privilege.
February 4, 2020
The payload is a PE module again, but once again the export name is auto-generated.
February 10, 2020
The shellcode comes with two URLs for different payloads. The shellcode checks the integrity level and depending on process integrity level, it executes the elevation of privilege payload or uses the ransomware payload straightaway. All strings and function imports in the exploit are now obfuscated. The payload does not spawn a new process, and only injects to others.
February 11, 2020
Magnitude EK starts using CVE-2019-1367 as its primary exploit. The attackers use the shellcode from January 27, 2020, but they have modified it to check for the name of a particular process. If the process exists, they don’t execute the payload from Internet Explorer. The process name is “ASDSvc” (AhnLab, Inc.).
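The anti-AV check amounts to a process-name lookup before payload execution. A hedged sketch of that gate (names and the case-insensitive comparison are my assumptions; the real shellcode walks the live process list via the Windows API rather than receiving it as a parameter):

```python
AV_PROCESS = "ASDSvc"  # AhnLab service process named in the shellcode

def should_execute_payload(running_processes) -> bool:
    """Return False when the AhnLab service process is present,
    mirroring the shellcode's refusal to run the payload from
    Internet Explorer in that case."""
    return not any(name.lower() == AV_PROCESS.lower()
                   for name in running_processes)
```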
February 17, 2020
The attackers switch to the shellcode from February 10, 2020, but the payload module export function name is hardcoded to “xs324qsawezzse”.
February 28, 2020
Shellcode encryption is removed. The payload module export function name is hardcoded to “sawd6vf3y5”.
March 1, 2020
Strings are no longer stored on the stack.
March 6, 2020
Back to the shellcode from February 17, 2020.
March 10, 2020
The attackers bring over changes implemented after February 17, 2020: payload encryption is removed and strings are no longer stored on the stack. The payload module export function name is still hardcoded to “xs324qsawezzse”.
March 16, 2020
Functionality is added to avoid injecting into a particular process (explorer.exe). The injection method also changes, to NtCreateSection + NtMapViewOfSection + RtlCreateUserThread.
April 2, 2020
The attackers add some functionality similar to that used in November 2019. They check the integrity level of the process: if it’s a low-integrity process, they execute the payload in the current process. Otherwise, they inject it into other processes (other than explorer.exe) and at the end create a new process and inject it there as well. The created process is C:\Program Files\Windows Media Player\wmlaunch.exe or C:\Program Files (x86)\Windows Media Player\wmlaunch.exe, depending on whether it’s a WOW64 process or not.
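The WOW64-dependent path selection can be sketched as follows. Which branch maps to which path is my reading of the write-up (a WOW64 process gets the 32-bit Program Files path), not something confirmed from the sample:

```python
def wmlaunch_path(is_wow64: bool) -> str:
    """Pick the Windows Media Player helper binary to spawn and
    inject into, per the April 2 behavior described above."""
    if is_wow64:
        return r"C:\Program Files (x86)\Windows Media Player\wmlaunch.exe"
    return r"C:\Program Files\Windows Media Player\wmlaunch.exe"
```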
April 4, 2020
The shellcode is updated to use a new injection technique: NtQueueApcThread. It also comes with a URL for a ransomware payload without elevation of privilege. The shellcode checks the integrity level: if it’s a low-integrity process, the shellcode calls ExitProcess(). The hardcoded export name “xs324qsawezzse” is also dropped.
April 7, 2020
Back to the shellcode from April 2, 2020.
May 5, 2020
Previously the attackers adjusted their injection method, but now they revert to injection via the WriteProcessMemory + CreateRemoteThread technique.
May 6, 2020
They continue to make changes to the code injection method. From now on they use NtCreateThreadEx.
Elevation of privilege exploit
The elevation of privilege exploit used by Magnitude EK is quite interesting. When I saw it for the first time, I wasn’t able to recognize this particular exploit. It exploited a vulnerability in the win32k kernel driver, and closer analysis revealed that this particular vulnerability was fixed in December 2018. According to Microsoft, only two win32k-related elevation of privilege vulnerabilities were fixed that month – CVE-2018-8639 and CVE-2018-8641. Microsoft previously shared more information with us about CVE-2018-8639, so we can say with some certainty that the encountered exploit uses vulnerability CVE-2018-8641.

The exploit has huge code similarities with another zero-day that we had found previously – CVE-2019-0859. Based on these similarities, we attribute this exploit to the prolific exploit writer known as “Volodya”, “Volodimir” or “BuggiCorp”. Volodya is famous for selling zero-day exploits to both APT groups and criminals. In the past, Volodya advertised his services at exploit(dot)in, the same underground forum where Magnitude EK was once advertised.

We don’t currently know whether the exploit for CVE-2018-8641 was initially used as a zero-day or was developed as a 1-day exploit through patch diffing. It’s also important to note that a public exploit for CVE-2018-8641 exists, but it’s incorrectly attributed to CVE-2018-8639 and it exploits the vulnerability in a different fashion, meaning there are two completely different exploits for the same vulnerability.
Magnitude EK uses its own ransomware as its final payload. The ransomware comes with a temporary encryption key and a list of domain names, and the attackers change them frequently. Files are encrypted using the Microsoft CryptoAPI, with the Microsoft Enhanced RSA and AES Cryptographic Provider (PROV_RSA_AES). The initialization vector (IV) is generated pseudo-randomly for each file, and a 0x100-byte blob containing the encrypted IV is appended to the end of the file. The ransomware doesn’t encrypt files located in common folders such as documents and settings, appdata, local settings, sample music, tor browser, etc. Before encryption, each file’s extension is checked against a hash table of allowed file extensions that contains 715 entries.

A ransom note is left in each folder with encrypted files, and at the end a notepad.exe process is created to display the ransom note. To hide the origin of the executed process, the ransomware uses one of two techniques: “wmic process call create” or “pcalua.exe –a … -c …”. After encryption, the ransomware also attempts to delete backups of the files with the “wmic shadowcopy delete” command, which is executed with a UAC bypass.
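The file-selection logic described above reduces to a folder exclusion check plus an extension allow-list lookup. A minimal sketch under stated assumptions: the real ransomware compares hashes of extensions against a 715-entry hash table, while this illustration uses a small plain-string subset of my own choosing.

```python
# Illustrative subset only; the real table hashes 715 extensions.
TARGETED_EXTENSIONS = {"doc", "docx", "xls", "jpg", "zip", "pdf"}
SKIPPED_FOLDERS = ("documents and settings", "appdata", "local settings",
                   "sample music", "tor browser")

def should_encrypt(path: str) -> bool:
    """Encrypt only files outside the excluded common folders whose
    extension appears in the allow-list."""
    lower = path.lower()
    if any(folder in lower for folder in SKIPPED_FOLDERS):
        return False
    ext = lower.rsplit(".", 1)[-1] if "." in lower else ""
    return ext in TARGETED_EXTENSIONS
```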
Example of Magnitude EK ransom note
The core of the ransomware did not undergo many changes throughout the year. If we compare old samples with more recent versions, there are only a few notable changes:
In older versions, immediately at launch the payload gets the default UI language of the operating system using the GetSystemDefaultUILanguage API function and compares the returned value against a set of hardcoded language IDs (e.g. zh-HK – Hong Kong S.A.R., zh-MO – Macao S.A.R., zh-CN – People’s Republic of China, zh-SG – Singapore, zh-TW – Taiwan, ko-KR – Korea, ms-BN – Brunei Darussalam, ms-MY – Malaysia). If the language ID doesn’t match, ExitProcess() is executed. In newer versions, the language ID check was removed.
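The language gate can be sketched as a set membership test. The hexadecimal values below are the standard Windows LANGIDs for the locales listed above (not extracted from the sample, so treat the exact constants as an assumption):

```python
# Standard Windows LANGIDs for the locales named in the check.
ALLOWED_LANGIDS = {
    0x0C04,  # zh-HK
    0x1404,  # zh-MO
    0x0804,  # zh-CN
    0x1004,  # zh-SG
    0x0404,  # zh-TW
    0x0412,  # ko-KR
    0x083E,  # ms-BN
    0x043E,  # ms-MY
}

def continues_past_language_check(ui_langid: int) -> bool:
    """Older payloads proceeded only on the listed locales; any other
    GetSystemDefaultUILanguage() result triggered ExitProcess()."""
    return ui_langid in ALLOWED_LANGIDS
```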
In older versions, the ransomware deletes file backups with the command “cmd.exe /c “%SystemRoot%\system32\wbem\wmic shadowcopy delete”” via a UAC bypass in eventvwr.exe. In newer versions, the command is obfuscated by inserting caret characters – “cmd.exe /c “%SystemRoot%\system32\wbem\wmic ^s^h^a^d^o^w^c^o^p^y^ ^d^e^l^e^t^e”” – and executed via a UAC bypass in CompMgmtLauncher.exe.
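The caret trick is simple to reproduce: cmd.exe treats “^” as an escape character and strips it before the character that follows, so the obfuscated string runs identically while evading naive substring matching on “shadowcopy delete”. A sketch (the helper name is mine):

```python
def caret_obfuscate(fragment: str) -> str:
    """Prefix every character with '^'; cmd.exe strips the escapes,
    leaving the command unchanged at execution time."""
    return "".join("^" + ch for ch in fragment)

# Assemble a command in the shape described above.
command = ('cmd.exe /c "%SystemRoot%\\system32\\wbem\\wmic '
           + caret_obfuscate("shadowcopy delete") + '"')
```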
The total volume of attacks performed by exploit kits has decreased, but they still exist, are still active, and still pose a threat, and therefore need to be taken seriously. Magnitude is not the only active exploit kit, and we see others also switching to newer exploits for Internet Explorer. We recommend installing security updates, migrating to a newer operating system (make sure you stay up to date with Windows 10 builds), and not using Internet Explorer as your web browser. Throughout the entire Magnitude EK campaign we have detected the use of Internet Explorer exploits with the verdict PDM:Exploit.Win32.Generic.
Puny humans still think they’re superior to AI when it comes to infosec – and a significant number still don’t venture into meatspace or get enough sunlight.
So reckons a survey carried out on behalf of Bugcrowd, which also made the edifying finding that 64 per cent of independent infosec researchers earn less than $25,000 a year – with half being aged 24 or younger.
Bugcrowd, which competes with HackerOne in the “crowdsourced security” bug bounty market, released its “In The Mind of a Hacker” report to shed some light on the sorts of people using its services. While it referred to them throughout as “hackers”, it meant both infosec researchers and pentesters who claim bug bounties through its platform – the people whose work helps thwart criminal hackers with bad intentions.
Financial reward was not the number-one motivation of the survey’s 3,500 respondents either: just under a third (30 per cent) said “learning” was their main motivation, followed by a quarter who cheerfully admitted they were doing it for the cash. A fifth said they enjoyed the problem-solving element of vuln hunting.
“Hackers will always be one step ahead of AI when it comes to cybersecurity because humans are not confined by the logical limitations of machine intelligence,” said Jasmin Landry, top-ranked Bugcrowd hacker. “For example, hackers can adapt four to five low-impact bugs to exploit a single high-impact attack vector that AI would likely miss without the creative flexibility of human decision-making.”
Eye-catchingly, or perhaps not, the company’s survey found that 87 per cent of humans agreed with Landry. No AI pentesting solutions were asked to respond.
Of the 3,500 people who answered the survey, just under half (48 per cent) reckoned healthcare orgs were most vulnerable to cybercrime during the COVID-19 pandemic. Although some ransomware gangs announced earlier this year they would stop targeting healthcare organisations, other notable names from the underworld declined to join those calls.
Bugcrowd also asked ethical infosec researchers how much sunshine they had access to during the year. A third answered “less than three hours a day”, helping reinforce the stereotype that begins with an angry young man hiding inside a hoodie, an image 71 per cent said depicted them. And yes, at present it’s almost always men: 94 per cent of respondents said they were male. ®
Folks running Bitdefender’s Total Security 2020 package should check they have the latest version installed following the disclosure of a remote code execution bug.
Wladimir Palant, cofounder of Adblock-Plus-maker Eyeo, tipped off Bitdefender about the flaw, CVE-2020-8102, after discovering what he called “seemingly small weaknesses” that could be exploited by a hostile website to take control of a computer running Bitdefender’s antivirus package. The bug, privately reported in April, was patched in May.
This week, Palant said the vulnerability stems from the way Bitdefender’s code inspected HTTPS-encrypted connections for signs of malicious activity to block. To do this, the software examined webpages and other data after they were fetched over HTTPS and decrypted.
It’s important to note that Bitdefender said the bug was within its Chromium-based “secure browser” SafePay, which is supposed to protect online payments from hackers and is part of its Total Security 2020 suite. Meanwhile, Palant said the vulnerability was within a component called Online Protection within that suite, meaning it could be exploited by any website opened in any browser on any computer running Bitdefender’s vulnerable antivirus package.
At the heart of the matter is the way Bitdefender’s code handles pages fetched via HTTPS.
“Occasionally their product will have to modify the server response, for example on search pages where they inject the script implementing the Safe Search functionality,” Palant explained. “Here they unavoidably have to encrypt the modified server response with their own certificate.”
This is where the software tripped up. When the antivirus suite wanted to flag up suspicious or broken HTTPS certificates, which are sometimes a sign shenanigans may be afoot, Bitdefender’s code generated a custom error page that appeared as though it came from the requested website. It would do this by modifying the server response.
It’s generally preferable that antivirus vendors stay away from encrypted connections as much as possible
There was nothing to stop a web server with a bad certificate from requesting the contents of Bitdefender’s custom error page, though, because as far as your browser is concerned, the error page came from the web server anyway.
Thus, a malicious web server could serve a page with a good certificate, and cause a new window to open with a page from the same domain and server albeit with an invalid certificate. Bitdefender’s code would jump in, and replace the second webpage with a custom error page. The first page with the good certificate could then use XMLHttpRequest to fetch the contents of the error page, which your browser would hand over.
That error page contained the Bitdefender installation’s session tokens, which could be used to send system commands to the security software suite on the user’s PC to execute. Palant’s proof-of-concept exploit worked against a Windows host, allowing a malicious page to install, say, spyware or ransomware on a victim’s computer.
“The URL in the browser’s address bar doesn’t change,” Palant explained. “So as far as the browser is concerned, this error page originated at the web server and there is no reason why other web pages from the same server shouldn’t be able to access it. Whatever security tokens are contained within it, websites can read them out.
“It’s generally preferable that antivirus vendors stay away from encrypted connections as much as possible. Messing with server responses tends to cause issues even when executed carefully, which is why I consider browser extensions the preferable way of implementing online protection. But even with their current approach, Bitdefender should really leave error handling to the browser.”
Bitdefender said the update to fix the hole should be automatically applied.
“Improper input validation vulnerability in the Safepay browser component of Bitdefender Total Security 2020 allows an external, specially crafted web page to run remote commands inside the Safepay Utility process,” the biz acknowledged. “This issue affects Bitdefender Total Security 2020 versions prior to 188.8.131.52.” ®