WWDC: Apple this year will boldly go where its peers have gone before by implementing support for encrypted DNS in iOS and macOS.

“Starting this year, Apple platforms natively support encrypted DNS,” said Tommy Pauly, internet technologies engineer, in a video presentation for Apple’s 2020 Worldwide Developer Conference, virtualized this year by necessity.

More specifically, macOS 11, iOS 14, and Mac Catalyst framework 14 (for Mac versions of iPad apps) will support DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). These Apple operating system updates are scheduled for release later this year, likely in September or October.

When you visit a website with a browser, or connect to a service via an app, the software will, typically, in the background send domain-name system (DNS) queries to DNS servers, such as ones provided by your ISP, to translate domain names, like theregister.com, into network IP addresses the programs can use. These queries are typically sent unencrypted, meaning eavesdroppers on the network path can snoop on the names of sites and services you’re using, and modify the query results to redirect you to malicious websites.

Encrypted DNS, as its name suggests, encrypts those queries to shield them from snoops and meddlers.
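To make the mechanics concrete, here is a minimal sketch, using only the Python standard library, of the two pieces a DoH client deals with: a wire-format DNS query per RFC 1035, and the RFC 8484 convention of base64url-encoding that query (unpadded, with a zero message ID) into a GET URL. The resolver hostname is purely illustrative.

```python
import base64
import struct

def build_dns_query(name: str, qtype: int = 1) -> bytes:
    """Build a minimal RFC 1035 query for `name` (qtype 1 = A record)."""
    # Header: ID 0 (RFC 8484 recommends 0 for cacheable GETs), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack(">HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # QNAME: each label length-prefixed, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    question = qname + struct.pack(">HH", qtype, 1)  # QTYPE, QCLASS=IN
    return header + question

def doh_get_url(resolver: str, query: bytes) -> str:
    """RFC 8484 GET form: the query is base64url-encoded, padding stripped."""
    dns = base64.urlsafe_b64encode(query).rstrip(b"=").decode("ascii")
    return f"https://{resolver}/dns-query?dns={dns}"

query = build_dns_query("theregister.com")
url = doh_get_url("cloudflare-dns.com", query)
```

Sent over plain UDP port 53, those query bytes (and the site name inside them) are readable by anyone on the path; wrapped in an HTTPS request to the resolver, they are not.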

DoT started taking shape in 2014. A proposal to establish DoH as a standard was drafted in 2017. And a year later, a research paper presented at a Usenix conference underscored the need for better security when it reported that about 8.5 per cent of DNS queries were intercepted by service providers.

Around that time, with standards in place, internet companies got serious about encrypting DNS queries, and arguments broke out over how DoH disempowers network administrators and lets people flout filters put in place to protect them from smut and illegal content.

Cloudflare began supporting DNS-over-TLS and DNS-over-HTTPS queries in 2018 with the launch of its 1.1.1.1 DNS service. Mozilla began rolling out DoH support last year.

Google began testing DoH last year and recently implemented it in Chrome 83. Microsoft talked about secure DNS last year and is now testing it for Windows. Even Comcast joined the party this week with Firefox.

Which brings us back to Apple.

Apple’s encrypted DNS support shown off at WWDC this year (screenshot from Apple’s WWDC 2020 video)

Apple’s updated code will allow those offering DNS services, and enterprise organizations administering corporate software via Mobile Device Management, to create apps for configuring DNS settings so they use an encrypted transport.

For example, a service provider like Cloudflare could create a network extension app using the NEDNSSettings class to switch a device to use DoT/DoH systemwide using Cloudflare’s resolvers. Organizations using MDM will be able to do so by applying a Profile to managed devices.

Developers will also be able to create individual apps that allow users to choose to make app-specific connections over encrypted DNS using the NWParameters.PrivacyContext object and standard networking APIs.

As demonstrated in the video, an iOS app implementing encrypted DNS can be activated via Settings -> General -> VPN & Network (a menu called simply “VPN” on current iOS 13 systems).

Better fashionably late than never. ®


Maze ransomware masterminds claim to have stolen source code from LG after hacking into the electronics giant.

Researchers at security outfit Cyble clocked screenshots of files, apparently swiped from LG’s internal network, posted on the malware gang’s website, where the miscreants boast about their victims.

“Soon you’ll be able to know how the LG company lost the source code of its products for one very big telecommunications company, working worldwide,” the crooks warned in an announcement on their site this week.

Maze’s operators not only use their ransomware to scramble file-systems on hacked corporate victims, they also exfiltrate sensitive information, and show a glimpse of that data on their site to prove they mean business. If a victim doesn’t pay up, the gang starts publicly leaking the purloined files. This is particularly effective when companies try to opt for the “nuke and pave” recovery approach of reformatting and restoring from backups, if they have them.

It seems the hackers were able to get into computers linked to LG’s lgepartner.com domain, based on the screenshots we’ve seen, and extract at least some of the data stored within. It appears the files related to internet or cellular-connected devices. LG makes a huge range of stuff, from smart fridges to telecommunications gear. Someone could use the leaked source code to hunt for security vulnerabilities in products to exploit.


It is not yet clear how the attackers were able to get into the corporate network nor which systems and source code may have been on it.

“As per now, the ransomware operators have only released three screenshots as proof of the data breach,” Cyble noted. “One of the screenshots seems to consist of LG Electronics official firmware or software update releases that assist their hardware products to work more efficiently. While the other screenshot seems to list out the source code of its products.”

LG, meanwhile, seems to be in the early stages of its response.

“At LG, we take cybersecurity issues very seriously,” a spokesperson told The Register. “We are looking into this alleged incident and will involve appropriate law enforcement agencies if there is evidence that a crime has been committed.” ®


Webcast: Leaked emails are the IT security mishap that just keeps on giving. From salacious tabloid headlines to lost elections to international security crises, a hacked or misfired email is the ultimate piece of first-hand evidence to light up a scandal, or ruin a reputation.

In addition, passwords, account details, or other juicy information useful for industrial espionage, identity theft, or big-ticket phishing scams, are also easily attainable from an exposed inbox. With email now the number one destination to hoodwink overworked and bleary-eyed users with a confidence trick, there are many, many reasons to keep email secure.

While most businesses say encryption is a priority in their digital transformation plans, it remains to be seen what that means in terms of understanding, let alone practice. Encryption isn’t the same as putting a password on something, and there are many different levels of obscuring or hashing information that can help or hinder an employee or client who is just trying to do their job.

The historical problem with a technique like encryption has been that, carried out in a heavy-handed fashion, it can be an all-or-nothing kind of deal.

Encryption could lead to emails that were difficult or impossible to work with outside the company network on different platforms, servers or setups, proving unreadable on a basic level, or with important files removed or modified, often beyond the knowledge or control of the innocent or bemused sender.

Echoworx specializes in email encryption for the modern era, and the company’s Jacob Ginsberg will be joining The Register’s Tim Phillips for a webcast on July 2 at 1000 EDT (1600 BST). There, they’ll discuss email encryption as part of a larger goal – namely defining the need to keep emails safe not by technology itself, but by the specific needs of those sending and receiving them.

The pair will explore how building email encryption into a digital transformation strategy can keep security ticking over while reducing friction for the people using the system. If use-cases are taken directly into consideration, deploying good encryption can be flexible, customizable, completely user friendly, and integrated seamlessly into popular everyday tools and platforms such as Office 365, on web desktop or in mobile environments.

Sign up for the webcast, brought to you by Echoworx and titled Your email encryption wake-up call, right here.


The Evil Corp. group hit at least 31 Symantec customers in its campaign to deploy WastedLocker malware, according to Symantec.

More than two-dozen US organizations — several of them Fortune 500 companies — were attacked in recent days by a known threat group looking to deploy a dangerous new strain of ransomware called WastedLocker.

Had the attacks succeeded, they could have resulted in millions of dollars in damages to the organizations and potentially had a major impact on supply chains in the US, Symantec said in a report Thursday.

According to the security vendor, at least 31 of its own customers were targeted, suggesting the actual scope of the attacks is much larger. Eleven of the companies are publicly listed, and eight are in the Fortune 500.

Among those affected were five organizations in the manufacturing sector, four IT companies, and three media and telecommunications firms. Organizations in multiple other sectors — including energy, transportation, financial services, and healthcare — were also affected. In each instance, the attackers managed to breach the networks of the targeted organizations and were preparing to deploy the ransomware when they were detected and stopped.

“The attackers behind this threat appear to be skilled and experienced, capable of penetrating some of the most well protected corporations, stealing credentials, and moving with ease across their networks,” Symantec warned. “As such, WastedLocker is a highly dangerous piece of ransomware.”

Symantec described the attacks as being carried out by Evil Corp., a Russian cybercrime group previously associated with the Dridex banking Trojan and the BitPaymer ransomware family. Last December, US authorities indicted two members associated with the group, Maksim Yakubets and Igor Turashev, in connection with their operation of the Dridex and Zeus banking Trojans.

The two — along with other conspirators — are alleged to have attempted theft of a staggering $220 million and caused $70 million in actual damages. The US Department of State’s Transnational Organized Crime (TOC) Rewards Program has established an unprecedented $5 million bounty for information on Yakubets. Both men remain at large.

Dangerous Campaign
The NCC Group, which also published a report on the WastedLocker campaign this week, said its investigations showed the ransomware has been in use since at least May and was likely in development for several months before that. Evil Corp. has typically targeted file servers, database services, virtual machines, and cloud environments in its ransomware campaigns. The group has also shown a tendency to disrupt or disable backup systems and related infrastructure where possible, to make recovery even harder for victims, NCC Group said.

Symantec said its investigation shows the attackers are using a JavaScript-based malware previously associated with Evil Corp. called SocGholish to gain an initial foothold on victim networks. SocGholish is being distributed in the form of a zipped file via at least 150 legitimate — but previously compromised — websites. 

The malware masquerades as a browser update and lays the groundwork for the computer to be profiled. The attackers have then been using PowerShell to download and execute a loader for Cobalt Strike Beacon, a penetration-testing tool that attackers often use in malicious campaigns.

The tool is being used to execute commands, inject malicious code into processes or impersonate them, download files, and carry out various other tasks that allow the attackers to escalate privileges and gain control of the infected system. As with many current malicious campaigns, the attackers behind WastedLocker have been leveraging legitimate processes and functions, including PowerShell scripts and the Windows Management Instrumentation command-line utility (wmic.exe), in their campaign, Symantec said.

To deploy the ransomware itself, the attackers have been using the Windows Sysinternals tool PsExec to launch a legitimate command line tool for managing Windows Defender (mpcmdrun.exe). This disables scanning of all downloaded files and attachments and disables real-time monitoring, Symantec said. “It is possible that the attackers use more than one technique to perform this task, since NCC reported suspected use of a tool called SecTool checker for this purpose,” Symantec said.
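For defenders, this living-off-the-land tooling suggests an obvious, if crude, detection angle: watch process command lines for legitimate utilities invoked in suspicious ways. The sketch below is illustrative only; the patterns are generic examples of that idea, not indicators published by Symantec or NCC Group.

```python
import re

# Illustrative heuristics only: legitimate admin tools invoked with
# arguments that are unusual outside of maintenance windows.
SUSPICIOUS_PATTERNS = [
    re.compile(r"mpcmdrun(\.exe)?\b.*-removedefinitions", re.I),   # Defender tampering
    re.compile(r"psexec(\.exe)?\b", re.I),                          # remote execution
    re.compile(r"wmic(\.exe)?\b.*process\s+call\s+create", re.I),   # WMI process launch
    re.compile(r"powershell(\.exe)?\b.*-enc(odedcommand)?\b", re.I),# encoded commands
]

def flag_command_lines(command_lines):
    """Return the subset of process command lines matching any pattern."""
    return [
        line for line in command_lines
        if any(p.search(line) for p in SUSPICIOUS_PATTERNS)
    ]

sample = [
    r"C:\Windows\System32\notepad.exe report.txt",
    r'"C:\Program Files\Windows Defender\MpCmdRun.exe" -RemoveDefinitions -All',
    r"psexec.exe \\HOST -s cmd.exe",
]
hits = flag_command_lines(sample)
```

In practice such matching would run over EDR or Sysmon process-creation telemetry, and each pattern would need tuning against legitimate administrative use to keep false positives manageable.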

The ransomware deploys after Windows Defender and all associated services have been stopped across the organization, the vendor noted. “A successful attack could cripple the victim’s network, leading to significant disruption to their operations and a costly clean-up operation,” Symantec warned.


Jai Vijayan is a seasoned technology reporter with over 20 years of experience in IT trade journalism. He was most recently a Senior Editor at Computerworld, where he covered information security and data privacy issues for the publication.


With the pandemic uprooting networks and upending careers, which security skills are hot — and which are not?

When things change, the most successful organizations and individuals are those who can learn from the new environment and adapt to the new requirements. In the age of COVID-19, what lessons have infosec professionals been able and willing to learn? Whether you have been busier than ever or recently joined the ranks of the unemployed, cybersecurity pros have been learning new skills to get by — training in the school of hard knocks or in more formal settings.

So we asked: What types of security training modules have become more or less popular? What types of skill sets are people interested in developing now, and why? What is most essential and what isn’t? Some of these shifts will last only for the duration of the pandemic, while a few may have consequences for years to come.

What lessons have you learned during the pandemic? Which skills have become more valuable? Let us know in the Comments section, below — we can all learn from others in the industry!

(Image: dizain via Adobe Stock)

Curtis Franklin Jr. is Senior Editor at Dark Reading. In this role he focuses on product and technology coverage for the publication.



South Wales Police and the UK Home Office “fundamentally disagree” that automated facial recognition (AFR) software is as intrusive as collecting fingerprints or DNA, a barrister for the force told the Court of Appeal yesterday.

Jason Beer QC, representing the South Wales Police (SWP), also blamed the Information Commissioner’s Office (ICO) for “dragging” the court into the topic of whether the police force’s use of the creepy cameras complied with the Data Protection Act.

The president of the Court of Appeal, Sir Terence Etherton, had asked: “I thought we were dealing with this overall issue about whether the law was sufficiently foreseeable, et cetera. Why are we going into this?”

Beer replied: “The ICO has dragged us into it,” to which Etherton said: “This deals with the breach [of the law], not with the framework [of law]. Why are we going into this in some detail?”

Nervously smiling on the video feed, Beer replied: “Sometimes the answers I give the court aren’t accepted,” which prompted a rare laugh from the judge. The barrister added: “The [facial-recognition tech] is no more intrusive than the use of CCTV on the streets.”

Etherton, who is the Master of the Rolls and one of England’s most senior civil judges, had been asking why Beer was going into detail about the legalities of South Wales Police’s (SWP) use of AFR technology. That tech, supplied to SWP and other forces by NEC under the name Neoface, is the subject of a legal challenge spearheaded by human rights pressure group Liberty, along with the ICO.

The thrust of Beer’s arguments was that SWP had fully complied with the law, including when it made a disputed series of privacy and equality impact assessments. Liberty argues the police paid lip service to these, while police and the Home Office say the law is perfectly clear and that SWP did nothing wrong.

More revealingly, Beer spoke in detail about the NEC Neoface deployments carried out by SWP. Of these, he said, in 2018 there were “83 incorrect alerts” leading to “21 incorrect interventions”. An intervention, he explained, meant a police constable on the street would approach a suspect and do everything from have a conversation with them to a full arrest and search. He did not break the “intervention” figure down further than that.

“Over the years,” Beer told the Court of Appeal, there was “an average of 22 per cent” of false matches made by NEC Neoface. A total of 220 false matches were reduced down to 48 wrongful “interventions” by police, according to the force’s own figures. He said: “It shows that for 80 per cent of cases over the three years, officers themselves making the match have been able to weed out the cases where the machine has incorrectly made a comparison and suggested that the person is worthy of speaking to.”
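The quoted figures hang together arithmetically: 48 wrongful interventions out of 220 false matches means officers weeded out roughly 78 per cent of the machine’s errors, which rounds to Beer’s “for 80 per cent of cases” claim. A quick check:

```python
# Figures as quoted in court from SWP's own numbers, covering three years.
false_matches = 220          # false matches made by NEC Neoface
wrongful_interventions = 48  # false matches that still led to an intervention

weeded_out = false_matches - wrongful_interventions   # caught by officers' eyes
weed_out_rate = weeded_out / false_matches            # ~0.78, Beer's "80 per cent"
false_match_share = wrongful_interventions / false_matches  # ~0.22 average quoted
```

Note that these rates are shares of the 220 machine false matches, not of all alerts or all people scanned; the court figures do not support computing an overall false-positive rate for the system.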

The Register is currently seeking access to witness statements deployed by SWP in support of these arguments. Beer told the court that Liberty had not correctly understood how NEC Neoface worked.

Ignore wider issues, it’s all OK – Home Office

Beer was joined in his general legal points by barrister Richard O’Brien, who argued the Home Office’s case. In written submissions O’Brien told the court it should concentrate solely on the use of Neoface’s AFR Locate function by SWP, that being the stated reason for the appeal case, and not the wider legal issues engaged by AFR tech as a whole, a point the judges appeared to be sympathetic to earlier in the case.

Referring to the human “failsafe” of police checking a match by eye as mentioned by Beer, the Home Office’s barrister wrote: “These features of AFR Locate demonstrate why the extreme potential uses of AFR to which the Appellant refers do not assist in the resolution of this case. The Respondent employs AFR Locate at specific locations in the South Wales Police’s area of responsibility. It could not lawfully or practically ‘track the movements of individuals as they move around the country’.”

The Home Office also argued that what SWP had published of its policies and processes governing AFR at the time of the disputed 2018 Cardiff deployments was “sufficient” to comply with the law as it stands.

Judgment in the case is expected later this year. ®



Botnet hacker jailed

The United States Department of Justice announced yesterday that a 22-year-old Washington-based hacker has been sentenced to 13 months in federal prison for his role in creating botnet malware, infecting a large number of systems with it, and then abusing those systems to carry out large-scale distributed denial-of-service (DDoS) attacks against various online services and targets.

According to court documents, Kenneth Currin Schuchman, a resident of Vancouver, and his criminal associates, Aaron Sterritt and Logan Shwydiuk, created multiple DDoS botnets from at least August 2017 and used them to enslave hundreds of thousands of home routers and other Internet-connected devices worldwide.

Dubbed Satori, Okiru, Masuta, and Tsunami (or Fbot), these botnets were all successors to the infamous IoT malware Mirai: they were built mainly from Mirai’s source code, with additional features bolted on to make them more sophisticated and effective against evolving targets.

Even after the original creators of the Mirai botnet were arrested and sentenced in 2018, many variants have continued to emerge on the Internet, a consequence of its source code leaking online in 2016.

According to a press release published by the Department of Justice, though the primary aim was to earn money by renting other cybercriminals access to their botnets, Schuchman and his hacking team also used the botnets to conduct DDoS attacks themselves.

In late 2017, Check Point researchers spotted the Mirai variant Satori exploiting a zero-day RCE vulnerability (CVE-2017-17215) in Huawei HG532 devices; it infected more than 200,000 IP addresses in just 12 hours.

The report linked the malware to a hacker using the online alias “Nexus Zeta,” who turned out to be Kenneth Currin Schuchman after the FBI’s investigation.

“Cybercriminals depend on anonymity, but remain visible in the eyes of justice,” said U.S. Attorney Schroder. “Today’s sentencing should serve as a reminder that together with our law enforcement and private sector partners, we have the ability and resolve to find and bring to justice those that prey on Alaskans and victims across the United States.”

“Cyber-attacks pose serious harm to Alaskans, especially those in our more remote communities. The increasing number of Internet-connected devices presents challenges to our network security and our daily lives,” said Special Agent in Charge Robert W. Britt of the FBI’s Anchorage Field Office.

“The FBI Anchorage Field Office will continue to work tirelessly alongside our partners to combat those criminals who use these devices to cause damage globally, as well as right here in our own neighborhoods.”

Schuchman’s associates Sterritt, a 20-year-old UK national also known as “Vamp” or “Viktor,” and Shwydiuk, a 31-year-old Canadian national also known as “Drake,” have also been charged for their roles in developing and operating these botnets to conduct DDoS attacks.

Schuchman was sentenced by Chief U.S. District Judge Timothy M. Burgess after pleading guilty to one count of fraud and related activity in connection with computers, in violation of the Computer Fraud and Abuse Act.

Schuchman has also been ordered to serve 18 months of community confinement and drug treatment following his release from prison, as well as a three-year term of supervised release.


Three ways that security teams can improve processes and collaboration, all while creating the common ground needed to sustain them.

We’ve seen COVID-19 infection curves flatten when people are conscientious about recommended pandemic hygiene, such as social distancing and wearing a mask. As we start to re-emerge from quarantine, it serves as a powerful example of what can be accomplished if security and IT teams approach cyber hygiene with the same rigor and sense of urgency. Effective cyber hygiene requires a level of cross-team collaboration, which is rarely the norm. Here are three ways security teams can make effective improvements while creating the common ground needed to sustain them.

Seek to Understand and Empathize
Corporate IT teams remain surprisingly siloed, which makes fundamental cyber hygiene functions such as vulnerability and patch management difficult to do well. Reducing vulnerability-related IT risk isn’t possible without contributions from both security and IT operations teams. Teamwork is hard, and even simple cyber hygiene workflows are easily complicated, often by the division of labor across different teams.

Security teams are usually the ones that find vulnerabilities, while other IT teams (mainly IT operations and DevOps teams) are the ones that fix the issues. When those fixes don’t work as planned, it can impede their ability to preserve the availability and reliability of infrastructure. The bottom line is that full-stack security isn’t trivial and requires compromise and collaboration across all stakeholders. 

As the pandemic has reminded us, the simple act of connecting with another human being can have a profound impact on the personal and professional resilience of all parties. Take the initiative to reach out to colleagues on other teams. Ask what a successful day looks like for them, about the tools they use and love, the processes that work well and don’t work at all. With normal processes and interpersonal communications upended, now’s the time for security teams to connect with their counterparts on other teams and (re)forge the connections that lead to productive partnerships.

Intelligent Vulnerability Remediation Goes Beyond Patch Management
According to Imperva, there were more than 20,000 new vulnerabilities reported in 2019. Unfortunately, handling the influx of all these new security threats remains a largely manual and error-prone process. And we all know patches can easily break more things than they fix. But patching is not the only remedy for security vulnerabilities. Configuration-based remediation options such as closing down firewall ports can be used to close security gaps quickly, even if only used as a temporary stopgap until a more robust solution can be implemented. 

It’s difficult for IT operations teams to source and compile the patches, workarounds, configuration changes, and compensating controls needed to remediate an avalanche of vulnerabilities every week. Remediation repositories store what might be called remediation intelligence, the vulnerability management equivalent of threat intel, and security teams can use them to lighten that load. Instead of tossing a list of unprioritized vulnerabilities over the cubicle wall for the IT team to deal with, remediation intelligence enables security teams to take a more active and collaborative role in closing tickets.

From using Ansible playbooks or Chef recipes to patch a Linux server to preventing exploits by updating a firewall configuration, remediation intelligence enables security teams to help IT operations teams determine the best fix for their environment. Take this time to figure out how your security and IT teams can use remediation intelligence to streamline infrastructure security. 
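To make the idea concrete, a remediation-intelligence store can be as simple as a lookup from vulnerability IDs to ranked fix options. The sketch below is purely illustrative; the CVE ID, package name, and port are hypothetical placeholders, not real advisories:

```python
# A hypothetical remediation-intelligence store: each vulnerability maps to
# candidate fixes, from durable patches to configuration-based stopgaps.
REMEDIATIONS = {
    "CVE-2020-0001": [
        {"type": "patch", "action": "upgrade package foo to 2.4.1", "durable": True},
        {"type": "config", "action": "block TCP port 8443 at the firewall", "durable": False},
    ],
}

def best_fix(cve_id, patch_window_open):
    """Prefer the durable patch when a maintenance window is available;
    otherwise fall back to a quick configuration stopgap."""
    options = REMEDIATIONS.get(cve_id, [])
    for fix in options:
        if fix["durable"] == patch_window_open:
            return fix["action"]
    return options[0]["action"] if options else "no known remediation"
```

The point is not the data structure but the workflow: security hands IT operations a concrete, environment-appropriate fix rather than a raw vulnerability ID.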

Re-Evaluate Remediation KPIs to Ensure Relevancy 
Security operations teams often rely on industry-standard benchmarks to prioritize the execution of cyber hygiene workflows, but many of those metrics are outdated or have become dangerously misleading. For example, prioritizing remediation based solely on a vulnerability’s Common Vulnerability Scoring System (CVSS) score is still a common but highly flawed practice. CVSS scores are useful for benchmarking the severity of a vulnerability, but they say nothing about how critical the threat is to the assets in your particular environment.

So, what metrics should be used to guide and prioritize the efficient work of vulnerability remediation? Here are a few of my favorites. While these are metrics used by security teams, strong cross-team support leads to greater control over these benchmarks.

  • Coverage: Does the security team have sufficient vulnerability scanning in place for all business-critical systems and applications? Are there any blind spots? Coverage clarity across the full scope of risks, known and unknown, is necessary for comprehensive security.
  • Vulnerability dwell time: The time between a vulnerability’s disclosure and the publication of an exploit for it in the wild has contracted substantially over the last couple of years, from weeks to days. The longer the vulnerability dwell time, that is, the time the vulnerability persists in the environment, the greater the chance it will be exploited.
     
  • SLA goals versus actual remediation results: By evaluating remediation results against goals outlined in service-level agreements with the business, you can gauge how well your team has met its stated operational and risk management goals, why or why not, and how to improve.
     
  • A commonsense risk model: Just because an Oracle vulnerability has a CVSS score of 10 doesn’t mean it matters to your organization if you don’t run any Oracle software. But if significant components of your infrastructure run on Oracle, you’d want these vulnerabilities to be flashing red on the remediation list.
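A commonsense risk model of this kind can be sketched in a few lines: weight the raw CVSS score by whether the vulnerable product is actually deployed, and by how long the vulnerability has dwelled unfixed. The weights and formula below are invented for illustration, not taken from any standard, and the date is pinned so the example is reproducible:

```python
from datetime import date

def priority(cvss, asset_count, disclosed, today=date(2020, 6, 26)):
    """Toy risk score: CVSS matters only if the vulnerable product is
    actually deployed, and the score grows with dwell time (days since
    disclosure). Weights here are illustrative, not prescriptive."""
    if asset_count == 0:
        return 0.0  # a CVSS 10 in software you don't run is noise
    dwell_days = (today - disclosed).days
    return cvss * min(asset_count, 10) * (1 + dwell_days / 30)

# The Oracle example from the text: same CVSS 10, very different urgency.
none_deployed = priority(10.0, 0, date(2020, 5, 1))
widely_deployed = priority(10.0, 500, date(2020, 5, 1))
```

Even a crude model like this beats sorting the ticket queue by CVSS alone, because it pushes vulnerabilities in absent software to zero and escalates long-dwelling ones.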

As Rahm Emanuel (via Winston Churchill) famously said, “Never let a good crisis go to waste.” Change at scale is never easy, but the pandemic has created a once-in-a-career opportunity to make material improvements to cyber hygiene practices.

With over a decade of cybersecurity experience under his belt, Yaniv has spent years working with some of the largest companies in the world. With his “solutions, not problems” mindset, Yaniv co-founded Vulcan Cyber in order to do just that – enable security teams to … View Full Bio


While the security operations center is enjoying a higher profile these days, just one-fourth of security operations centers actually resolve incidents quickly enough.

Security operations centers (SOCs) have gained more prestige, profile, and, in some cases, budget in the organization. But even well-resourced SOCs suffer many of the same woes that struggling SOCs do: an incomplete view of all devices connecting to their networks and an overload of redundant and underutilized security tools spitting out more data and alerts than they can handle or grok.

More alarmingly, many still struggle to quickly resolve security incidents. In some 40% of SOCs, the mean time to resolution (MTTR) is months to years, according to a study conducted by the Ponemon Institute, commissioned by Devo Technology, and published this week. Around 37% resolve incidents within weeks and 24% within hours or days.

With the exception of the most mature SOCs, that slow resolution rate is typical, notes Julian Waits, general manager of cybersecurity at Devo. “Their program is still immature, they don’t have playbooks in place, and so much is still happening manually,” he says.

It takes about a week for Texas A&M University’s SOC to resolve an incident, according to Dominic Dertatevasion, associate director of IT at Texas A&M’s SOC. That MTTR is based on what the A&M SOC’s tools can actually see, he notes. “Within a week, they should be able to identify where the host or user is and clean it up or educate the host to reset passwords” and other controls, Dertatevasion says.

Texas A&M’s SOC not only watches the network on its massive flagship campus in College Station, Texas, but also provides SOC services for 11 universities in the A&M system as well as a half-dozen state government agencies on its network. “We’re only seeing what we can see and what you can give us access to. I’m 100% sure we’re missing stuff,” Dertatevasion says of the other campuses his team services.

Some 40% of security pros say their SOCs have too many tools. Devo’s Waits says it’s not surprising that SOCs end up with too many tools that often overlap or produce redundant data. “A new technology gets brought in and many of the older technologies [overlap] … another thing gets added on the stack, and there’s not thought on how to optimize them,” he says.

The most common overlapping tools are endpoint detection and response products and network detection tools, he says. And the consolidation among security vendors also inadvertently results in redundancy in the SOC. He points to the example of next-generation firewall vendor Palo Alto Networks, which now also has endpoint technology.

Different tools can be generating alerts on the same IP address but are run by different SOC analysts, he notes.

Dertatevasion says Texas A&M adds a new tool every one to two years in a slow and deliberate strategy. The goal is to allow analysts to gain expertise in the tools and ensure they fit well into the SOC ecosystem and operations before adding anything new.

“We might be different than private industry in that we have SOC-managed tools we run and our constituents have tools they purchase and might bring along. We’ve always had to adapt to the tools other people bring along and try not to overpromise or overdeliver on that,” he says. “We don’t want to be in a jack-of-all-trades-but-master-of-none type of situation.”

Given that the security landscape is constantly evolving, he says, the university can’t afford to keep any insufficient tools, anyway.

Automation has been the battle cry for streamlining and eliminating the high volume of alerts tools generate in the SOC. More than 70% say they want more automation in the SOC, especially to help relieve the manual labor of alert management, incident evidence-gathering, and malware defense, the study found.

But, yes, there is such a thing as too much automation, where the SOC analyst ends up being relegated to more of a help desk role that doesn’t tap his or her skills. As Sean Curran, a partner with West Monroe Partners, describes it, too much automation can turn SOC analysts into robots that can’t properly pivot when an incident deviates from the script. He points to a case where SOC analysts disabled a legitimate alert because it didn’t fit the runbook.

“They didn’t know what to do with it,” so they assumed it was a false positive and disabled it, he recalled during a recent Dark Reading panel discussion on SOCs and incident response.

“They’re just shuffling tickets” in that scenario, Dertatevasion says. “I aim for my organization to automate the boring stuff. If we’re seeing something three times a day, and every time we see this set of IOCs we know it’s benign and we’re not going to escalate it, then we automate it.”  
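Dertatevasion’s “automate the boring stuff” rule can be sketched in a few lines: auto-close only alerts whose indicator set the team has repeatedly confirmed as benign, and escalate everything else. The IOC values and the three-sightings threshold below are hypothetical placeholders:

```python
from collections import Counter

# Hypothetical allowlist of indicator sets the team has repeatedly
# confirmed as benign (e.g. a scanner that trips the same rule daily).
BENIGN_IOCS = {("203.0.113.7", "port-scan")}
seen = Counter()

def triage(alert):
    """Auto-close only alerts whose IOC set is known-benign AND has been
    seen enough times to trust the pattern; everything else goes to a
    human analyst, so a novel legitimate alert is never silently dropped."""
    key = (alert["src_ip"], alert["rule"])
    seen[key] += 1
    if key in BENIGN_IOCS and seen[key] >= 3:
        return "auto-closed"
    return "escalate to analyst"
```

The escalate-by-default design is the safeguard against the failure mode Curran describes: anything that doesn’t fit the runbook reaches a person instead of being disabled.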

COVID-19
Meanwhile, there’s been a well-documented high burnout rate among SOC analysts, leading to turnover. The Ponemon-Devo report – based on a survey of IT and IT security professionals in organizations with SOCs, taken between March 11 and April 5, at the start of the pandemic – found that 78% of SOC analysts describe working in the SOC as “very painful,” an increase from 70% last year. Around 60% are looking to jump ship and change jobs.

A recent study from Exabeam found that 64% of SOC analysts on the front line were leaving their jobs because they saw no career path for them there.

“We did this research before we really knew the reality of COVID-19,” Devo’s Waits notes. The stress levels likely have escalated, with the teams sent to work from home who weren’t accustomed to it, and the underfunded SOCs are even more challenged without the face-to-face work support, he notes.

It’s often “more chaotic” working from home, especially with family and other personal distractions, he notes. “Now SOC analysts may have something slip through the cracks” more easily, he says.

Kelly Jackson Higgins is the Executive Editor of Dark Reading. She is an award-winning veteran technology and business journalist with more than two decades of experience in reporting and editing for various publications, including Network Computing, Secure Enterprise … View Full Bio


A 22-year-old man from Vancouver, Washington, has been sentenced to a US federal prison for his role in the development of the Satori botnet, which launched distributed denial-of-service (DDoS) attacks from hijacked IoT devices.

The Satori botnet, based upon similar code to the notorious Mirai botnet which knocked major websites offline in 2016, is thought to have compromised hundreds of thousands of IoT devices, exploiting vulnerabilities to even infect routers wrongly assumed to have been protected with strong passwords.

Kenneth Currin Schuchman, who used the online handle “Nexus-Zeta”, was sentenced yesterday to 13 months in prison, having previously pleaded guilty to charges under the Computer Fraud & Abuse Act. In addition, Schuchman has been ordered to serve 18 months of community confinement to help him address mental health and substance abuse issues, and a three-year term of supervised release.

After being initially charged in August 2018, Schuchman was released to pretrial supervision, but broke the terms of his release by making the astonishing decision to continue to create and operate a DDoS botnet, and to communicate with his co-conspirators.

In one Discord chat with a co-conspirator using the handle “Viktor”, Schuchman is reminded that he is not supposed to be using the internet without the supervision of his father.

The conversation is accompanied by a screen capture from Schuchman’s conditions of release.

Schuchman, who has already spent 13 months confined in a jail in Alaska, is not the only person of interest to law enforcement as it investigates the Satori botnet.

As Brian Krebs reports, minutes after Schuchman’s sentencing the US Department of Justice charged men from Canada and Northern Ireland for their alleged involvement in the Satori and related IoT botnets.

Aaron Sterritt, 20, from Larne, Northern Ireland and 31-year-old Logan Shwydiuk of Saskatoon, Canada are said by prosecutors to have built, maintained, and sold access to the botnets under their control.

Sterritt is particularly of interest. According to the Department of Justice he was a criminal associate of Schuchman, and used the aliases “Viktor” or “Vamp.” As a teenager he was involved in the high-profile hack of TalkTalk, sentenced to 50 hours community service, and – perhaps most painfully of all – ordered to write a letter apologising to the telecoms firm.

It’s no excuse for criminal behaviour, of course, but the Satori botnet would not have been capable of launching crippling DDoS attacks if it hadn’t successfully recruited vulnerable routers and other IoT devices to form part of its army.

Businesses and home users can play their part by ensuring that IoT devices are not using default or easy-to-crack passwords, are running the latest security patches, and are properly configured and defended to reduce the threat surface.

But there is also a need for manufacturers to build more secure devices in the first place, and to ensure that when a new vulnerability is discovered, a fix can be easily rolled out to protect customers and the rest of the internet.

Open-source security specialist Snyk has released a new survey combining data on vulnerabilities in available packages with responses from developers and DevOps teams about how they handle the challenge this poses.

Snyk's Open source security report, 2020 (via Snyk)

The problem is easy to express. Software development today typically makes use of packages from online repositories. A developer sits down to create a web application and starts by installing libraries from npm.js (more than 1 million JavaScript packages to choose from), Maven (for Java), NuGet (for .NET) or PyPI (for Python).

Each package may and probably will pull down other packages on which it depends. The result is a big chunk of code that gets deployed with the application, but was not written by the developer and may include security vulnerabilities.
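The fan-out from one direct install to a much larger deployed footprint is easy to see with a toy dependency graph. The package names below are made up; real manifests from npm or PyPI behave the same way, only at far greater scale:

```python
# Hypothetical dependency metadata for a small app: one direct install
# fans out into several indirect dependencies the developer never chose.
DEPENDS_ON = {
    "webapp":     ["framework"],
    "framework":  ["templating", "httpcore"],
    "templating": ["escaper"],
    "httpcore":   [],
    "escaper":    [],
}

def transitive_deps(pkg):
    """Walk the dependency graph breadth-first and return every package
    that ends up deployed with `pkg`, directly chosen or not."""
    found, queue = set(), list(DEPENDS_ON.get(pkg, []))
    while queue:
        dep = queue.pop(0)
        if dep not in found:
            found.add(dep)
            queue.extend(DEPENDS_ON.get(dep, []))
    return found
```

Here one declared dependency drags in three indirect ones, which is exactly where Snyk says most vulnerabilities hide.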

Indirect issues

The Snyk survey is based on responses from 500 developers, security pros and operations bods, together with data from the company’s own vulnerability database and “correlated data from the hundreds of thousands of projects currently monitored” by Snyk, and data published by sources such as GitHub, GitLab and Bitbucket, each of which manages a large number of code repositories.

The majority of problems, the report said, come from indirect dependencies, which are least visible to developers. In the case of npm.js, around 80 per cent of the vulnerabilities are in indirect dependencies. The good news is that new vulnerabilities are down by almost 20 per cent across “the most popular ecosystems”, but there is still plenty to worry about.

What kind of vulnerabilities? Top of the list is cross-site scripting, where JavaScript is injected into a site via techniques such as user input that is not properly sanitised.

The second top vulnerability last year was malicious packages, where a trusted package is contaminated with one crafted for an attack. Looking more closely, the researchers found that the top vulnerability “currently impacting scanned projects” is prototype pollution, a JavaScript attack where an object’s behaviour is modified by altering its base class. When listing top vulnerabilities by project impact, prototype pollution was followed by deserialisation of untrusted data, denial of service, denial of service by Regular Expression (which oddly gets a category all to itself), and arbitrary code execution.
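The regular-expression category, usually called ReDoS, deserves a quick illustration: nested quantifiers over the same characters force the engine to try every way of splitting a run between them, so a near-miss input becomes exponentially expensive to reject. A minimal sketch, using a classic textbook pattern rather than any vulnerability from the report:

```python
import re

# The classic catastrophic-backtracking shape: nested quantifiers over the
# same character. On a non-matching input the engine retries every way of
# splitting the run of 'a's between the inner and outer '+'.
EVIL = re.compile(r"^(a+)+$")

def check(payload):
    return EVIL.match(payload) is not None

# A matching input is cheap; the near-miss below already costs roughly
# 2**17 backtracking attempts, and each extra 'a' doubles the work.
# That exponential blow-up is the denial-of-service mechanism.
assert check("a" * 18) is True
assert check("a" * 18 + "!") is False
```

An equivalent pattern without the nested quantifier, such as `^a+$`, rejects the same input in linear time, which is why these bugs count as fixable vulnerabilities rather than inherent regex costs.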

SQL injection, historically a big problem, seems to be in decline. That said, the researchers reported increasing numbers of SQL injection vulnerabilities in PHP packages.

Snyk also looked at issues around containers and Kubernetes, the flavour of the moment for application deployment. Docker images often contain high-severity vulnerabilities, they reported, coming from the version of Linux on which they are based, but slimmer base images help.

When deploying to Kubernetes clusters, do IT teams check the security of Helm charts or other manifests? Most rely on manual review while 31 per cent said: “I don’t know.”

Snyk observed: “There are numerous key configuration decisions that can be made when defining a Kubernetes cluster that have a direct impact on the security of that cluster.”

We asked Snyk to put the report in context. How serious is the issue of developers inadvertently adding vulnerabilities to application via open-source dependencies?

“The problem is very severe,” Snyk president and co-founder Guy Podjarny told us. “A dev using one open-source package typically unwittingly pulls in dozens of others. Most known vulnerabilities are in those packages, and with a typical app using hundreds of libraries, the odds of a severe vulnerability in some of them are high.

“To top that, OSS vulnerabilities are easy pickings for attackers, as the vulnerable code is in plain sight, and a single vulnerability has many victims. This allows even the least sophisticated attackers to exploit such issues, resulting in botnets focusing on these exploits.

“Combined, the likelihood of having an OSS vulnerability exploited in your system is considerable, and far outweighs the risk of flaws in your own code being found and exploited.”

What, then, is the call to action for DevOps teams? “The first step is clearly visibility – know which components you’re using, contrast them against a vulnerability database like Snyk’s,” Podjarny said.

“The second step, however, isn’t to triage across the org, but rather to start fixing issues. Teams who overspend energy triaging at the expense of fixing end up incurring greater risk. So prioritisation is important – but nothing is more important than actual fixing.”
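Podjarny’s first step, contrasting a bill of materials against a vulnerability database, can be sketched without committing to any vendor’s API. The request shape below is loosely modeled on the public OSV.dev batch-query endpoint; it only builds the payload and makes no network call, and any real integration (Snyk’s API included) will differ in detail:

```python
def osv_batch_query(bill_of_materials, ecosystem="PyPI"):
    """Turn a {package: version} bill of materials into the body of a
    batched vulnerability lookup (payload construction only, no I/O).
    Sorting makes the output deterministic and diff-friendly."""
    return {
        "queries": [
            {"package": {"name": name, "ecosystem": ecosystem},
             "version": version}
            for name, version in sorted(bill_of_materials.items())
        ]
    }
```

The essential input is the same regardless of vendor: an accurate list of what you actually run, direct and indirect dependencies alike.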

It seems a big ask for developers sitting down to work on a project. If sucking in all these packages and dependencies is wrong, what should they be doing? “It’s important for developers to understand their bill of materials,” said Simon Maple, Snyk developer relations veep.

Having established what vulnerabilities exist, “there are a number of ways to mitigate the risks. Does an upgrade path exist? Or we need to code defensively so that if a vulnerability exists, which doesn’t have a fix, we do enough input validation or whatever is needed to avoid that code path from being attacked. And when we choose an open-source package, how many maintainers are there? If a vulnerability is found, how quick are they to provide patches?”

Is it humanly possible to know all your dependencies in such detail? Automation, it seems, is unavoidable. “There are many tools which will give you information about the health of those libraries,” said Maple – and note that this is exactly Snyk’s business, so its recommendations are not without self-interest.

Should developers minimise the number of packages they use? “We want to practice caution but we want to keep the agile part of development,” said Maple. “We need to use open source but use it responsibly with a measure of how it potentially affects us.”

What about applications that live behind firewalls that require multi-factor authentication to log in – can developers argue that these are less likely to be attacked? “Hackers love those developers,” said Maple, “because as soon as they do get past that firewall, it’s party time.” ®
