Cellebrite makes software to automate physically extracting and indexing data from mobile devices. They exist within the grey – where enterprise branding joins together with the larcenous to be called “digital intelligence.” Their customer list has included authoritarian regimes in Belarus, Russia, Venezuela, and China; death squads in Bangladesh; military juntas in Myanmar; and those seeking to abuse and oppress in Turkey, UAE, and elsewhere. A few months ago, they announced that they added Signal support to their software.

Their products have often been linked to the persecution of imprisoned journalists and activists around the world, but less has been written about what their software actually does or how it works. Let’s take a closer look. In particular, their software is often associated with bypassing security, so let’s take some time to examine the security of their own software.


The background

First off, anything involving Cellebrite starts with someone else physically holding your device in their hands. Cellebrite does not do any kind of data interception or remote surveillance. They produce two primary pieces of software (both for Windows): UFED and Physical Analyzer.

UFED creates a backup of your device onto the Windows machine running UFED (it is essentially a frontend to adb backup on Android and iTunes backup on iPhone, with some additional parsing). Once a backup has been created, Physical Analyzer then parses the files from the backup in order to display the data in browsable form.
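
On the Android side, the extraction UFED automates can be approximated with stock tooling. A minimal sketch in Python (assuming adb is installed, the device is unlocked with USB debugging enabled, and the backup prompt is confirmed on screen; recent Android versions restrict adb backup considerably):

```python
import subprocess

def backup_device(out_file: str = "device.ab") -> None:
    """Pull an adb-backup archive from an attached, unlocked device."""
    # -all: include all apps that permit backup; -apk: include the APKs too.
    # The device prompts the user to confirm before anything is written.
    subprocess.run(
        ["adb", "backup", "-all", "-apk", "-f", out_file],
        check=True,
    )

if __name__ == "__main__":
    backup_device()
```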

When Cellebrite announced that they added Signal support to their software, all it really meant was that they had added support to Physical Analyzer for the file formats used by Signal. This enables Physical Analyzer to display the Signal data that was extracted from an unlocked device in the Cellebrite user’s physical possession.

One way to think about Cellebrite’s products is that if someone is physically holding your unlocked device in their hands, they could open whatever apps they would like and take screenshots of everything in them to save and go over later. Cellebrite essentially automates that process for someone holding your device in their hands.

The rite place at the Celleb…rite time

By a truly unbelievable coincidence, I was recently out for a walk when I saw a small package fall off a truck ahead of me. As I got closer, the dull enterprise typeface slowly came into focus: Cellebrite. Inside, we found the latest versions of the Cellebrite software, a hardware dongle designed to prevent piracy (tells you something about their customers I guess!), and a bizarrely large number of cable adapters.

Cellebrite case on side of road.

The software

Anyone familiar with software security will immediately recognize that the primary task of Cellebrite’s software is to parse “untrusted” data from a wide variety of formats as used by many different apps. That is to say, the data Cellebrite’s software needs to extract and display is ultimately generated and controlled by the apps on the device, not a “trusted” source, so Cellebrite can’t make any assumptions about the “correctness” of the formatted data it is receiving. This is the space in which virtually all security vulnerabilities originate.

Since almost all of Cellebrite’s code exists to parse untrusted input that could be formatted in an unexpected way to exploit memory corruption or other vulnerabilities in the parsing software, one might expect Cellebrite to have been extremely cautious. Looking at both UFED and Physical Analyzer, though, we were surprised to find that very little care seems to have been given to Cellebrite’s own software security. Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present.
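
To make the exposure concrete, consider the bounds checking that even a trivial length-prefixed record format demands when the input is untrusted. A minimal sketch (the format is hypothetical, not anything Cellebrite parses):

```python
import struct

MAX_RECORD = 1 << 20  # refuse absurd declared sizes up front

def parse_record(data: bytes) -> bytes:
    """Parse one record: a 4-byte little-endian length, then the payload."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack_from("<I", data, 0)
    # The length field is attacker-controlled: validate it against both a
    # sanity cap and the bytes actually present before using it to index.
    if length > MAX_RECORD or 4 + length > len(data):
        raise ValueError("declared length exceeds available data")
    return data[4 : 4 + length]
```

In a memory-unsafe language, the equivalent mistake of trusting the declared length while copying into a fixed-size buffer is precisely the class of bug that turns a file parser into a code execution vector.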

As just one example (unrelated to what follows), their software bundles FFmpeg DLLs that were built in 2012 and have not been updated since then. There have been over a hundred security updates in that time, none of which have been applied.

FFmpeg vulnerabilities by year
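
The age of a bundled DLL is easy to verify independently, since the PE file header records a linker build timestamp. A sketch using the pefile library (the DLL path and name below are hypothetical):

```python
import datetime
import pefile

def build_date(dll_path: str) -> datetime.datetime:
    # The COFF file header's TimeDateStamp field holds the build time,
    # absent deliberate manipulation or reproducible-build zeroing.
    pe = pefile.PE(dll_path, fast_load=True)
    return datetime.datetime.fromtimestamp(
        pe.FILE_HEADER.TimeDateStamp, tz=datetime.timezone.utc
    )

# Hypothetical path to one of the bundled FFmpeg DLLs.
print(build_date(r"C:\Program Files\Cellebrite\avcodec-54.dll"))
```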

The exploits

Given the number of opportunities present, we found that it’s possible to execute arbitrary code on a Cellebrite machine simply by including a specially formatted but otherwise innocuous file in any app on a device that is subsequently plugged into Cellebrite and scanned. There are virtually no limits on the code that can be executed.

For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question.

Any app could contain such a file, and until Cellebrite is able to accurately repair all vulnerabilities in its software with extremely high confidence, the only remedy a Cellebrite user has is to not scan devices. Cellebrite could reduce the risk to their users by updating their software to stop scanning apps it considers high risk for these types of data integrity problems, but even that is no guarantee.
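
One partial countermeasure an examiner could adopt is to record cryptographic hashes of every generated report on a separate, trusted machine, so that later tampering with archived reports is at least detectable. A minimal sketch (the file layout is hypothetical); note that it cannot help against a payload that alters a report before the first hash is ever taken, which is why it is no substitute for fixing the parsers:

```python
import hashlib
import json
import pathlib

# Keep this ledger on a separate, trusted machine, not the scanning one.
LEDGER = pathlib.Path("report_hashes.json")

def sha256(path: pathlib.Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record(report: pathlib.Path) -> None:
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    ledger[report.name] = sha256(report)
    LEDGER.write_text(json.dumps(ledger, indent=2))

def verify(report: pathlib.Path) -> bool:
    ledger = json.loads(LEDGER.read_text()) if LEDGER.exists() else {}
    return ledger.get(report.name) == sha256(report)
```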

We are of course willing to responsibly disclose the specific vulnerabilities we know about to Cellebrite if they do the same for all the vulnerabilities they use in their physical extraction and other services to their respective vendors, now and in the future.

Below is a sample video of an exploit for UFED (similar exploits exist for Physical Analyzer). In the video, UFED hits a file that executes arbitrary code on the Cellebrite machine. This exploit payload uses the MessageBox Windows API to display a dialog with a message in it. This is for demonstration purposes; it’s possible to execute any code, and a real exploit payload would likely seek to undetectably alter previous reports, compromise the integrity of future reports (perhaps at random!), or exfiltrate data from the Cellebrite machine.
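
The MessageBox payload in the demo is about as benign as payloads get; it is a single Windows API call available from any language. For illustration, the equivalent call from Python via ctypes (the message text is arbitrary):

```python
import ctypes

# MessageBoxW(hWnd, text, caption, type); 0 = MB_OK.
ctypes.windll.user32.MessageBoxW(
    None,
    "Arbitrary code running on the Cellebrite machine.",
    "Demo payload",
    0,
)
```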

Also of interest, the installer for Physical Analyzer contains two bundled MSI installer packages named AppleApplicationsSupport64.msi and AppleMobileDeviceSupport6464.msi. These two MSI packages are digitally signed by Apple and appear to have been extracted from the Windows installer for iTunes version 12.9.0.167.

MSI packages
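
The provenance claim is checkable: MSI packages carry their product metadata in an internal Property table. A sketch reading it with Python's msilib (a Windows-only stdlib module, removed in Python 3.13, so this assumes an older interpreter):

```python
import msilib

def msi_property(msi_path: str, prop: str) -> str:
    """Read a single value from an MSI package's Property table."""
    db = msilib.OpenDatabase(msi_path, msilib.MSIDBOPEN_READONLY)
    view = db.OpenView(
        f"SELECT Value FROM Property WHERE Property = '{prop}'"
    )
    view.Execute(None)
    return view.Fetch().GetString(1)

for prop in ("ProductName", "ProductVersion", "Manufacturer"):
    print(prop, "=", msi_property("AppleApplicationsSupport64.msi", prop))
```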

The Physical Analyzer setup program installs these MSI packages in C:\Program Files\Common Files\Apple. They contain DLLs implementing functionality that iTunes uses to interact with iOS devices.

DLLs installed on filesystem

The Cellebrite iOS Advanced Logical tool loads these Apple DLLs and uses their functionality to extract data from iOS mobile devices. The screenshot below shows that the Apple DLLs are loaded in the UFED iPhone Logical.exe process, which is the process name of the iOS Advanced Logical tool.

DLLs loaded in process
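
Which DLLs a running process has mapped is equally easy to confirm from the outside. A sketch using the psutil library, filtering the modules of the UFED iPhone Logical.exe process for Apple paths (run with sufficient privileges; the filter string is an assumption):

```python
import psutil

TARGET = "UFED iPhone Logical.exe"

for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == TARGET:
        # memory_maps() lists every file mapped into the process, which
        # on Windows includes all of its loaded DLLs.
        for mod in proc.memory_maps():
            if "Apple" in mod.path:
                print(mod.path)
```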

It seems unlikely to us that Apple has granted Cellebrite a license to redistribute and incorporate Apple DLLs in its own product, so this might present a legal risk for Cellebrite and its users.

The completely unrelated

In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files.
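
A common way to implement that kind of probabilistic, sharded rollout is to hash a stable identifier into a fixed number of buckets and enable the behavior for the lowest slice of them, so a given account always lands on the same side of the line. A generic sketch (not Signal's actual code):

```python
import hashlib

def in_rollout(phone_number: str, percent: float) -> bool:
    """Deterministically place an account inside or outside a rollout."""
    digest = hashlib.sha256(phone_number.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") % 10_000
    # Enable for the lowest `percent` of the 10,000 buckets.
    return bucket < percent * 100

print(in_rollout("+15555550123", 2.5))  # True for roughly 2.5% of accounts
```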

Google runs some of the most venerated cybersecurity operations on the planet: its Project Zero team, for example, finds powerful undiscovered security vulnerabilities, while its Threat Analysis Group directly counters hacking backed by governments, including North Korea, China, and Russia. And those two teams caught an unexpectedly big fish recently: an “expert” hacking group exploiting 11 powerful vulnerabilities to compromise devices running iOS, Android, and Windows.

But MIT Technology Review has learned that the hackers in question were actually Western government operatives actively conducting a counterterrorism operation. The company’s decision to stop and publicize the attack caused internal division at Google and raised questions inside the intelligence communities of the United States and its allies.

A pair of recent Google blog posts detail the collection of zero-day vulnerabilities that it discovered hackers using over the course of nine months. The exploits, which went back to early 2020 and used never-before-seen techniques, were “watering hole” attacks that used infected websites to deliver malware to visitors. They caught the attention of cybersecurity experts thanks to their scale, sophistication, and speed.

Google’s announcement glaringly omitted key details, however, including who was responsible for the hacking and who was being targeted, as well as important technical information on the malware or the domains used in the operation. At least some of that information would typically be made public in some way, leading one security expert to criticize the report as a “dark hole.” 

“Different ethical questions”

Security companies regularly shut down exploits that are being used by friendly governments, but such actions are rarely made public. In response to this incident, some Google employees have argued that counterterrorism missions ought to be out of bounds of public disclosure; others believe the company was entirely within its rights, and that the announcement serves to protect users and make the internet more secure.

“Project Zero is dedicated to finding and patching 0-day vulnerabilities, and posting technical research designed to advance the understanding of novel security vulnerabilities and exploitation techniques across the research community,” a Google spokesperson said in a statement. “We believe sharing this research leads to better defensive strategies and increases security for everyone. We don’t perform attribution as part of this research.”

It’s true that Project Zero does not formally attribute hacking to specific groups. But the Threat Analysis Group, which also worked on the project, does perform attribution. Google omitted many more details than just the name of the government behind the hacks; internally, the teams knew who the hackers and their targets were. It is not clear whether Google gave advance notice to government officials that it would be publicizing and shutting down the method of attack.

But Western operations are recognizable, according to one former senior US intelligence official.

“There are certain hallmarks in Western operations that are not present in other entities … you can see it translate down into the code,” said the former official, who is not authorized to comment on operations and spoke on condition of anonymity. “And this is where I think one of the key ethical dimensions comes in. How one treats intelligence activity or law enforcement activity driven under democratic oversight within a lawfully elected representative government is very different from that of an authoritarian regime.”

“The oversight is baked into Western operations at the technical, tradecraft, and procedure level,” they added.

Google found the hacking group exploiting 11 zero-day vulnerabilities in just nine months, a high number of exploits over a short period. The software under attack included not only the Safari browser on iPhones but also many Google products, including the Chrome browser on Android phones and Windows computers.

But the conclusion within Google was that who was hacking and why is never as important as the security flaws themselves. Earlier this year, Project Zero’s Maddie Stone argued that it is too easy for hackers to find and use powerful zero-day vulnerabilities and that her team faces an uphill battle detecting their use. 

Instead of focusing on who was behind and targeted by a specific operation, Google decided to take broader action for everyone. The justification was that even if a Western government was the one exploiting those vulnerabilities today, they will eventually be used by others, so the right choice is always to fix the flaw today.

“It’s not their job to figure out”

This is far from the first time a Western cybersecurity team has caught hackers from allied countries. Some companies, however, have a quiet policy of not publicly exposing such hacking operations if both the security team and the hackers are considered friendly—for example, if they are members of the “Five Eyes” intelligence alliance, which is made up of the United States, the United Kingdom, Canada, Australia, and New Zealand. Several members of Google’s security teams are veterans of Western intelligence agencies, and some have conducted hacking campaigns for these governments.

In some cases, security companies will clean up so-called “friendly” malware but avoid going public with it. 

“They typically don’t attribute US-based operations,” says Sasha Romanosky, a former Pentagon official who published recent research into private-sector cybersecurity investigations. “They told us they specifically step away. It’s not their job to figure out; they politely move aside. That’s not unexpected.”

While the Google situation is in some ways unusual, there have been somewhat similar cases in the past. The Russian cybersecurity firm Kaspersky came under fire in 2018 when it exposed an American-led counterterrorism cyber operation against ISIS and Al Qaeda members in the Middle East. Kaspersky, like Google, did not explicitly attribute the threat but nevertheless exposed it and rendered it useless, American officials said, which caused the operatives to lose access to a valuable surveillance program and even put the lives of soldiers on the ground at risk.

Kaspersky was already under heavy criticism for its relationship with the Russian government at the time, and the company was ultimately banned from US government systems. It has always denied having any special relationship with the Kremlin.

Google has found itself in similar waters before, too. In 2019, the company released research on what may have been an American hacking group, although specific attribution was never made. But that research was about a historical operation. Google’s recent announcements, however, put the spotlight on what had been a live cyber-espionage operation.

Who’s being protected?

The alarms raised both inside government and at Google show the company is in a difficult position. 

Google security teams have a responsibility to the company’s customers, and it is widely expected that they will do their utmost to protect the products—and therefore users—who are under attack. In this incident, it’s notable that the techniques used affected not just Google products like Chrome and Android, but also iPhones.

While different teams draw their own lines, Project Zero has made its name by tackling critical vulnerabilities all over the internet, not just those found in Google’s products. 

“Each step we take towards making 0-day hard, makes all of us safer,” tweeted Maddie Stone, one of the most highly respected members of the security team, when the latest research was published. 

But while protecting customers from attack is important, some argue that counterterrorism operations are different, with potentially life-and-death consequences that go beyond day-to-day internet security.

When state-backed hackers in Western nations find cybersecurity flaws, there are established methods for working out the potential costs and benefits of disclosing the security gap to the company that is affected. In the United States it’s called the “vulnerabilities equities process.” Critics worry that US intelligence hoards large numbers of exploits, but the American system is more formal, transparent, and expansive than what’s done in almost every other country on earth, including Western allies. The process is meant to allow government officials to balance the advantages of keeping flaws secret in order to use them for intelligence purposes with the wider benefits of telling a tech company about a weakness in order to have it fixed. 

Last year the NSA made the unusual move to take credit for revealing an old flaw in Microsoft Windows. That kind of report from government to industry is normally kept anonymous and often secret.

But even though the American intelligence system’s disclosure process can be opaque, similar processes in other Western nations are often smaller, more secretive, or simply informal and therefore easy to bypass.

“The level of oversight even in Western democracies about what their national security agencies are actually doing is, in many cases, a lot less than we have in the United States,” says Michael Daniel, who was White House cybersecurity coordinator for the Obama administration. 

“The degree of parliamentary oversight is much less. These countries do not have the robust inter-agency processes the US has. I’m not normally one to brag about the US—we’ve got a lot of problems—but this is one area where we have robust processes that other Western democracies just don’t.” 

The fact that the hacking group hit by the Google investigation possessed and used so many zero-day vulnerabilities so rapidly could indicate a problematic imbalance. But some observers worry about live counterterrorism cyberoperations being shut down at potentially decisive moments without the ability to quickly start up again.

“US allies don’t all have the ability to regenerate entire operations as quickly as some other players,” the former senior US intelligence official said. Worries about suddenly losing access to an exploit capability or being spotted by a target are particularly high for counterterrorism missions, especially during “periods of incredible exposure” when a lot of exploitation is taking place, the official explained. Google’s ability to shut down such an operation is likely to be the source of more conflict.

“This is still something that hasn’t been well addressed,” the official said. “The idea that someone like Google can destroy that much capability that quickly is slowly dawning on folks.” 

Cisco Jabber remote vulnerabilities

Cisco on Wednesday released software updates to address multiple vulnerabilities affecting its Jabber messaging clients across Windows, macOS, Android, and iOS.

Successful exploitation of the flaws could permit an “attacker to execute arbitrary programs on the underlying operating system with elevated privileges, access sensitive information, intercept protected network traffic, or cause a denial of service (DoS) condition,” the networking major said in an advisory.

The issues concern a total of five security vulnerabilities, three of which (CVE-2021-1411, CVE-2021-1417, and CVE-2021-1418) were reported to the company by Olav Sortland Thoresen of Watchcom, with two others (CVE-2021-1469 and CVE-2021-1471) uncovered during internal security testing.

Cisco notes that the flaws are not dependent on one another: exploiting one of the vulnerabilities does not require exploiting any of the others. To exploit any of them, however, an attacker needs to be authenticated to an Extensible Messaging and Presence Protocol (XMPP) server used by the vulnerable client and must be able to send XMPP messages to it.

CVE-2021-1411, which concerns an arbitrary program execution vulnerability in the Windows app, is also the most critical, with a CVSS score of 9.9 out of a maximum of 10. According to Cisco, the flaw stems from improper validation of message content, making it possible for an attacker to send specially crafted XMPP messages to a vulnerable client and execute arbitrary code with the privileges of the user account running the software.
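
The attack surface here is ordinary XMPP traffic: any account authenticated to the same server can deliver a message to a vulnerable client. A benign sketch of sending an XMPP message with the slixmpp library (credentials and recipient are placeholders; this sends a harmless greeting, not a crafted exploit):

```python
import slixmpp

class Sender(slixmpp.ClientXMPP):
    def __init__(self, jid: str, password: str, to: str, body: str):
        super().__init__(jid, password)
        self.to, self.body = to, body
        self.add_event_handler("session_start", self.start)

    async def start(self, event):
        self.send_presence()
        await self.get_roster()
        # The message body is exactly what the vulnerable clients failed
        # to validate properly; this sketch sends only plain text.
        self.send_message(mto=self.to, mbody=self.body, mtype="chat")
        self.disconnect()

xmpp = Sender("sender@example.com", "password", "recipient@example.com", "hello")
xmpp.connect()
xmpp.process(forever=False)
```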

Besides CVE-2021-1411, Cisco has also fixed four other Jabber flaws:

  • CVE-2021-1469 (Windows) – An issue with improper validation of message content that could result in arbitrary code execution.
  • CVE-2021-1417 (Windows) – A failure to validate message content that could be leveraged to leak sensitive information, which can then fuel further attacks.
  • CVE-2021-1471 (Windows, macOS, Android, iOS) – A certificate validation vulnerability that could be abused to intercept network requests and even modify connections between the Jabber client and a server.
  • CVE-2021-1418 (Windows, macOS, Android, iOS) – An issue arising from improper validation of message content that could be exploited by sending crafted XMPP messages to cause a denial-of-service (DoS) condition.

This is far from the first time Norwegian cybersecurity firm Watchcom has uncovered flaws in Jabber clients. In September 2020, Cisco resolved four flaws in its Windows app that could permit an authenticated, remote attacker to execute arbitrary code. But after three of the four vulnerabilities turned out not to have been “sufficiently mitigated,” the company released a second round of patches in December.

In addition to the fix for Jabber, Cisco has also published 37 other advisories that go into detail about security updates for a number of medium and high severity issues affecting various Cisco products.