Learn how Google discovered several zero-day bugs that Apple patched in iOS 12.1.4

We’ve said it on countless occasions: the important thing about operating system updates (any of them, not just iOS) is not so much the new features or capabilities they add. What really matters are the fixes for security bugs that could compromise our systems, because no one is safe from those.

Today we will try to explain a chain of bugs in iOS, already fixed months ago, that the Google Project Zero team has made public and that shows how important security research is, and how important it is to apply the updates that correct these errors in our systems.

The first thing we need to understand, when it comes to security, is the difference between the various threats our systems may face:

  • Viruses: programs that run without our consent, stay hidden while they execute, and spread without our knowledge by different means. Only a program that scans for their characteristic signature in code can find them. These do not exist on macOS and Linux because the architecture of those systems prevents them from spreading.

  • Trojans: programs we run believing they do one thing while they actually contain a malicious function, like a fake Flash installer that plants a RAT (remote administration tool) to take control of the computer without our knowledge. On macOS and Linux they do exist. On iOS they both do and do not exist: App Store review prevents them from reaching the system, but if a developer manages to sneak one past the review team, there it will be. Even then, the way apps run on iOS keeps them away from the system itself, although (with our prior consent) they could access our data, use it, and steal it. These are very isolated cases, but we cannot categorically deny that they exist.

  • Exploits: also called security flaws or zero-days. Strictly speaking, the exploit is what takes advantage of a security flaw. Absolutely no one is free of these. They are bugs in software, mistakes a programmer made when writing the code of a program or system, which can be abused so that malicious code manages to do things it should not be able to do given how the security of the system is defined.

With that clear, let’s talk about security flaws: serious bugs, born of programming errors, that can be exploited to obtain privileges on the system that should not be available and to use them to do harm. Privileges such as accessing the general file system on iOS, escaping the app sandbox, or disabling code-signature checking.

Security flaws in iOS 12.1.4

Project Zero, a division of Google, has spent recent years finding countless security flaws in both Google’s own products and those of others. They found the vulnerabilities in Intel chips (and those of other manufacturers) known as Spectre and Meltdown, and many other major bugs in Android, iOS, Windows… They work very professionally and always respect the confidentiality of those affected, reporting privately and only publishing their discoveries once the flaws have been properly fixed.

This time, the Google Project Zero team has brought to light its findings about several bugs in iOS, which show clearly that security affects everyone equally and that we have to take it very seriously. Let’s go through them and explain.

Earlier this year, the TAG team (Google’s Threat Analysis Group) detected a series of malicious web pages that were exploiting a number of security holes in iOS. Some of them were zero-days, because the team responsible for the software had not yet discovered them.

Simply by visiting a specific website, thousands of iPhones every week were attacked and had remote monitoring software installed on them. Google found up to 5 different exploits on these pages, capable of compromising every version of iOS from 10 to 12. It was the work of a team of expert cybercriminals, in an operation estimated to have taken about two years of continuous effort.

Working with the threat analysis team, Google Project Zero found a total of 14 vulnerabilities in the system software, used across a chain of 5 different exploits. Seven of the bugs are in the Safari browser, five in the kernel (the core of the system), and two allowed any app or process to bypass the sandbox that keeps apps away from the kernel, gaining full permission to access and modify it. That is, root access.

Specifically, one chain of exploits delivered through the web was found to remain unpatched in the latest versions of iOS and to be unknown to Apple, so Cupertino was notified on February 1. The bugs in question, which Apple described in this security report, are privilege-escalation errors:

-CVE-2019-7286: an app could gain elevated privileges through memory corruption caused by incorrectly validated input data.

-CVE-2019-7287: the same kind of memory corruption affected the system’s input/output management library and allowed an app that had already gained elevated privileges to execute arbitrary code with kernel permissions (as the owner of the system).
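To picture the kind of bug behind both CVEs, here is a minimal, hypothetical sketch in Swift (with invented names; it is not the actual affected code) of a copy routine that trusts an attacker-controlled length field instead of validating it:

```swift
import Foundation

// Hypothetical example: a routine that copies an incoming payload into a
// fixed-size buffer, trusting a length field supplied by the caller.
// If `claimedLength` exceeds the destination size, the copy writes past the
// end of the allocation: memory corruption through incorrectly validated input.
func copyPayload(_ payload: [UInt8], claimedLength: Int) {
    let destination = UnsafeMutablePointer<UInt8>.allocate(capacity: 64)
    defer { destination.deallocate() }

    payload.withUnsafeBufferPointer { src in
        // BUG: uses the attacker-controlled claimedLength directly.
        // A correct version would clamp it first:
        //   let count = min(claimedLength, src.count, 64)
        destination.initialize(from: src.baseAddress!, count: claimedLength)
    }
}

// A call such as copyPayload(bigPayload, claimedLength: 4096) would corrupt
// whatever happens to live next to the 64-byte buffer.
```

When the equivalent mistake lives in kernel or privileged system code, the overflow corrupts privileged memory, which is what turns a parsing bug into an elevation of privileges.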

And we should take this very seriously, because this elevation of privileges allows monitoring software to be permanently installed and run on your device. Software that can extract all your data, record calls and send those recordings to remote servers (or listen to them in real time), obtain your exact location, activate the cameras and capture video or photos of whatever is in front of them, activate the microphones and record whatever is happening around the device… all without the user being aware of any of it, except perhaps a curious, unusual battery drain on their device.

Chain of exploits

The first of the exploits, according to Google’s research, dates from the iOS 10 era and allows the device to be jailbroken. Google, in an extensive article that you can read here, gives an exhaustive account of how to exploit this vulnerability in full technical detail. In the end, the attack on the system’s input/output layer achieved what we mentioned: bypassing the code-signature check enforced by the amfid process, the Apple Mobile File Integrity Daemon. This process is responsible for ensuring that every piece of code loaded into memory for execution comes from a validated, Apple-signed digital source.

And what is the signature? It consists of obtaining a hash, or check value, from a piece of code and encrypting it for later verification. If I have a program, it is data. I calculate a hash, or verification code, to validate its authenticity: a unique value derived from a series of arithmetic operations over all of its data, such that the moment a single byte of that data changes, we get a different hash.

I encrypt that hash with Apple’s certificate, using the private key that only Apple holds. A digital certificate always has two parts: the private part, which allows signing (encrypting), and the public part, which only allows verifying (decrypting) what the private part encrypted. Once I have that hash, I encrypt it and attach it to the app. The system recalculates the hash of the code about to be executed, decrypts the previously encrypted value, and if the calculated hash matches the saved one, the signature is correct and the code is authenticated: it has not been modified in any way since Apple signed that check value. That’s the essence.
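As a concrete, simplified illustration of that sign-and-verify idea, here is a small Swift sketch using CryptoKit. It only shows the concept; it is not how Apple’s certificate chain or the amfid daemon are actually implemented, and the data and keys here are invented:

```swift
import CryptoKit
import Foundation

// Illustration only: hash the code, sign the hash with a private key,
// and later verify it with the matching public key before running the code.
let code = Data("the binary to be executed".utf8)

// Signing side (only the holder of the private key can do this).
let privateKey = Curve25519.Signing.PrivateKey()
let digest = SHA256.hash(data: code)
let signature = try! privateKey.signature(for: Data(digest))

// Verifying side: recompute the hash and check it against the signature.
// Conceptually, this is the check amfid enforces before code is allowed to run.
let recomputed = SHA256.hash(data: code)
if privateKey.publicKey.isValidSignature(signature, for: Data(recomputed)) {
    print("Signature valid: the code has not been modified.")
} else {
    print("Signature invalid: refuse to execute.")
}
```

The point is that anyone can run the verification with the public key, but only the holder of the private key could have produced a signature that matches the recomputed hash.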

But if I manage to get around that process, I can execute any code from any source without restriction (what we know as a jailbreak), in addition to being able to access any part of the system and its memory without restriction. Something extremely dangerous. So when people talk casually about jailbreaking and how wonderful it is, they don’t realize how serious a security problem it is for us, and how, with a simple visit to a website, an email someone sends us, or an app we install that nobody has reviewed, we may be opening our device to anyone who wants to monitor it and/or extract all of its information for any unethical purpose.

Another of the exploits was built to attack systems from iOS 10.3 onwards and was patched by Apple in iOS 11.2. You can read the information here. It allowed reading and writing the kernel’s memory (the memory of the core of the system). Normally this memory is protected using KASLR, or Kernel Address Space Layout Randomization, a technique by which the memory reserved for kernel operations ends up in random locations, never in the same place, so it is harder for an attacker to deduce at which address a given piece of data, or the result of a kernel operation, will be found. Gaining access to this memory is another step towards locating the amfid process that enforces the system’s signature check.
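As a toy illustration of the KASLR idea (the address, page size and slide range below are invented values, not real iOS internals):

```swift
import Foundation

// Invented numbers, for illustration only.
let linkTimeAddress: UInt64 = 0xFFFF_FFF0_0700_4000   // where a kernel structure "should" live
let pageSize: UInt64 = 0x4000                          // page-aligned granularity
let slide = UInt64.random(in: 0...0x2000) * pageSize   // random offset chosen at boot

// The structure actually lives here this boot; next boot it will move again.
let runtimeAddress = linkTimeAddress &+ slide
print(String(format: "0x%llX", runtimeAddress))

// An attacker who gains a kernel memory read (as this exploit provided) can
// leak a known pointer, subtract its link-time address to recover the slide,
// and from then on compute the real address of anything, defeating KASLR.
```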

The next one, the third, whose information is here, makes it possible to trick the system into placing code in its temporary folder, /tmp, and to obtain execution privileges for it, even though nothing in that path should have them according to system policy. Basically, it accesses the kernel’s trust cache database (that is where the flaw lies) and replaces the credentials of an allowed process with those of the code placed in the temporary folder. This tricks the system into letting the app that was loaded from the web, and that should not be able to run (but does, because the signature is no longer being verified), escape the app sandbox.

The fourth is the means of accessing the KASLR table, so that the attacker can determine where the system will store important information about the processes running in the kernel and can modify it to divert the execution flow to malicious code.

And the last one is a security flaw found simultaneously by a member of the Google Project Zero team (who was credited by Apple in the iOS 12.1.3 notes, where the bug was patched) and by the hacker @S0rryMyBad, who won a $200,000 prize at the TianFu Cup PWN security contest. A bug that, once again due to an error validating data in C, allowed an app to run outside the system sandbox: an error in the use of the certificate portfolio. The bug is explained here.

All these chained flaws were essentially designed so that a visit from a device running any unpatched version from iOS 10 to iOS 12 would find a way to be exploited, taking advantage of one or more of the 14 security bugs.