Avionics applications on a 256-node REDEFINE MPP
Safran is a high-technology group with activities in the aerospace, defense, and security domains. Safran is interested in realizing next-generation avionics applications, with safety and security as the key attributes, on the REDEFINE massively parallel processor (MPP) architecture. This needs to be approached both at the level of the architecture and at the level of the overall system.
At the level of the architecture, domain-specific micro-architecture enhancements of the processing nodes will need to be carried out. At the level of the system, safety and security are the key concerns. To this end, the runtime reconfigurability of the REDEFINE MPP will be exploited to attain and guarantee the desired level of system availability, and to incorporate fault tolerance. This will necessitate enhancements to the runtime system, the compilation methodology, and the micro-architecture of the processing nodes.
The REDEFINE massively parallel processing architecture is ideally suited to large-scale, high-performance applications. Processing nodes in REDEFINE communicate through a network-on-chip (NoC) and can be customized for application domains by augmenting them with Custom Functional Units (CFUs). CFUs are exposed as instruction-set extensions to the REDEFINE ISA.
High-level programming languages are the projected compiler front ends: applications expressed in them are compiled for parallel execution on the REDEFINE MPP by the REDEFINE C compiler.
Figure 1 gives a block schematic of the REDEFINE MPP with 64 nodes.
Figure 2 gives the block schematic of a processing node in the context of the NoC.
Architecture-space exploration of the REDEFINE MPP is critical to obtaining high performance. Using the REDEFINE Simulator, it is possible to explore a wide variety of CFUs, which are central to the domain customization of the processing nodes in REDEFINE. CFUs perform macro-operations in hardware as multi-input, multi-output operations. HyperCells, which are reconfigurable CFUs, will also be evaluated for their suitability to avionics applications. This is the key to high performance on a low energy budget, offering higher efficiency than a general-purpose processing architecture.
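To make the idea of a multi-input, multi-output macro-operation concrete, the following is a minimal sketch in software of what a CFU might fuse into a single hardware operation. The fused multiply-accumulate chosen here is purely illustrative; it is not an operation documented for the REDEFINE ISA.

```python
def fused_mac_cfu(a, b, c, d):
    """Hypothetical CFU macro-operation: fuses two multiplies and an
    add, and exposes all three results at once (multi-output).
    In hardware this would retire as one macro-op; as scalar ISA
    instructions it would take three separate issues."""
    p1 = a * b            # first scalar multiply
    p2 = c * d            # second scalar multiply
    return p1, p2, p1 + p2  # multi-output: both products and their sum

p1, p2, s = fused_mac_cfu(2, 3, 4, 5)  # -> (6, 20, 26)
```

The point of the sketch is the interface shape: a CFU consumes several operands and produces several results in one operation, which is what makes it a macro-operation rather than a conventional single-result instruction.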
ViSMA – Virtualization and Security aware Multi-core Architecture
System virtualization enables the hosting of multiple, independent virtual machines (VMs) on a single physical machine. Each VM can host an application complete with its environment. This not only improves the utilization of system resources but also offers complete software isolation of independent applications.
This software isolation provides the necessary security by shielding the application from threats arising from software bugs or loopholes external to it. Co-hosting multiple VMs on a single physical machine is normally achieved by sharing the same hardware across many VMs. Unconstrained and unchecked resource usage in such systems can lead to denial-of-service attacks. It is envisaged that resource-usage monitoring and regulation will become the norm rather than an option in virtualized servers.
However, the effectiveness of these resource controls will decide whether denial-of-service threats can be deterred. Resource-sharing mechanisms differ depending on the nature of the resource. Traditional OSes have used the process abstraction for resource allocation and management.
Various process schedulers have proven robust and very effective in time-sharing the CPU. Depending on the goals of resource sharing, round-robin and credit-based schedulers have been quite successful in managing CPU sharing without causing deadlocks or letting a single process monopolize the CPU.
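The credit-based approach mentioned above can be sketched as follows. This is a minimal illustration in the spirit of credit schedulers such as Xen's; the class name, the accounting period, and the per-slice cost are all hypothetical, not taken from any real scheduler.

```python
from collections import deque

class CreditScheduler:
    """Sketch of a credit-based scheduler: each entity is topped up
    with credits per accounting period and debited per time slice.
    An entity with no credits left must wait for the next period,
    so no single entity can monopolize the CPU."""

    def __init__(self, entities, credits_per_period=30, cost_per_slice=10):
        self.credits = {e: credits_per_period for e in entities}
        self.queue = deque(entities)       # round-robin order
        self.top_up = credits_per_period
        self.cost = cost_per_slice

    def pick_next(self):
        # Round-robin among entities that can still pay for a slice.
        for _ in range(len(self.queue)):
            e = self.queue.popleft()
            self.queue.append(e)
            if self.credits[e] >= self.cost:
                self.credits[e] -= self.cost
                return e
        # Everyone is out of credits: start a new accounting period.
        for e in self.credits:
            self.credits[e] = self.top_up
        return self.pick_next()

sched = CreditScheduler(["vm0", "vm1", "vm2"])
schedule = [sched.pick_next() for _ in range(6)]
# -> ["vm0", "vm1", "vm2", "vm0", "vm1", "vm2"]
```

Treating each VM as one such entity is exactly the extension to virtualized servers described next: the VMM plays the role of the scheduler, and the credit debits bound each VM's CPU share.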
Extending the same model to virtualized servers, by treating each independent VM as a process scheduled by the VMM, has proven quite effective in terms of both performance and security. Memory and I/O resources such as disks and NICs, however, are treated rather differently from the CPU.
Current systems are designed to be managed by a single OS. As a result, these system resources are abstracted within the OS layers and exposed as high-level resources such as memory pages, disk files, sockets, or packets. Processes are allocated these high-level resources, and usage is managed mostly through these abstractions.
Actual data transfer to and from the physical device is managed and handled by the OS. Almost all of these resources are accessed through functions of the privileged OS kernel, which in the case of a general-purpose OS like Linux is a non-preemptive kernel. In such systems, security breaches can occur because of faulty or buggy device drivers that capture the CPU in an infinite loop.
This issue is easily solved by restricting a VM's CPU time share. For devices whose drivers support unconstrained DMA, however, the drivers themselves become the security threat. Moving such device drivers from kernel space to user space has proven to be the simplest way to solve the problem of unconstrained DMA. This idea is easy to support on virtualized servers, since all VMs reside in a higher (less privileged) protection ring than the virtual machine monitor (VMM), or hypervisor.
Hence, a device driver attempting unconstrained DMA will naturally trap with a memory-protection fault if it tries to read or write outside the VM's address space. Apart from this, the support for resource-usage controls for these I/O devices lies high in the abstraction layers of the OS. As a result, usage of the device tends to be unrestricted even though the application experiences constraints, which can lead to a denial-of-service attack on the device. To prevent this, the device itself should provide resource-usage controls. Supporting such a model requires micro-architectural support from the I/O device, which existing devices lack.
Virtualization leads to the sharing of a limited set of I/O devices by multiple, independent VMs. To guarantee performance and security, it is necessary to provide concurrent access to the shared I/O devices. Owing to the lack of micro-architectural support for concurrent device access, many prevalent virtualization architectures extend the "single-OS, single-hardware" model to support virtual devices. Popular I/O virtualization technologies use either a hosted VM or the VMM itself to contain the software abstractions that support virtual devices and concurrent access. As a consequence, sharing a device also means sharing its device-access path. This model of sharing has two major pitfalls:
- The device-access path is a software layer; when it is shared, its failure can cause all the sharing VMs to fail.
- Since the virtual device is an abstraction supported in software, device-usage controls tend to be coarse-grained and hence ineffective. This can make a denial-of-service attack easy.
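The second pitfall can be made concrete with a small sketch of the kind of fine-grained, per-VM usage control the text argues should live at the device rather than high in the OS abstraction layers. A classic mechanism is a token bucket per VM; the parameters below are hypothetical and purely illustrative.

```python
class TokenBucket:
    """Illustrative per-VM token bucket for device-usage control.
    Tokens accumulate at a fixed rate up to a burst limit; each
    request is admitted only if it can pay for itself in tokens,
    so a misbehaving VM exhausts only its own budget instead of
    starving the shared device."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s
        self.burst = burst_bytes
        self.tokens = burst_bytes   # start with a full bucket
        self.last = 0.0             # timestamp of the last refill

    def allow(self, nbytes, now):
        # Refill tokens for elapsed time, capped at the burst size.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes   # admit the request
            return True
        return False                # throttle the request

# One bucket per VM: vm0's burst cannot consume vm1's budget.
vm0 = TokenBucket(rate_bytes_per_s=1000, burst_bytes=2000)
vm0.allow(1500, now=0.0)   # admitted: within the burst allowance
vm0.allow(1500, now=0.0)   # throttled: only 500 tokens remain
vm0.allow(1500, now=1.0)   # admitted: 1000 tokens refilled over 1 s
```

Implementing such accounting in the device micro-architecture, one bucket per VM, is one plausible shape for the hardware-level usage controls this project calls for, in contrast to the coarse-grained software controls of the pitfall above.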
In this project we will therefore design the micro-architecture of a processing core, in the context of a multi-core architecture, that is both virtualization-aware and security-aware. This will enable application-level control of the micro-architecture to address both performance and security.