Military Embedded Systems

Enabling next-generation Space IoT with a unified memory architecture

June 16, 2022

Paul Armijo

Avalanche Technology

Kristine Schroeder

Avalanche Technology

The unique environmental challenges of space require truly distributed edge computing for scale and autonomous operation; these systems must have sufficient processing capability and revamped memory architectures to support the collection and processing of vast amounts of data in real time. Standardizing around common, flexible architectures that embrace universal memory is critical to enabling robust designs with optimized size, weight, power, and cost (SWaP-C), all critical elements for the space community. Until the advent of recent spin-transfer torque (STT) magnetoresistive random access memory (MRAM) solutions, however, no legacy memory technology could support the reliability, speed, and robustness required for this pivotal role.

Emerging satellite constellations with increasingly advanced capabilities promise insights and enhanced communications in space that mirror those of terrestrial Internet of Things (IoT) systems and even data centers. From planetary climate monitoring to galactic exploration, scientific research now intersects with commercial-use models in the final frontier, enabled by lower costs of entry and accelerated timelines.

Newer technologies such as synthetic aperture radar (SAR) augment traditional optical sensors and radio frequency (RF) imagery with far greater resolution, range, and wavelength discrimination to facilitate new discovery and opportunity, relevant to both commercial and national security interests. (Table 1.)

[Table 1 | Common nonvolatile memory technologies are compared.]

These enhancements have pushed data rates from megabits per second (Mbits/sec) a decade ago, to gigabits per second (Gbits/sec) just a few years ago, to the staggering terabits per second (Tbits/sec) seen today. This escalation of advanced processing capability, sensor-fusion data, and artificial intelligence (AI) has driven commensurate increases in communication speeds and memory-density requirements; these challenges are only magnified in the environment of space.

To minimize latency for real-time processing, the gathered sensor data needs to be processed locally, on the Space IoT asset. Bandwidth to the ground is still highly limited, resulting in the need for dramatically larger memory resources than have been commercially feasible from legacy memory technologies. Factor in radiation resilience requirements – along with the complex power supply dynamics of satellites in orbit – and the universe of viable memory options with sufficient robustness, size, and performance narrows considerably.

Over time, the space community has witnessed a transition from large satellites to distributed small sats, resulting in a standardization of small-sat buses, communication systems, and propulsion subsystems. Programs like DARPA’s [Defense Advanced Research Projects Agency] Blackjack, among several others, have both driven innovation and opened the door for standardized platforms that improve efficiencies in design, test, and deployment.

Standardization through the use of technologies like flexible FPGAs [field-programmable gate arrays] enables designers and engineers to repurpose functions and move away from point solutions. A similar evolution is happening with flexible SBCs [single-board computers] and GPUs [graphics processing units], from prior generations of SBCs such as BAE Systems’ RAD750, used on the NASA Mars rover platforms, to its latest RAD5545, along with other platforms including DDC’s SCS3740, MOOG’s GPU SBC, and Space Micro’s PROTON 600K. All of this extra processing power, previously limited to terrestrial systems, requires even more memory to store the system boot image and then execute computations. Figure 1 shows the progression of memory requirements for these evolving SBCs.

[Figure 1 | A graph shows the evolution of memory requirements of spaceborne single-board computers.]

These memories, much like the processors themselves, still require significant radiation-effects mitigation to overcome inherent vulnerabilities. Moreover, they are functionally fairly specialized and therefore limited in their utility. In commonly used nonvolatile memory technologies such as flash, exposure to radiation can physically displace electrons from the floating gate, changing the state of the cell and resulting in a bit error, or SEU [single-event upset]. These flash errors increase significantly at higher altitudes, where the intensity of radiation is greater. NAND flash is the most susceptible to SEUs, resulting in the need for significant redundancy and overhead and, ultimately, a reduction in usable storage density.

Flash memories also require reliability mitigation in the form of external error correction and wear leveling, thereby expanding the required footprint. Legacy technologies like EEPROM, NOR, and SONOS offer recognized robustness, but because they carry large-geometry charge pumps they do not allow compact size and density scaling. Similarly, volatile DRAM and SRAM technologies are susceptible to single-event latch-up (SEL) from radiation exposure, which requires a power reset to recover, jeopardizes critical data, and can damage the device itself. In a satellite orbiting Earth, this kind of failure can be devastating to system function and longevity.
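To put that error-correction overhead in perspective, the short sketch below (our own illustration, using generic parameters rather than figures from any specific flight design) tallies the extra storage bits consumed by a conventional SEC-DED [single-error-correct, double-error-detect] code, the kind of external protection legacy memories typically rely on and that MRAM can forgo:

```python
# Illustrative only: storage overhead of SEC-DED protection, a common
# external-ECC scheme for legacy memories in radiation environments.
# Word widths here are generic examples, not from any specific design.

def secded_overhead(data_bits: int) -> float:
    """Return the fractional storage overhead of SEC-DED protection.

    A Hamming SEC code needs r parity bits where 2**r >= data_bits + r + 1,
    plus one extra bit for double-error detection.
    """
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    parity_bits = r + 1          # +1 for the double-error-detect bit
    return parity_bits / data_bits

for width in (32, 64, 128):
    print(f"{width}-bit words: {secded_overhead(width):.1%} extra storage for ECC")
# 64-bit words -> 12.5% extra bits, before counting wear-leveling spare blocks.
```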

A universal memory architecture that can handle both the nonvolatile and volatile memory needs of these processing platforms reduces the need for added testing and radiation-mitigation resources that add size, weight, and power (SWaP) to a system. Nonvolatile spin-transfer torque magnetoresistive RAM (STT-MRAM) is now available at densities that compete with NAND and DRAM, requires no external error correction or wear leveling, offers speeds approaching those of SRAM, and provides reliability and robustness well suited to use in space. STT-MRAM is inherently immune to radiation and magnetic fields, and it is flexible and capable enough to replace both volatile and nonvolatile memory instances in space.

Directly addressing these challenges within higher-performance platform designs, a truly unified memory can be leveraged for functions like FPGA and processor boot code while also storing data during collection or analytics. Specifically, it can support a full boot image for one of the latest FPGA platforms such as the AMD/Xilinx Versal devices, which require 1 Gb for each copy; most users are also required to maintain a “golden copy” as well as a few extra copies.

This is a giant leap in requirements from previous device generations, including the SIRF/Virtex-5QV, whose boot image could fit within a single heritage 64 MB MCM toggle MRAM. Today, with more than 8 Gb in a single BGA [ball-grid array] package (and 16 Gb with a DDR3 interface expected by the end of 2022), loading an operating system such as Linux in addition to the boot images is now possible, while still leaving room for ongoing processing support, replacing NOR, DRAM, and NAND devices. This level of ubiquity, density, and robustness will enable the optimization, standardization, and scaling of the flexible processing platforms required for Space IoT and space data centers to become a reality. (Figure 2.)

[Figure 2 | Shown: a notional spaceborne processing system.]
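For a rough feel for how those numbers add up, here is a back-of-the-envelope budget (our own arithmetic; the copy counts and the OS image size are illustrative assumptions, with only the 1 Gb-per-image figure and the 8 Gb/16 Gb package densities taken from the discussion above):

```python
# Back-of-the-envelope memory budget for a unified-MRAM boot store.
# All sizes are illustrative assumptions, not vendor specifications.

GBIT = 1024  # work in Mbit for readability

boot_image_mbit = 1 * GBIT   # ~1 Gb per Versal-class boot image (per the article)
golden_copies   = 1          # protected "golden" image
working_copies  = 3          # assumed number of updatable spares
os_image_mbit   = 2 * GBIT   # assumed size of a Linux image plus filesystem

def budget(device_mbit: int) -> None:
    used = (golden_copies + working_copies) * boot_image_mbit + os_image_mbit
    free = device_mbit - used
    print(f"{device_mbit // GBIT} Gb device: {used // GBIT} Gb used, "
          f"{max(free, 0) // GBIT} Gb left for data buffering"
          + ("  (does not fit)" if free < 0 else ""))

budget(8 * GBIT)    # current single-package density
budget(16 * GBIT)   # DDR3-interface density expected per the article
```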

This platform optimization and standardization enables companies to innovate system capabilities in new dimensions, with lower risk and economies of scale, by using proven hardware and available operating systems rather than having to develop, test, and mitigate their own AI hardware platforms. The emergence of space data centers shows parallels to terrestrial data centers, where companies can focus on leveraging advanced sensors and data analytics on proven processing and memory platforms already optimized for SWaP-C scaling. As the data managed by these data centers grows to tens of terabytes, deterministic connections with low risk of data loss due to power supply interruptions are critical. Avalanche Technology supplies the L4 cache for the streaming links between hundreds of low Earth orbit (LEO) Space IoT satellites and the ground base stations that act as bulk storage for the data analytics and ML/DL [machine learning/deep learning] model-generating computers. Using this high-density nonvolatile STT-MRAM, which is capable of SRAM performance, the streaming transient data from the sensors can be stored before it is committed to NAND or always-on DRAM. (Figure 3.)

[Figure 3 | Shown: a simplified spaceborne processing system with a unified memory architecture.] 
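A minimal sketch of that staging pattern is shown below, assuming a hypothetical nonvolatile MRAM buffer region in front of a bulk NAND store; the class and method names are invented for illustration and do not correspond to any particular flight-software API:

```python
# Minimal sketch of staging transient sensor data in nonvolatile MRAM
# before committing it to bulk NAND. All class and method names are
# hypothetical illustrations, not an actual flight-software interface.

from collections import deque

class MramStagingBuffer:
    """Nonvolatile staging area: writes persist through a power interruption."""
    def __init__(self, capacity_records: int):
        self.capacity = capacity_records
        self.records = deque()          # stand-in for persistent MRAM contents

    def write(self, record: bytes) -> bool:
        if len(self.records) >= self.capacity:
            return False                # full: caller must commit before writing more
        self.records.append(record)
        return True                     # data is safe as soon as the write completes

    def drain(self):
        while self.records:
            yield self.records.popleft()

class NandBulkStore:
    """Bulk storage that receives committed batches (wear leveling handled elsewhere)."""
    def __init__(self):
        self.blocks = []

    def commit(self, batch: list[bytes]) -> None:
        self.blocks.append(b"".join(batch))

# Sensor samples land in MRAM first; a background task commits them in batches,
# so a power dropout between commits loses nothing already staged.
staging, bulk = MramStagingBuffer(capacity_records=4), NandBulkStore()
for sample in (b"frame-1", b"frame-2", b"frame-3"):
    staging.write(sample)
bulk.commit(list(staging.drain()))
print(len(bulk.blocks), "batch committed to bulk store")
```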

Mega-constellations of Space IoT microsatellites are coming online in LEO, looking to synchronize their data with space data centers without relying on a connection to Earth, and they will depend on a unified memory architecture. This is the trial run for systems that will one day be deployed in orbits around the moon and Mars.

Paul Armijo is the Chief Technology Officer of Aerospace & Defense at Avalanche Technology. Paul has had the privilege of leading numerous flagship programs and technology-development efforts over his career to further enable the space community, at companies including GSI Technology, Cobham Semiconductor Solutions, General Dynamics, and others. His particular specialty is radiation effects. Paul received his B.S. in electrical engineering from Arizona State University. He may be reached at [email protected].

Kristine Schroeder is the Sr. Director of Business Development at Avalanche Technology. Kristine has spent the bulk of her career in sales and business development roles within various parts of the semiconductor industry, including foundry, IP, an independent manufacturer’s rep firm, and device OEMs such as Texas Instruments and Altera. In recent years she has increasingly focused her efforts on the critical needs of the defense community. Kristine received her B.S. in electrical engineering from the University of Vermont. She can be reached at [email protected].

Avalanche Technology     avalanche-technology.com/
