By Adam Taylor

Engineers never lose sight of the need to deliver projects that hit the quality, schedule and budget targets. You can apply the lessons learned by the community of embedded system developers over the years to ensure that your next embedded system project achieves those goals. Let’s explore some important lessons that have led to best practices for embedded development.

Systems engineering is a broad discipline covering the development of everything from aircraft carriers and satellites to the embedded systems that enable their performance. We can apply a systems engineering approach to manage the embedded systems engineering life cycle from concept to end-of-life disposal.

The first stage in a systems engineering approach is not, as one might think, to establish the system requirements, but to create a systems engineering management plan. This plan defines the engineering life cycle for the system and the design reviews that the development team will perform, along with expected inputs and outputs from those reviews. The plan sets a clear definition for the project management, engineering and customer communities as to the sequence of engineering events and the prerequisites at each stage.

In short, it lays out the expectations and deliverables. With a clear understanding of the engineering life cycle, the next step of thinking systematically is to establish the requirements for the embedded system under development. A good requirement set will address three areas. Functional requirements define how the embedded system performs. Nonfunctional requirements define such aspects as regulatory compliance and reliability. Environmental requirements define such aspects as the operational temperature and shock and vibration requirements, along with the electrical environment (for example, EMI and EMC).

Within a larger development effort, those requirements will be flowed down and traceable from a higher-level specification, such as a system or subsystem specification (Figure 1). If there is no higher-level specification, we must engage with stakeholders in the development to establish a clear set of stakeholder requirements and then use those to establish the embedded system requirements.

Generating a good requirement set requires that we put considerable thought into each requirement to ensure that it meets these standards:

It is necessary. Our project cannot achieve success without the requirement.

It is verifiable. We must be able to confirm the requirement via inspection, test, analysis or demonstration.

It is achievable. The requirement is technically possible, given the constraints.

It is traceable. The requirement can be traced from lower-level requirements and can trace to higher-level requirements.

It is unique. This standard prevents contradiction between requirements.

It is simple and clear. Each requirement specifies one function.

It is also common to use specific language when defining requirements to demonstrate intention. Typically, we use SHALL for a mandatory requirement and SHOULD for a nonmandatory requirement. Nonmandatory requirements let us express desired system attributes.
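The SHALL/SHOULD convention lends itself to a simple mechanical check. The sketch below is purely illustrative (the `classify` helper and the example statements are not from the article); it shows how requirement statements could be sorted into mandatory and nonmandatory sets by their imperative keyword.

```python
# Hypothetical helper: classify a requirement statement by its imperative
# keyword. SHALL marks a mandatory requirement, SHOULD a nonmandatory one.
def classify(requirement: str) -> str:
    words = requirement.upper().split()
    if "SHALL" in words:
        return "mandatory"
    if "SHOULD" in words:
        return "nonmandatory"
    return "unclassified"  # statement lacks the expected keyword
```

A check like this can flag requirements that accidentally use weaker phrasing ("will", "must") and so fall outside the agreed convention.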

After we have established our requirements baseline, best practice is to create a compliance matrix, stating compliance for each requirement. We can also start establishing our verification strategy by assigning a verification method for each requirement. These methods are generally Test, Analysis, Inspection, Demonstration, and Read Across. Creating the requirements along with the compliance and verification matrices enables us to:

Clearly understand the system behavior.

Demonstrate the verification methods to both internal test teams and external customers. This identifies any difficult test methods early in the development and allows us to determine the resources required.

Identify technical performance metrics. These spring from the compliance matrix and comprise requirements that are at risk of not achieving compliance.
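The steps above can be sketched as a minimal compliance and verification matrix. The requirement IDs, texts, methods and statuses below are invented for illustration; a real matrix would be generated from the project's requirements baseline.

```python
# Illustrative compliance/verification matrix (all entries are assumptions).
requirements = [
    {"id": "REQ-001", "text": "The unit SHALL operate from a 28 V supply.",
     "method": "Test", "compliance": "Compliant"},
    {"id": "REQ-002", "text": "The unit SHALL dissipate less than 10 W.",
     "method": "Analysis", "compliance": "At Risk"},
    {"id": "REQ-003", "text": "The unit SHOULD provide a debug UART.",
     "method": "Inspection", "compliance": "Compliant"},
]

# Technical performance metrics spring from requirements at risk of
# not achieving compliance.
tpms = [r["id"] for r in requirements if r["compliance"] == "At Risk"]

# Grouping by verification method identifies difficult test methods early
# and helps size the verification resources required.
by_method = {}
for r in requirements:
    by_method.setdefault(r["method"], []).append(r["id"])
```

Even a spreadsheet-level structure like this makes the at-risk requirements, and the test resources each method demands, visible from day one.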

Every engineering project encompasses a number of budgets, which we should allocate to solutions identified within the architecture. Budget allocation ensures that the project achieves the overall requirement and that the design lead for each module understands the module’s allocation in order to create an appropriate solution. Typical areas for which we allocate budgets are the total mass for the function; the total power consumption for the function; reliability, defined as either mean time between failures or probability of success; and the allowable crosstalk between signal types within a design (generally a common set of rules applicable across a number of functions).

One of the most important aspects of establishing the engineering budgets is to ensure that we have a sufficient contingency allocation. We must resist the desire to pile contingency upon contingency, however, as this becomes a significant technical driver that will affect schedule and cost.
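A budget check of this kind reduces to simple arithmetic. In the sketch below, the module names, per-module figures and the single 10% system-level contingency are assumptions for illustration, not values from the article.

```python
# Power-budget allocation check (all figures are illustrative).
TOTAL_POWER_W = 20.0   # system-level power requirement
CONTINGENCY = 0.10     # one contingency, held at system level -- not
                       # contingency piled upon contingency per module

allocations_w = {      # per-module allocations from the architecture
    "power_conditioning": 3.0,
    "processing_core": 9.0,
    "analog_io": 4.0,
    "clocks_and_reset": 1.0,
}

allocated = sum(allocations_w.values())
# Headroom remaining after reserving the contingency; negative headroom
# means the allocations already break the budget.
headroom = TOTAL_POWER_W * (1 - CONTINGENCY) - allocated
```

Holding a single contingency at the top level, rather than inside every module, keeps the summed budget from ballooning into a schedule and cost driver.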

From the generation of the compliance matrix and the engineering budgets, we should be able to identify the technically challenging requirements. Each of these at-risk requirements should have a clear mitigation plan that demonstrates how we will achieve the requirement. One of the best ways to demonstrate this is to use technology readiness levels (TRLs). There are nine TRLs, describing the progression of design maturity from basic principles observed (TRL 1) to full function and field deployment (TRL 9).

Assigning a TRL to each of the technologies used in our architecture, in conjunction with the compliance matrix, lets us determine where the technical risks reside. We can then effect a TRL development plan to ensure that as the project proceeds, the low TRL areas increase to the desired TRL. The plan could involve ensuring that we implement and test the correct functionality as the project progresses, or performing functional or environmental/dynamic testing during the project’s progression.
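A TRL development plan can start from a simple gap report. The technology names, current levels and target level below are assumptions for illustration only.

```python
# Illustrative TRL gap report (technology names and levels are invented).
current_trl = {
    "analog_io": 4,        # component validation in a lab environment
    "processing_core": 8,  # qualified through test and demonstration
    "mgt_links": 3,        # proof of concept only
}
target_trl = 6  # assumed maturity required before committing the design

# For each low-maturity technology, how many TRL steps must it climb?
at_risk = {tech: target_trl - trl
           for tech, trl in current_trl.items() if trl < target_trl}
```

The entries in `at_risk` are the areas where the TRL development plan must schedule functional or environmental/dynamic testing as the project progresses.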

Once we understand the required behavior of the embedded system, we need to create an architecture for the solution. The architecture will comprise the requirements grouped into functional blocks. For instance, if the embedded system must process an analog input or output, then the architecture would contain an analog I/O block. Other blocks may be more obvious, such as power conditioning, clocks and reset generation.

The architecture should not be limited to the hardware (electrical) solution, but should include the architecture of the FPGA/SoC and associated software. Of course, the key to modular design is good documentation of the interfaces to the module and the functional behavior.

One key aspect of the architecture is to show how the system is to be created at a high level so that the engineering teams can easily understand how it will be implemented. This step is also key for supporting the system during its operational lifetime.

When determining our architecture, we need to consider a modular approach that not only allows reuse on the current project but also enables reuse in future projects. Modularity requires that we consider potential reuse from day one and that we document each module as a standalone unit. In the case of internal FPGA/SoC modules, a common interface standard such as the ARM AMBA Advanced Extensible Interface (AXI) facilitates reuse.

An important benefit of modular design is the potential ability to use commercial off-the-shelf modules for some requirements. COTS modules let us develop systems faster, as we can focus our efforts on those aspects of the project that can best benefit from the added value of our expertise.

The system power architecture is one area that can require considerable thought. Many embedded systems will require an isolating AC/DC or DC/DC converter to ensure that failure of the embedded system cannot propagate. Figure 2 provides an example of a power architecture. The output rails from this module will require subregulation to provide voltages for the processing core and conversion devices. We must guard against significant switching losses and efficiency degradation in these stages. As efficiency decreases, the system’s thermal dissipation increases, which can affect unit reliability if not correctly addressed.
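The link between cascaded efficiency and thermal dissipation is worth making concrete. The stage efficiencies and load figure below are illustrative, not from the article: with two conversion stages in series, the overall efficiency is the product of the stage efficiencies, and everything the load does not consume becomes heat.

```python
# Back-of-envelope dissipation from cascaded conversion stages
# (efficiencies and load power are illustrative assumptions).
load_w = 10.0          # power delivered to the load
eta_isolating = 0.90   # isolating DC/DC converter stage
eta_pol = 0.85         # point-of-load subregulation stage

eta_overall = eta_isolating * eta_pol   # cascaded stages multiply
input_w = load_w / eta_overall          # power drawn from the supply
dissipated_w = input_w - load_w         # heat the unit must shed
```

Even two individually respectable stages leave the overall efficiency near 76% here, and roughly 3 W of the 13 W drawn ends up as heat that the thermal design must address.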

We must also take care to understand the behavior of the linear regulators used and the requirements for further filtering on the power lines. This need arises as devices such as FPGAs and processors switch at far higher frequencies than a linear regulator’s control loop can address. As the noise increases in frequency, the noise rejection of the linear regulator decreases, resulting in the need for additional filtering and decoupling. Failure to understand this relationship has caused issues in mixed-signal equipment.
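One common mitigation is an LC filter after the regulator to cover the high-frequency band where its noise rejection has rolled off. The component values below are illustrative assumptions; the point is only that the filter's corner frequency must sit well below the switching noise we need to attenuate.

```python
import math

# Corner frequency of an illustrative LC filter placed after a linear
# regulator: f_c = 1 / (2 * pi * sqrt(L * C)).
l_henries = 1e-6   # 1 uH series inductor (or ferrite-bead equivalent)
c_farads = 10e-6   # 10 uF decoupling capacitor

f_corner = 1 / (2 * math.pi * math.sqrt(l_henries * c_farads))
# Noise above f_corner is attenuated by the filter, covering the band
# where the regulator's control loop can no longer respond.
```

With these example values the corner lands around 50 kHz, far below the megahertz-range switching noise of an FPGA or processor core, so the filter takes over where the regulator gives up.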

Another important consideration is the clock and reset architecture, especially if there are several boards that require synchronization. At the architectural level, we must consider the clock distribution network: Are we fanning out a single oscillator across multiple boards or using multiple oscillators of the same frequency? To ensure the clock distribution is robust, we must consider:

Oscillator startup time. We must ensure that the reset is asserted throughout that period if required.

Oscillator skew. If we are fanning out the oscillator across several boards, is timing critical? If so, we need to consider skew both on the circuit cards (introduced by the connectors) and skew introduced by the buffer devices themselves.

Oscillator jitter. If we are developing a mixed-signal design, we need to ensure a low-jitter clock source because increases in jitter reduce the mixed-signal converter’s signal-to-noise ratio. This is also the case when we use multigigabit serial links, as we require a low-jitter source to obtain a good bit error rate over the link.
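The jitter-to-SNR relationship can be quantified with the standard jitter-limited SNR formula, SNR = −20·log10(2π·f_in·t_j), where f_in is the analog input frequency and t_j the rms clock jitter. The frequency and jitter values below are illustrative assumptions.

```python
import math

# Jitter-limited SNR of a sampled system: SNR = -20*log10(2*pi*f_in*t_j).
# Input frequency and rms jitter below are illustrative values.
f_in = 100e6       # analog input frequency, Hz
t_jitter = 1e-12   # rms sampling-clock jitter, seconds

snr_db = -20 * math.log10(2 * math.pi * f_in * t_jitter)
```

With a 100 MHz input and 1 ps rms jitter, the converter's SNR is capped near 64 dB regardless of how many bits the converter nominally provides; doubling either the input frequency or the jitter costs about 6 dB.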

We must also pay attention to the reset architecture, ensuring that we only apply the reset where it is actually required. SRAM-based FPGAs, for example, typically do not need a reset. If we are using an asynchronous assertion of the reset, we need to ensure that its removal cannot result in a metastability issue.

Formal documentation of both internal and external interfaces provides clear definition of the interfaces at the mechanical, physical and electrical levels, along with protocol and control flows. These formal documents are often called interface control documents (ICDs). Of course, it is best practice to use standard communication interfaces wherever possible.

One of the most important areas of interface definition is the “connectorization” of the external interfaces. This process takes into account the pinout of the required connector, the power rating of the connector pins and the number of mating cycles required, along with any requirements for shielding.

As we consider connector types for our system, we should ensure that there cannot be inadvertent cross connection due to the use of the same connector type within the subsystem. We can avoid the possibility of cross connection by using different connector types or by employing different connector keying, if supported.

Connectorization is one of the first areas in which we begin to use aspects of the previously developed budgets. In particular, we can use the crosstalk budget to guide us in defining the pinout.

The example in Figure 3 illustrates the importance of this process. Rearranging the pinout to place the ground reference voltage (GND) pin between Signal 1 and Signal 2 would reduce the mutual inductance and hence the crosstalk.
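A first-order estimate of this inductive crosstalk is v = L_m·di/dt, the voltage induced on the victim pin by the aggressor's changing current. The mutual inductance and edge figures below are purely illustrative.

```python
# First-order crosstalk between adjacent connector pins:
# v_noise = L_mutual * di/dt (all values are illustrative assumptions).
l_mutual = 2e-9   # mutual inductance between adjacent pins, henries
di = 0.05         # aggressor current step, amperes
dt = 1e-9         # edge time, seconds

v_noise = l_mutual * di / dt   # induced noise on the victim pin, volts
# Placing a GND pin between the signals lowers l_mutual, and hence
# v_noise, which is why the pinout rearrangement in Figure 3 helps.
```

Even a modest 50 mA edge in 1 ns induces 100 mV here, enough to matter on a low-level analog signal, which is why the crosstalk budget should drive the pinout.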

The ICD must also define the grounding of the system, particularly when the project has external EMC requirements. In this case, we must take care not to radiate the noisy signal ground.

Engineers and project managers have a number of strategies at their disposal to ensure they deliver embedded systems that meet the quality, cost and schedule requirements. When a project encounters difficulties, however, we can be assured that, without significant change on the project, its past performance will be a good indicator of its future performance.

FURTHER READING

Nuts and Bolts of Designing an FPGA into Your Hardware. Xcell Journal, 82, 42-49.

A Pain-Free Way to Bring Up Your Hardware Design. Xcell Journal, 85, 48-51.

Design Reliability: MTBF Is Just the Beginning. Xcell Journal, 88, 38-43.







Note: This article originally appeared in Xcell Software Journal, Issue 3.