When Optimizing One Objective Costs Another

A system with two performance objectives — say, Subsystem A performance (P_A) and Subsystem B performance (P_B) — has a feasible region: the set of (P_A, P_B) pairs achievable given the shared resources.

Within the feasible region, the Pareto frontier is the boundary where you cannot improve P_A without degrading P_B, or vice versa. Every point on this frontier is a valid system optimum, depending on the weights assigned to each objective.

Component A optimum: maximize P_A without regard to P_B. This lies at the rightmost feasible point — on the Pareto frontier at the extreme where P_A is maximized and P_B is sacrificed.

Component B optimum: maximize P_B without regard to P_A. Similarly, at the top of the frontier with P_B maximized.

System optimum: somewhere on the interior of the Pareto frontier, balancing both objectives. It lies between the two component optima. Neither component runs at its individual maximum — but the system as a whole performs best.
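The frontier can also be extracted numerically from any sampled feasible region. A minimal sketch, using invented (P_A, P_B) sample points purely for illustration: a point is on the frontier exactly when no other point is at least as good in both objectives and strictly better in one.

```python
# Extract the Pareto frontier from a sampled feasible region.
# A point is dominated if some other point is at least as good in
# both objectives and strictly better in at least one.

def pareto_frontier(points):
    """Return the non-dominated (P_A, P_B) pairs, sorted by P_A."""
    frontier = []
    for p in points:
        dominated = any(
            q[0] >= p[0] and q[1] >= p[1] and q != p for q in points
        )
        if not dominated:
            frontier.append(p)
    return sorted(frontier)

# Hypothetical feasible points for illustration.
feasible = [(1.0, 0.2), (0.8, 0.8), (0.2, 1.0), (0.5, 0.5), (0.3, 0.3)]
print(pareto_frontier(feasible))  # interior points are filtered out
```

The component optima are the endpoints of the returned list; the system optimum is one of the interior frontier points, selected by the weights on the two objectives.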

Hamming's differential analyzer: the improved amplifiers maximized P_A (amplifier performance) but shifted the operating point away from the interface design envelope, degrading P_B (grounding/interference performance). The system optimum required backing off amplifier performance to stay within the interface's tolerance.

Pareto Frontier & System Optimum vs Component Optimum

Locate the System Optimum

A system has two subsystems. Subsystem A's performance P_A = 2x − x², achievable for x ∈ [0, 2]. Subsystem B's performance P_B = 2(1−x) − (1−x)², achievable for the same x. The shared variable x represents how a shared resource (say, bandwidth or power) is allocated between subsystems. Total performance: P_total = P_A + P_B.

Find the value of x that maximizes P_total. Then compare to the x that maximizes P_A alone and the x that maximizes P_B alone. Show that these three optima are at different values of x, and explain what this means for a systems engineer trying to decide how to allocate the shared resource.
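Once you have worked the calculus by hand, a quick grid search can check your answer. This sketch only locates the three maximizers numerically; the interpretation is yours to supply.

```python
# Grid search over x in [0, 2] for the allocation exercise above.

def P_A(x):
    return 2 * x - x ** 2

def P_B(x):
    return 2 * (1 - x) - (1 - x) ** 2

xs = [i / 1000 for i in range(2001)]  # x from 0.0 to 2.0 in steps of 0.001
x_A = max(xs, key=P_A)                            # component A optimum
x_B = max(xs, key=P_B)                            # component B optimum
x_sys = max(xs, key=lambda x: P_A(x) + P_B(x))    # system optimum
print(x_A, x_B, x_sys)  # three different allocations
```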

Feasible Regions & Binding Constraints

A system subject to constraints operates inside a feasible region F in parameter space. The constraints define the boundary of F.

Binding constraint: a constraint that is satisfied with equality at the optimum (the optimum lies on the constraint boundary).

Non-binding constraint: a constraint satisfied with strict inequality at the optimum (the optimum lies strictly inside the boundary).

The fundamental theorem of linear programming (a general result from optimization theory): for a linear objective over a bounded convex polytope, at least one optimum lies at a vertex of the feasible region, i.e., at an intersection of binding constraints. A nonconstant linear objective never attains its maximum in the interior; the optimum sits on the boundary, at a vertex (or along a whole face, when the objective is constant in some direction along that face).
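A small numerical illustration of the vertex result, using a hypothetical 2-D polytope: the best value of a linear objective over the triangle's vertices is never beaten by any interior sample.

```python
# Maximize c.x over the triangle with vertices (0,0), (4,0), (0,3).
# The fundamental theorem of LP says the optimum is at a vertex.

import random

c = (2.0, 3.0)  # linear objective coefficients (hypothetical)
vertices = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]

def objective(p):
    return c[0] * p[0] + c[1] * p[1]

best_vertex = max(objective(v) for v in vertices)

# Sample random points inside the triangle via barycentric coordinates.
random.seed(0)
def random_interior_point():
    a, b = sorted(random.random() for _ in range(2))
    w = (a, b - a, 1 - b)  # barycentric weights summing to 1
    return (sum(w[i] * vertices[i][0] for i in range(3)),
            sum(w[i] * vertices[i][1] for i in range(3)))

best_interior = max(objective(random_interior_point()) for _ in range(10000))
print(best_vertex, best_interior)  # no interior point beats the best vertex
```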

Hamming's rule 2 in geometric terms: the bounding conditions (constraints) of a system are often more important than the optimum values inside the bounds, because the optimum lies at the boundary, not in the interior. Designing the constraint structure correctly determines where the feasible region lies; once you have the region, the optimum is at its boundary.

Interface as shared constraint: the interface between two subsystems defines a shared constraint in the joint parameter space of both. Improving component A changes A's behavior at the interface, and may push the joint operating point outside component B's feasible region, violating the interface constraint.

Which Constraint Is Binding?

A communications system has three design variables: transmit power P (in watts), bandwidth B (in MHz), and noise figure NF (in dB). The data rate C = B · log₂(1 + P/(N₀ · B · 10^(NF/10))), where N₀ is the noise power spectral density.

The system has three constraints: P ≤ 10 W (power budget), B ≤ 20 MHz (spectrum allocation), NF ≤ 6 dB (hardware limit). The objective is to maximize C.

Without detailed numerical optimization: explain which of the three constraints you would expect to be binding at the system optimum, and why. Use the structure of the C formula to reason about which variable has the highest marginal impact on C at the constraint boundaries.
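One way to probe the question numerically after you have reasoned it through: evaluate C at the constraint corner and compare finite-difference marginals from relaxing each constraint by 1%. N₀ is not given in the exercise; the value below is an assumption for illustration only.

```python
import math

N0 = 1e-9  # assumed noise PSD in W/Hz, purely illustrative

def capacity(P, B_mhz, NF_db):
    """Data rate in Mbit/s for the C formula in the exercise."""
    B = B_mhz * 1e6  # convert MHz to Hz
    snr = P / (N0 * B * 10 ** (NF_db / 10))
    return B_mhz * math.log2(1 + snr)

# Evaluate at the constraint corner P = 10 W, B = 20 MHz, NF = 6 dB.
base = capacity(10, 20, 6)
dC_dP = capacity(10 * 1.01, 20, 6) - base    # relax power budget 1%
dC_dB = capacity(10, 20 * 1.01, 6) - base    # relax spectrum 1%
dC_dNF = capacity(10, 20, 6 - 0.06) - base   # improve noise figure 1%
print(base, dC_dP, dC_dB, dC_dNF)
```

All three marginals are positive at the corner, which is the signature of binding constraints; the logarithm in C explains why they differ in size.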

The Interface as a Shared Constraint

Model two subsystems A and B as operating in their own parameter spaces P_A and P_B. The interface between them defines a shared constraint: a relationship between a parameter in P_A and a parameter in P_B that must hold for the system to function.

Example: in Hamming's differential analyzer, the amplifiers (subsystem A) output a current I_out. The grounding circuit (subsystem B) can tolerate a maximum current I_max. The interface constraint: I_out ≤ I_max.

When you improve subsystem A (better amplifiers), I_out increases. If I_out > I_max, the interface constraint is violated — the two subsystems are no longer in a valid operating region of their joint parameter space.

Interface design principle: the interface constraint defines the boundary between valid and invalid operation. The component designer must know this boundary. The systems engineer must verify it is not violated when any component changes.

The interface is not the property of A or B alone — it belongs to the joint system. This is why component-level testing (testing A alone, testing B alone) misses interface failures. The constraint is only visible in the joint parameter space.
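A toy illustration of why component-level tests miss interface failures, with invented numbers in the spirit of the differential-analyzer story: each subsystem passes its own test in isolation, yet the joint constraint fails.

```python
# Hypothetical amplifier/grounding example. All numbers are invented.

def amplifier_output_current(gain):
    """Subsystem A: output current grows with gain (toy model)."""
    return 0.1 * gain  # amperes

AMP_SPEC_MAX_GAIN = 100   # A's own spec: gain within limit
GROUND_I_MAX = 8.0        # B's interface tolerance, amperes

gain = 95                 # the "improved" amplifier setting

# Component-level tests: each subsystem passes in isolation.
a_test_passes = gain <= AMP_SPEC_MAX_GAIN
b_test_passes = True  # B meets its own spec when fed <= I_max

# Interface test: only visible in the joint parameter space.
i_out = amplifier_output_current(gain)
interface_ok = i_out <= GROUND_I_MAX
print(a_test_passes, b_test_passes, interface_ok)  # True True False
```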

Interface Failure Analysis

A software system has two services: Service A (data ingestion) and Service B (data processing). Service A writes records to a message queue; Service B reads from the queue. The interface constraint: the message queue can hold at most 10,000 messages. Service A's throughput: T_A messages per second. Service B's throughput: T_B messages per second.

Express the interface constraint as a mathematical inequality relating T_A and T_B. Then: the team optimizes Service B to run 3× faster (T_B increases 3×) without changing Service A. Under what condition does this improvement not affect queue utilization? Under what condition could this improvement actually cause Service A to fail (hint: think about backpressure and flow control)?
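A discrete-time sketch of the queue dynamics can make the constraint concrete. The rates below are invented; writing down the inequality itself is still the exercise.

```python
# Simulate queue depth for the ingestion/processing pipeline.
# Each second, Service A enqueues t_a messages and Service B
# dequeues up to t_b; depth can never go below zero.

QUEUE_CAP = 10_000

def simulate(t_a, t_b, seconds):
    """Return (final_depth, constraint_held) after the run."""
    depth = 0
    for _ in range(seconds):
        depth = max(0, depth + t_a - t_b)
        if depth > QUEUE_CAP:
            return depth, False  # interface constraint violated
    return depth, True

# Hypothetical rates: A produces 500 msg/s.
print(simulate(500, 400, 120))   # backlog grows 100/s and overflows
print(simulate(500, 1200, 120))  # 3x-faster B keeps the queue empty
```

Note what the simulation cannot show on its own: if the queue applies backpressure, a much faster B changes the timing A experiences, which is exactly the kind of joint-space effect the hint asks about.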