Technical overview

The high-level design of the dialectic HDL mirrors the approach a team of engineers would take. Dialectic implements this as a pipelined series of grouped passes.

Note: This approach is largely inspired by the CIRCT project, with which this project may integrate as an additional dialect at some point.

Passes are grouped logically so that each pass in a group operates on similar data. For example, consider the group of passes below.

graph TB
A[Define Environmental Constraints] --> C;
B[Define Manufacturing Constraints] --> C;
C[Finalize BOM and manufacturing constraints]

First we constrain the environmental conditions that the end product needs to operate in. For example, this may look something like:

let environment = dialectic::environment()
                    .temperature_range(-10.0,60.0)
                    .humidity_range(0.0, 80.0)
                    .pressure_range(500.mbar(), 1500.mbar());

// Manufacturing constraints from PCBway
let pcb_constraints = dialectic::pcb_manufacturing()
                    .stackup(
                        dialectic::Stackup::new(vec![
                            Material::Silkscreen(None),
                            Material::Soldermask(None),
                            Material::Copper(35.um()),
                            Material::Prepreg(0.11.mm(), 4.29.rel_permittivity()),
                            // ...
                        ])
                    )
                    .width_range(5.0, 500.0)
                    .height_range(6.0, 1100.0)
                    .min_trace(0.1)
                    .min_spacing(0.1);
                    // more constraints... ;

let assembly_constraints = dialectic::assembly()
                    .components_on_top()
                    .components_on_bottom()
                    .max_reflow_temperature(270.0);
                    // more constraints... ;

let preconditions = dialectic::Preconditions::new(
    environment,
    pcb_constraints,
    assembly_constraints,
);

Note: For the time being we will be using Rust as a generator language, but this may change depending on requirements.

By setting these preconditions we can:

  • Build a set of filters for component selection, based on:
    • Component size.
    • Spacing between pads on footprints.
    • Operational environment.
    • etc.
  • Define a set of constraints for placement and routing.

High level pipeline

While the first example shows how a group of design passes might work, it doesn't give a full overview of what the entire system will look like.

graph TB
A[Analysis Preconditions] --> C;
B[Behavioral definitions] --> C;
C[Construct BOM] --> D;
D[Generate netlist] --> E;
D -->|Failure| C;
E[Automatically place and route] -->|Success| F;
E -->|Failure| C;
F[Generate manufacturing files]

To the untrained eye this system may seem like a simple one to put together. However, there are some very challenging problems to solve.

Analysis preconditions

Note: This section is incomplete.

This stage in the pipeline is used to constrain your design, e.g.:

  • The operating temperature, pressure, humidity ranges.
  • The PCB manufacturing process.
  • The PCB assembly process.

This stage isn't intended to do anything particularly fancy. Instead, it is just a means of collecting data used to constrain the design at later stages in the pipeline.

Operating environment

The following set of constraints is proposed to optionally constrain the design of your device:

  • Temperature range.
  • Pressure range.
  • Vibration.
  • Humidity.
  • Heat dissipation.

PCB Manufacturing

The following set of constraints is proposed to optionally constrain the design of your device:

Vias

  • Minimum annular ring width.
  • Drill sizes (list)
  • Microvias
  • Blind vias
  • Buried vias
  • Plug hole diameter

Clearance

  • Minimum trace width
  • Hole to hole clearance

Stackup

  • List of material variants
    • Fiberglass (FR4)
    • Copper
    • Prepreg
    • Solder-mask
    • etc.

Behavioral definitions

Note: This section is incomplete.

Functionally, when we use a circuit we don't care how that circuit is implemented, nor do we care about its internal workings. We care how it behaves and what value it adds to our lives.

So if what we care about is how a circuit adds value to our lives, would it not make sense to start at a high level of abstraction and simply ask: what do I want my circuit to be able to do?

Consider an example of a fitness tracker that counts your steps and displays that count on a screen, but does nothing else.

If we break this behavior down into its components we get a:

  • Sensor to detect a step,
  • Computation system to interpret sensor output,
  • Counter to keep track of steps,
  • Display to show us the steps,

TODO: Add a code example here defining custom constraints.
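Until that example is written, here is one possible shape such a definition could take. This is a minimal, self-contained sketch: the `Behavior` type, its fields, and the use of a minimum rate are all hypothetical, not settled dialectic API.

```rust
// Hypothetical sketch of behavioral definitions for the step counter.
// None of these types exist in dialectic yet; they only illustrate the idea
// that behaviors are declared abstractly and constrained numerically.

#[derive(Debug, Clone)]
struct Behavior {
    name: &'static str,
    /// Minimum sampling/update rate in Hz, if the behavior is rate-sensitive.
    min_rate_hz: Option<f64>,
}

fn step_counter_behaviors() -> Vec<Behavior> {
    vec![
        // 4 x the 9.6 steps/s worst case used elsewhere in this document.
        Behavior { name: "sense acceleration", min_rate_hz: Some(38.4) },
        Behavior { name: "interpret sensor output", min_rate_hz: None },
        Behavior { name: "count steps", min_rate_hz: None },
        Behavior { name: "display step count", min_rate_hz: None },
    ]
}

fn main() {
    let behaviors = step_counter_behaviors();
    assert_eq!(behaviors.len(), 4);
    assert_eq!(behaviors[0].min_rate_hz, Some(38.4));
    println!("{behaviors:#?}");
}
```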

Constructing a BOM

Now that we have defined our operational and manufacturing constraints, we can start selecting components. At a high level, this process follows these steps:

  • Pre-filtering based on manufacturing and operating constraints.
  • For each behavior, find a set of components/predefined sub-circuits that meet your needs, then narrow to a single instance based on some heuristic.
  • Each of the narrowed components/sub-circuits has its own constraints. These constraints should be handled recursively.
  • An end state for BOM generation is reached when each recursively chosen component has its requirements met.
  • Optionally, run a set of cost-reduction passes. For example, while the first set of 'seed' components might have been chosen to be the cheapest, the sum of their supporting (i.e. recursively chosen) parts may be more expensive.

This type of process is predicated on having highly detailed:

  • Footprints,
  • Component descriptions (aka schematic libraries),
  • Predefined functional sub-circuits.

Note: While automation is great, sometimes, for very specific reasons, you want to choose a very specific part. It is the intention that each stage/pass in this process can be manually overridden.

Pre-filtering

Pre-filter the parts database based on manufacturing and operating constraints. E.g. the following database of resistors, with this pre-filter:

  • Min temp: -20
  • Min copper spacing: 0.65mm
Value | Min temp | Package
10k   | -5       | 0402
10k   | -10      | 0805
10k   | -40      | 0805
11k   | -40      | 0805

Would be reduced to:

Value | Min temp | Package
10k   | -40      | 0805
11k   | -40      | 0805
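The reduction above can be sketched as a simple predicate over the parts table. This is a self-contained illustration: the `Resistor` type and the per-package pad-spacing figures are invented for this sketch, with pad spacing approximated from package size.

```rust
// Minimal sketch of the pre-filtering pass. The Resistor type and the
// pad-spacing numbers per package are illustrative only.

#[derive(Debug, Clone, PartialEq)]
struct Resistor {
    value_ohms: f64,
    min_temp_c: f64,
    package: &'static str,
}

/// Approximate pad-to-pad spacing for a package, in mm (invented figures).
fn pad_spacing_mm(package: &str) -> f64 {
    match package {
        "0402" => 0.5,
        "0805" => 0.9,
        _ => 0.0,
    }
}

/// Keep only parts that survive the operating/manufacturing constraints.
fn pre_filter(parts: &[Resistor], max_min_temp_c: f64, min_spacing_mm: f64) -> Vec<Resistor> {
    parts
        .iter()
        .filter(|p| p.min_temp_c <= max_min_temp_c && pad_spacing_mm(p.package) >= min_spacing_mm)
        .cloned()
        .collect()
}

fn main() {
    let db = vec![
        Resistor { value_ohms: 10_000.0, min_temp_c: -5.0, package: "0402" },
        Resistor { value_ohms: 10_000.0, min_temp_c: -10.0, package: "0805" },
        Resistor { value_ohms: 10_000.0, min_temp_c: -40.0, package: "0805" },
        Resistor { value_ohms: 11_000.0, min_temp_c: -40.0, package: "0805" },
    ];
    // Design must survive -20C, and the fab wants >= 0.65mm copper spacing.
    let kept = pre_filter(&db, -20.0, 0.65);
    assert_eq!(kept.len(), 2);
    assert!(kept.iter().all(|p| p.min_temp_c <= -20.0 && p.package == "0805"));
    println!("{kept:?}");
}
```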

Behavior matching

For each behavior, search through your database of parts/predefined sub-circuits and find a set of 'candidate' components that meet your requirements. This may involve guiding the process from behavior down to function.

Using some guiding heuristic (e.g. minimizing component cost), choose a single component/sub-circuit that meets the behavior. In some cases this can be specialized down to a physical interface, e.g. a resistor with the following constraints:

  • Value: 10k
  • Min temperature: -20
  • Package: 0402

It is not necessary to specialize down to a particular part number at this point as there are multiple parts that can meet the 0402 physical interface. This can only be done on parts that have the same physical footprint/interface.

This first set of matching components are considered 'seeds' for the rest of the schematic, from which the schematic will naturally grow.

Recursive behavior matching

As previously mentioned, the 'seed' components have their own set of constraints and requirements. E.g. an accelerometer chosen in the previous step may need 1.8-3.3V to operate, so the next stage of recursive matching might pull in a buck converter and/or a battery.
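The 'seed and grow' loop can be sketched as a worklist over requirements. This is a toy model: the database shape, names, and one-candidate-per-requirement matching are assumptions for illustration; a real pass would score multiple candidates per requirement.

```rust
use std::collections::HashMap;

// Toy worklist model of recursive matching: each requirement maps to exactly
// one part, and each chosen part pushes its own requirements back onto the
// worklist until everything is satisfied.

fn resolve_bom(
    db: &HashMap<&'static str, (&'static str, Vec<&'static str>)>,
    top_level: &[&'static str],
) -> Vec<&'static str> {
    let mut bom: Vec<&'static str> = Vec::new();
    let mut work: Vec<&'static str> = top_level.to_vec();
    while let Some(req) = work.pop() {
        let (part, follow_on) = &db[req];
        if !bom.contains(part) {
            bom.push(*part);
            // The chosen part's own requirements are resolved recursively.
            work.extend(follow_on.iter().copied());
        }
    }
    bom
}

fn main() {
    let mut db = HashMap::new();
    db.insert("acceleration", ("Accel101", vec!["power", "spi controller"]));
    db.insert("spi controller", ("ControlFreak1000", vec!["power"]));
    db.insert("power", ("Battery", vec![]));
    let bom = resolve_bom(&db, &["acceleration"]);
    // The battery is shared: it appears once even though two parts need power.
    assert_eq!(bom.len(), 3);
    for part in ["Accel101", "ControlFreak1000", "Battery"] {
        assert!(bom.contains(&part));
    }
    println!("{bom:?}");
}
```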

BOM optimization (Optional)

There are many opportunities for BOM cost reduction. While each recursive stage might be optimized for cost, this amounts to a greedy and therefore non-optimal search. There are also multiple areas where constraints can be relaxed to allow a greater range of freedom in BOM reduction. E.g. consider three parts A, B and C that are currently candidates for the BOM.

Part A has 2 pins

  • GND
  • VDD
    • Requires 1.8-3.3V @ up to 50mA
    • Requires 80-200nF of capacitance for decoupling

Part B has 2 pins

  • GND
  • VDD
    • Requires 3.0-3.3V @ up to 800mA
    • Requires 90-150nF of capacitance for decoupling

Part C has 2 pins

  • GND
  • VDD
    • Requires 1.8-2.5V @ up to 5mA
    • Requires 90-150nF of capacitance for decoupling

It's fairly reasonable to assume that:

  • A & C can share a power rail.
  • A & B can share a power rail.
  • B & C cannot share a power rail.

So at least two power rails will be required. However, all three have overlapping decoupling-capacitance ranges, so a single capacitor value can be chosen for all three, reducing the unique component count; e.g. a 100nF capacitor with a 10% tolerance works for all three.
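The two checks used above — whether two parts can share a supply rail, and whether a toleranced capacitor fits inside every part's decoupling window — are both interval tests, sketched here (types and names are illustrative only):

```rust
// Sketch of the two range checks: rail sharing is an interval overlap test,
// and a capacitor fits only if its whole tolerance band sits in the window.

#[derive(Clone, Copy)]
struct Range {
    min: f64,
    max: f64,
}

fn overlaps(a: Range, b: Range) -> bool {
    a.min.max(b.min) <= a.max.min(b.max)
}

/// A capacitor fits if its entire tolerance band lies inside the range.
fn cap_fits(nominal_nf: f64, tolerance: f64, required: Range) -> bool {
    nominal_nf * (1.0 - tolerance) >= required.min
        && nominal_nf * (1.0 + tolerance) <= required.max
}

fn main() {
    let a_vdd = Range { min: 1.8, max: 3.3 };
    let b_vdd = Range { min: 3.0, max: 3.3 };
    let c_vdd = Range { min: 1.8, max: 2.5 };
    assert!(overlaps(a_vdd, c_vdd)); // A & C can share a rail
    assert!(overlaps(a_vdd, b_vdd)); // A & B can share a rail
    assert!(!overlaps(b_vdd, c_vdd)); // B & C cannot

    let a_cap = Range { min: 80.0, max: 200.0 };
    let b_cap = Range { min: 90.0, max: 150.0 };
    let c_cap = Range { min: 90.0, max: 150.0 };
    // A 100nF, 10% capacitor spans 90-110nF, which fits all three parts.
    for r in [a_cap, b_cap, c_cap] {
        assert!(cap_fits(100.0, 0.10, r));
    }
    println!("rail and decoupling checks pass");
}
```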

Example

Note: This example has some known issues:

  • Doesn't address logic levels in database search.
  • Doesn't address multi-purpose pin-remapping and conflicts.
  • Doesn't address the relationship between cpu->spi_peripheral.
  • Doesn't address the relationship between firmware size and flash storage.
  • SPI bus is shared between the accelerometer and display, but the count of GPIO chip selects isn't checked.
  • Doesn't include scenarios showing when multiple components match a given requirement.

There is a balance between writing a concise example and missing critical details. This is also a WIP and I haven't worked out all the details yet. If you think I've missed something important, please open an issue. If you think you could improve this example, feel free to contact me under the discussions tab.

Let's go back to our step counter, and see how this recursive component selection might work on a simplified database of parts.

So let's say we start with a 'sensor to detect a step'. First we need to map this requirement to a physical phenomenon that can be measured. When we take a step, we accelerate slightly forward and then slightly backward as our foot impacts the ground; we also likely bob a little vertically. So a sensor that can measure acceleration, with some post-processing, can count steps! It's likely we will want to constrain this further. For example, we don't want a sensor that can only sample once a second, since it couldn't resolve more than 0.5 steps/s. The world record for skips per second is 9.6 skips/s, and a skip is roughly the same motion as a step, so let's use that as our baseline. At that rate we would need at least 2x9.6=19.2Hz to prevent aliasing, but we'd likely want significantly more, so let's go with 4x9.6=38.4Hz as our minimum sampling rate.

NOTE: there might be other ways of doing this, e.g. using a ranging sensor that measures distance to the ground. So it's still useful to keep the original abstract concept of a step counter.

graph TB
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

This is where we can start searching for a part in our database. To simplify this example I've only included one sensor that will work, so we search for an 'Accelerometer' sensor and find the part 'Accel101', which matches our sampling requirements.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
E[Accel101]-->|provides|A

Now we find that 'Accel101' has requirements of its own. Specifically, it needs power and an SPI controller. So we search for a power provider that will work for our accelerometer and find a battery.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

E[Accel101]-->|provides acceleration|A
F[Battery]-->|provides power|E

We then search for a device that can act as an SPI controller and find the 'ControlFreak1000' micro-controller.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

E[Accel101]-->|provides acceleration|A
F[Battery]-->|provides power|E
G[ControlFreak1000]-->|provides SPI controller|E

The ControlFreak micro-controller also, by coincidence, happens to provide functionality for two more of our behavioral definitions via its CPU:

  • Computation system to interpret sensor output,
  • Counter to keep track of steps.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

E[Accel101] --> |provides acceleration| A
F[Battery] --> |provides power| E
G[ControlFreak1000] --> |provides SPI controller|E
G --> |Provides computation| B
G --> |Provides counter| C

The micro-controller, however, has its own requirements. Again we have to solve the power problem. As we've already chosen the battery, and the voltage range is compatible, we just have to check that the battery can provide enough current for both the accelerometer and the micro-controller at the same time. 0.01 + 0.1 < 1, so we can keep the battery in our proposed BOM.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

E[Accel101] --> |provides acceleration| A
F[Battery] --> |provides power| E
G[ControlFreak1000] --> |provides SPI controller|E
F[Battery] --> |provides power| G
G --> |Provides computation| B
G --> |Provides counter| C

We have one final behavior to meet: 'Display to show us the step count'. We search through our database and find our 'Display100' part. Again the display requires some power, so we check our current BOM for anything that can provide it and find the battery. We just need to check that the battery can supply the micro-controller, accelerometer and display all at once. It can, so at this point we have an automatically generated BOM that meets our needs.

graph BT
A[Sensor to detect a step]
B[Computation system to interpret sensor output]
C[Counter to keep track of steps]
D[Display to show us the step count]
style A fill:#f9f
style B fill:#f9f
style C fill:#f9f
style D fill:#f9f

E[Accel101] --> |provides acceleration| A
F[Battery] --> |provides power| E
G[ControlFreak1000] --> |provides SPI controller|E
F[Battery] --> |provides power| G
G --> |Provides computation| B
G --> |Provides counter| C
H[Display] --> |Provides display|D
G[ControlFreak1000] --> |provides SPI controller|H
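The battery checks performed along the way reduce to a simple current-budget test, sketched here with the example numbers (Accel101 up to 10mA, ControlFreak1000 up to 100mA, Display100 up to 100mA, against a 1A battery):

```rust
// Current-budget check from the step-counter walkthrough. Loads are the
// worst-case draws from the example database; the battery supplies up to 1A.

fn battery_can_supply(budget_a: f64, loads_a: &[f64]) -> bool {
    loads_a.iter().sum::<f64>() <= budget_a
}

fn main() {
    // Accelerometer + micro-controller: 0.01A + 0.1A < 1A, keep the battery.
    assert!(battery_can_supply(1.0, &[0.01, 0.1]));
    // Adding the display (up to 0.1A) still fits the budget.
    assert!(battery_can_supply(1.0, &[0.01, 0.1, 0.1]));
    println!("battery budget ok");
}
```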

Database

Part: ControlFreak1000

  • cpu: Cortex-M0
  • Spi: Provides controller
    • Count: 1
  • GPIO: Provides input or output
    • Count: 4
  • Power: Requires 1.8-3.3V @ up to 100mA
    • Decoupling:
      • Count: 4
      • Value: 100-200nF

Part: Cap1

  • value: 1uF,
  • tolerance: 10%,

Part: Cap3

  • value: 120nF,
  • tolerance: 10%,

Part: Accel101

  • Acceleration: Provides 1-10G 24bit 200Hz sampling rate
  • Spi: Requires controller
  • CS: Requires GPIO
  • Power: Requires 1.8-3.3V @ up to 10mA
    • Decoupling:
      • Count: 1
      • Value: 50-150nF

Part: Battery

  • Provides 1.8-2.2V @ up to 1A

Part: Display100

  • Display: Provides display
  • Power: Requires 1.8-5.0V @ up to 100mA
    • Decoupling:
      • Count: 1
      • Value: 0.5-10uF
  • Spi: Requires controller

Generating a netlist

Note: This description is still a work in progress.

The process of generating a netlist for a design is not dissimilar to the process of generating the BOM.

We take each constraint in the BOM and match abstract concepts against physical interfaces. In the previous step counter example we had an accelerometer that interfaced with a microcontroller via SPI. In many cases it's useful to defer a concrete netlist to the routing stage. Instead of saying that PinA1 is going to be SPI1-Clock and routing it as such, we can say that the accelerometer uses one of the SPI interfaces provided by the microcontroller and allow the routing algorithm to pin-swap to optimize the circuit.

Note: There is of course a problem with this approach: each pin has a finite set of alternative functions, and even if a micro-controller provides 4 SPI interfaces it might not be possible to use all of them if other GPIO pins are also in use. This seems like a solvable problem, but as it currently stands an algorithm to solve it hasn't been developed for dialectic.
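One way to represent this deferral — a sketch only, none of these types are committed dialectic API — is to let a net endpoint be either a concrete pin or an abstract claim on a resource class that the router resolves later:

```rust
// Sketch: a netlist endpoint is either pinned down or left as an abstract
// claim on a resource class ("any SPI clock on U1"), resolved during routing.

#[derive(Debug, Clone, PartialEq)]
enum Endpoint {
    /// A concrete pin, e.g. fixed by a connector or a single-function pad.
    Pin { part: &'static str, pin: &'static str },
    /// Any pin on `part` that can provide `function`; the router picks one.
    AnyProviding { part: &'static str, function: &'static str },
}

fn main() {
    // The accelerometer's SCLK must land on *some* SPI clock of the MCU.
    let net = (
        Endpoint::Pin { part: "Accel101", pin: "SCLK" },
        Endpoint::AnyProviding { part: "ControlFreak1000", function: "SPI-SCK" },
    );
    assert!(matches!(net.1, Endpoint::AnyProviding { .. }));
    println!("{net:?}");
}
```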

Automatic place and route

If you think you could improve this page, or feel I've missed something important, feel free to contact me under the discussions tab. Keep in mind that this description is far from complete.

It's worth pointing out that the PCB layout community is understandably jaded when it comes to auto-routing/placement. This comes from a long history of sub-par auto-routers. I personally have used commercial auto-routers and had them absolutely butcher my layout, ripping up more traces than they routed. So I am approaching this problem with equal parts optimism and deep shades of jade.

Auto-routing is hard! A quick read over the Wikipedia page describing auto-routing will turn up phrases like:

"Almost every problem associated with routing is known to be intractable."

However, the fact that these problems are 'intractable' is no excuse for poor performance from an auto-router. After all, we are able to route circuits manually, usually with 'better' results.

So if we can manually route a circuit, there is no reason why we can't develop an algorithm to do the same thing automatically. After all, we didn't evolve to lay out circuits, so we hold no fundamental evolutionary advantage, honed over tens of thousands of years, that a CPU lacks.

So why don't auto-routers perform up to our expectations? I don't have all the answers, but I have a set of hunches (which are open to challenge).

  1. Auto-routers don't have the same level of control as we do. E.g. usually they can't pin-swap, they can't move parts around (e.g. to make more space for vias or length matching), and they can't change package entirely if a different footprint would work better.
  2. We are biased to prefer manually routed circuits. In some cases auto-routers may perform as well as our hand-routed designs, but we don't recognize that because we are trained to like our circuits routed in a particular way.
  3. Typically engineers route a PCB using 'rules of thumb', which are usually simplifications of more complex physics. Search algorithms (i.e. auto-routers) will quickly find edge cases and loopholes in simplified rules, so we should give auto-routers more information about the underlying physics of electronics.
  4. We don't give auto-routers enough information about what we want our routing to look like. Without a suitable set of constraints we can't expect an auto-router to produce a decent layout. In other words: garbage in, garbage out.

Proposed solution for dialectic auto-routing

The current proposal for auto-routing is to use a multi-stage, iterative approach, giving the auto-router full control over the design process. A simplified pipeline would look something like the following.

graph TB
S[Start] --> A
style S fill:#f9f
A[Auto-place parts] --> B
subgraph Attempt place and route N times
B[Auto-route] --> |Failure| C
C[Slightly move parts] --> B
end
B --> |Failure| E
E[Modify part variants] --> A
B --> |Success| D
D[Done]

Routing algorithm

Routing algorithms started off fairly simple, with the first auto-routers using Lee's maze-solving algorithm, which is essentially a grid-based breadth-first search with backtracking. Today most high-end auto-routers use topological auto-routing, which tends to work better for high-density circuits.
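As a point of reference, Lee's algorithm is small enough to sketch in full: a breadth-first 'wave' is expanded from the source over a grid, then a path is recovered by backtracking from the target along strictly decreasing wave numbers. This toy version routes a single two-terminal net on one layer.

```rust
use std::collections::VecDeque;

// Minimal Lee's-algorithm sketch. In `grid`, 0 marks a free cell and -1 an
// obstacle. Returns the shortest path from src to dst, or None if unroutable.

fn lee_route(
    grid: &[Vec<i32>],
    src: (usize, usize),
    dst: (usize, usize),
) -> Option<Vec<(usize, usize)>> {
    let (h, w) = (grid.len(), grid[0].len());
    let mut dist = vec![vec![-1i32; w]; h]; // -1 = not yet reached by the wave
    let mut q = VecDeque::new();
    dist[src.0][src.1] = 0;
    q.push_back(src);
    // Wave expansion: plain breadth-first search.
    while let Some((r, c)) = q.pop_front() {
        if (r, c) == dst { break; }
        for (dr, dc) in [(0i32, 1i32), (0, -1), (1, 0), (-1, 0)] {
            let (nr, nc) = (r as i32 + dr, c as i32 + dc);
            if nr < 0 || nc < 0 || nr >= h as i32 || nc >= w as i32 { continue; }
            let (nr, nc) = (nr as usize, nc as usize);
            if grid[nr][nc] == -1 || dist[nr][nc] != -1 { continue; }
            dist[nr][nc] = dist[r][c] + 1;
            q.push_back((nr, nc));
        }
    }
    if dist[dst.0][dst.1] == -1 { return None; } // wave never reached the target
    // Backtrack: step to any neighbour whose wave number is one lower.
    let mut path = vec![dst];
    let (mut r, mut c) = dst;
    while (r, c) != src {
        for (dr, dc) in [(0i32, 1i32), (0, -1), (1, 0), (-1, 0)] {
            let (nr, nc) = (r as i32 + dr, c as i32 + dc);
            if nr < 0 || nc < 0 || nr >= h as i32 || nc >= w as i32 { continue; }
            let (nr, nc) = (nr as usize, nc as usize);
            if dist[nr][nc] == dist[r][c] - 1 {
                r = nr;
                c = nc;
                break;
            }
        }
        path.push((r, c));
    }
    path.reverse();
    Some(path)
}

fn main() {
    // A 3x4 grid with a wall forcing the route around it.
    let grid = vec![
        vec![0, 0, 0, 0],
        vec![0, -1, -1, 0],
        vec![0, 0, 0, 0],
    ];
    let path = lee_route(&grid, (1, 0), (1, 3)).expect("routable");
    assert_eq!(path.first(), Some(&(1, 0)));
    assert_eq!(path.last(), Some(&(1, 3)));
    assert_eq!(path.len(), 6); // 5 steps: the shortest detour around the wall
    println!("{path:?}");
}
```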

Potential improvements on topological-routing

There are a number of improvements that can be made to typical auto-routers. The first proposed here is return path routing.

Return path routing

Currently (at least in the auto-routers I've used) there is very little effort put into routing/managing the signal return path. What I'd like to propose is that each signal be treated as a pair of links (see topological link/knot theory): the signal itself, plus at least one return path through either ground or power, routed into the board. By simultaneously routing both the signal and its return path you can minimize EMI and improve signal integrity. While power and/or ground planes may still be used, it will likely still be useful to track routed return paths through those planes, as intersections between return paths may result in increased current density.

graph LR

A[Pin 1] -->|Signal| B[Pin 2]
B -->|Vdd return path| A
B -->|GND return path| A

This approach has the most potential when routing 1-2 layer boards. For example a ground trace can be routed parallel to signal traces to minimize EMI and maximize signal integrity.

There are also significant gains to be made in very high-speed designs, where signal integrity and EMI are much more difficult to manage.

Other constraints

In many cases there will be other constraints that need to be met for the circuit to work as part of a larger electromechanical design, e.g. connector locations, PCB shape, etc. How these constraints are applied is yet to be determined.