
align-public's Introduction


ALIGN: Analog Layout, Intelligently Generated from Netlists

ALIGN is an open-source automatic layout generator for analog circuits jointly developed under the DARPA IDEA program by the University of Minnesota, Texas A&M University, and Intel Corporation.

The goal of ALIGN (Analog Layout, Intelligently Generated from Netlists) is to automatically translate an unannotated (or partially annotated) SPICE netlist of an analog circuit to a GDSII layout. The repository also releases a set of analog circuit designs.

The ALIGN flow includes the following steps:

  • Circuit annotation creates a multilevel hierarchical representation of the input netlist. This representation is used to implement the circuit layout in a hierarchical manner.
  • Design rule abstraction creates a compact JSON-format representation of the design rules in a PDK. This repository provides a mock PDK based on FinFET technology (where the parameters are based on published data). These design rules are used to guide the layout and ensure DRC-correctness.
  • Primitive cell generation works with primitives, i.e., blocks at the lowest level of the design hierarchy, and generates their layouts. Primitives typically contain a small number of transistor structures (each of which may be implemented using multiple fins and/or fingers). A parameterized instance of a primitive is automatically translated to a GDSII layout in this step.
  • Placement and routing performs block assembly of the hierarchical blocks in the netlist and routes connections between these blocks, while obeying a set of analog layout constraints. At the end of this step, the translation of the input SPICE netlist to a GDSII layout is complete.

Documentation

ALIGN documentation

Inputs

  • Circuit design inputs

    A SPICE file and constraint files (optional) need to be placed in a common folder. The name of the folder, SPICE file, and top-design name should match. Some examples are provided to showcase the applications of constraints to control the layout of the design.

  • Library (SPICE format)

    A basic set of libraries is predefined within ALIGN to create a hierarchical layout. Designers can modify this based on their design style.

  • PDK: Abstracted design rules

    PDK setup needs to be configured for any new technology node. We provide multiple open-source PDK options.

Outputs

  • Design JSON: Final layout in JSON form which can be viewed using the ALIGN Viewer.
  • Layout GDS: Final layout of the design. The output GDS can be imported into any GDSII viewer.

Getting started

Docker image

If you are a user who does not require any changes to the source code, the recommended way to use ALIGN is the Docker image hosted on Docker Hub as darpaalign/align-public. Use the image darpaalign/align-public:latest for the latest version of ALIGN. You will first need to build a personalized image based on it to ensure that the files generated by containers have the appropriate user/group ID and permissions. You can do this using the dockerfile and build.sh script in the install/ directory. Detailed instructions to pull, build, and run the docker image are in docker/README.

Steps 0-3 below are to install ALIGN locally. Step 4 is to run ALIGN either locally or inside a docker container.

Step 0: Check prerequisites

The following dependencies must be met by your system:

  • gcc >= 6.1.0 (for C++14 support)
  • python >= 3.7 (for PEP 560 support)

You may optionally install Boost & lp_solve using your distro package manager (apt, yum, etc.) to save some compilation time.

Note: In case you have multiple gcc versions installed on your system, we recommend explicitly setting the compiler paths as follows:

$ export CC=/path/to/your/gcc
$ export CXX=/path/to/your/g++

Step 1: Clone the ALIGN source code to your local environment

$ git clone https://github.com/ALIGN-analoglayout/ALIGN-public
$ cd ALIGN-public

Step 2: Create a Python virtualenv

Note: You may choose to skip this step if you are doing a system-wide install for multiple users. Please DO NOT skip this step if you are installing for personal use and/or you are a developer.

$ python -m venv general
$ source general/bin/activate
$ python -m pip install pip --upgrade

Step 3a: Install ALIGN as a USER

If you already have a working installation of Python 3.8 or above, the easiest way to install ALIGN is:

$ pip install -v .

Step 3b: Install ALIGN as a DEVELOPER

If you are a developer, you may wish to install ALIGN with some additional flags.

For python developers:

$ pip install -e .[test]

The -e or --editable flag generates links to the align package within your current directory. This allows you to modify Python files and test them immediately. You will still need to re-run this command to rebuild your C++ collateral (when changing branches, for example). More on that below.

For ALIGN (C++) Extension developers:

$ pip install setuptools wheel pybind11 scikit-build cmake ninja
$ pip install -v -e .[test] --no-build-isolation
$ env BUILD_TESTING='ON' pip install -v --no-build-isolation -e . --no-deps

The second command doesn't just install ALIGN in place; it also caches generated object files etc. under an _skbuild subdirectory. Re-running pip install -v -e .[test] --no-build-isolation will reuse this cache to perform an incremental build. We add the -v or --verbose flag to see the build flags in the terminal.

If you want the build type to be Release (-O3), you can issue the following three lines:

$ pip install setuptools wheel pybind11 numpy scikit-build cmake ninja
$ pip install -v -e .[test] --no-build-isolation
$ env BUILD_TYPE='Release' BUILD_TESTING='ON' pip install -v --no-build-isolation -e . --no-deps
or:

$ pip install setuptools wheel pybind11 numpy scikit-build cmake ninja
$ pip install -v -e .[test] --no-build-isolation
$ env BUILD_TYPE='RelWithDebInfo' BUILD_TESTING='ON' pip install -v --no-build-isolation -e . --no-deps

Use the Release mode if you are mostly developing in Python and don't need the C++ debugging symbols. Use RelWithDebInfo if you need both debug symbols and optimized code.

To profile runtime performance, run:

python -m cProfile -o stats $ALIGN_HOME/bin/schematic2layout.py $ALIGN_HOME/examples/sc_dc_dc_converter

Then in a python shell:

import pstats
from pstats import SortKey
p = pstats.Stats('stats')
p.sort_stats(SortKey.TIME).print_stats(20)

To run tests similar to the check-in and merge-to-master CI runs, run:

cd $ALIGN_HOME
# Checkin
pytest -vv
CI_LEVEL='checkin' pytest -n 4 -s -vv --runnightly --placer_sa_iterations 100 -- tests/integration/
# Merge to master
CI_LEVEL='merge' pytest -n 8 -s -vv --runnightly --maxerrors=20 --placer_sa_iterations 100 -- tests/integration/ tests/pdks

Step 4: Run ALIGN

You may run the ALIGN tool using a simple command-line tool named schematic2layout.py. For most common cases, you will simply run:

$ schematic2layout.py <NETLIST_DIR> -p <PDK_DIR> -c

For instance, to build the layout for telescopic_ota:

$ mkdir work && cd work
$ schematic2layout.py ../examples/telescopic_ota -p ../pdks/FinFET14nm_Mock_PDK/

For a full list of options supported by the tool, please use the following command:

$ schematic2layout.py -h

If you get the error libOsiCbc.so: cannot open shared object file, add ${ALIGN_HOME}/_skbuild/<OSname_Arch_PythonVer>/cmake-install/lib to your LD_LIBRARY_PATH, where ${ALIGN_HOME} is the path where ALIGN is installed. For example:

$ export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:${ALIGN_HOME}/_skbuild/linux-x86_64-3.8/cmake-install/lib

Design database:

Viewer:

The final output GDS can be viewed by importing it into Virtuoso or any GDS viewer:

  • KLayout: GDS viewer (WSL users would need to install xming for displays to work)
  • Viewer: Layout viewer to view output JSON file

align-public's People

Contributors

854768750, align-analoglayout, arvuce22, codacy-badger, cristhianroman667, dependabot[bot], desmonddak, fzl1029, jiteshp01, kkunal1408, kuangban, lastdayends, meghna09, mr-fang-vlsi, parijatm, sapatnekar, soneryaldiz, srini229, stevenmburns, tonmoydhar


align-public's Issues

Fix primitive to global router pin porting

Original comment from Steve below:

Let me know how you want to do the M3 porting. I currently get opens when I port both M2 and M3 lines (because I can't port the V2 without the global router crashing.)

We should do what makes sense and fix the issues that it causes.
Options seem to be:

  • Port metal2 only --- router doesn't know that it is connected internally
  • Port metal3 only --- router can't use existing metal2 for routes (metal2 in LEF as an obstruction)
  • Port metal2 and metal3 --- router thinks there is an open when only looking at the LEF view (can be solved by looking at the full layout)
  • Port metal2, metal3 and via2 --- router crashes but could be fixed

Remove # fins, polys & dummies from canvas init

Having MockPDK canvas rely on # fins, polys & dummies discourages canvas reuse & forces us to pass these parameters around to downstream processes.

For example, PnRPython has these values hardcoded right now. The whole flow will fail dramatically the minute someone tries a different combination of fins, polys & dummies.

This fix is needed before #203 can be committed to master.

Add testbench collateral for example circuits + Add more examples

Test benches should come with whatever simulation collateral is needed to successfully simulate the schematic view in Xyce. This includes spice models, test circuits and the tolerances associated with each circuit property. It is entirely possible we may need to do some Python based post-processing to actually extract the properties of interest from the Xyce waveforms. I suggest starting with the simplest properties that can be directly measured.

Please let me know if you need any help @meghna09 @tonmoydhar @arvuce22.
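As a toy illustration of the kind of Python post-processing this might involve (the column layout and the measurement here are made up, not a real Xyce interface), a swept-input gain measurement could look like:

```python
# Toy post-processing step: estimate gain from a swept-input table. A real
# flow would parse Xyce .prn output; the column layout here is made up.
def measure_gain(rows, vin_col, vout_col):
    """Small-signal gain approximated as delta(vout) / delta(vin) between
    the first and last rows of a sweep."""
    dvin = rows[-1][vin_col] - rows[0][vin_col]
    dvout = rows[-1][vout_col] - rows[0][vout_col]
    return dvout / dvin

sweep = [(0.50, 1.20), (0.51, 1.10), (0.52, 1.00)]   # (vin, vout) samples
gain = measure_gain(sweep, 0, 1)
```

Tolerances would then be checked per circuit property against such extracted numbers.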

Unclear model maps + redefined models in sub_circuit_identification

There are a few separate but related issues here:

  1. There is no mapping from the Transistor (M*) device types represented in basic_element.sp to default devices in SPICE. We should either refactor basic_elements.sp to refer exclusively to NMOS & PMOS for M* devices... Or we should add appropriate .model statements so that the mapping is clear. I elected to take the second route in Circuit/tests/basic_template.sp as shown below.
.model NMOS_VTL NMOS
.model PMOS_VTL PMOS
.model nmos_rvt NMOS
.model pmos_rvt PMOS
.model nch NMOS l=60N m=1 w=1u
.model pch NMOS l=60N m=1 w=1u
  2. We should not be overriding default SPICE types as it results in significant confusion. For example, we are overriding Res & Cap with subcircuit definitions in basic_element.sp. This makes it very confusing as to whether future Res / Cap instantiations should start with X* or R* / C*. For the new Spice Parser in Circuit, I am simply throwing an assertion if the user attempts to redefine something that's already in the library. I have commented out the lines of concern in Circuit/tests/basic_template.sp.
** You may not redeclare an existing library element
; .subckt Cap PLUS MINUS
; CC1 PLUS MINUS 60f
; .ends Cap
** You may not redeclare an existing library element
; .subckt Res PLUS MINUS
; RR1 PLUS MINUS 10k
; .ends Res
  3. Number of terminals is often ambiguous. We cannot be using 2-terminal devices to create 3-terminal ones. I have circumvented the issue by creating empty subcircuit definitions in Circuit/tests/basic_template.sp as shown below.
** CANNOT USE 2 TERMINAL CAP TO CREATE 3 TERMINAL CAP
; .subckt Cap_b PLUS MINUS BULK
; CC1 PLUS MINUS BULK cap cap=60f
; .ends Cap_b
.subckt Cap_b PLUS MINUS BULK
.param cap=60f
.ends Cap_b
.subckt spiral_ind PLUS MINUS BULK CTAP
.param w=9u
** DEFAULT INDUCTOR IS 2 TERMINAL NOT 4.
** SUBCKT SHOULD STAND ON ITS OWN
; L0 PLUS MINUS BULK CTAP spiral_sym_ct_mu_z w=9u
.ends spiral_ind

Summary:
I have made proposed fixes in Circuit/tests/basic_template.sp, but this is not yet being used by sub_circuit_identification. Please let me know if there is a good reason for the redefinitions / lack of .model mapping from a system-level perspective. I believe it was mainly to construct a list of supported primitives, but we should be able to create default SPICE devices anyway, as I highlighted in #221.

Can't make compose flow work in either mode

@desmonddak

This could be a documentation issue.
The first suggested flow fails because there is no virtual environment in "/general" to source. (It is installed elsewhere and probably should be assumed to be activated.)

The second suggested flow fails because the last line is incorrect:

docker-compose exec make-docker-service make -f $ALIGN_HOME/DESIGN=<design>

Tried a bunch of other things but couldn't figure it out.

Replace the *_lef.sh primitive generation bash script with something in Python

@arvuce22 @kkunal1408
Currently we use Python to write out a bash script that, when run, calls multiple Python scripts.
There are multiple problems with this, but the main one is that we aren't catching errors (if one of the primitives fails to generate, we don't know).

There are several ways to fix this. One idea is to import all the Python primitive generation code into the Python script that currently writes out the bash script, and generate the primitives from there.
If we want more decoupling, we can write out a JSON file with the kinds of primitives we want to generate and use that as input to the primitive generation script.
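The decoupled variant could be sketched roughly as follows; the registry, function names, and JSON schema are hypothetical, not ALIGN's actual API:

```python
import json

# Hypothetical registry: primitive kind -> generator callable.
def gen_cmc_nmos(params):
    return f"CMC_NMOS_n{params['nfin']}_X{params['x']}_Y{params['y']}"

GENERATORS = {"CMC_NMOS": gen_cmc_nmos}

def generate_from_spec(spec_json):
    """Generate every primitive listed in a JSON spec, collecting failures
    instead of losing them the way an intermediate bash script does."""
    spec = json.loads(spec_json)
    results, errors = [], []
    for entry in spec["primitives"]:
        try:
            results.append(GENERATORS[entry["kind"]](entry["params"]))
        except Exception as exc:  # record the failure and keep going
            errors.append((entry.get("kind", "?"), repr(exc)))
    return results, errors

spec = json.dumps({"primitives": [
    {"kind": "CMC_NMOS", "params": {"nfin": 10, "x": 2, "y": 2}},
    {"kind": "Res", "params": {}},          # no generator registered
]})
cells, failures = generate_from_spec(spec)
```

The key property is that a failed primitive surfaces in `failures` rather than vanishing inside a generated shell script.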

Keeping test flow separate from run flow.

Hi Desmond/Steve,
Thanks for setting up the compose flow. In this flow I found that the PnR compiler installation needs googletest, which is an extra task when installing the flow on a standalone server. Can we keep the test flow separate from the run flow?
Kunal

Very large LEF file (3 GB) generated in the sc_dc_dc_converter testcase

@arvuce22 @kkunal1408 @Lastdayends
One bottleneck in runtime for this example is that PnR takes a long time to read in the LEF file. It seems to be very large and from what I can tell some of the component descriptions are repeated 10000 times in the file. I think we need only one copy of each template.

(general) smburns@smburns-XPS-13-9370:~/DARPA/ALIGN-public/PlaceRouteHierFlow$ egrep MACRO ../testcase_latest/sc_dc_dc_converter.lef | sort | uniq -c
    749 MACRO cap_10f
     72 MACRO cap_12f
   9924 MACRO CMC_NMOS_n10_X2_Y2
   4998 MACRO CMC_NMOS_n10_X3_Y1
    749 MACRO CMC_NMOS_n10_X40_Y25
      7 MACRO CMC_NMOS_n12_X139_Y6
    378 MACRO CMC_NMOS_n12_X2_Y1
   9924 MACRO CMC_NMOS_n12_X3_Y1
   9924 MACRO CMC_PMOS_n10_X3_Y1
   4998 MACRO CMC_PMOS_n10_X4_Y3
   9924 MACRO CMC_PMOS_n12_X2_Y1
    378 MACRO CMC_PMOS_n12_X5_Y2
   4998 MACRO CMC_PMOS_S_n10_X1_Y1
   9924 MACRO CMC_PMOS_S_n10_X2_Y1
   9924 MACRO CMC_PMOS_S_n12_X1_Y1
   9924 MACRO DCL_NMOS_n10_X3_Y1
    378 MACRO DCL_NMOS_n12_X1_Y1
   9924 MACRO DCL_NMOS_n12_X2_Y1
    378 MACRO DCL_PMOS_n12_X1_Y1
    288 MACRO DCL_PMOS_n12_X5_Y1
   9924 MACRO DP_NMOS_n10_X11_Y1
   1488 MACRO DP_NMOS_n10_X1_Y1
   4998 MACRO DP_NMOS_n10_X3_Y1
    749 MACRO DP_NMOS_n10_X40_Y25
      7 MACRO DP_NMOS_n12_X139_Y6
     72 MACRO DP_NMOS_n12_X1_Y1
    378 MACRO DP_NMOS_n12_X3_Y1
     72 MACRO DP_NMOS_n12_X3_Y2
   9924 MACRO DP_NMOS_n12_X3_Y3
   2688 MACRO SCM_NMOS_n10_X1_Y1
   4998 MACRO SCM_NMOS_n10_X2_Y1
   2688 MACRO SCM_NMOS_n10_X3_Y1
    288 MACRO SCM_NMOS_n12_X1_Y1
    378 MACRO SCM_NMOS_n12_X2_Y1
   9924 MACRO Switch_NMOS_n10_X11_Y1
   9924 MACRO Switch_NMOS_n10_X2_Y2
      7 MACRO Switch_NMOS_n12_X139_Y6
    288 MACRO Switch_NMOS_n12_X1_Y1
    378 MACRO Switch_NMOS_n12_X2_Y1
   9924 MACRO Switch_NMOS_n12_X3_Y1
     72 MACRO Switch_NMOS_n12_X3_Y2
   9924 MACRO Switch_NMOS_n12_X3_Y3
   9924 MACRO Switch_PMOS_n10_X2_Y1
   9924 MACRO Switch_PMOS_n10_X3_Y1
   9924 MACRO Switch_PMOS_n12_X1_Y1
   9924 MACRO Switch_PMOS_n12_X2_Y1
    378 MACRO Switch_PMOS_n12_X5_Y1
    378 MACRO Switch_PMOS_n12_X5_Y2
    288 MACRO Switch_PMOS_n12_X5_Y4
   9924 MACRO test
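As a sketch of the deduplication idea (simplified LEF handling, not ALIGN code), keeping only the first copy of each MACRO ... END block might look like:

```python
def dedup_macros(lef_lines):
    """Keep only the first occurrence of each MACRO ... END <name> block;
    non-MACRO lines pass through untouched."""
    seen = set()
    out, skipping, current = [], False, None
    for line in lef_lines:
        tok = line.split()
        if tok and tok[0] == "MACRO":
            current = tok[1]
            skipping = current in seen   # duplicate block: drop it
            seen.add(current)
        if not skipping:
            out.append(line)
        if tok[:2] == ["END", current]:
            skipping = False             # block finished either way
    return out

lef = ["MACRO cap_10f", "  SIZE 1 BY 2", "END cap_10f",
       "MACRO cap_10f", "  SIZE 1 BY 2", "END cap_10f",
       "MACRO test", "END test"]
deduped = dedup_macros(lef)
```

A better fix is of course to never emit the duplicates in the first place, but a pass like this bounds the damage.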

PnR crashes if you use a capacitor and don't have a corresponding CC constraint

@Lastdayends PnR seg. faults if you don't have a CC constraint corresponding to the capacitors in the design (I guess the CC constraint causes the capacitor array to be generated---otherwise there are pins with missing components in the placer.)

We should probably explicitly check for this and assert with a decent error message. It took a while for me to figure out what was going on.

expression validation / casting in Circuit.parser

Expressions are being passed as strings directly to Circuit, where they get evaluated lazily if the user elects to call Circuit.flatten(). Use of eval is dangerous and may not work if the user tries something like {no_of_fins*10n}, since SPICE scale factors are not recognized by Python.

A better way to implement this would be to use lambda functions (or plain old functions) wherever expressions are needed. The parser would then need to construct these lambda functions.

Deferring this item until we have a better test-case.
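A minimal sketch of the lambda-function idea, assuming a deliberately restricted {param*literal} expression form (a real parser would need full expression support):

```python
import re

# SPICE scale factors that a plain eval() of the string would choke on.
SCALE = {"f": 1e-15, "p": 1e-12, "n": 1e-9, "u": 1e-6,
         "m": 1e-3, "k": 1e3, "meg": 1e6, "g": 1e9}

_LIT = re.compile(r"^(\d+\.?\d*)(meg|[fpnumkg])?$", re.IGNORECASE)

def to_number(tok):
    """Convert a SPICE literal such as '10n' or '60' to a float."""
    m = _LIT.match(tok.strip())
    if not m:
        raise ValueError(f"bad literal: {tok}")
    val = float(m.group(1))
    return val * SCALE[m.group(2).lower()] if m.group(2) else val

def make_expr(expr):
    """Turn '{param*literal}' into a function of the parameter. Only this
    one form is handled here, to show the shape of the approach."""
    name, literal = expr.strip("{}").split("*")
    return lambda **params: params[name.strip()] * to_number(literal)

length = make_expr("{no_of_fins*10n}")
```

The parser would construct such callables at parse time, so flatten() evaluates functions instead of eval'ing raw strings.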

Place and Route for telescopic ota is not on grid and visually has design rule errors

@Lastdayends @arvuce22 @kkunal1408
I was looking through the result of the full flow on the telescopic_ota and I see a few things that we need to fix:

  • When we make a connection from new metal3 down to our metal2 pins, we seem to be adding off-grid metal2 (not quite creating a short, but definitely a design rule violation). A good example is the left side metal3 in the picture below.
  • We seem to be doing the same thing when we add metal2 pins. They don't seem to be on-grid. See (vinn, voutp,vbiasn) in the picture below.
    Screenshot from 2019-08-02 06-35-12

Centralized schematic parsing / storage package

Schematic information is currently specified using SPICE, translated to Verilog, map files, etc., and passed to the placement / routing engine. A lot of critical information, such as the number of polys and fins, gets lost in the process; this information is critical for successful LVS & parasitic extraction.

Instead of passing all this information back and forth over files, it would make more sense to create a centralized schematic representation that is transformed in sub_circuit_identification, has hooks to the CellFabric primitive generators, and (eventually) calls the placement / routing engine for whatever extra connections are needed. The same schematic representation can be used to represent the extracted netlist & generate the SPICE subcircuit representation.
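Such a representation could look roughly like the following; the class names and fields are illustrative, not ALIGN's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    name: str
    model: str
    pins: dict                                   # pin name -> net name
    params: dict = field(default_factory=dict)   # keeps nfin, poly count, etc.

@dataclass
class SubCircuit:
    name: str
    ports: list
    instances: list = field(default_factory=list)

    def nets(self):
        """All nets referenced inside this subcircuit."""
        found = set(self.ports)
        for inst in self.instances:
            found.update(inst.pins.values())
        return found

ota = SubCircuit("telescopic_ota", ports=["vinp", "vinn", "vout"])
ota.instances.append(Instance("M1", "NMOS",
                              {"d": "vout", "g": "vinp", "s": "tail"},
                              {"nfin": 12}))
```

Because parameters such as nfin stay attached to each instance, downstream stages (primitive generation, LVS, extraction) can query them instead of re-deriving them from intermediate files.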

Bug/circuit gen for variable_gain_amplifier

The basic steps of read_netlist, match_graph, and write_verilog FAIL on variable_gain_amplifier:
It does not matter whether flat is set to 0 or 1.

(general) root@desmond-v01:/dataVolume/variable_gain_amplifier/topology_output#
(general) root@desmond-v01:/dataVolume/variable_gain_amplifier/topology_output# !211
python3 /ALIGN-public/sub_circuit_identification/src/read_netlist.py --dir ./input_circuit/ -f variable_gain_amplifier.sp --subckt variable_gain_amplifier --flat 1
Reading netlist file: variable_gain_amplifier.sp
INFO: PARSING INPUT NETLIST FILE DONE
circuit graph written in dir: circuit_graphs
(general) root@desmond-v01:/dataVolume/variable_gain_amplifier/topology_output# !212
python3 /ALIGN-public/sub_circuit_identification/src/match_graph.py
Start matching graphs
(general) root@desmond-v01:/dataVolume/variable_gain_amplifier/topology_output# !214
python3 /ALIGN-public/sub_circuit_identification/src/write_verilog_lef.py -U_cap 12 -U_mos 12
Traceback (most recent call last):
File "/ALIGN-public/sub_circuit_identification/src/write_verilog_lef.py", line 422, in
ALL_LEF, UNIT_SIZE_MOS, UNIT_SIZE_CAP)
File "/ALIGN-public/sub_circuit_identification/src/write_verilog_lef.py", line 328, in generate_lef
" --length " + str(values['l']))
KeyError: 'w'

Cap array generator is producing m1 pins that are not on the routing grid

@Lastdayends

It seems that metal1 pins generated in the cap array are not on grid, so the top-level router does not properly connect to them (vias are missing). You can see this in the switch_capacitor_filter and especially in the subblock switch_capacitor_combination.

This affects our CI designs in examples/ that use the Mock PDK.

All examples not in CI fail to run through the flow

@meghna09 @kkunal1408 @tonmoydhar @sapatnekar

The following designs in $ALIGN_HOME/examples fail to run through the flow.

  • sc_dc_dc_converter
  • adder
  • linear_equalizer
  • single_to_differential_converter
  • variable_gain_amplifier

The last four fail due to missing parameters.
The first fails with a core dump in the router.

We should fix these or move them somewhere else. We shouldn't be advertising test cases that fail.

DefaultCanvas repeat = 1 may be too simplistic

Discussion from #64 Summarized Below

@parijatm @arvuce22 I think having the unit cell height (and for that matter, the unit cell width) is a nice thing to be encoded in the grid. I think at least the m2 centerline grid should have a repeat of the number of m2 lines per unit cell. Arvind and I discussed this a while back. Unfortunately, this doesn't help with the stop point grid. Maybe we need to support triples as well so we can get the (row, grid line in the row, minor grid line of the stop point grid). This should be straightforward to do but might be confusing to use. Maybe we can skip it for now (add an issue so we can remember to look at it later) since we shouldn't need to manipulate stop point grids very often (for example, if we use ASCII diagrams).

The triplet does sound easy to implement. And I think usage should be pretty straightforward as well in case the cell height is an integral multiple of relevant metal pitches. My main concern is what happens in case of a non-integral repeat. We can error out of course and force the user to specify a valid cell height. But this might be hard to do for all layers without making the UnitCell too big.

Alternatively, we could just pick one layer (such as M2) as we had done previously. If it's just one layer, though, we could think of some mechanism to override the default repeats as well... as in construct the DefaultCanvas with repeat = 1 & provide some mechanism to override the repeat.
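A repeat-aware centerline grid could be sketched as follows; the pitch and track offsets are toy numbers, not the mock PDK's actual values:

```python
# Sketch of a repeat-aware centerline grid: the unit cell height defines
# the period and `offsets` lists the track positions within one cell.
def centerline_grid(unit_height, offsets, n_cells):
    """Absolute m2-style centerline positions for n_cells stacked unit
    cells."""
    return [c * unit_height + off
            for c in range(n_cells)
            for off in offsets]

# Toy numbers: an 840-unit-tall cell with four m2 tracks per cell.
tracks = centerline_grid(840, [105, 315, 525, 735], 2)
```

The non-integral-repeat concern above corresponds to the case where the metal pitch does not divide unit_height, so no fixed offsets list exists.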

Dangerous coding style led to difficult memory bug

ReadLEF.cpp: ReadLEF
interMetals.resize(interMetals.size()+1);
interMetals.back().metal=temp[1];

This is dangerous programming style. It requires that you resize correctly. In one later case, parsing 'LAYER', the resize was entirely forgotten. back() referred to a bad memory location. This resulted in a seg-fault but only when not running gdb, so very difficult to find. We should not see memory errors when using C++ properly.

The right way to code this is to create a temporary structure and then append it, which eliminates any chance of an allocation error.
PnRDB::contact tmpCon;
tmpCon.metal = temp[1];
interMetals.push_back(tmpCon);

This bug was found using valgrind, which showed a bad write at line 91 where a back() call was made. A seg-fault happened only because the back() call happened on a zero-length vector -- otherwise only bad behavior would be observed and valgrind would not have detected it (we would have overwritten another contact).

We should probably eliminate any use of back() on the LHS.

Cell_Fabric_FinFET__Mock Simplification

Cell_Fabric_FinFET__Mock still contains too many files with a lot of duplication (see https://app.codacy.com/app/ALIGN-analoglayout/ALIGN-public/pullRequest?prid=3783741).

Areas for improvement include:

  1. Currently, gen_gds_json and gen_lef have technology-dependent parameters (e.g., the list of layer numbers and via sizes) and are therefore duplicated in the MockPDK directory. Need to figure out how to reuse the base classes in Canvas.
  2. There is still a lot of duplication among the fabric_* files based off of CanvasNMOS. It should be possible to merge all the command line tooling into a single file with an additional parameter (Primitive Name).
  3. Module imports need to be relative or based off of the CellFabric package itself. Where to store the MockPDK may also warrant additional discussion.
  4. The generator classes are all called "UnitCell". Might be better to rename them based on purpose.

This is mainly refactoring work. Not very high priority at the moment so creating a new issue to resolve later.

generalize gen_gds_json & gen_lef

They are both technology specific right now (gen_gds_json partially reuses JSON layer info). Moving all the data to the JSON & making these technology agnostic would be preferable.
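One possible shape for this, with made-up layer numbers and a hypothetical lookup helper:

```python
import json

# Hypothetical per-technology layer table; the layer numbers are made up.
# The point is that gen_gds_json / gen_lef would read a table like this
# from the PDK's JSON instead of hardcoding technology parameters.
LAYERS_JSON = """{
  "M1": {"gds_layer": 15, "gds_datatype": 0},
  "V1": {"gds_layer": 21, "gds_datatype": 0},
  "M2": {"gds_layer": 16, "gds_datatype": 0}
}"""

def gds_pair(layers, name):
    """Resolve a symbolic layer name to its (layer, datatype) pair."""
    entry = layers[name]
    return entry["gds_layer"], entry["gds_datatype"]

layers = json.loads(LAYERS_JSON)
```

Switching technologies then means swapping the JSON file, with no change to the generator code.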

Exercise MockPDK primitive generation code or remove untested code

I am not quite sure where the following primitive generators are being used:

  1. fabric_CMB3_NMOS.py
  2. fabric_CMB5_NMOS.py
  3. fabric_Res.py

We should either be testing these modules or removing them from the codebase.

Feels like fabric_CMB* needs to be collapsed into a more generic primitive generator & fabric_Res needs to be tested.

@arvuce22 Can you please look into this? At the very least, every MockPDK primitive generator should have simple tests that perform LVS & DRC (See FinFET14nm_Mock_PDK/test_all.py::test_ALL)

Flow with multiconnections at cell generation and PnR

The cell generation and PnR stages have features for multiple connections; we need them integrated in the full flow. The multiple parts within this issue are:

  1. Definition by the designer.
  2. Passing it through subcircuit identification.
  3. How to pass parameters to cell generation.
  4. At the PnR stage, what happens if the width required by the number of routes exceeds the given width of the cell pin?

Support for variable cell height

There are two parameters for height in ALIGN_primitives.py right now: nfin & height. height currently defines the unit cell height & nfin is supposed to define the variable cell height within the unit cell (unused right now). Needs to be fixed at some point.

seg-fault in Global Router

Parijat has created some new leaf cells with M3 terminals and the global router seg-faults.

The bug is in Bugs/telescopic_ota with a RUNME. The router should not fault but should report an error (in this case, I think it is a net that has no source?).

I supplied the inputs/ directory so you should be able to run pnr_compiler on these inputs and reproduce the bug.

Need additional PDKs in public repo

We should try & get as many dissimilar PDKs as we can. For example, an older bulk technology node and a dissimilar FinFET node. Intentionally creating PDKs to excite corner cases such as misaligned M1 / M3 layers is also an option.
