
circuitnet's People

Contributors

apri0426, endeavour10020, lapchiu-super, limbo018


circuitnet's Issues

Data set processing considerations

First, you cannot build the dataset with CircuitNet-N14 because the files are newly released, and the training set cannot be built since the file lengths in IR_drop_features do not match. You should use CircuitNet-N28, which builds successfully; just follow the steps. One small tip: in decompress_IR_drop.py, line 8, '.' should be '* *'.

Question about net_edges labels at different stages in Timing

Hello, I am working on timing-related tasks. In the N14 dataset, for the same design, consider the net_edges labels at the CTS and route stages: if a node index is the same in both labels, does it refer to the same node at the design level?

Mismatched files in the routing features of N14

When I run python generate_training_set.py for CN-14, there are 10 files in the label directory that do not match.

I looked at the question commented by luoxiaotian521 on April 4th, where you mentioned the CSV file for N14. I generated a name_list using the file names provided in the CSV file and ran generate_training_set.py, but there is still an issue with missing files.

So, how can I solve these issues?
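
For reference, a minimal sketch of how one might trim the name_list to entries whose feature and label files both exist (the directory names and the name_list file name are assumptions, not the repo's actual layout):

import os

# Hypothetical training-set directories; adjust to the actual layout.
feature_dir = 'training_set/congestion/feature'
label_dir = 'training_set/congestion/label'

features = set(os.listdir(feature_dir))
labels = set(os.listdir(label_dir))

matched = sorted(features & labels)    # names present in both directories
missing = sorted(features - labels)    # feature files with no matching label
print(len(matched), 'matched,', len(missing), 'feature-only files:', missing[:10])

# Write a name_list containing only the matched entries (file name is hypothetical).
with open('name_list.txt', 'w') as f:
    f.write('\n'.join(matched))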

python process_data.py

Traceback (most recent call last):
  File "/data/gf/anaconda3/envs/circuit-gnn/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/data/gf/anaconda3/envs/circuit-gnn/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/data/gf/CircuitNet-icisc_2023/CircuitNet-icisc_2023/feature_extraction/process_data.py", line 25, in read
    process_log.get_IR_drop_features()
  File "/data/gf/CircuitNet-icisc_2023/CircuitNet-icisc_2023/feature_extraction/src/read.py", line 61, in get_IR_drop_features
    x, y = i.split(',')
AttributeError: 'float' object has no attribute 'split'
Is there a problem somewhere?
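
The traceback suggests that one of the values being split is a float (for example a NaN parsed from the report) rather than a string. A minimal, self-contained sketch of the kind of guard that would reveal the bad entry (the data here is illustrative, not taken from the repo):

values = ['12,34', float('nan'), '56,78']   # illustrative data: one NaN slipped in
for i in values:
    if not isinstance(i, str):              # e.g. an empty cell parsed as NaN (float)
        print('skipping non-string entry:', repr(i))
        continue
    x, y = i.split(',')
    print(x, y)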

FeatureExtract different from FeatureDownload from N28

When I used the code in the feature_extraction folder to extract features, I found that the npy features generated by the script are inconsistent with the routability_features_decompressed features downloaded from Google Drive. For example, I used the two DEFs (place stage and route stage) corresponding to the 10129-zero-riscy-b-3-c5-u0.9-m1-p6-f0 design, together with circuitnet.lef, to generate the cell_density feature. The result is inconsistent with the downloaded cell_density feature and contains a lot of strange noise. The two DEFs I used can be found in the attached compressed package. The code is the feature extraction from this repo, with only the paths modified.
place&route-DEF.zip

[attached images: tmp-base npy, tmp npy]
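
A quick way to quantify the mismatch is to diff the two cell_density maps directly; a minimal sketch (both paths are placeholders for the generated and the downloaded .npy files):

import numpy as np

generated = np.load('tmp/cell_density.npy')          # placeholder: feature produced by feature_extraction
downloaded = np.load('downloaded/cell_density.npy')  # placeholder: feature from routability_features_decompressed

print(generated.shape, downloaded.shape)
if generated.shape == downloaded.shape:
    diff = np.abs(generated - downloaded)
    print('max diff:', diff.max(), 'mean diff:', diff.mean())
    print('fraction of differing tiles:', float((diff > 1e-6).mean()))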

About congestion label

I found that in the congestion label, many areas without any node have a very small label value such as 0.0612 (4466-RISCY-FPU-a-1-c20-u0.8-m4-p4-f0) rather than 0. Why is this?

Possible release of tech files / tool scripts?

Hi,

Thanks again for releasing this dataset! My group is interested in running the same flow used to generate the data. For example, we would like to take a layout from the CircuitNet dataset, make an adjustment to it, run the P&R / DRC flow on the adjusted layout, and generate a new set of features / labels.

Is there any way to release these details? We are also open to collaboration and willing to discuss more details. Please let me know!

Are some real nodes missing from node_attr?

instance_placement gives the placement locations of the real nodes, and mapper = node_attr[0] # maps node index to real placement name. Therefore mapper is generally longer than instance_placement, and in principle mapper should contain every entry of instance_placement, so that an instance_placement name can be looked up in mapper to get its node index; from there pin_attr can be queried to find the node's pins, and then the nets connected to the node. But I suddenly found that more than 2000 entries of instance_placement cannot be found in mapper. An example is shown in the attached screenshot.

I then checked the entire instance_placement and got misscnt = 2426. Why is that? Does it mean these real nodes have no pins?
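
For reference, the miss count can be reproduced with a lookup like the following (a minimal sketch; the file names are examples, and node_attr[0] is taken to be the index-to-name array mentioned above):

import numpy as np

node_attr = np.load('node_attr/RISCY-a-1-c2_node_attr.npy', allow_pickle=True)                     # example file
placement = np.load('instance_placement/9-RISCY-a-1-c2-u0.7-m2-p1-f0', allow_pickle=True).item()   # example file

name_to_index = {name: idx for idx, name in enumerate(node_attr[0])}   # invert the mapper
missing = [name for name in placement if name not in name_to_index]
print('misscnt =', len(missing))
print(missing[:5])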

RUDY map pre-process

Are your RUDY maps for the RISC-V dataset pre-processed with smoothing, while the ISPD2015 dataset you provide is not?
I ask because the data distributions are very different.

Failure to decompress IR-drop data in CircuitNet-N14 for files ending with .gz00/01/02...

I tried to run decompress_IR_drop.py, but I always get the error message "unexpected end of file".

Even if I run the gzip commands directly in the terminal, I get either "unexpected end of file, uncompress failed" or "unknown suffix -- ignored".

I have also updated my gzip and tried to open the files in the graphical UI, but macOS shows "no application set to open the document".
I would really appreciate it if you could tell me where the error is.

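In case it helps others hitting the same errors: gzip cannot read the .gz00/.gz01/... pieces individually, so the split parts have to be concatenated back into a single archive before decompressing. A minimal sketch in the style of the provided decompress scripts (the feature name and paths are placeholders):

import glob, os

parts = sorted(glob.glob('IR_drop_features/some_feature.tar.gz*'))    # placeholder pattern for the split pieces
os.system('mkdir -p IR_drop_features_decompressed')
os.system('cat %s > some_feature.tar.gz' % ' '.join(parts))           # merge .gz00, .gz01, ... in order
os.system('tar -xzf some_feature.tar.gz -C IR_drop_features_decompressed')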

Will CircuitNet include more EDA tasks?

CircuitNet really helps us a lot in congestion prediction research. Now we are working on various EDA tasks, for example, global placement. The open-source datasets on placement are few and they are in different formats (bookshelf/lefdef) which are hard to convert. We use ISPD and DAC datasets in DREAMPlace, but it might not be enough to produce convincing results. It will be greatly appreciated if CircuitNet can include more EDA tasks like global placement.

About File Mismatch Issues

1. When I run python generate_training_set.py for CN-14, there are 11 files that exist only in the feature directory but not in the label directory.
2. When I run test.py, I found that test.csv does not match the training_set.
So, how can I solve these issues?

Problem with python precess.py

When I run this script I only get two lines of output, and I do not know why. My PyTorch version is pytorch11.3+cu116. Running the script does not report an error, but training later fails because of missing files.

Problems with dataset visualization

Hi, I had some confusion when visualizing 1-RISCY-a-1-c2-u0.7-m1-p1-f0. I visualized the file 'instance_placement/1-RISCY-a-1-c2-u0.7-m1-p1-f0', and it does not seem to match the given features (Macro_region, RUDY, Pin_RUDY).

See the detailed visualization results at https://github.com/Doctor-James/CircuitNet/tree/master/images
and my visualization code at https://github.com/Doctor-James/CircuitNet/blob/master/view_data.py (for the sake of privacy, the absolute paths are blurred).
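
For reference, a minimal loading/plotting sketch of the kind of check done here (both paths are placeholders); note that matplotlib's imshow puts the origin at the upper-left corner by default, which flips the map vertically relative to die coordinates:

import numpy as np
import matplotlib.pyplot as plt

placement = np.load('instance_placement/1-RISCY-a-1-c2-u0.7-m1-p1-f0', allow_pickle=True).item()   # placeholder path
rudy = np.load('RUDY/1-RISCY-a-1-c2-u0.7-m1-p1-f0.npy')                                            # placeholder path

print(list(placement.items())[:3])             # a few instance -> location entries
plt.imshow(np.squeeze(rudy), origin='lower')   # origin='lower' puts (0, 0) at the bottom-left, like die coordinates
plt.colorbar()
plt.savefig('rudy_view.png')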

Thanks!

Question about RISC-V designs

Hello! I recently read your paper; you used 6 open-source RISC-V designs to generate the data. I would like to know which open-source designs you used. Could you give me a download link?

About N-14 dataset

Hello! I noticed that your team mentioned a new dataset in the published CircuitNet 2.0. In the N14 dataset available on GitHub, I found five designs: RISCY, RISCY-FPU, NVDLA-small, Vortex-small, and zero-riscy. However, the dataset mentioned in the article includes seven designs: RISCY, RISCY-FPU, zero-riscy, OpenC910-1, Vortex-small, Vortex-large, and NVDLA-large. I have a few questions: it appears that OpenC910-1, NVDLA-large, and Vortex-large are missing, while NVDLA-small is included. Additionally, I would like to inquire about the possibility of open-sourcing the corresponding LEF and DEF files for further research. Thank you very much!

Expand the dataset

Hi,

Is there any way to expand the dataset to include features such as a flip-flop cell density map and a cell pin density map?

Best,

Magi

Problem extracting the cat-merged net_edges.tar.gz from N14_timing_features_post_cts

Hello, I have a question about the timing-prediction data for the 14nm process in CircuitNet. When extracting the net_edges.tar.gz file in N14_timing_features_post_cts, which was merged with the cat command, I ran into the following problem:
gzip: stdin: unexpected end of file
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now
Is this caused by an incomplete file, or by a network problem? Could you provide some suggestions?
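
One way to tell a truncated download from a merging problem is to test the merged gzip stream before untarring; a minimal sketch (the part-name pattern is an assumption about how the archive was split):

import glob, os

parts = sorted(glob.glob('net_edges.tar.gz.part*'))                   # placeholder pattern for the split pieces
os.system('cat %s > net_edges_merged.tar.gz' % ' '.join(parts))
ret = os.system('gzip -t net_edges_merged.tar.gz')                    # non-zero exit means truncated or corrupted data
print('archive looks intact' if ret == 0 else 'archive incomplete or corrupted; re-download the parts')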

About using circuitnet.lef

Hello, I have the following two questions for you.
First, are your DEF files obtained after placement in the complete flow has finished?
Second, I want to view the placement results by importing your circuitnet.lef and DEF files into a commercial EDA tool, but I get an error as soon as I import the circuitnet.lef file. Do you have any suggestions?

Error occurred when I imported the lef file:
Loading LEF file /mnt/hgfs/mkshare/circuitnet.lef ...
**ERROR: (IMPLF-53): The layer 'M2' referenced in pin 'Z' in macro 'AN2_0010' is not found in the database. A layer must be defined in the LEF technology LAYER section before it can be referenced from a macro. Review the LEF files specified in the init_lef_file variable to see if the layer does not exist or is specified after the one that defines the macro.
Type 'man IMPLF-53' for more detail.
**ERROR: (IMPLF-3): Error found when processing LEF file '/mnt/hgfs/mkshare/circuitnet.lef'. The subsequent file content is ignored. Refer to error messages above for details. Fix the errors, and restart tool again.
Type 'man IMPLF-3' for more detail.
**ERROR: (IMPLF-26): No technology information is defined in the first LEF file.
Please rearrange the LEF file order and make sure the technology LEF file is the
first one, exit and restart tool.
**ERROR: (IMPLF-26): No technology information is defined in the first LEF file.
Please rearrange the LEF file order and make sure the technology LEF file is the
first one, exit and restart tool.

Problems with Gate-level Netlist

I'd like to ask some questions about the gate-level netlist. I looked at the contents of node_attr: it is a two-dimensional array. The first dimension starts with "pulpino_top", followed by some node names; the second dimension starts with "module", followed by some module names. Does that mean nodes that have the same module name are connected to each other?
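
A small inspection sketch of the structure described above (the file name is an example; node_attr[0] is taken to hold the per-node names and node_attr[1] the corresponding module names):

import numpy as np
from collections import Counter

node_attr = np.load('node_attr/RISCY-a-1-c2_node_attr.npy', allow_pickle=True)   # example file

names, modules = node_attr[0], node_attr[1]
print(names[:3])     # e.g. 'pulpino_top' followed by instance names
print(modules[:3])   # e.g. 'module' followed by module names

# Count how many nodes share each module name.
print(Counter(modules).most_common(5))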

Problem with build_graph.py

When I run build_graph.py with the data from post_place, I get the following error:

Process Process-4:
Traceback (most recent call last):
  File "/home/xx/.conda/envs/env1/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/xx/.conda/envs/env1/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "build_graph.py", line 48, in build_graph
    g.ndata['nf'] = torch.tensor([pin_positions[nodes[i.item()].replace('\\','')][0:4] for i in g.nodes()]).type(torch.float32)
  File "build_graph.py", line 48, in <listcomp>
    g.ndata['nf'] = torch.tensor([pin_positions[nodes[i.item()].replace('\\','')][0:4] for i in g.nodes()]).type(torch.float32)
KeyError: 'core_region_i/CORE.RISCV_CORE/id_stage_i/U148/A1'

There are many similar errors, all of them KeyErrors:
KeyError: 'core_region_i/CORE.RISCV_CORE/ex_block_i/alu_i/FE_OFC299_div_en_ex/I'
KeyError: 'axi_interconnect_i/axi_node_i/_RESP_BLOCK_GEN[0].RESP_BLOCK/AW_ADDR_DEC/U31/A1'
...
However, build_graph.py runs correctly on post_cts and post_route without these errors.
Addendum: when running on post_route, a few pin_positions were missing; after modifying the code to skip them, it runs.
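
As a minimal sketch of the workaround mentioned above (falling back to zeros for node names missing from pin_positions instead of raising a KeyError; whether zeros are an acceptable placeholder is an assumption):

import torch

def node_features(g, nodes, pin_positions, default=(0.0, 0.0, 0.0, 0.0)):
    # Build the 'nf' node-feature tensor, using `default` for node names
    # that are missing from pin_positions.
    feats = []
    for i in g.nodes():
        name = nodes[i.item()].replace('\\', '')
        feats.append(list(pin_positions.get(name, default))[0:4])
    return torch.tensor(feats, dtype=torch.float32)

# In build_graph.py, the failing assignment around line 48 could then become:
# g.ndata['nf'] = node_features(g, nodes, pin_positions)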

Align instance placement and node attributes

Hi! Maybe a dumb question, but the docs mention that there is one-to-one correspondence between the node name from the node attribute array and the keys in the instance placement dictionary.

However, when I check examples, it does not look one to one. For example if I run

import numpy as np

nodeattr = np.load('node_attr/RISCY-a-1-c2_node_attr.npy', allow_pickle=True)
instance_placement = np.load('instance_placement/9-RISCY-a-1-c2-u0.7-m2-p1-f0', allow_pickle=True).item()
len(nodeattr[1]), len(instance_placement)

I get (53587, 50052), i.e. different sizes, and I see that different instance placements for the same design also have different numbers of standard-cell placements. Do you have any guidance on this, or an example of aligning the node attributes with the instance placement?

I have some questions about dataset production

Hello! I recently read your paper and code and would like to try building my own dataset for training, but I found that your dataset is produced by converting Innovus analysis reports into NPY files. I would like to convert some of the ISPD contest benchmarks into datasets. At present, besides the notes and the format description at https://www.ispd.cc/contests/11/other_files/benchmark_format.pdf provided with the benchmarks, I have used some open-source software to generate from the benchmarks a global placement file PL (containing the lower-left coordinates of all pins, single layer) and a global routing file GR (containing all pin connections, with multi-layer routing on layers 0-9). I observed that your dataset uses single-layer files and routing; can a dataset be produced from these? Thank you for your reply.

Question about downloading data

Hi, I want to download your dataset files from Google Drive onto my Linux server.
I know you provide Baidu Netdisk as a second option, but I don't know how to use Baidu.
So I tried downloading with Google Drive:
gdown --folder https://drive.google.com/drive/folders/1Xp2y29Le6Doo3meKhTZClVwxG_7z2QuF?usp=sharing
(this is your CircuitNet-N28)

But it failed with the following messages:
Sorry, you can't view or download this file at this time.
Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.

I googled this, but couldn't find a solution.
Could you tell me how to download your files from Google Drive in an Ubuntu terminal?

decompress.py directory setup issues

Hi-- Thanks so much for working on this!

Regarding the decompression scripts in the data gdrive directory, I think there may be an issue with them: in decompress_routability.py only the decompress_path='routability_features_decompressed' directory is created, but none of the sub-directories are created (e.g. routability_features_decompressed/RUDY).

In other words, when I run decompress_routability.py, the RUDY files are not generated. To resolve this, I think there should be a line that creates the child directory, maybe something like os.system("mkdir -p %s" % (parent.replace('routability_features','routability_features_decompressed'))) right after the first loop statement. I used this:

import os

decompress_path = '../routability_features_decompressed'
os.system("mkdir -p %s " % (decompress_path))
filelist = os.walk('../routability_features')

for parent, dirnames, filenames in filelist:
    # Mirror each sub-directory (e.g. RUDY) under the decompressed tree.
    os.system("mkdir -p %s " % (parent.replace('routability_features', 'routability_features_decompressed')))
    for filename in filenames:
        if os.path.splitext(filename)[1] == '.gz':
            filepath = os.path.join(parent, filename)
            # Decompress in place (keeping the .gz), then untar into the mirrored directory.
            os.system('gzip -dk %s' % filepath)
            os.system('tar -xf %s -C %s' % (filepath.replace('.gz', ''), parent.replace('routability_features', 'routability_features_decompressed')))

About align instance placement and node attributes

Hello, I would like to ask for advice: in issue 11 I can understand that "the node attribute always has more values than instance placement", but I found that some of the cells in the instance placement cannot be found in the node attribute. Does that mean they are placed but not connected to any other cells?

Question about label

I would like to ask about the labels in the congestion prediction and DRV prediction problems. The value of each pixel in the dataset's label is between 0 and 1; what is its actual physical meaning? I see that some other articles define congestion prediction as a classification problem, where the label value is 1 or 0. Is there also a threshold here, so that pixel values above the threshold represent congestion? The label in DRV prediction raises a similar question.
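
For context, the classification formulation mentioned above can be derived from the continuous map by thresholding; a minimal sketch (the path is a placeholder and the threshold is an arbitrary example, not a value from the dataset documentation):

import numpy as np

label = np.load('training_set/congestion/label/1-RISCY-a-1-c2-u0.7-m1-p1-f0.npy')   # placeholder path
tau = 0.1                                                     # example threshold, not an official value
binary_label = (np.squeeze(label) > tau).astype(np.float32)
print('fraction of tiles marked congested:', float(binary_label.mean()))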
