fusion360gallerydataset's People

Contributors

chuhang-autodesk, dmsteck, ederfduran, evanthebouncy, joelambourne, karldd, panthersuper, rl-2


fusion360gallerydataset's Issues

Is face based extrusion sufficient to represent all shapes in the dataset?

In the paper, the proposed network works on faces: two faces are selected and an operation is predicted. It seems to me that shapes involving a subtraction operation, where the subtraction creates a cavity through the whole shape, cannot be represented by this strategy. An example is posted below, encircled in red.
[image: example shape with a through cavity, encircled in red]

Some of these subtraction cases can be represented with an add operation using trimmed faces, but I think it is not always possible to represent all shapes in this dataset with this strategy.

Doubts about datasets

Thank you for open sourcing such a great project!
I have a few questions about the Assembly Dataset that I hope you can answer, concerning its hierarchical information (tree, root, etc.). There are some variables I don't quite understand after reading the paper and the Fusion 360 Gallery presentation, as follows:

  1. In the paper you say: "The tree contains this hierarchy information by linking to occurrence UUIDs." Is the tree the whole diagram on the right of Figure 9 in the paper? Does it include the parts below? (I understand the parts to be the bodies in the paper.) I may be a bit vague on the definition of the tree.

  2. For example, take an assembly (20215_e2eb3082) that I randomly selected from the Assembly Dataset. Looking at the JSON file, the tree contains only a dictionary with a key of root and a value of occurrences, but at the root level of the data it contains only components and bodies, which shouldn't mean the same thing as root, right?

  3. Is the 'name' of an occurrence in the dataset the proper name for the component in a particular industry, and does it contain other information, such as location or order, that is useful for assembly?

  4. Bodies are named 'Body' + a number, and different bodies may share the same name; what does this number indicate?

  5. The last point is about the naming of bodies, occurrences, and components. For 20215_e2eb3082, for example, occurrence 03fd7a74-05b5-11ec-ba40-0a17b33ae929, component 03fcde40-05b5-11ec-8cdf-0a17b33ae929, bodies 03f2f326-05b5-11ec-9d62-0a17b33ae929: do these numbers and letters have any special meaning?

Hope to get your reply!
Thank you as always, and have a nice day!
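
An aside on question 5 (my own observation, not an official answer): these identifiers look like RFC 4122 version-1 UUIDs, so the hex fields encode a creation timestamp and a node identifier rather than any design semantics. Python's uuid module can confirm the version and decode the timestamp:

import uuid
from datetime import datetime, timedelta

u = uuid.UUID("03fd7a74-05b5-11ec-ba40-0a17b33ae929")
print(u.version)  # 1 -> a time-based UUID
# Version-1 UUIDs count 100-nanosecond intervals since 1582-10-15
created = datetime(1582, 10, 15) + timedelta(microseconds=u.time // 10)
print(created)  # roughly when the entity was created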

Some problems encountered when running assembly2cad

Why does this file have no code in the function to run? What parameter should be passed as the context of the run function? I tried passing a random number and running it, but unfortunately it errors in the following two lines of code:
temp_brep_mgr = adsk.fusion.TemporaryBRepManager.get()
breps = temp_brep_mgr.createFromFile(str(smt_file))
The return values are shown below:

temp_brep_mgr: is_Valid = False, objectType = ''
breps: count = 0, is_Valid = False, objectType = ''
Is this a case of the smt file data not being loaded? I checked the smt_file path and there is no problem.
I hope you can give me a little help when you have the time, thank you very much!
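
For context, here is a minimal sketch (my illustration; the file path is a hypothetical placeholder) of the standard Fusion 360 script entry point. Fusion supplies the context argument itself when the script is started from Tools > Add-Ins, so it is not something to fill in by hand:

import traceback
import adsk.core
import adsk.fusion

def run(context):
    app = adsk.core.Application.get()
    ui = app.userInterface
    try:
        smt_file = "path/to/assembly_body.smt"  # hypothetical .smt file from the dataset
        temp_brep_mgr = adsk.fusion.TemporaryBRepManager.get()
        breps = temp_brep_mgr.createFromFile(str(smt_file))
        ui.messageBox("Loaded {} B-Rep bodies".format(breps.count))
    except:
        ui.messageBox("Failed:\n{}".format(traceback.format_exc()))

Running such a script outside of Fusion (e.g. from a plain Python interpreter) can leave the adsk objects in an invalid state like the one shown above.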

Method to create reconstruction json files

I would like to create my own reconstruction dataset. I'm wondering if there is a tool to convert a design in .smt/.step/... format into a reconstruction JSON file. Thanks in advance!

Issue with CAD model + its parts

Hi,
As per my understanding of the dataset, the reconstruction folder contains extrude sequences like "20203_7e31e92a_0000_0001" and the final output after several extrude operations, e.g. "20203_7e31e92a_0000". But this final output is just one part of some larger CAD model.
Do we have information about the CAD model, all the parts involved in making it, and the sequence in which they were joined to make the CAD model?
Thanks

structuring /tools/search/ 1) random roll-out

2020-08-27

we'll organise the search structure to admit easy experiments and plot generation

base_search.py is perfect the way it is, but I would rename BaseSearch to ReplEnv, have it handle interacting with the Fusion server, and leave the logging somewhere else. For now, leaving it as is is fine.

we'll be modifying random_search.py, factoring it to allow easy modifications and additions of other agents and search procedures

the general idea of performing search is that, while an agent is unlikely to produce the correct reconstruction in one go, repeated usage of the agent in clever ways will increase the likelihood of reconstruction.

there are different kinds of agents:

  • the random agent performs actions according to a uniform distribution (call this AgentRandom)
  • the supervised-trained neural network agent performs actions according to a more informative distribution (AgentSupervised)
  • the RL-trained neural network agent (if time permits) performs actions according to a different distribution (AgentRL)

there are different kinds of search procedures:

  • random-rollout is a search procedure that repeatedly samples the agent's distribution for the next action
  • beam-search is a search procedure that keeps track of the top-k sequences w.r.t. the probability of generation
  • stochastic beam-search is yet another search procedure (will implement if time permits)

as we can see, the agents and the search procedures are factorized and decoupled, so that any agent can be leveraged with any search procedure (for a total of 9 combinations here).

we'll start by building one such combination: AgentRandom with random-rollout, keeping the two decoupled so that when we get the trained NN agent, AgentSupervised, we can swap out the random agent to get the combination of AgentSupervised with random-rollout.

Agent

Agent can be an abstract class; it needs to implement the following methods:

  • init(...): whichever init you need; for AgentSupervised you will likely have to pass in a neural network here
  • get_actions_prob(current_graph, target): given the current graph and the target graph, return two lists:
    1. a list of all possible actions, where each action is a triple (start-face, end-face, operation)
    2. the associated probability of each action
class RandomAgent(Agent):

    def __init__(self, target_file):
        # Load the target graph we are trying to reconstruct
        self.target_graph = get_target_graph(target_file)

        # Store a list of the planar faces we can choose from
        self.target_faces = []
        for node in self.target_graph["nodes"]:
            if node["surface_type"] == "PlaneSurfaceType":
                self.target_faces.append(node["id"])
        assert len(self.target_faces) >= 2

        self.operations = ["JoinFeatureOperation", "CutFeatureOperation"]

    # we'll take in these arguments for consistency, even though some of them might not be needed
    def get_actions_prob(self, current_graph, target):
        list_actions = []
        list_probabilities = []
        # faces are drawn uniformly without replacement and operations uniformly,
        # so each (start-face, end-face, operation) triple gets the product probability
        for t1 in self.target_faces:
            prob_t1 = 1 / len(self.target_faces)
            for t2 in self.target_faces:
                if t1 != t2:
                    prob_t2 = 1 / (len(self.target_faces) - 1)
                    for op in self.operations:
                        prob_op = 1 / 2

                        action = (t1, t2, op)
                        action_prob = prob_t1 * prob_t2 * prob_op

                        list_actions.append(action)
                        list_probabilities.append(action_prob)

        return list_actions, list_probabilities

Search Procedures

In a nutshell, a search procedure should amplify the success rate of any agent by running it multiple times in a clever way. It needs to implement the following methods:

  • init(target_file): initialize, and set the target for this particular search
  • get_score_over_time(agent, budget, score_function): given a particular agent, a search budget (measured in number of repl invocations, specifically the number of "BaseSearch.extrude" function calls), and a particular scoring function (IoU or complete reconstruction), return, for each repl invocation so far, the best score obtained from the set of programs explored in the search
import numpy

class Search:
    # move the logging functions from BaseSearch here
    # suggest to rename BaseSearch to ReplEnv
    def __init__(self, target_file):
        pass

    def get_score_over_time(self, agent, budget, score_function):
        pass

class RandomSearch(Search):

    def __init__(self, target_file):
        self.log = Log()  # suggest to make a Log class and plug it in here
        self.target_file = target_file

    # ignoring the score function for now
    def get_score_over_time(self, agent, budget, score_function):
        target_graph = get_target_graph(self.target_file)
        # the maximum rollout length is the number of planar faces
        rollout_length = len([node for node in target_graph["nodes"] if node["surface_type"] == "PlaneSurfaceType"])

        used_budget = 0
        best_score_sofar = 0
        best_score_over_time = []

        while used_budget < budget:
            # open a "fresh" ReplEnv. probably try to avoid closing fusion and opening it again as that will be inefficient
            env = get_fresh_env()
            cur_graph = env.setup(self.target_file)
            for i in range(rollout_length):
                actions, action_probabilities = agent.get_actions_prob(cur_graph, target_graph)
                # sample an index, since numpy.random.choice needs a 1-D array
                # and our actions are tuples
                idx = numpy.random.choice(len(actions), p=action_probabilities)
                cur_graph, cur_iou = env.extrude(actions[idx])
                # do some logging
                best_score_sofar = max(best_score_sofar, cur_iou)
                best_score_over_time.append(best_score_sofar)
                used_budget += 1

        # again, this should be done with some logging, but I'm explicitly returning it for now
        return best_score_over_time

Now we have the best_score_over_time for a particular target_file, a particular search procedure, and a particular agent. Let us generate the plot for it. You'll probably have to adapt it a bit to fit with the rest.

import numpy
import matplotlib.pyplot as plt

search_budget = 100
best_over_time_all_tasks = []

for target_file in all_task_files:
    random_agent = RandomAgent(target_file)
    random_search = RandomSearch(target_file)
    score_over_time = random_search.get_score_over_time(random_agent, search_budget, None)
    best_over_time_all_tasks.append(score_over_time)

# transpose: now the list is organized as all ious of all tasks on step 0, then all ious on step 1, etc
best_over_time_all_tasks = list(zip(*best_over_time_all_tasks))
means = [numpy.mean(x) for x in best_over_time_all_tasks]
stds = [numpy.std(x) for x in best_over_time_all_tasks]

# Build the plot
fig, ax = plt.subplots()
ax.bar(range(search_budget), means, yerr=stds)

No extrude IntersectFeatureOperation in Reconstruction Dataset

Is there really no extrude IntersectFeatureOperation in the whole dataset, according to the very bottom of the stats page (https://github.com/AutodeskAILab/Fusion360GalleryDataset/blob/master/docs/reconstruction_stats.md)?

If so, what might be the reasons for that? Do users rarely extrude two shapes and later intersect them to get a smaller shape?

Isn't that a problem for an agent solving the gym environment, bootstrapped with imitation learning from the reconstruction dataset, since the IntersectFeatureOperation action will be initialized with 0 probability and never be explored?
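
One common mitigation, as an aside (not something from the dataset itself): mix the imitation-learned action distribution with a small uniform component, so that actions absent from the training data, such as IntersectFeatureOperation, keep a nonzero probability of being explored. A minimal sketch:

import numpy as np

def smooth_action_probs(probs, epsilon=0.05):
    # Blend the learned distribution with a uniform one; epsilon controls
    # how much probability mass is reserved for unseen actions.
    probs = np.asarray(probs, dtype=float)
    uniform = np.full_like(probs, 1.0 / len(probs))
    return (1.0 - epsilon) * probs + epsilon * uniform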

Data that have both construction sequence and segmentation information?

Hi,

I wonder if there are data that contain both the construction sequence and segmentation information. Specifically, I need CAD data with the construction sequence and also the correspondence between the final B-Rep faces and the features.

I do notice that in the reconstruction dataset, each feature entity in the JSON file has IDs for the faces it creates. For example, an ExtrudeFeature has a field named extrude_faces that gives face IDs. But there is no correspondence between those IDs and the B-Rep file (.step).

The segmentation dataset seems to have the correspondence but no full construction sequence.

Will such data as I described be provided, or is there a way to work around this? Thanks in advance!

Different ID?

Hi, I wonder if the file ID (e.g. 20440_27177360_0004) is the same for the reconstruction and segmentation data? There are around 4500 CAD models with the same IDs, but not all of them point to the same object. Is there any way to find the correct overlapping objects in the two datasets? Thank you.

Acquire the spatial geometry data of every extrude step

Hi,

I am now parsing the JSON file to get the sketch-extrusion step data: not the surface or B-Rep files that were already released, but the spatial profiles and the corresponding extruded (offset) profiles, as shown in the figure below (P and P_off):

[figure: profile P and its extruded offset profile P_off]

Now I am seeking help from the Python tool sketch_extrude_importer.py, and it seems there are some ways to achieve this, but I am still not very clear. Some problems are:

  1. How do I get the correct spatial positions, e.g. some sample points on a circle profile? Is there any function, like the evaluator in the B-Rep tools, to get the real geometry? (See the sketch at the end of this issue.)
  2. I found that only the function reconstruct_sketch_curve involves 3D points; is it the only way to acquire the geometry?
  3. Is there a way to extract the profile and offset curves directly from the extrude feature object? To my understanding, it should contain the base and offset curves used to construct the new object.
  4. Is the extrude direction vector stored in the JSON file? I cannot find it.

If you could propose a clear solution, that would be very nice :)

Thanks very much
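
On question 1, a minimal sketch of one possible approach (my own aside, not an official tool), assuming the reconstruction JSON stores a circle as a center_point dict with x/y/z keys plus a radius, both in sketch coordinates:

import math

def sample_circle(center, radius, n=32):
    # Sample n points on a circle in the XY plane of its sketch
    # coordinate system; apply the sketch's transform afterwards
    # to obtain world-space positions.
    return [
        (center["x"] + radius * math.cos(2 * math.pi * k / n),
         center["y"] + radius * math.sin(2 * math.pi * k / n),
         center["z"])
        for k in range(n)
    ]

The extruded profile P_off could then be obtained by translating these samples along the extrude direction by the extrude distance.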

Problems about processing Assembly - Joint dataset with new files

Hi, thanks for providing such a great dataset and tools.

For the Assembly - Joint dataset, I would like to know how to generate a JSON file from a new .step or .smt file for prediction by the JoinABLe model, including:

  1. For the parts, referring to #92 (comment), can I generate a NetworkX JSON file usable by the JoinABLe model by modifying solid_to_graph.py? (See the sketch after this list.)
  2. For joint sets, how can I generate a joint_set_****.json file from a CAD file? That is, how can the joint2cad process be reversed?
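
On point 1, a rough sketch of one possible route (my assumption, not the official pipeline): occwl can load a STEP file and build a face-adjacency graph, which NetworkX can serialize to node-link JSON. The attributes JoinABLe expects would still have to be filled in:

import json
from networkx.readwrite import json_graph
from occwl.io import load_step
from occwl.graph import face_adjacency

solid = load_step("part.step")[0]  # hypothetical input file
graph = face_adjacency(solid)      # faces become nodes, shared edges become links
# Node/edge features (surface type, area, UV-grid samples, ...) would
# still need to be populated to match the JoinABLe input format.
with open("part_graph.json", "w") as f:
    json.dump(json_graph.node_link_data(graph), f)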

Looking forward to your reply and thanks for your help!

Improve documentation on joint data references

As mentioned in #90, we need to provide better documentation about how the joint data references other entities. Namely:

  • References from json graph to entities
  • References from json graph nodes to links

Issue for launching server

Hi, I followed the steps in "server" and tried to launch the server mode of Fusion 360. Fusion 360 opens, but it can still be controlled from the UI; it should be unresponsive. Is there a setting I am missing, or is this an error?

Here is the message:

D:\Fusion360GalleryDataset-master\tools\fusion360gym\server>python launch.py --instances 1
Fusion 360 found at C:\Users\ylliu\AppData\Local\Autodesk\webdeploy\production\7a444c5b9266cf0505b3a85c0def24a04f033e63\Fusion360.exe
Launching Fusion 360 instance: 127.0.0.1:8080
Fusion launching from C:\Users\ylliu\AppData\Local\Autodesk\webdeploy\production\7a444c5b9266cf0505b3a85c0def24a04f033e63\Fusion360.exe
"C:/Users/ylliu/AppData/Local/Autodesk/webdeploy/production/7a444c5b9266cf0505b3a85c0def24a04f033e63/plugins"
qt.webenginecontext:

GLImplementation: desktop
Surface Type: OpenGL
Surface Profile: CompatibilityProfile
Surface Version: 4.6
Using Default SG Backend: yes
Using Software Dynamic GL: no
Using Angle: no

Init Parameters:
  *  application-name Fusion360
  *  browser-subprocess-path C:\Users\ylliu\AppData\Local\Autodesk\webdeploy\production\7a444c5b9266cf0505b3a85c0def24a04f033e63\QtWebEngineProcess.exe
  *  create-default-gl-context
  *  disable-d3d11
  *  disable-es3-gl-context
  *  disable-features DnsOverHttpsUpgrade,ConsolidatedMovementXY,InstalledApp,BackgroundFetch,WebOTP,WebPayments,WebUSB,PictureInPicture
  *  disable-gpu-rasterization
  *  disable-speech-api
  *  enable-features NetworkServiceInProcess,TracingServiceInProcess
  *  enable-threaded-compositing
  *  ignore-gpu-blocklist
  *  in-process-gpu
  *  log-severity disabled
  *  no-proxy-server
  *  use-gl desktop


DevTools listening on ws://127.0.0.1:9766/devtools/browser/a6b02014-aef8-4497-8f8b-289a86c76dd4

D:\Fusion360GalleryDataset-master\tools\fusion360gym\server>BUG OptionAdapter UseEagleRc called before setting callbacks.EagleAPI.Version = 0.1.2
BUG OptionAdapter UseEagleRc called before setting callbacks.BUG OptionAdapter UseEagleRc called before setting callbacks.QString::arg: 1 argument(s) missing in %1/scripts
QString::arg: 1 argument(s) missing in %1/scripts
BUG OptionAdapter UseEagleRc called before setting callbacks.QString::arg: 1 argument(s) missing in %1/ulps
QString::arg: 1 argument(s) missing in %1/ulps
BUG OptionAdapter UseEagleRc called before setting callbacks.QString::arg: 1 argument(s) missing in %1/design rules
QString::arg: 1 argument(s) missing in %1/design rules
BUG OptionAdapter UseEagleRc called before setting callbacks.QString::arg: 1 argument(s) missing in %1/spice
QString::arg: 1 argument(s) missing in %1/spice
Registering module:  "uiHelper"
20:49:31 Will ignore disconnects: false
20:49:31 Using ws on port: 59404
20:49:31 Installing exception handlers
20:49:31 Global exception handler installed
20:49:31 Installing signal handlers
20:49:31 Installed SIGHUP
20:49:31 Installed SIGTERM
20:49:31 Installed SIGINT
20:49:31 Installed SIGBREAK
20:49:31 (node:27308) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
20:49:32 [socket client] connected
20:49:33 [scheduleBearerTokenRefresh]: Token expiring in 16394 seconds
20231227T124933 INFO 27308  Use SQLite database at C:\Users\ylliu\AppData\Local\Autodesk\Autodesk Fusion 360\K5Q2QBHBQ8G2Q8K7\PIM.login\By92P05KAYC7hjX0IJZ56Q_L2C_v6.sql
20231227T124933 INFO 27308  [L8 Init Journal Instance] Created journal instance [forgeSchemaStoreJournalId_8rNX8W9c4XmO] of type [remote] for collection [forgeSchemaStoreJournalId_8rNX8W9c4XmO], legacySync [false]
20231227T124933 INFO 27308  ________________________________________________________________________________________________________________________
20231227T124933 INFO 27308  FDModelingSDK.loadSchemas                          ###################################################################### 223ms
20231227T124933 INFO 27308  PIMClient created with plugins:  ["PIMCorePlugin","ProductPlugin","DocumentationPlugin","PIMConfigurationPlugin","PIMInspectionPlugin","PIMPmiPlugin","PIMShopFloorPlugin","PIMDataExtensionPlugin","PIMPhysicalPropsPlugin","PIMBomPlugin","PIMManagePlugin"]
20231227T124933 INFO 27308  ________________________________________________________________________________________________________________________
20231227T124933 INFO 27308  FDModelingSDK.getJournals                          ###################################################################### 1ms
20231227T124933 INFO 27308  [L8 Init Journal Instance] Created journal instance [2a7OeLaqefZL3vgcW1zC0] of type [main] for collection [By92P05KAYC7hjX0IJZ56Q_L2C], legacySync [false]
20:49:33 [rpcHandler]: Receiving command setProxy
20:49:33 Disabling proxy
20:49:33 [rpcHandler]: Receiving command setOnlineState
20:49:33 Received event setOnlineState, onlineState: 0
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
Remote debugging server started successfully. Try pointing a Chromium-based browser to http://127.0.0.1:9766
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
MAG.Default: [Idle Status] Report Idle(Idle or Not):  false
None
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::setMinimumSize: (LearningPanelPalette_1/QTDockWidget) Negative sizes (0,-1) are not possible
QWidget::setMinimumSize: (LearningPanelPalette_1/QTDockWidget) Negative sizes (0,-1) are not possible
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
20:49:45 Initialize: found existing SDK instance
20231227T124945 INFO 27308  [L8 Init Journal Instance] Created journal instance [2a7OeLaqefZL3vgcW1zC0] of type [main] for collection [By92P05KAYC7hjX0IJZ56Q_L2C], legacySync [false]
20:49:45 [rpcHandler]: Receiving command featureFlagsDownloaded
20:49:45 [featureFlagsDownloaded]: Received event
20:49:45 [featureFlagsDownloaded]: Feature flags on hub o3651331 with tenant 43612202: {"ffBrep":false,"ffCompVer":true,"ffAnyCADInvPart":true,"ffLightType":false,"ffProperties":true,"ffWrite":true,"ffConfiguration":false,"ffEliminateMigration":true,"ffFailureMock":false,"ffTransactionSave":true,"ffPMI":false,"ffInspection":false,"ffShopfloor":false,"ffMeCacheCommands":false,"ffPC":false,"ffPCW":false,"ffClobberSaveClearCache":true,"ffRemoveSqlV2V3":false,"ffDiagnostics":false,"ffPPExtraction":false,"ffPPExtractionForSave":false,"ffDataValidation":false,"ffResiliencySync":true,"ffUseWorkerThread":false,"ffCmdsRateLimitPerMinute":"600","ffRetryRemoteVersionSnapshotLimitSec":"0"}
20231227T124945 INFO 27308  getHubInfoByTenantId 43612202 undefined
20:49:45 [rpcHandler]: Receiving command setProxy
20:49:45 Disabling proxy
20:49:45 [rpcHandler]: Receiving command setOnlineState
20:49:45 Received event setOnlineState, onlineState: 0
Connecting to controller server
Controller server info:  QHostAddress("127.0.0.1") : 63873
host connected QHostAddress("127.0.0.1") : 63988
"20:49::45.504" MAGWorkControllerHostConnection  cmd sent  "auth"
"20:49::45.506" MAGWorkHostControllerConnection  cmd sent  "auth"
MAGWorkControllerHostConnection  cmd received  "auth"
"20:49::45.508" MAGWorkControllerHostConnection  cmd sent  "authok"
MAGWorkHostControllerConnection  cmd received  "auth"
"20:49::45.512" MAGWorkHostControllerConnection  cmd sent  "authok"
MAGWorkHostControllerConnection  cmd received  "authok"
Controller connected
Sending max processes:  8
"20:49::45.515" MAGWorkHostControllerConnection  cmd sent  "ready"
MAGWorkControllerHostConnection  cmd received  "authok"
MAGWorkControllerHostConnection  cmd received  "ready"
"20:49::45.519" MAGWorkControllerClientConnection  cmd sent  "auth"
"20:49::45.521" MAGWorkClientControllerConnection  cmd sent  "auth"
MAGWorkControllerClientConnection  cmd received  "auth"
"20:49::45.523" MAGWorkControllerClientConnection  cmd sent  "authok"
MAGWorkClientControllerConnection  cmd received  "auth"
"20:49::45.528" MAGWorkClientControllerConnection  cmd sent  "authok"
MAGWorkClientControllerConnection  cmd received  "authok"
MAGWorkControllerClientConnection  cmd received  "authok"
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
20231227T124946 INFO 27308  [L8 Init Journal Instance] Created journal instance [2a7OeLaqefZL3vgcW1zC0] of type [main] for collection [By92P05KAYC7hjX0IJZ56Q_L2C], legacySync [false]
20:49:46 [documentIdentityService]: Successfully rehydrated service.
     Space Collection id: By92P05KAYC7hjX0IJZ56Q_L2C
20:49:46 Space collection id is : By92P05KAYC7hjX0IJZ56Q_L2C
20:49:47 [rpcHandler]: Receiving command setProxy
20:49:47 Disabling proxy
20:49:47 [rpcHandler]: Receiving command setOnlineState
20:49:47 Received event setOnlineState, onlineState: 2
20:49:47 --------------------------------------------------------------------------------
20:49:47 [postSdkInitializationExecutor]: Start
20231227T124947 INFO 27308  [L8 Init Journal Instance] Created journal instance [2a7OeLaqefZL3vgcW1zC0] of type [main] for collection [By92P05KAYC7hjX0IJZ56Q_L2C], legacySync [false]
20231227T124947 INFO 27308  [L8 Init Journal Instance] Created journal instance [By92P05KAYC7hjX0IJZ56Q_L2C] of type [remote] for collection [By92P05KAYC7hjX0IJZ56Q_L2C], legacySync [false]
MAG.Default: Test platform is ready.
MAG.Default: [Idle Status] Report Idle(Idle or Not):  true
20:49:49 [postSdkInitializationExecutor]: End (1506 ms)
20:49:49 --------------------------------------------------------------------------------
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
qt.gui.imageio: libpng warning: iCCP: known incorrect sRGB profile
*** UnKnown 找不到 localhost: No response from server
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
QWidget::paintEngine: Should no longer be called
QPainter::begin: Paint device returned engine == 0, type: 1
MAG.Default: [Idle Status] Report Idle(Idle or Not):  false
~QTWorkspaceWindow(): "Main--QTWorkspaceWindow"
20:50:52 [rpcHandler]: Receiving command documentClosed
20:50:52 [documentClosed]: Received event
20:50:52 [documentClosed]: Caught error while trying to get invariant id
20:50:52 --------------------------------------------------------------------------------
20:50:52 [documentClosed]: Start
20:50:52 [documentClosed]: Missing invariant id or document not ready for removal, Unable to remove document from state model on document close event.
20:50:52 [documentClosed]: End (0 ms)
20:50:52 --------------------------------------------------------------------------------
20:50:52 [rpcHandler]: Receiving command setProxy
20:50:52 Disabling proxy
"Assert failed in CApp::~CApp() file: R:\\Electron\\EAGLE\\src\\API\\EagleAPI\\libeagle_app.cpp, line: 188"
QObject::startTimer: Timers can only be used with threads started with QThread
QObject::startTimer: Timers can only be used with threads started with QThread
QObject::startTimer: Timers can only be used with threads started with QThread
QEventDispatcherWin32::wakeUp: Failed to post a message (無效的視窗控制代碼。)
OptionAdapter DestructorEagleRc DestructorEagleRc Destructor20:50:57 WebSocket close observed.
20:50:57 [socket close]: code 1006 reason  wasClean false
20:50:57 [socket]: Client is no longer running. Will terminate.
20:50:57 Shutting down FremontJS [Client socket disconnected]...
20:50:57 ...Server disconnected
20:50:57 ...Commands of SDKInstance 409a75e7-feec-4636-b125-501f4410b194:By92P05KAYC7hjX0IJZ56Q_L2C completed
20:50:57 ...SDKInstance 409a75e7-feec-4636-b125-501f4410b194:By92P05KAYC7hjX0IJZ56Q_L2C disposed
20:50:57 Exiting process

About some variables in the part drawing

I am sorry to open another issue. I am trying to parse the B-Rep structure of your provided graph files into JSON format, and by comparing against the part graphs in the assembly_joint dataset I found some variables that are not very clear:

  1. For example, point_on_face_x(y,z): there are so many points on a face, how is this point selected?
  2. Some faces have both a normal and an axis, and their values are not the same; how are these two geometrically determined?

Randomized reconstruction commands

Hi,

I am adding more semi-synthetic data by taking existing designs and modifying or recombining them, using the tool mentioned in Section A.2.4 of the supplemental material. But the related code is only in the test folder, and some configuration files are missing from the repo, e.g.:

        cls.data_dir = dataset_dir
        cls.void_data_dir = dataset_dir.parent / "void"
        cls.split_file = dataset_dir.parent / "train_test.json"
        cls.void_split_file = dataset_dir.parent / "void.json"
        cls.distributions_json = dataset_dir.parent / "d7_distributions.json"
        cls.distributions_training_only_json = dataset_dir.parent / "d7_training_distributions.json"

Is there a runnable example for the randomized reconstruction part?

Doubts regarding FUSION 360 Gallery dataset

I had a few queries regarding the Fusion 360 Gallery dataset.

  1. For each item in the Reconstruction Dataset we have JSON, OBJ, PNG, SMT and STEP files. In the paper you mention that the dataset provides B-Rep, mesh, and construction sequence (JSON text) formats. Among these file formats, which ones are the B-Rep and mesh formats? I am very new to CAD datasets.
  2. What embedding are you using to train the neural network, i.e. what input are you giving to the neural network?
  3. From the paper (Section 5.2): what do you mean by actions (At) and design (Gt), i.e. which components in the dataset represent these?

Precisions on the JoinABLe implementation

Hello! Thank you for these cool datasets.
I'm trying to reproduce the results of the JoinABLe paper, and I was wondering how to deal with the edge vertices that have the is_degenerate flag set to True in a part JSON file.

Indeed, we do not have curve or length information for these vertices, but they nevertheless seem to appear sometimes in the graph connectivity matrix that I will be using in the message-passing network.

That is why I don't feel they should be removed from the part graph, but I don't know how to represent them in the input tensor.

Thank you again!

Questions about assembly datasets

I read your instructions in the assembly dataset, but I have some confusion.

Your assembly dataset introduces a concept called components, which can be composed of one or more parts. I see that in your assembly JSON file you provide joint and as_built_joint as clues to the connections between different occurrences, but for an occurrence with multiple parts you provide neither information about the connections between those parts nor SMT, STEP or OBJ files for the occurrence itself. Based on this data, how should I compose such an occurrence from its parts?
Also, I would like to know: the occurrence, component, and body names in the assembly dataset are long strings of numbers and letters used as their UUIDs. What are these UUIDs generated from? Do they have any special meaning?

I hope you can answer my questions at your convenience, thank you!

Joints label

In the assembly dataset, it seems that the joints of the assemblies are incomplete: according to the joint information, many parts are not connected to any other part. Is there any way to complete these joint labels?

Reconstruction of disconnected shapes

Hi,

Can I create graphs in PerFace mode using Regraph for 3D models that contain disconnected shapes? For example, the following model from the reconstruction dataset contains two separate components.

Though the above model gave an assertion error, is it possible for some other 3D models in the reconstruction dataset to generate a graph successfully?

let's do best-first-search

it's similar to beam search but may have better characteristics. Specifically, it wastes fewer interactions with the gym, at the risk of not following a trajectory to the very end. One problem we noticed with beam search is that it re-does its first beams when it doubles in size, effectively wasting half of its computation per beam rollout.

@karldd if you can open a new file by copy/pasting the beam-search code, I can have a go at it
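
For concreteness, a minimal sketch of the idea (the env.replay and agent interfaces here are assumptions matching the search issue above, not existing code): keep a priority queue of partial action sequences ordered by cumulative log-probability and always expand the most promising one, so no beam level is ever recomputed wholesale:

import heapq
import math

def best_first_search(env, agent, target_graph, budget):
    frontier = [(0.0, [])]  # (negative cumulative log-prob, action sequence)
    best_iou = 0.0
    while frontier and budget > 0:
        neg_logp, seq = heapq.heappop(frontier)
        # Hypothetical helper: re-run a sequence of extrudes from a fresh state.
        # A real implementation would checkpoint env state per node instead
        # of replaying each prefix from scratch.
        cur_graph, cur_iou = env.replay(seq)
        budget -= max(len(seq), 1)
        best_iou = max(best_iou, cur_iou)
        actions, probs = agent.get_actions_prob(cur_graph, target_graph)
        for action, p in zip(actions, probs):
            if p > 0:
                heapq.heappush(frontier, (neg_logp - math.log(p), seq + [action]))
    return best_iou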

Regraph success rate

Hi,

I tried to regraph the reconstruction dataset. So far I have run through about 1384 JSON files: 687 of them were successfully regraphed, 446 raised exceptions, and 242 were skipped. Are these numbers reasonable? Are a lot of AssertionErrors expected during regraph? Thanks in advance!

Parts of the Assembly Dataset in JSON graph format

Hello @karldd !
Hope you and your team had a great CVPR 🎷

Unlike the Assembly - Joint subset, parts in the complete Assembly dataset aren't available in the NetworkX graph JSON format. This format is super useful for parsing geometric features that are not necessarily encoded in the part's STEP or SMT files.

I was wondering if you planned to release a version of the dataset that includes the parts in this format.

Thank you as always for your great work and kind help 😄

Tools to parse B-REP

Hi,

Are there some tools or APIs available to parse the B-REP file to get the boundary curves after every extrusion step?

Thanks
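
Not an official tool, but one possible route (a sketch assuming each extrusion step is exported as its own STEP file) is pythonocc, which can iterate the edges of a B-Rep and query each underlying curve:

from OCC.Extend.DataExchange import read_step_file
from OCC.Extend.TopologyUtils import TopologyExplorer
from OCC.Core.BRepAdaptor import BRepAdaptor_Curve

shape = read_step_file("step_after_extrude_03.step")  # hypothetical per-step export
for edge in TopologyExplorer(shape).edges():
    curve = BRepAdaptor_Curve(edge)
    # GetType() distinguishes lines, circles, B-splines, etc.
    print(curve.GetType(), curve.FirstParameter(), curve.LastParameter())

occwl offers a similar, higher-level interface over the same OpenCascade kernel.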

How to acquire 3D-Constraints info based on provided 2D-Constraints and Extrusion?

Hi,

Firstly, thanks for selflessly releasing such an outstanding dataset to the public!

As described in the paper, Fusion360Gallery provides constraint information on 2D sketches, such as CoincidentConstraint or ColinearConstraint. However, these provided constraints all lie in the 2D domain rather than 3D. To me, 3D constraint information is equally important to its 2D counterpart and could also be applied to many meaningful tasks. So I wonder whether there are tools available for extracting 3D constraint information from the existing 2D sketch constraints and extrusion sequence?

Looking forward to your reply! Thanks in advance.

How to get segmentation label for each face in brep data?

Thank you very much for making the dataset publicly available! We appreciate your great efforts and excellent work. I had a few questions regarding the Fusion 360 Gallery segmentation dataset. When looking into the segmentation data, I found some data with seemingly inconsistent labels. For example, in figure 1, the flat quadrilateral face is labeled "cut end", but there is no "cut side" face around it; the same situation occurs in figure 2. I understand that this kind of label is possible when the user cuts the whole face by a distance, but this kind of data will bring ambiguity to my task.
[figure 1]
[figure 2]

So I want to change the labels, or generate some modeling sequence data and label the faces myself. May I ask the following questions?

  1. How did you label each face with its corresponding modeling sequence? Could you please roughly clarify the process?
  2. Is there any possible way to assign a face its corresponding modeling-sequence label with pythonocc or occwl?

Thank you very much!!

UI blocked and cannot recover

Hi,

I started the Fusion360 Gym server as suggested in the interface:

Running
Open Fusion 360
Go to Tools tab > Add-ins > Scripts and Add-ins
In the popup, select the Add-in panel, click the green '+' icon and select the server directory in this repo
Click 'Run' to start the server

But Fusion 360 has been blocked ever since, even after I closed and restarted it. I searched extensively for a solution and even reinstalled it several times, but it would not recover. And since the UI was blocked, there was nothing I could do in the interface. Finally, I used the provided Python tool to detach the server, and then it went back to normal.

So I suggest adding a few lines around the 'Running' instructions to tell users that if this happens, they should try the Python tool to stop the server. I think that would be helpful, especially for novice users of Fusion 360.

parsing sketch extrude sequences in segmentation data

Hi, thank you for releasing the segmentation data. However, it seems that the timeline JSON file only contains references to face UIDs, which differs from the JSON files in the reconstruction data that also contain the sketch information. I wonder if there is parsing code available for recovering the sketch-extrude sequences as before?

Question about Reconstruction dataset visualization

Hi everyone,
Thanks for your dataset contribution to the community. This is a good dataset, and I like it!
I have some questions about visualization for the Reconstruction dataset.
I wish to get some visualization results for the B-Rep representation, like the image below.
[image: example B-Rep rendering from the Reconstruction dataset]

I see there is a solution for the Segmentation dataset in #76 (comment).

I tried this script, and the result doesn't seem similar to the images in the original Reconstruction dataset. By the way, this script seems to visualize the mesh instead of the B-Rep representation.

How can I get visualization results similar to those in the original Reconstruction dataset? It would be best to have a script that can capture images for a large number of model files.

Thanks very much in advance!
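
As a stopgap for batch rendering (my suggestion, not an official script), the released OBJ meshes can be rendered offscreen with trimesh; note that this renders the mesh, not the B-Rep, so edges will be tessellated:

import pathlib
import trimesh

def render_obj(obj_path, out_png, resolution=(800, 600)):
    # Offscreen render; needs a rendering backend such as pyglet installed.
    mesh = trimesh.load(obj_path)
    png_bytes = mesh.scene().save_image(resolution=resolution)
    pathlib.Path(out_png).write_bytes(png_bytes)

for obj_file in pathlib.Path("reconstruction").glob("*.obj"):  # hypothetical dataset dir
    render_obj(obj_file, obj_file.with_suffix(".png"))

Rendering the true B-Rep with highlighted edges would instead require an OpenCascade-based viewer such as pythonocc's.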

Are sketches inherently 2D?

I have been working under the assumption that sketches are 2D, but it does not seem like that is necessarily the case.
For example, in object 127202_42451722_0000, Sketch1 has two circles, 0ba7f88c-e321-11ea-bb0d-54bf646e7e1f and 0ba90a02-e321-11ea-8bd1-54bf646e7e1f, which have different normals according to their worldGeometry objects ([0, 0, 0] and [-20, 0, 0] respectively).

For reference, I extract the normals from the curve_obj.worldGeometry object as calculated in the reconstruct_sketch_curve method of sketch_extrude_importer.py.

Question about import of STEP file into fusion360.

Hi everyone.
I am having some trouble importing a STEP file into Fusion 360 with the adsk Python API.

Thanks to @karldd, I tried to follow

def screenshot(self, data, dest_dir=None):

as mentioned in #101,

to import the STEP file into Fusion 360, following the official adsk import manager example here:
https://help.autodesk.com/view/fusion360/ENU/?guid=GUID-3f24e9e8-422d-11e5-937b-f8b156d7cd97

I wrote the following program to do the import:

from pathlib import Path
import sys
import os
import json
import adsk.core, adsk.fusion

# Add the client folder to sys.path
CLIENT_DIR = os.path.join(os.path.dirname(__file__), "..", "tools/fusion360gym/client")
if CLIENT_DIR not in sys.path:
    sys.path.append(CLIENT_DIR)

CLIENT_DIR = os.path.join(os.path.dirname(__file__), "..", "tools/fusion360gym/server")
if CLIENT_DIR not in sys.path:
    sys.path.append(CLIENT_DIR)

from fusion360gym_client import Fusion360GymClient
from command_runner import CommandRunner

# Before running ensure the Fusion360GymServer is running
# and configured with the same host name and port number
HOST_NAME = "127.0.0.1"
PORT_NUMBER = 8080


def main():
    # SETUP
    # Create the client class to interact with the server
    client = Fusion360GymClient(f"http://{HOST_NAME}:{PORT_NUMBER}")
    # Clear to force close all documents in Fusion
    # Do this before a new reconstruction
    r = client.clear()
    # Example of how we read the response data
    response_data = r.json()
    print(f"[{r.status_code}] Response: {response_data['message']}")

    file_path = "C:/Users/DavidXu/Downloads/20203_7e31e92a_0000_0005.step"
    # First clear to start fresh
    r = client.clear()

    app = adsk.core.Application.get()
    product = app.activeProduct
    design = adsk.fusion.Design.cast(product)
    rootComp = design.rootComponent
    importManager = app.importManager
    stpOptions = importManager.createSTEPImportOptions(file_path)
    stpOptions.isViewFit = False
    print(importManager.importToTarget(stpOptions, rootComp))


if __name__ == "__main__":
    main()

I simply get this output:

[200] Response: Success processing clear command
False

For the function importManager.importToTarget, documented at https://help.autodesk.com/view/fusion360/ENU/?guid=GUID-7472BAC7-E570-43CE-8578-268735B6FE83, a return value of false means failure.

As stated in the documentation: "Returns true if the import was successful."

My import fails, and design.allComponents.count is 0. Both seem to indicate that the import did not succeed.

I tested this code on Windows 10.

What is wrong with my import code?

Thanks in advance.
