
py-capellambse's Introduction

Python-Capellambse

PyPI - Python Version Code QA License: Apache-2.0 REUSE status Code style: black Imports: isort

A Python 3 headless implementation of the Capella modeling tool

Intro

capellambse lets you read and write Capella models from Python, without Java or the Capella tool, on any (reasonable) platform. We wanted to "talk" to Capella models from Python, but without any Java in the way. We thought this project would help individuals and organisations get through the MBSE adoption journey with Capella faster, so we made it public and open-source.

With capellambse you can access all (or almost all) Capella model elements, and render diagrams as SVG and PNG. We made it for the automation of systems engineering work, so it integrates nicely into most CI/CD toolchains. It also sits at the core of our artifact generation pipelines (model to documents, model to SW interfaces).

The library works with the PVMT and Requirements extensions without any additional effort.

It started as a basic library in mid-2019. Since then it has been re-architected a few times and now has full read/write capability for most of the present Capella ontology. We are continuously improving the API (introducing shortcuts), increasing the meta-model coverage, and have more engineering automations and improvements in the pipeline to share.

Related projects

  • capellambse-context-diagrams — A capellambse extension that visualizes the context of Capella objects, and exposes it on element attributes like .context_diagram, .tree_view, etc.

  • capella-diff-tools — A set of tools to compare Capella models.

Did you make something cool that is using or extending capellambse? Tell us about it, so we can add it to this list!

Documentation and examples

The library is designed to be easy to use and discover, especially in an interactive environment such as JupyterLab. Additionally, API documentation is automatically generated and published whenever new features and bug fixes are added.

You are encouraged to explore our test models and demo notebooks. Click on the button below to launch a Jupyter notebook server on the public myBinder service, and get started in seconds:

myBinder

Warning: Do not enter confidential information, such as passwords for non-public models, into a notebook hosted on myBinder. If you want to try out capellambse with those models, please install and run it in a local, trusted environment!

The docs/source/examples directory contains several hands-on example notebooks that you can immediately run and start experimenting with. Below is a short summary of each notebook's goal. If you are in the JupyterLab environment, you can click the notebook names to directly open them in a new lab tab. On GitHub, you will be shown a statically rendered preview of the notebook.

  • 01 Introduction.ipynb provides a high-level overview of the library features. It visualizes examples like a Component - Function allocation table by leveraging Jupyter's and IPython's rich display functionality.

  • 02 Intro to Physical Architecture.ipynb explores some more advanced concepts on the example of the Physical Architecture Layer. It shows how to derive tabular data, like a Bill of Materials or a Software to Hardware allocation table, by using pandas dataframes.

  • 03 Data Values.ipynb shows how the API can be used to explore classes, class instances and other objects related to data modeling.

  • 04 Intro to Jinja templating.ipynb demonstrates how to effectively combine capellambse with the powerful Jinja templating engine. This enables the creation of all sorts of model-derived documents and artifacts, including interactive web pages, PDF documents and any other textual representations of your models.

  • 05 Introduction to Libraries.ipynb shows how to use Capella Library Projects within capellambse. In this example you'll learn how the API can be used to open a project that is based on a library and find objects in both models.

  • 06 Introduction to Requirement access and management.ipynb shows how the API can be used to work with requirements objects, introduced by the Capella Requirements Viewpoint. In this example you'll see how to find requirements in the model, see which objects requirements are linked / traced to and even export requirements to Excel or ReqIF formats.

  • 07 Code Generation.ipynb shows how to generate code from class diagrams. In particular, we focus on Interface Descriptive Languages with concrete examples for Class to ROS2 IDL and Google Protocol Buffers. We also show how simple Python stubs could be generated given a Class object.

  • 08 Property Values.ipynb shows how to access property values and property value groups, as well as the Property Value Management (PVMT) extension.

  • 09 Context Diagrams.ipynb shows the capellambse-context-diagrams extension that visualizes contexts of Capella objects. The extension is external to the capellambse library and needs to be installed separately.

  • 10 Declarative Modeling.ipynb demonstrates a basic application of the declarative approach to modeling on a coffee machine example.

  • 11 Complexity Assessment.ipynb quickly demonstrates how to use and view the model complexity badge for a Capella model.

We are constantly working on improving everything shown here, as well as adding even more useful functionality and helpful demos. If you have any new ideas that were not mentioned yet, don't hesitate to contribute!

Installation

In order to use private models that are not publicly available, please install and use capellambse in a local, trusted environment.

You can install the latest released version directly from PyPI.

pip install capellambse

To set up a development environment, clone the project and install it into a virtual environment.

git clone https://github.com/DSD-DBS/py-capellambse
cd py-capellambse
python -m venv .venv

source .venv/bin/activate  # for Linux / Mac
.venv\Scripts\activate  # for Windows

pip install -U pip pre-commit
pip install -e '.[docs,test]'
pre-commit install

We recommend developing within a local Jupyter notebook server environment. In order to install and run it in the same virtual environment, execute the following additional commands:

pip install jupyter capellambse
cd docs/source/examples
jupyter-notebook

If your browser did not open automatically, follow the instructions in the terminal to start it manually.

Once in the browser, simply click on the 01 Introduction.ipynb notebook to start!

Current limitations

We are continuously improving coverage of the Capella ontology with our high-level API (the current coverage map is available here), however it is still incomplete. It covers most of the commonly used paths, but when you need to get to an ontology element that isn't covered yet, you may do so by using the low-level API.

Also, since we started in mid-2019, before Python4Capella existed, we are not API-compatible with that project.

The generated diagrams are currently not persisted in .aird files, and currently there is no plan to implement this. If there is a genuine use case for that we may reconsider it; feel free to create an issue or add comments to an existing one.

Contributing

We'd love to see your bug reports and improvement suggestions! Please take a look at our guidelines for contributors for details.

Licenses

This project is compliant with the REUSE Specification Version 3.0.

Copyright DB InfraGO AG, licensed under Apache 2.0 (see full text in LICENSES/Apache-2.0.txt)

Dot-files are licensed under CC0-1.0 (see full text in LICENSES/CC0-1.0.txt)

To provide the same look and feel across platforms, we distribute our library bundled with the OpenSans font (capellambse/OpenSans-Regular.ttf). The OpenSans font is Copyright 2020 The Open Sans Project Authors, licensed under OFL-1.1 (see full text in LICENSES/OFL-1.1.txt).

py-capellambse's People

Contributors

amolenaar, dahbar, dominik003, ewuerger, freshavocado7, henrik429, huyenngn, jamilraichouni, juancalero-gmv, materpillar, moritzweber0, paula-kli, thithi47, vik378, wuestengecko


py-capellambse's Issues

Missing all_physical_paths at PA level

In the Physical Architecture (PA) layer we are missing the physical path object. We should fix that.

This is what we should do to close this issue:

  • add .all_physical_paths accessor to PhysicalArchitecture
  • introduce PhysicalPath class into cs.py
  • add .physical_paths accessor to PhysicalLink
  • add tests for new functionality
  • update API documentation

Global search by name

It would be nice to have a method that would allow finding all elements (within a model) that have a matching name (or contain a matching string fragment in the name). This ideally would result in a mixed element list.
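Such a search could behave roughly like this stdlib-only sketch (toy `Element` class and `search_by_name` helper are hypothetical; the real implementation would walk the model's element tree and return a mixed element list):

```python
from dataclasses import dataclass

@dataclass
class Element:
    """Toy stand-in for a model element (illustrative only)."""
    name: str
    layer: str

def search_by_name(elements, fragment, *, exact=False):
    """Return all elements whose name matches (or contains) *fragment*."""
    if exact:
        return [e for e in elements if e.name == fragment]
    needle = fragment.lower()
    return [e for e in elements if needle in e.name.lower()]

elements = [
    Element("Trajectory", "la"),
    Element("Waypoint", "la"),
    Element("Trajectory Planner", "pa"),
]
print([e.name for e in search_by_name(elements, "trajectory")])
# ['Trajectory', 'Trajectory Planner']
```

A substring, case-insensitive default matches the "contain a matching string fragment" use case; `exact=True` covers exact-name lookups.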

Association edge_factory

A really weird thing happened during development of the Association factory:
Normally we define a special factory function and try to reuse the generic_factory, since it already gets almost everything done. Most special factories just deal with the labels that should appear.

With associations in class diagrams it's a bit different:
Here we have to deal with multiple (I think at most 2) labels that specify the role names. I tried the following changes to aird.diagram.Edge (screenshot omitted):
Edges now have labels that default to an empty sequence.

This wasn't explicitly handed over in the generic factory and caused repetition of all preceding edge.labels in the newly constructed edge... almost like the edge wasn't constructed freshly, just reused.

I'd really like to understand how this could happen in my favourite programming language.

What fixed the issue: (screenshot omitted)
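The symptom described above, label lists that accumulate across "freshly" constructed edges, is exactly what Python's classic mutable-default-argument pitfall produces. A hypothetical reconstruction (not the actual capellambse code):

```python
# A likely culprit (hypothetical reconstruction): a mutable default argument
# is created once and shared between *all* calls, so every "new" edge sees
# the labels appended to earlier edges.
class Edge:
    def __init__(self, labels=[]):  # BUG: one shared list for all instances
        self.labels = labels

e1 = Edge()
e1.labels.append("rolename1")
e2 = Edge()
print(e2.labels)  # ['rolename1'] -- the "fresh" edge already has a label!

class FixedEdge:
    def __init__(self, labels=None):  # fix: build a new list per instance
        self.labels = list(labels) if labels else []

f1 = FixedEdge()
f1.labels.append("rolename1")
print(FixedEdge().labels)  # []
```

Explicitly handing the labels over in the factory has the same effect as the `None` default: each edge gets its own list.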

Refactoring of the `capellambse.aird` and `capellambse.svg` modules

This was already briefly touched on in #25, but we never opened a formal issue for this.

Problem analysis

Currently, a lot of calculations are duplicated between capellambse.aird and capellambse.svg. To make matters worse, the two modules have small but significant differences in their implementations. As an example, the aird module makes box-sizing calculations without taking into account any icons, only the text label, whereas the svg module does account for icons. This has led to several hard-to-debug rendering issues in the past already, and it's safe to assume that similar issues will continue to crop up in the future.

The fact that the aird module has to do some calculations is unavoidable: the XML does not contain enough information on its own to render a complete diagram (which makes sense, as most of it can be relatively easily calculated from the information that is given). Examples include the physical extent of labels, i.e. their height and width when drawn – this can be (and already is) calculated based on the text, font and the font size.

On the other hand, the svg module needs to do its own additional calculations because not all of the information that it needs is provided to it from the aird module. For example, to draw text, it needs to not only know the size of the bounding box, but also the height and vertical position of each line. This effectively results in doing (almost) all the text-related calculations for the second time, and then doing some more on top.

Proposed solution

A simple solution to this problem is for the aird module to provide all of the additional numbers that the svg module needs as part of the JSON document that is exchanged between the two. However, this approach makes the JSON format even more specific to the current SVG format converter, and does not scale very well if other modules emerge that provide different output formats (and which do not just convert from SVG, as is done by the current PNG converter).

A viable alternative approach is for the svg module to consume the aird.Diagram instance directly. This would allow us to provide either methods on that object or functions in some standardized place for all the calculations that need to be done additionally, while also taking advantage of the information already calculated before. Such methods can also be leveraged by alternative format conversion modules, without having to calculate things that are not needed for the particular requested format.

Both proposals constitute a breaking change for the JSON interface (the first one because older documents would be lacking crucial information, the second one because the interface would be abandoned entirely). However, I don't think that this is a particularly big issue, as the only real use case for these JSON documents was this exchange between capellambse submodules anyway. Without a way to reconstruct an aird.Diagram instance, they are not very useful for persisting modified (or newly created) diagrams, and with the svg module being the only known consumer of that format, it makes more sense to just convert it to the well-known, standardized SVG format straight away.

Additional benefits

No matter which route we choose, addressing this issue can be expected to provide some additional benefits.

  1. The current svg module is, frankly, a mess. During this refactoring, there is a great opportunity to clean it up, and to add some more documentation to it.
  2. The aird.parser has grown to be a very complex beast as well. It's possible that some calculations can be moved out of it, reducing its overall complexity.
  3. By removing the need to perform the same calculations twice, we should experience a noticeable speed-up in the SVG rendering process.

Use SPDX to declare the source files' license

Currently, we have a relatively large (13 lines) header in each Python source file. The Apache-2.0 license requires that such a notice be put there, and it provides this copy-pasta as an example (see the definition of "Work" in §1), but it does not mandate that this long notice must be used.

This is where SPDX comes into play. It offers a standardized, compact, but human-readable format for declaring licenses (among other things). Using SPDX, the license header could be condensed down to two lines. Omitting the year from it prevents mistakenly copying an older year into a new file, and avoids having to fix all affected files each year. (I'm assuming that this library will keep being maintained for a while. ;) )

# Copyright DB Netz AG and the capellambse contributors
# SPDX-License-Identifier: Apache-2.0

This is much easier on the eyes, and greatly increases the ratio of code to boilerplate, especially for small files.
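A small helper to enforce such headers in CI might look like this (hypothetical script, not part of the repository):

```python
import pathlib
import tempfile

# The proposed two-line SPDX header from the issue above.
EXPECTED_HEADER = (
    "# Copyright DB Netz AG and the capellambse contributors\n"
    "# SPDX-License-Identifier: Apache-2.0\n"
)

def has_spdx_header(path: pathlib.Path) -> bool:
    """Check that a source file starts with the two-line SPDX header."""
    try:
        content = path.read_text(encoding="utf-8")
    except OSError:
        return False
    return content.startswith(EXPECTED_HEADER)

# Demo on throw-away files:
with tempfile.TemporaryDirectory() as tmp:
    good = pathlib.Path(tmp, "good.py")
    good.write_text(EXPECTED_HEADER + "\nprint('hi')\n", encoding="utf-8")
    bad = pathlib.Path(tmp, "bad.py")
    bad.write_text("print('hi')\n", encoding="utf-8")
    print(has_spdx_header(good), has_spdx_header(bad))  # True False
```

In practice the `reuse lint` tool performs this check (and more) across the whole repository.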

Add simple API for object relationships exploration in data models

At the moment it is only possible to find object-to-object (Class) relationships via low-level API calls, which makes adoption of the library for interface spec generation a bit difficult.

We should provide a high level API to facilitate object relationships exploration.

In Capella metamodel, the object relationships are captured via Association and Property objects. The below set of class diagrams describes the Capella implementation:

(class diagrams omitted)

Let's now apply this to the following practical example: a Trajectory object is made of an ordered list of Waypoint objects.

(example class diagram omitted)

There are a few paths that we could take to get the end user from Trajectory object to Waypoint object and the other way around:

  • trajectory.properties.by_name("waypoints").type --> Waypoint
  • model.search("Association").by_name("DataAssociation1").roles[1].type --> Waypoint

Next actions:

  • Create more practical examples with associations in a test model
  • Update API ontology model for information layer to reflect introduced paths and shortcuts
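The two access paths proposed above can be sketched with toy stand-in classes (all names here are hypothetical; the real capellambse API wraps the underlying XML elements):

```python
from dataclasses import dataclass

@dataclass
class Klass:
    """Toy stand-in for a Capella Class element."""
    name: str

@dataclass
class Property:
    name: str
    type: Klass

@dataclass
class Role:
    type: Klass

@dataclass
class Association:
    name: str
    roles: list

class ElementList(list):
    """Minimal .by_name() lookup, mimicking capellambse's element lists."""
    def by_name(self, name):
        return next(e for e in self if e.name == name)

waypoint = Klass("Waypoint")
trajectory_properties = ElementList([Property("waypoints", waypoint)])
association = Association(
    "DataAssociation1", [Role(Klass("Trajectory")), Role(waypoint)]
)

# Path 1: trajectory.properties.by_name("waypoints").type --> Waypoint
print(trajectory_properties.by_name("waypoints").type.name)  # Waypoint
# Path 2: ...by_name("DataAssociation1").roles[1].type --> Waypoint
print(association.roles[1].type.name)  # Waypoint
```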

Improve documentation of the `GitFileHandler`

The attributes (apart from username and password) can have a docstring.

Some are already documented on the FileHandler ABC, we should add a link to it and explicitly mention which ones we support here.

The open method doesn't have a docstring yet

It uses the same docstring as the base class.

The NumPy style guide for docstrings doesn't explicitly say anything about this case, AFAICT. The Google style, however, says to insert a docstring à la """See base class""". I'll raise this in our next regular meeting, where we can discuss it properly with the entire team.

and write_transaction is referencing a private class' docstring (_GitTransaction).

It isn't referencing, but rather copying it:

write_transaction.__doc__ = _GitTransaction.__init__.__doc__

Or was your point that we shouldn't do this? (In which case, I'm curious as to why?)

Originally posted by @ewuerger in #99 (review)

Add simple API for ReqIF export at module level

We need a simple method, similar to pandas' df.to_excel(filename), to dump requirement modules in ReqIF format.

Ideally, the call should look like this:

module = model.la.requirement_modules[0]
module.to_reqif("my_module.reqif")
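Under the hood, such a method might be sketched like this (schematic XML only: the element layout below does NOT follow the full ReqIF schema, which additionally requires SPEC-TYPES, datatype definitions, etc.):

```python
import os
import tempfile
import xml.etree.ElementTree as ET

def to_reqif(requirements, filename):
    """Dump (identifier, long_name, text) tuples to a simplified XML file.

    Schematic sketch only; a real exporter would emit the standardized
    REQ-IF structure.
    """
    root = ET.Element("REQ-IF")
    objects = ET.SubElement(root, "SPEC-OBJECTS")
    for identifier, long_name, text in requirements:
        obj = ET.SubElement(
            objects, "SPEC-OBJECT",
            {"IDENTIFIER": identifier, "LONG-NAME": long_name},
        )
        ET.SubElement(obj, "TEXT").text = text
    ET.ElementTree(root).write(filename, encoding="utf-8", xml_declaration=True)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, "my_module.reqif")
    to_reqif([("REQ-001", "Braking", "The train shall brake.")], target)
    reread = ET.parse(target).getroot()
    print(reread.tag, reread.find(".//SPEC-OBJECT").get("IDENTIFIER"))
```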

Improve detection of Git LFS files

The way how we currently detect which files in a Git repo use LFS has two major flaws:

  1. It only works if git-lfs is installed on the host. If it is not, all files will be treated as if they weren't using LFS, even if they are. When attempting to load a model from a Git repo under these conditions, a very non-obvious error will be raised:

    Traceback (most recent call last):
      File "/home/martinlehmann/git/capellambse/./_modeltest.py", line 137, in <module>
        model = capellambse.MelodyModel(**modelinfo)
      File "/home/martinlehmann/git/capellambse/capellambse/model/__init__.py", line 177, in __init__
        self._loader = loader.MelodyLoader(path, **kwargs)
      File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 215, in __init__
        self.__load_referenced_files(
      File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 240, in __load_referenced_files
        frag = ModelFile(filename, self.filehandler)
      File "/home/martinlehmann/git/capellambse/capellambse/loader/core.py", line 102, in __init__
        self.tree = etree.parse(
      File "src/lxml/etree.pyx", line 3521, in lxml.etree.parse
      File "src/lxml/parser.pxi", line 1876, in lxml.etree._parseDocument
      File "src/lxml/parser.pxi", line 1896, in lxml.etree._parseMemoryDocument
      File "src/lxml/parser.pxi", line 1784, in lxml.etree._parseDoc
      File "src/lxml/parser.pxi", line 1141, in lxml.etree._BaseParser._parseDoc
      File "src/lxml/parser.pxi", line 615, in lxml.etree._ParserContext._handleParseResultDoc
      File "src/lxml/parser.pxi", line 725, in lxml.etree._handleParseResult
      File "src/lxml/parser.pxi", line 654, in lxml.etree._raiseParseError
      File "<string>", line 1
    lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
    
  2. It doesn't work well together with branches. git lfs ls-files always works on the HEAD, which may not be related to the branch we're interested in. If a file is using LFS on the branch we're using, but is not marked with LFS in the HEAD commit, we will currently not apply the LFS filters. This leads to the same error as above. If on the other hand a file is marked LFS in HEAD but is not actually LFS on our branch, we will try to apply the filter. This produces a warning message, which gets demoted to "debug" level as the operation succeeds anyway.


To address both points simultaneously, I propose switching away from git lfs ls-files and instead inspecting the .gitattributes files ourselves. This can either be done once for the entire repo and the results saved, similar to how it works now, or it could be done for each file whenever it is open()ed. For flexibility, I prefer the latter approach, but it may lead to performance issues especially on Windows due to repeated calls out to git.

We can take advantage of the fact that git-lfs always uses the standardized filter name lfs (lower-case), which will greatly simplify the operation and avoid the need to additionally parse any git configuration files and/or guess the actual filter name. However, our implementation does need to be aware that .gitattributes files can exist on any directory level, not just the repository root.
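A minimal sketch of the proposed .gitattributes-based detection (simplified matching rules, not a full reimplementation of git's attribute precedence; function names are assumptions):

```python
import fnmatch
import posixpath

def parse_gitattributes(text):
    """Yield (pattern, attribute-tokens) pairs from .gitattributes content."""
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            pattern, *attrs = line.split()
            yield pattern, attrs

def uses_lfs(path, gitattributes_files):
    """Decide whether *path* uses the standardized 'lfs' filter.

    *gitattributes_files* maps directory prefixes ('' = repo root) to the
    content of the .gitattributes file in that directory. Simplified
    sketch of git's rules: deeper files and later lines win for 'filter'.
    """
    result = False
    for prefix in sorted(gitattributes_files, key=len):
        if prefix and not path.startswith(prefix + "/"):
            continue
        relative = path[len(prefix) + 1:] if prefix else path
        for pattern, attrs in parse_gitattributes(gitattributes_files[prefix]):
            # A pattern without a slash matches basenames in any subdirectory
            target = relative if "/" in pattern else posixpath.basename(relative)
            if fnmatch.fnmatch(target, pattern):
                if any(a.startswith(("filter=", "-filter")) for a in attrs):
                    result = "filter=lfs" in attrs
    return result

attrs = {"": "*.capella filter=lfs diff=lfs merge=lfs -text\n"}
print(uses_lfs("model/My Model.capella", attrs))  # True
print(uses_lfs("README.md", attrs))               # False
```

Since this only reads the checked-out .gitattributes files, it works without git-lfs installed and respects whatever branch is actually checked out.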

Add a basic example for templating with Jinja

To reduce on-boarding effort we should demonstrate how one could generate a simple document out of a model with Jinja templates. We'll use HTML as source and target, as this can be nicely visualized in Jupyter without any additional software.

We should also add some hints on how to move from there in other directions, like Markdown, WeasyPrint or python-docx.

Document how to properly use the `git` filehandler

Using the git:// protocol (specifically the git+ssh:// variant of it) is not obvious for new users, especially if they're used to the scp-like short form. Additionally, the error message produced when trying to use the short form is confusing and suggests that Git-via-SSH is not supported at all.

Alternatively, a special case could be implemented in the URL parsing logic that handles the scp-like short form.
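The proposed special case for scp-like short forms could be sketched roughly like this (a stdlib-only sketch; `normalize_git_url` and the exact rewrite rule are assumptions, not capellambse's actual URL parsing logic):

```python
import re

# user@host:path (no scheme, no leading '//') -- the scp-like short form
SCP_RE = re.compile(r"^(?P<user>[^@/]+@)?(?P<host>[^:/]+):(?P<path>(?!//).+)$")

def normalize_git_url(url):
    """Rewrite scp-like 'git@host:path' into 'git+ssh://git@host/path'.

    URLs that already carry a scheme are passed through unchanged.
    """
    if "://" in url:
        return url
    m = SCP_RE.match(url)
    if m is None:
        return url
    user = m.group("user") or ""
    return f"git+ssh://{user}{m.group('host')}/{m.group('path')}"

print(normalize_git_url("git@example.com:org/repo.git"))
# git+ssh://git@example.com/org/repo.git
```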

Provide example for how to work with requirements

Managing requirement objects is an important part of our MBSE workflow and one of the primary use cases for our library, hence we should provide some common usage examples.

The following topics should be covered:

  • easy start - show requirements linked to an object
  • export all requirements to excel (id, uuid, text)
  • create allocation table - for every requirement, show the list of objects that link to it via an "implements" link
  • change assessment - show how to check if a requirement has changed
  • create - show how to create a requirement programmatically, assign a type, set an attribute and link it to an object

SAB diagram - multiple rendering issues

The SAB diagram rendered by our default engine has missing features and incorrect placement:

  • Function ports are not shown
  • ComponentExchange edges go into and above the Component Ports
  • port allocation links go above ports
  • Component Port direction is identified erroneously


Raise minimum supported Python version to 3.9

Keeping support for Python 3.8 introduces a notable amount of developer overhead. This is primarily due to it being the last version without PEP 585. This PEP deprecates most of the stdlib-parallel classes in typing (e.g. List, Set, etc.) and allows using type hints on the stdlib classes themselves.

This works fine in a type annotation context, however when we want to subclass something we still need to go back to the typing classes for that instance. This is not only annoying, but also makes the code a little less clear to read due to essentially the same class being used from different modules.

Dropping support for Python 3.8 entirely is at this point a reasonable choice, in my opinion. However, we do need to analyze our dependency chain and make sure that all dependencies and dependents, as well as all relevant tool configuration, have been updated accordingly.
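For illustration, here is the difference PEP 585 makes (runs on Python 3.9+; the function names are illustrative):

```python
import typing

# Pre-PEP-585 style (required on 3.8): typing aliases for subscripted hints
# (on 3.8 you would also often need: from __future__ import annotations)
def old_style(items: typing.List[int]) -> typing.Dict[str, int]:
    return {"total": sum(items)}

# PEP-585 style (3.9+): builtin classes are subscriptable directly
def new_style(items: list[int]) -> dict[str, int]:
    return {"total": sum(items)}

# The subclassing pain point mentioned above: even with annotations handled,
# subclassing a parameterized generic still pulls in typing on 3.8
class IntList(typing.List[int]):
    pass

print(new_style([1, 2, 3]))  # {'total': 6}
```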

Git model loader should not depend on LFS

We cannot expect every user to put their models in Git LFS (Large File Storage).

Martin wrote in #22 (comment):

I'd consider the hard requirement on Git LFS a bug. Although it has become quite common, we still can't just require everyone to install LFS even if they don't use it.
After a brief look at the affected code, it should be fairly straightforward to catch the produced error and pretend that git lfs ls-files had simply listed nothing. Getting test coverage on both cases in a single test run might become annoying though :)

Suggested solutions:

  • Create separate loaders for "normal" git and git lfs
  • Try with git lfs and fall back to normal git

`hash()` constraint violation since merge of #43

Copied here for visibility from #43 (comment) ("Make RequirementType and EnumValue hashable")


There's a good reason why Python requires an explicit override of the __hash__ method when overriding __eq__. Just keeping the parent class' __hash__ in this case violates the one fundamental requirement on __hash__:

The only required property is that objects which compare equal have the same hash value

Python Documentation on object.__hash__()

Since instances of these classes compare equal to the string value of their .long_name, their hash value must be equal to that of the long_name; in other words hash(some_enum_value) == hash(some_enum_value.long_name) must always be True. Otherwise they cannot be looked up in hash collections (i.e. dictionaries) by their long_name, which is the exact property that we were after with having them compare equal in the first place. Try this with our 5.0 test model:

>>> dtd = model.by_uuid("637caf95-3229-4607-99a0-7d7b990bc97f")
>>> dtd.values
<CoupledElementList at 0x00007F2D8FB9BCB0 [<EnumValue 'enum_val1' (efd6e108-3461-43c6-ad86-24168339ed3c)>, <EnumValue 'enum_val2' (3c2390a4-ce9c-472c-9982-d0b825931978)>]>
>>> "enum_val1" in dtd.values
True
>>> "enum_val1" in set(dtd.values)
False

Because the __eq__() of these two classes delegates to GenericElement through super(), the hash value of EnumValues must also be equal to the hash of self._element (see https://github.com/DSD-DBS/py-capellambse/blob/a501bdf2a77ea906f79cc79bae793283a805dff3/capellambse/model/common/element.py#L206..L207). Satisfying both conditions at once is not possible for obvious reasons, but it's safe to drop the super delegation.

However, it is not safe to make the hash() based on the .long_name. Remember: The only reason we can get away with GenericElements being hashable at all is that their _element is considered immutable over their lifetime. Everything else, specifically everything that accesses the XML, is mutable and can therefore not be used in hashes.

Therefore the only possible course of action for this pull request (that I can come up with) is being rejected (or reverted, as it has already found its way into master).

There are basically two ways how we can address the unhashability of EnumValue and AbstractType:

  1. We just keep these unhashable
  2. We remove the equality to their .long_name attribute

Both cause inconveniences for different use cases.

In my opinion, we should go with option 1 and advise users who unconditionally require the hashability of all model objects (independent of their type) to explore alternative solutions. To facilitate this, we can "open up" the _element a little bit. Currently we treat it as a private attribute; we could "publish" it as an opaque, but hashable object. Then advanced users could use it to make hashes of every possible model object. In this case, to make the user experience around it more pleasant, we should also offer a function that takes such an object and converts it back into a "proper" model object, i.e. a GenericElement instance. This would most likely be a method on MelodyModel, analogous to .search() and .by_uuid().

Of course another solution, which should be even easier for everyone involved, would be to hash the .uuid instead of the object itself. That can already be looked up in constant time via MelodyModel.by_uuid(). Depending on your use case, this might just do the trick.
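The contract violation and the uuid-based workaround can be demonstrated with a toy class (a hypothetical stand-in, not the real EnumValue):

```python
class BadEnumValue:
    """Compares equal to its long_name but keeps identity-based hashing."""
    def __init__(self, uuid, long_name):
        self.uuid = uuid
        self.long_name = long_name
    def __eq__(self, other):
        if isinstance(other, str):
            return self.long_name == other
        return self is other
    __hash__ = object.__hash__  # violates the hash/eq contract

val = BadEnumValue("efd6e108", "enum_val1")
print("enum_val1" == val)    # True: __eq__ matches the string
print("enum_val1" in {val})  # False: hashes differ, so set lookup fails

# Workaround: key hash collections by the (immutable) uuid instead
by_uuid = {val.uuid: val}
print(by_uuid["efd6e108"].long_name)  # enum_val1
```

This is exactly the `in dtd.values` vs. `in set(dtd.values)` discrepancy shown above: equality holds, but the hash-based lookup cannot find the element.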


Summoning @amolenaar and @vik378, the people involved in #43.

Clean tests

  • Remove unused test models and merge test models into latest version
  • Apply Pytest best practices
  • Either add logic into SVG tests or remove them

IMO it would be better to add this to the 5.0 test model, instead of adding some cases to the 5.2 one. Keeping as much as possible in the same model will make it easier to migrate to a higher minimum supported version in the future, because there's only one model with actual useful content.

Originally posted by @Wuestengecko in #94 (comment)

New user experience can be improved

As per Viktor's request I had a look at the project. My initial impression is that the code is of high quality. It may be worth adding some banners at the top, stating code coverage and code quality 👍.

This is the list of things I looked for and things I checked, being an Average Joe with some Python experience, giving this project a swing.

  • Check out the project
  • Run a python setup.py install
  • Start the application. There's no command line entrypoint (console_scripts). Or is it a Sphinx extension?
  • pyproject.toml file
  • pyproject.toml contains a `build-system` section
[build-system]
requires = ["setuptools", "wheel"]
  • README should provide a step-by-step example, e.g. using one of the models in this repo.

  • README should provide developer install instructions:
    * create a virtual env python3 -m venv .venv
    * source .venv/bin/activate
    * pip install pre-commit
    * pre-commit install

  • README: Make a short title and a subtitle, instead of the line wrapping title it is now.

  • public CI build (e.g. GitHub Actions)

  • MyPy is not configured in pre-commit

  • python setup.py test/pytest fails -> require module cssutils, sphinx

  • Set setup.py test runner to pytest

  • pre-commit run --all-files fails (fixed in 26056b9)

  • When you import dependencies only for type checking (typing.TYPE_CHECKING), make sure the following import is also made:
    from __future__ import annotations (as far as I know it's still required for Python 3.8)

  • Remove the ci folder

  • A copyright notice in a .gitignore and .gitattributes file is a bit too much :)

Although the tests are well written, it may be worth following the Arrange, Act, Assert pattern (a.k.a. Given, When, Then), separating the preparation, action and checks by blank lines. Then (most) assertions should end up at the bottom of the test. Now they're sprinkled throughout the tests, which I tend to find confusing. Precondition checks can be performed in the fixtures themselves.

NB. One thing we should strive for is to make a new user enthusiastic within 5 minutes. You want to make a new user achieve something quickly, so they're hooked.

Change specification link format conversion to capella standard hlink format

In the description of GenericElements you can link to other GenericElements, which leaves an <a>GenericElement.name</a> in the XML attribute value. In the case of a Specification we convert these links, if they are there, to #{uuid}. This is inconsistent with the standard Capella hlink format: hlink://{uuid}.
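A one-shot migration of existing descriptions could be sketched as follows (illustrative helper, not part of capellambse):

```python
import re

# RFC-4122-shaped UUIDs, as used by Capella element references
UUID_RE = r"[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}"

def to_hlink(text):
    """Rewrite '#<uuid>' references into the Capella 'hlink://<uuid>' form."""
    return re.sub(rf"#({UUID_RE})", r"hlink://\1", text)

spec = 'See <a href="#637caf95-3229-4607-99a0-7d7b990bc97f">DTD</a>'
print(to_hlink(spec))
# See <a href="hlink://637caf95-3229-4607-99a0-7d7b990bc97f">DTD</a>
```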

API for accessing auxiliary files

In addition to the model itself, there can also exist auxiliary files next to it. These files are defined by users of the model. This will be especially useful in conjunction with the GitFileHandler or similar, where such files would be downloaded automatically and transparently to the user.

In order to implement this functionality, we first need to define an appropriate API:

  1. How files can be accessed from a MelodyModel instance.
  2. How the MelodyLoader provides access to the underlying file handler.
  3. How to access directories and enumerate files in them.

The first two points can be solved in a very simple and effective manner by exposing the actual FileHandler object. This allows the FileHandler to implement any arbitrary API without having to worry about name collisions with attributes of the model object.

For the last point we can implement a PathLike API. This is both user friendly and allows code that works with pathlib.Path objects to also transparently handle files in the file handler.
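The proposed PathLike access could look roughly like this toy sketch (an in-memory stand-in for a real FileHandler; all class names here are hypothetical):

```python
import io

class InMemoryFileHandler:
    """Toy FileHandler exposing a pathlib-like, read-only view of files.

    A real handler would fetch bytes from e.g. a Git repository; here a
    dict mapping 'dir/file' keys to bytes stands in for the backing store.
    """
    def __init__(self, files):
        self._files = files
    @property
    def rootdir(self):
        return _Path(self, "")

class _Path:
    def __init__(self, handler, parts):
        self._handler = handler
        self._parts = parts
    def __truediv__(self, name):
        joined = f"{self._parts}/{name}" if self._parts else name
        return _Path(self._handler, joined)
    def open(self, mode="rb"):
        return io.BytesIO(self._handler._files[self._parts])
    def iterdir(self):
        seen = set()
        prefix = self._parts + "/" if self._parts else ""
        for key in self._handler._files:
            if key.startswith(prefix):
                head = key[len(prefix):].split("/", 1)[0]
                if head not in seen:
                    seen.add(head)
                    yield self / head

fh = InMemoryFileHandler({"docs/spec.txt": b"hello"})
print((fh.rootdir / "docs" / "spec.txt").open().read())  # b'hello'
```

Because the object supports `/` joining, `open()` and `iterdir()`, code written against pathlib.Path conventions can handle remote files transparently.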

Introduce API coverage control

It would be helpful to indicate the number of classes, relationships and queries that we already cover vs. discovered (based on Capella meta-model analysis). The implementation status is already collected via a PVMT attribute in the API model.

This includes the following actions:

  • Clean-up the API model so that for every package all classes and relationships (including derived attrs and rels) represent the state of implementation
  • Add pipeline step to compute percentage of implemented elements and indicate via custom badge

Document diagramming engine API

To enable further development of the diagramming engine we should provide an overview page (documentation) that explains:

  • how the diagramming subsystem works
  • how individual diagrams rendering is implemented
  • aird parsing
  • method for sideload of diagram cache

Use filter name enum for filter lookup/registration

As the filter names in the XML are quite cryptic and arbitrary at times, it might be a good idea to add an Enum or something that has all the filters that exist. Each member could have a short, but ideally still readable name, and map to the internal XML name. Then each docstring could have the full name shown in the GUI.
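The Enum idea could be sketched like this (the member names and XML strings below are purely illustrative — the real names live in the library's `_enum.py`):

```python
import enum


class DiagramFilter(enum.Enum):
    """Maps readable filter names to the cryptic internal XML names.

    Each docstring could additionally carry the full name shown in the GUI.
    """

    HIDE_ALLOCATED_FUNCTIONAL_EXCHANGES = "hide.allocated.functional.exchanges.filter"
    SHOW_EXCHANGE_ITEMS = "show.exchange.items.filter"


def lookup(xml_name: str) -> DiagramFilter:
    """Find the enum member for an internal XML filter name."""
    # Enum's value-based lookup does exactly this mapping for us
    return DiagramFilter(xml_name)


print(lookup("show.exchange.items.filter").name)
```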

I added the _enum.py where all found filters are stored. Let's finalize this PR by using the filter names defined there. What do you think @Wuestengecko?

Originally posted by @ewuerger in #84 (comment)

Improve requirement object appearance in display context

By default a requirement object is described as a GenericElement; however, since it is closer to a ReqIF element, it needs to be described differently. The following attributes should be visible:

  • ReqIF long name
  • ReqIF identifier
  • ReqIF text
  • ReqIF attributes (if any)
  • links (if any)

Indexing of properties from class 'Class'

In a Jupyter notebook, the following code:

import capellambse
model = capellambse.MelodyModel("tests/data/melodymodel/5_0/Melody Model Test.aird")
trajectory = model.search("Class").by_name("Trajectory")
assert len(trajectory.properties) > 0
trajectory.properties[0]

results in this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
File /tmp/tmp.QyReJLIMrO/lib/python3.10/site-packages/IPython/core/formatters.py:343, in BaseFormatter.__call__(self, obj)
    341     method = get_real_method(obj, self.print_method)
    342     if method is not None:
--> 343         return method()
    344     return None
    345 else:

File ~/git/capellambse/capellambse/model/common/element.py:309, in GenericElement._repr_html_(self)
    308 def _repr_html_(self) -> str:
--> 309     return self.__html__()

File ~/git/capellambse/capellambse/model/common/element.py:282, in GenericElement.__html__(self)
    279 fragments.append('</th><td style="text-align: left;">')
    281 if hasattr(value, "_short_html_"):
--> 282     fragments.append(value._short_html_())
    283 elif isinstance(value, str):
    284     fragments.append(escape(value))

File ~/git/capellambse/capellambse/model/common/element.py:299, in GenericElement._short_html_(self)
    296 def _short_html_(self) -> markupsafe.Markup:
    297     return self._wrap_short_html(
    298         f" &quot;{markupsafe.Markup.escape(self.name)}&quot;"
--> 299         f"{(': ' + str(self.value)) if hasattr(self, 'value') else ''}"
    300     )

File ~/git/capellambse/capellambse/loader/xmltools.py:96, in AttributeProperty.__get__(***failed resolving arguments***)
     94 xml_element = getattr(obj, self.xmlattr)
     95 try:
---> 96     return self.returntype(xml_element.attrib[self.attribute])
     97 except KeyError:
     98     if self.default is not self.NOT_OPTIONAL:

ValueError: could not convert string to float: '*'
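The traceback shows the attribute's return type (`float`) being applied blindly to the raw XML value, while Capella uses `*` to denote an unbounded multiplicity. A tolerant conversion could look like this (a sketch only — not the actual `AttributeProperty` implementation):

```python
def multiplicity(raw: str) -> float:
    """Convert a Capella multiplicity value to a number.

    '*' (unbounded) maps to infinity instead of raising ValueError.
    """
    if raw == "*":
        return float("inf")
    return float(raw)


print(multiplicity("1"), multiplicity("*"))
```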

Drawings of Class diagrams

There are a few drawing issues in Class diagrams:

  1. We disregard the aggregation kind of associations: in the following diagram the source end of the edge should be a diamond marker, since the aggregation kind of the role trajectory is a composition.

image

  2. Specific box types are lacking, e.g. Box.NumericType and Box.StringType for Class Diagram Blank.

image

Actually:

image

  3. Symbols of properties are lacking.

image

Actually:

image

  4. Multiplicities are lacking.

image

Actually:

image

  5. The Fine-ArrowMark looks a bit overdimensioned.

image

Solved in branch code-generation.

Add support for functional chain involvement context

While modeling processes (operational or functional), people capture context information attached to exchanges (i.e. configuration / values of an exchange item, constraints, etc.). In the current API, the .involved property skips the involvement object as a technicality and delivers a list of end elements (functions or operational activities, functional exchanges, etc.), leaving no means for working with the involvement context itself. What would help is a new property of a functional chain / operational process, e.g. .involvements, that delivers a list of involvement objects from which the end user could retrieve the involvement context or the involved element.

Capella metamodel analysis follows:
image

image

image

The below view is of particular interest for the use case, as it shows the exchange_context relationship with Constraint:

image

Improve exchange traceability in PA layer

In Capella it is possible to explore allocation traces between exchanges (via semantic browser).
This library should enable that kind of exploration too, so lets add the following object properties:

  • FunctionalExchange.allocating_component_exchange --> ComponentExchange or None
  • FunctionalExchange.owner --> ComponentExchange (shortcut)
  • ComponentExchange.allocating_physical_link --> PhysicalLink or None
  • ComponentExchange.allocating_physical_path --> PhysicalPath or None
  • ComponentExchange.owner --> PhysicalLink or PhysicalPath or None (shortcut)

and we are also missing .all_functional_exchanges at the physical layer
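The proposed derived lookup could work roughly as below (plain dicts stand in for model elements; the attribute names are assumptions based on this issue, not the current API):

```python
def allocating_component_exchange(fex, component_exchanges):
    """Return the ComponentExchange that allocates functional exchange
    `fex`, or None if it is not allocated anywhere."""
    for cex in component_exchanges:
        if fex in cex.get("allocated_functional_exchanges", []):
            return cex
    return None


# Toy model data: one ComponentExchange allocating one FunctionalExchange
cex = {"name": "CE1", "allocated_functional_exchanges": ["FE1"]}
print(allocating_component_exchange("FE1", [cex])["name"])
```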

Improve access to requirement elements

At the moment it is possible to get requirements list linked to a model element via .requirements
While working with this list in a document template we frequently face the following challenges:

  • filter requirements list by requirement type name (string)

While working with requirement objects I also noticed that attribute selection and retrieval needs some improvement:

  • I'd have to do req.attributes[1].values[0].long_name to get a value of attribute 1. And I'll need to somehow find upfront that attribute 1 is what I'm after. It would work better if the attribute access looked like so: req.attributes["ChangeStatus"] -> "Unmodified"
  • Additionally, it would be nice to have something like "ChangeStatus" in req.attributes working. This will be implemented by providing the keys() function, due to the different semantics of __contains__ in lists and dicts. Therefore, the actual check will be: "ChangeStatus" in req.attributes.keys().
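The requested dict-like access could be sketched as follows (`RequirementAttributes` is a hypothetical stand-in, not the real capellambse class):

```python
class RequirementAttributes:
    """Dict-like view over requirement attributes, keyed by long name."""

    def __init__(self, attrs: dict[str, str]):
        self._attrs = attrs

    def __getitem__(self, long_name: str) -> str:
        return self._attrs[long_name]

    def keys(self):
        # Exposing keys() makes membership checks explicit, avoiding the
        # differing semantics of __contains__ between lists and dicts.
        return self._attrs.keys()


attrs = RequirementAttributes({"ChangeStatus": "Unmodified"})
print(attrs["ChangeStatus"])
print("ChangeStatus" in attrs.keys())
```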

Refactor aird.DiagramDescriptor to use the DiagramElement as reference instead of its UUID

It might be a little late for that discussion (although it can never be too late for improvements!), but do you think it makes sense to refactor the aird.DiagramDescriptor to include a reference to the actual XML element instead of just its UUID? This would allow code in aird that actually uses that descriptor to avoid essentially the same lookup in several places, and therefore also avoid the possibility for these lookups to be implemented slightly differently each time (see https://github.com/DSD-DBS/py-capellambse/blob/master/capellambse/aird/parser/__init__.py#L166-L168).

Aside from that point, it might make sense for API consistency reasons to pass the DiagramDescriptor instead of just the uid part here, even though the other parts of it aren't actually used in this function.

Originally posted by @Wuestengecko in #84 (comment)

Obsolete/faulty look ups on ArchitectureLayers

Under all ArchitectureLayers we have all_{functions, capabilities, ...} lookup ElementLists that made our lives easier with regard to our other tools. By now we have tools that make these items easily accessible. For example:

image
can be accessed via:

import capellambse

all_logical_functions = model.search(capellambse.model.layers.la.LogicalFunction)

So for all the look-ups that are simply derived via a ProxyAccessor (child relationship) and a concrete class type in the current layer, there is imo no reason to define them.

Especially the following:
image
aren't working correctly. The actor_exchanges lookup should catch all ComponentExchanges that have a LogicalComponent with is_actor set to True as source_port.owner and/or target_port.owner, but this definition won't catch exchanges that were moved into ComponentPackages.

There is an open question on how we want to make these two look-ups accessible.

Option A (Not an option, since we are user-centric)

We add an owner attribute on AbstractExchange in the fa crosslayer. For ComponentExchanges only Components and their Packages can be the owner. Then Option A is possible:

# Get all instances of LogicalComponents
all_logical_components = model.search("LogicalComponent")
# Get all instances of LogicalComponentPkgs
all_logical_component_pkgs = model.search("LogicalComponentPkg")
all_logical_component_exchanges = model.search("ComponentExchange").by_owner(
    *all_logical_components,  *all_logical_component_pkgs
)

all_logical_actor_exchanges = [
    aex for aex in all_logical_component_exchanges
    if aex.source_port.owner.is_actor or aex.target_port.owner.is_actor
]

Option B

We add a new attribute containing_layer on GenericElements that gives the ArchitectureLayer instance underneath which the GenericElement is defined. This needs a new Accessor, which is more implementation work than option A, but leads to shorter code and better usability.

all_logical_component_exchanges = model.search("ComponentExchange").by_containing_layer(model.la)

Solve owner/parent attribute for GenericElements

I start with an example:

The LogicalFunction.owner attribute currently links to the LogicalComponent where this function is allocated (it is in component.allocated_functions). This behaviour was introduced when we wanted to access the owners of functions displayed in diagrams. But in the explorer, the owner should be of type LogicalFunction or LogicalFunctionPkg.

In capella's semantic browser it says:
image

Here it is called parent. We should be consistent with the naming of attributes on ModelElements to not confuse the user and most importantly ourselves.

FutureWarning raised on internal method call

Given the call context (screenshot below) I suspect that library's internal code is making use of deprecated property. We should fix that as this will pop up all over the place and will be especially annoying in Jupyter notebooks.

image

Support for models which use libraries

Currently, when a model is loaded that links to a library project, an exception like the following is raised:

FileNotFoundError: [Errno 2] No such file or directory: 'TestProject/platform:/resource/Test%20Library/TestLibrary.capella'

This occurs due to two new types of links in the model, which currently aren't handled correctly or at all:

  1. platform: links, which look similar to platform:/resource/Test%20Library/TestLibrary.capella. capellambse does not recognize this special syntax and interprets them simply as relative paths, which leads to the above exception.
  2. Relative links that go beyond the project's root directory. In this case, the project root is the directory containing the "entry point" .aird file. This is not handled correctly by capellambse due to the applied path normalization: If a relative link would go beyond the top of the hierarchy, it is cut off and constrained to within the hierarchy. However, if there are layers above this project root within the file handler (as can be the case e.g. in git, if the .aird file lives in a subdirectory of the repo), the file handler would attempt to find the mentioned file, fail, and raise a similar FileNotFoundError as above.
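For the first point, resolving a platform: link could be sketched like this (an assumption-laden sketch: it presumes linked projects are checked out side by side under a common root, which may not hold for every file handler):

```python
from urllib.parse import unquote


def resolve_platform_link(link: str) -> str:
    """Turn a 'platform:/resource/<project>/<path>' link into a
    project-relative path, decoding percent-escapes like %20."""
    prefix = "platform:/resource/"
    if not link.startswith(prefix):
        raise ValueError(f"Not a platform link: {link!r}")
    return unquote(link[len(prefix):])


print(resolve_platform_link("platform:/resource/Test%20Library/TestLibrary.capella"))
```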

Fix missing PhysicalComponent attributes

At the moment PhysicalComponent class has the following missing or not properly working features:

  • kind attribute is missing
  • nature attribute is missing
  • components attribute is not working properly (returns an empty list)
  • deployed_components attribute is missing
  • deploying_components attribute is missing

Models with a space in name result in link follower breaking

Using a model with a space in its name results in a failure to render diagrams. The diagram name, UUID and description are still visible, but any call that involves the renderer (i.e. .nodes or an actual SVG preview) fails with ValueError: Malformed link: 'test 5.1.aird#_r-0YcAauEeydodL3xp60Ww'

The failure occurs in the regex of the follow_link method of MelodyLoader in the loader.core module.

Steps to reproduce:

  1. Create a new test model with Capella 5.1, create a sample LAB diagram
  2. Create a new Jupyter notebook
  3. Load test model in Jupyter with model = capellambse.MelodyModel("test.aird")
  4. List the diagrams with model.diagrams
  5. Visualize the lab diagram by index with model.diagrams[idx] where idx is a number from the above list
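A link pattern that tolerates spaces in the file name could look like this (a sketch only — the real follow_link regex lives in capellambse.loader.core and may differ):

```python
import re

# Everything before the '#' is the file name (spaces allowed),
# everything after it is the element's UUID/fragment reference.
LINK = re.compile(r"^(?P<file>[^#]*)#(?P<uuid>[^#]+)$")


def split_link(link: str) -> tuple[str, str]:
    """Split a '<file>#<uuid>' link into its two parts."""
    m = LINK.match(link)
    if m is None:
        raise ValueError(f"Malformed link: {link!r}")
    return m.group("file"), m.group("uuid")


print(split_link("test 5.1.aird#_r-0YcAauEeydodL3xp60Ww"))
```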

Improvements to API and documentation concerning linked Capella Libraries

When loading a model that links in a library, there is currently no straightforward way to access or search for elements defined in the library:

  1. Currently, capellambse allows runtime modifications of objects defined in libraries, but it will not save those libraries during MelodyModel.save(). This can easily lead to inconsistencies, and therefore should be changed to a) disallow modification of objects in the first place if they are defined in a library with access policy readOnly and b) save all libraries along with the base model.
  2. Accessors like model.la.all_functions currently only search the base model, not linked libraries. (Whether or not we want to change this is up for debate, but either way this needs to be documented.)
  3. model.search() finds elements from the base model and all linked libraries, but there is no easy way to figure out where an element is defined.

Apart from that, there's a few simple additions that we can make to the API to solve (2):

  • We could add something like MelodyModel.libraries, which acts as dict-like and provides access to each library's layers. In other words, given a library "testlib" in a model, a possible call would be: model.libraries["testlib"].la.all_functions.
  • We could add a parameter to MelodyModel.search() which restricts the search to a certain subset of the model, e.g. "search only the base model" or "search only the linked testlib".
  • We could add a way to determine whether a given element is defined in the base model or a linked library (and which one, if there are multiple).
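The proposed MelodyModel.libraries mapping could be sketched as follows (hypothetical names throughout; this is not the current capellambse API):

```python
class Libraries:
    """Dict-like access to linked libraries, keyed by library name."""

    def __init__(self, libs: dict[str, object]):
        self._libs = libs

    def __getitem__(self, name: str):
        # e.g. model.libraries["testlib"].la.all_functions
        return self._libs[name]

    def __iter__(self):
        return iter(self._libs)


libs = Libraries({"testlib": "<library model>"})
print(libs["testlib"])
print(list(libs))
```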

Fix RequirementTypes __str__ and attribute on Requirements

Currently, getting Requirements by RequirementType is not so easy:

rationale_type = model.search(reqif.XT_REQ_TYPE).by_long_name("ReqType")[0]
req = fnc.requirements.by_type(rationale_type)[0]
req.type.long_name == "ReqType"

The following behaviour is more intuitive, and is also how it worked earlier:

>>> rationale_reqs = fnc.requirements.by_type("ReqType")
>>> for req in rationale_reqs:
>>>     print(req.type)
ReqType
ReqType
ReqType
...

Generalize functions

To improve consistency we need to generalize current implementation of functions as it is done in Capella meta-model and properly implement AbstractFunction in fa package.

Deprecate support for legacy Capella 1.x

Remove all support functionality for pre-5.x Capella versions. It's probably broken anyway, since we don't test against the 1.x test models (these can therefore go too).

Add Capella 5.1 test model

In the current collection of test models we are missing a test model for Capella 5.1 so that we could see if any other API gets broken there (as it does atm for diagram rendering)

Provide basic docs for onboarding

Our current documentation is a bit too technical. We need to improve user experience there by providing at least layer specific pages and introduction to API and supporting packages (aird parsing, rendering).

  • Provide an ontology and taxonomy overview of the core packages for
    • OA
    • SA
    • LA
    • PA
    • FA
    • CS
    • Common
    • Info
  • The API overview page should explain overall meta-model, package structure and the cross-layer concept.
  • Update obsolete installation instructions (.rst)
  • Update "high-level API link in README to point to "intro-to-api"
  • Add docs page to explain low level API (Accessors, ElementLists, search)
  • Add page to explain model loading methods (load from git in particular)

Handle unavailability of diagram in diagram_cache

When MelodyModel is being loaded with a filled diagram_cache param, we try to request diagrams from the so called Capella diagram service.

When such a request is not returning a diagram, the algorithm shall fallback to the internal diagram engine.

That kind of behaviour will also cover cases where we access synthetic context diagrams.

Missing exchange_items accessor on function ports

The accessor for exchange_items is missing on function ports; it should be fixed in the crosslayer --> functional analysis:

image

Actions:

  • create documentation model entry for missing ontology
  • create test items in the test model
  • create tests
  • implement missing API

Practical example / documentation on how to use PVMT

There currently are no publicly available practical examples on how to use the Property Value Management extension, and the documentation about it is also lacking some important details. We should improve the docs and add an example notebook, similar to how it was done for the Requirements extension.

Undefined nature of root physical component causes KeyError on access attempt

Physical components are supposed to have a nature attribute; however, for the root physical component this attribute is undefined. The current implementation assumes it is always defined and expects the value to match an enum. This fails with a KeyError.

The issue can be reproduced on the test model 5_0 with the following code: model.pa.all_components[0].nature

image
