alire-old-discussion's Issues

Library Source Distribution

I have let myself be convinced that what we need is "library source distribution" (i.e. we leave the distribution of compiled libraries to the tools provided for that purpose by the host operating systems).

From there I got the idea that the key to building a common Ada library distribution infrastructure could be to use GPR files (yes, I know what the G stands for) as the library descriptor. As most open-source Ada developers probably write GPR files anyway, it may reduce the effort involved in inserting libraries and library updates into a common repository. When it comes to supporting other compilers, I hope that we can either configure "gprbuild" to support compilers other than the G one, or generate build scripts for other compilers from the GPR files.

My suggestion is that we use project names (i.e. GPR file names minus extension) as the key to identify libraries (but not their versions).

Client-side we want to be able to run commands like these:

ada-lsd-upload <project name>
ada-lsd-search <tag> ... <tag>
ada-lsd-get <project name>
ada-lsd-refresh
  • upload = copy to the common repository.
  • search = get a list of matching projects from the repository.
  • get = fetch a library and its dependencies from the repository.
  • refresh = update all downloaded libraries to their newest versions (keeping all buildable).
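
A minimal sketch of how that client interface could look, assuming Python as the implementation language (the tool name, sub-command layout and stub handlers below are illustrative only, mirroring the list above):

#!/usr/bin/env python3
# Illustrative sketch only: a command-line front end mirroring the four
# proposed sub-commands; the handlers are stubs.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(prog="ada-lsd")
    sub = parser.add_subparsers(dest="command", required=True)

    sub.add_parser("upload").add_argument("project")  # copy to the common repository
    search = sub.add_parser("search")                 # list matching projects
    search.add_argument("tags", nargs="+")
    sub.add_parser("get").add_argument("project")     # fetch a library and its dependencies
    sub.add_parser("refresh")                         # update all downloaded libraries

    args = parser.parse_args()
    print("would run:", args.command)                 # real handlers would go here

if __name__ == "__main__":
    main()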

Server-side I expect the uploaded packages to be checked for buildability on all the supported platforms, before they are made available to the users.

Some parts of the client-side tools may have to depend on the host system (which compiler to use, which command to use to query the system package manager about installed packages, etc.).
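
As a sketch of that host dependence, assuming a Debian- or RPM-based system (other platforms would need their own entries in such a table):

import shutil
import subprocess

def installed_by_system(package: str) -> bool:
    """Ask the host's package manager whether a package is installed.
    Only dpkg and rpm are covered here; this is illustrative, not a
    complete platform table."""
    if shutil.which("dpkg"):
        query = ["dpkg", "-s", package]
    elif shutil.which("rpm"):
        query = ["rpm", "-q", package]
    else:
        return False  # unknown platform: assume the package is not installed
    return subprocess.run(query, capture_output=True).returncode == 0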

Uploading

Client-side

  1. Find "&lt;project name&gt;.gpr" in GPR_PROJECT_PATH.
  2. Create an empty ZIP archive.
  3. Process the found "&lt;project name&gt;.gpr" (see below).
  4. Sign the ZIP archive.
  5. Upload the ZIP archive and signature as a release of "&lt;project name&gt;".

Processing a GPR file:

  1. Add the GPR file to the ZIP archive.
  2. Add all files found in "Source_Dirs" in the GPR file to the ZIP archive.
  3. Add any "README" found in the same directory as the GPR file to the ZIP archive.
  4. For all with'ed projects:
    • If it is from an installed package: Ignore it.
    • Otherwise: Find and process the matching GPR file (recursively).
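
A rough sketch of that packaging recursion in Python (parse_gpr, provided_by_installed_package and find_gpr are hypothetical helpers; real GPR parsing and project lookup are considerably more involved):

import zipfile
from pathlib import Path

def add_project(archive: zipfile.ZipFile, gpr: Path, seen: set) -> None:
    """Recursively add a GPR file, its sources and its non-system
    dependencies to the ZIP archive.  parse_gpr, find_gpr and
    provided_by_installed_package are hypothetical helpers."""
    if gpr in seen:
        return
    seen.add(gpr)
    archive.write(gpr)                              # 1. the GPR file itself
    source_dirs, withed = parse_gpr(gpr)            # hypothetical helper
    for directory in source_dirs:                   # 2. everything under Source_Dirs
        for entry in (gpr.parent / directory).iterdir():
            if entry.is_file():
                archive.write(entry)
    readme = gpr.parent / "README"                  # 3. a README next to the GPR file
    if readme.exists():
        archive.write(readme)
    for dep in withed:                              # 4. recurse into with'ed projects,
        if not provided_by_installed_package(dep):  #    skipping system-provided ones
            add_project(archive, find_gpr(dep), seen)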

Server-side

  1. Receive the ZIP archive and signature.
  2. Validate the signature.
  3. Generate a release ID (hash the ZIP archive contents or something like that).
  4. For each platform:
    1. Iterate through working releases for each dependency (newest releases first):
      1. Launch test system (Docker/Amazon EC2/...)
      2. Install the selected versions of the dependencies on the test system.
      3. Install the new upload on the test system.
      4. Build (and test) the new upload on the test system.
        • Pass: Register the release as a working release with the tested set of dependency releases on this platform.
      5. Terminate the test system.
    2. Report if the release was registered as working on this platform to the uploader.
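
As Python-flavoured pseudocode of that control flow (every helper named here, launch_test_system, install, build_and_test, register_working, terminate and newest_first_combinations, is hypothetical):

def check_release(release, platforms, dependency_releases):
    """Try to build the upload on every platform against combinations of
    working dependency releases, newest first, and record what worked.
    All helpers called here are hypothetical."""
    report = {}
    for platform in platforms:
        report[platform] = False
        for deps in newest_first_combinations(dependency_releases, platform):
            system = launch_test_system(platform)     # Docker, Amazon EC2, ...
            try:
                for dep in deps:
                    install(system, dep)              # selected dependency releases
                install(system, release)              # the new upload itself
                if build_and_test(system, release):
                    register_working(release, deps, platform)
                    report[platform] = True
            finally:
                terminate(system)
    return report                                     # reported back to the uploader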

Auto-uploading

It might be possible to implement automated packaging from Bitbucket, GitHub etc. by tracking GPR files there.
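
For instance, given a curated list of repository URLs, a sketch of the scanning step could be as simple as the following (repository discovery itself would need the hosting sites' APIs):

import subprocess
import tempfile
from pathlib import Path

def gpr_files_in(repo_url: str) -> list:
    """Shallow-clone a repository and list the GPR files it contains;
    changes between scans would then trigger the packaging step."""
    with tempfile.TemporaryDirectory() as checkout:
        subprocess.run(["git", "clone", "--depth", "1", repo_url, checkout],
                       check=True, capture_output=True)
        return sorted(p.relative_to(checkout) for p in Path(checkout).rglob("*.gpr"))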

Client-side deployment methods

e) E.g. apt-get fetches packages for you and updates only what is necessary, but you need that executable on your machine. The dub executable (D's apt-get equivalent) works the same way. Mr. Brukardt objected to having to learn and depend on a new local executable. I think someone proposed that everything could be done by the server, but I don't see how dependencies on a user's local machine could be managed purely remotely. The only thing I can see is the server bundling everything in a ZIP file, with all the .gpr files automatically generated to build properly for a given machine/OS combination; the user would then have to manually download the archive and launch the build by hand.

OK, I essentially agree about having a client tool. Although I can see Mr. Brukardt's approach (requesting a library, which in turn would cause the zipping of every dependency), to me this is a long-term, even optional, goal, since it requires running code on the server (unless JavaScript can do this, which I don't know). To me, we should first focus on client-side tools (preferably just one, à la apt-get, but more on this below) and standard, free server-side services, just like this one. In other words: a zero-cost solution with no possibility of downtime and no maintenance requirements.

(In that vein, I wouldn't want to have something where you upload your library and it is prepared/stored. I'd stop at the level of a pull request for metadata).

As for the client tool, I'm nowadays a Linux-only man, so with my partial view I'd advocate either a downloadable, static*, pre-built executable or (yes, I'm going to say it) a Python or shell script. Of course, ideally, if this takes off, the client tool could be just a regular package from the distro. For Windows I only see the static-executable route.

Question: does deployment include compilation? Or, as a first stage, could we stop once the source code is ready to build?

Perhaps we could try to define the bare minimum objectives for a pre-pre-alpha milestone.

*I've never achieved a fully static executable on Linux with GNAT. Even with -static, I ended up with an ld-linux.so or some such dependency.

General design discussion

Evident needs: indexing, storing, client tool for deployment.

Suggestions have been made to study how it has been done for Haskell [1] and D [2].

[1] http://hackage.haskell.org/
[2] http://code.dlang.org/

Without knowing how those have been done, I'd solve storage by pointing to concrete commits in open-source repositories like GitHub and Bitbucket.
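
Concretely, the index could be a collection of small metadata records, one per release, each pinning a repository and a commit; something like the following (all field names and values here are made up for illustration):

# Hypothetical index entry; the client would resolve "depends_on" against
# the same index and fetch the listed commit of each dependency.
example_entry = {
    "project":    "some_ada_library",
    "version":    "1.0.0",
    "repository": "https://github.com/example/some_ada_library",
    "commit":     "0123456789abcdef0123456789abcdef01234567",
    "gpr_file":   "some_ada_library.gpr",
    "depends_on": ["another_ada_library"],
}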

I guess ideas agreed upon could be transferred to the wiki part of the project.

Server-side design

There are several questions of philosophy that will impact the design, reuse of extant infrastructure versus designing a custom infrastructure being the most dramatic. With the former we can have a "quicker" start, at the cost of forcing extant systems into a new mold; with the latter we get the feeling of "reinventing the wheel", with the benefit of having the whole thing "work together as if that's what it's meant for". This is, in essence, the same argument one could have about using a C-library binding/import versus creating the same functionality natively in Ada.

Or, if you will, usage of the type system: we can, after all, use it to ensure (e.g.) that no constraint-violating values are inserted into [or retrieved from] a database (e.g. a phone-number type which is a string with a particular format [or set of values, rather]).

Applying this, we could ensure that only well-formed 'projects' are resident within the repository. We could also integrate with unit tests to warn (or error) on failing tests. While this would certainly increase the barrier to submitting a 'project', it would also have the effect of ensuring that ALL projects within the repository are buildable and [therefore] of some quality.
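
Transposed to the submission pipeline, and only as a sketch (in Ada the same idea would more naturally be expressed as constrained types or predicates on the metadata record), the repository could reject any entry that fails a well-formedness check before it is stored:

import re

def well_formed(entry: dict) -> bool:
    """Accept a submission only if its metadata satisfies the repository's
    invariants, so nothing malformed is ever resident in the index.
    The exact rules here are illustrative."""
    return (
        re.fullmatch(r"[a-z][a-z0-9_]*", entry.get("project", "")) is not None
        and re.fullmatch(r"\d+\.\d+\.\d+", entry.get("version", "")) is not None
        and entry.get("gpr_file", "").endswith(".gpr")
    )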
