Comments (26)

Jamiras commented on May 24, 2024

I agree that having binaries in the repository is unnecessary bloat, and one of the few things git doesn't handle very well. Right now the bin directory is 63MB - fbalpha being more than half of that. Because git is a distributed system, that 63MB has to be downloaded every time someone clones the repository. If we decide we want a new build of fbalpha, that's an automatic 35MB more. The previous one is still in the history, and therefore has to be available locally in case you want to switch to an older commit.
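
For anyone who wants to reproduce those numbers, standard git plumbing can list the largest blobs anywhere in history; a rough sketch (the 1MB cutoff is arbitrary):

```sh
# list blobs over ~1MB anywhere in history, largest first
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' |
  awk '$1 == "blob" && $3 > 1048576 {print $3, $4}' |
  sort -rn | head -20
```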

In order to nuke these binaries from the repository, we have to rewrite the history without them. That's fairly trivial to do, but it will break any local checkouts of the affected branch or of anything that was branched off it. As these files are in the develop branch, and develop is the default branch, pretty much everyone is affected. Any local develop checkouts, or any local branches made off develop after May 1st, will be broken after this change. The commits after May 1st that have been pushed to the server will be rewritten without the offending binaries. A fresh clone will still contain the same code, but in existing clones the local branches will have diverged from the rewritten history and will need to be reset or re-cloned.

In my opinion, it would be beneficial to remove these files from the repository. My recommendation is that anyone working with the repository clone a fresh copy after the purge, so if anyone has uncommitted code they want to keep, consider pushing it now.
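
For reference, the purge would look something like this with git filter-repo (one common tool for the job; BFG is another; the URL is just for illustration):

```sh
# run on a fresh mirror clone, as filter-repo requires
git clone --mirror https://github.com/RetroAchievements/RALibretro.git
cd RALibretro.git
git filter-repo --path bin --invert-paths   # rewrite history without bin/
# filter-repo removes the origin remote on purpose; re-add it to push
git remote add origin https://github.com/RetroAchievements/RALibretro.git
git push --mirror --force origin

# existing clones then need to be re-cloned, or hard-reset:
#   git fetch origin && git reset --hard origin/develop
```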

Let me know if you have any concerns. I won't do anything until at least next weekend.

rzumer commented on May 24, 2024

From what I understand/assume based on the details provided, save states were not "lost"; they were simply rendered incompatible with newer versions of a certain core. In that case, rolling back is not a major issue, but users will eventually have to migrate. I don't know that we can take responsibility for core maintainers following best practices, and we can't hold back updates forever. So in practice I don't see much difference between deploying fresh core builds at every release and holding them back for some arbitrary amount of time.

In order for a list of "stable" cores to be useful and reliable, we would need acceptance tests covering every important feature exhaustively. Otherwise, any update will be subject to bias, validation mistakes, omissions, and negligence, which is not much better than trusting the core maintainers.

In any case, if you decide to go this route, I would document such a list in the Wiki, along with the documentation for acceptance tests.

leiradel commented on May 24, 2024

save states were not "lost"

End users won't care what the real issue was, they will be unable to load their save games.

rolling back is not a major issue

I wouldn't count on end-users locating the last known "good" core and updating RALibretro by hand.

I don't know that we can take responsibility for core maintainers following best practices

And we can't make our end-users' lives worse because they don't follow best practices.

In order for a list of "stable" cores to be useful and reliable, we would need acceptance tests covering every important feature exhaustively

We don't need to go overboard with this. When a cool feature is released or an important bug is fixed, we can just wait a couple of weeks after the core is available and see if there are roadblocks. Of course, if someone has enough time to write such tests, I'm all for that.

which is not much better than trusting the core maintainers

It's not so much a question of trust. I'm positive they do their best to maintain their ports of the upstream emulators, but when you only have time to update the core once a year, the original emulator may already have changed the save state format and dropped the compatibility code that loads the previous format, so the core can't load the previous format after one single update.

Again, this has happened before, and in the current state of affairs it can happen again. I don't think exposing RetroAchievements end-users to the holes we have in our release process as a whole is the best approach, hence my stance about having a known "good" set of cores and updating them with care.

rzumer commented on May 24, 2024

End users won't care what the real issue was, they will be unable to load their save games.

It's a huge difference! We can easily roll back to a previous version of the affected core (in the case of a legitimate bug) or direct affected users to do this themselves. This is not the same as losing or corrupting files, which would be a major issue. In any case, the situation you describe seems preferable to sticking with an older version that is potentially buggier in unpredictable ways, as well as less accurate/compatible; not to mention that RAIntegration itself has been known to behave unexpectedly.

I wouldn't count on end-users locating the last known "good" core and updating RALibretro by hand.

We can provide links to the RetroArch buildbot, which includes several older "stable" versions as backup. In addition, deploying a hotfix version to correct any critical issue is possible. I think we should expect some level of intelligence from end users anyway.

And we can't make our end-users' lives worse because they don't follow best practices.

It sounds like the situation you brought up was a one-time issue involving a single core, and as you said, it was never rolled back. So in this situation, we would be making users' lives worse either way.

Another important point that should be brought up is that by keeping RALibretro in sync with the libretro repositories, we are ensuring compatibility with RetroArch. Mixed compatibility between supported systems has been an issue for a while, and I was under the impression that solving that problem was a goal of RALibretro.

We don't need to go overboard with this.

As a user, I would not trust some arbitrary tests performed at random times by another random user without any standards any more than a core maintainer's own testing. After all, there's no guarantee that a feature I care about would be tested properly. I don't feel that there's any benefit to holding back updates without serious guarantees about stability. In other words, if stability is truly critical, then exhaustive acceptance tests (and/or audits for the source code of bundled cores) are critical to ensuring that stability.

It's not so much a question of trust. I'm positive they do their best to maintain their ports of the upstream emulators, but when you only have time to update the core once a year, the original emulator may already have changed the save state format and dropped the compatibility code that loads the previous format, so the core can't load the previous format after one single update.

Outside of keeping the old version forever, or forking the repositories in the style of RAEmus, there really is no solution for this. Part of using a libretro-based frontend is yielding control of the cores to their maintainers; we can't realistically fork the repositories (and we all know that doing so doesn't guarantee being bug- or crash-free), exhaustive acceptance testing is not something I would estimate to be worth the resources necessary to execute correctly, and holding on to old versions for the sake of stability can potentially have the opposite effect (if newer versions fix stability issues), or compromise compatibility and performance.

If you have any other example of core updates compromising stability, then you might convince me to hold off on updates; I am simply not convinced that the risk of a major issue popping up is worth the trouble. Either way, if we decide to ship validated versions only, I think we can agree that they should be kept outside of the repository. I will let others voice their thoughts on these issues, since our conclusions are opposite.

leiradel commented on May 24, 2024

I believe they were put there to make everything work out-of-the-box, and that the fetch script exists just to update them. @meleu can you confirm this?

rzumer commented on May 24, 2024

If that's really important, then please consider bundling wget and unzip builds instead, though I don't think it's necessary as they are properly documented as dependencies (in #37). I don't think that vendoring cores is a good idea, considering their size.

Running dl-cores.bat as a build command in the Visual Studio project if cores are missing is an option.
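
For example, a pre-build event along these lines would only fetch when no cores are present yet (the bin path and dll pattern are assumptions):

```bat
rem Visual Studio pre-build event: fetch cores only if none exist yet
if not exist "$(SolutionDir)bin\*_libretro.dll" call "$(SolutionDir)dl-cores.bat"
```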

leiradel commented on May 24, 2024

I'll let @meleu, @Jamiras, and @GameDragon2k decide what to do about this. My development machine runs Linux.

I just think that removing them from the repo and adding instructions to the README telling developers to run a script before they start working with it is not much different in practice.

rzumer commented on May 24, 2024

The difference is that the fetch script will always download up-to-date versions of the cores, while vendored cores would need to be updated regularly in the repository, which involves maintenance work and pollutes the history; it also means that in order to get a full (deep) copy of the repository, one ends up downloading every set of cores ever committed.

Someone who reads the README before cloning will know to fetch a shallow clone, but not everyone does that, and migrations or other operations that require the full repository history will likely cause problems. Local repository copies will also keep old cores in history, while re-running the fetch script will overwrite them.
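
For comparison, the shallow clone mentioned above would be (URL for illustration):

```sh
# fetches only the latest commit, skipping historical copies of the binaries;
# operations needing full history (bisecting old commits, some rebases)
# won't work until the clone is unshallowed
git clone --depth 1 https://github.com/RetroAchievements/RALibretro.git
```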

Finally, cores are soft dependencies; they are not needed to compile RALibretro or even to launch it, so there is no reason to download them in automated builds and other situations where they are not used.

leiradel commented on May 24, 2024

I think their being in the bin folder also means that they are the "good" set: the one that has been tested and is known to work. I'll let the other devs decide.

leiradel commented on May 24, 2024

@Jamiras ok, could you announce this in the appropriate forums so everyone has time to comply? I don't think this will affect anyone besides us, but it doesn't hurt to be sure.

rzumer commented on May 24, 2024

People should be able to rebase their change sets either way in order to move them to the "clean" history.

Jamiras commented on May 24, 2024

could you announce this in the appropriate forums so everyone has time to comply?

What would the appropriate forum be? It seems like the target audience would get the notification about this issue being updated just from watching this repository. I'm not even sure what the appropriate Discord channel would be. It's not really #rasuite or #raemus. So #devs?

meleu commented on May 24, 2024

@leiradel, I've found our conversation about this stuff. We decided to stick with a fixed core version and update once in a while due to the "not very controlled" way development happens on those repos.

At that time we agreed that storing a specific version here would be good to avoid the introduction of bugs. The guys here have a point, though (the dlls are bloating the repo).

I think we just need to test the current version of each core and ensure they are working fine. If it's all working properly, we can go ahead with the "rewrite the history" approach. 👍

rzumer commented on May 24, 2024

Have you encountered issues due to the use of nightly builds before, or is this hypothetical?

leiradel commented on May 24, 2024

@meleu is right, the libretro cores don't follow a strict release cycle. There were situations where a core update would lead to lost save games, because the code that deals with save states had changed in incompatible ways and the change was released without warning.

Although that kind of issue is rare, I'd rather still have a known "good" set of cores. If the repository is not a good place for them, we should find another place. We can't assume nightly core builds are always OK, so we should stick to the "good" set.

rzumer commented on May 24, 2024

Can you please provide a reference for a case where core updates led to lost save games? Was it resolved afterwards, between stable releases? Did save states get overwritten automatically on read?

I don't consider save states to be a critical system that warrants holding back improvements and fixes to a core, and I don't think that auditing every one of them to guarantee that no breaking change was pushed to a deployment version is reasonable, but I would recommend documenting the deployment process somewhere and packaging stable releases, not nightly builds. The reason the fetch script operates on nightly builds is that stable builds do not include individual core packages.
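
For reference, fetching one of those per-core nightly packages by hand looks roughly like this (the URL pattern and target directory are assumptions based on the buildbot layout):

```sh
# download a single nightly core build and unpack it next to the executable
core=fbalpha
wget "https://buildbot.libretro.com/nightly/windows/x86/latest/${core}_libretro.dll.zip"
unzip -o "${core}_libretro.dll.zip" -d bin/
```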

If a breaking change is found in a new stable release, then you can consider the core bundled with the previous release to be "good".

leiradel commented on May 24, 2024

Can you please provide a reference

I'm afraid the references are lost in IRC/Discord chat histories. The issues weren't fixed because they weren't bugs: the changes were made in the upstream repos, and they caused save state compatibility issues when they were pulled into the cores. The main problem is that the cores are not kept up-to-date with the upstream repos; updates are made from time to time and incorporate lots of changes in one shot.

packaging stable releases

There are no stable releases of libretro cores, only nightly builds. Our "good" core set would be what we consider the stable releases for RALibretro.

I don't consider save states to be a critical system

Save states are a very important feature; I sure don't want angry users opening issues because their save states don't work anymore.

Jamiras commented on May 24, 2024

Did we come to a verdict on this? @ScottFromDerby seems to want to keep them where they are to keep the "barrier to entry" low. @meleu suggested a new git repo for the binaries (not really an improvement). #37 downloads them from the nightly builds.

It seems like the ideal solution would be to put a package of approved cores on a host somewhere. Then the user could download/compile the executable separately from the cores. This is similar to how RAProject64 manages its plugins. I'd even go so far as to just document the behavior rather than have a script do it.

rzumer commented on May 24, 2024

The shell script in master right now downloads nightly builds. #37 is a port.

I agree with hosting "stable" cores and other dependencies elsewhere. Another repository is of course still poor practice, but much better than keeping them here.

meleu commented on May 24, 2024

I want to solve this issue soon (as in removing the core binaries from the commit history). And I would like to know what the drawbacks are of creating a dedicated repo for us to put the cores in (Windows 32-bit builds only).

My plan is to update the cores every time we release a new version of RALibretro.

IMO creating another repo is a good approach because we'd have a safe way to get older builds if something breaks. The libretro nightly site doesn't keep older builds of the cores for download, and users would need to build them from source to get them.

Opinions?

leiradel commented on May 24, 2024

And I would like to know what the drawbacks are of creating a dedicated repo for us to put the cores in (Windows 32-bit builds only).

I only know git superficially, so pardon my ignorance, but how does that improve what we have now if the two repositories will walk hand-in-hand?

Can't we have a RALibretro.zip file available at RetroAchievements with the executable, the necessary DLLs, and all supported cores bundled?

Jamiras commented on May 24, 2024

The problem this is trying to address is that git is a distributed system and keeps history for everything. So that 40MB FBA core that's in the repository now is downloaded by everyone who clones the repository even if it's been replaced a dozen times already - which would of course also mean downloading the newest version and the 10 intermediate versions.

As I mentioned in my last comment: using a separate git repository does nothing to address the problem.

The cores are in the official release package. But we still need some way to indicate when we move to an updated core so people can use it until the next release. And we might want to know which versions of the cores were supported in each released version - which could be handled just by archiving the release packages.

meleu commented on May 24, 2024

I only know git superficially, so pardon my ignorance, but how does that improve what we have now if the two repositories will walk hand-in-hand?

  1. The main issue here is that RALibretro contributors need to download big files when cloning the repo, and those files are not always necessary for the feature/fix they'll work on.
  2. If we update the cores here in RALibretro's repo, the size of the repo will continuously grow with different versions of the cores in the commit history.
  3. The separate repo just for the core binaries is not intended to be cloned; it's only a way to keep older versions of the cores. [edit: I plan to automate the management of such a repo via bash scripts; see the sketch below]
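
Roughly what I have in mind for that automation (the RALibretro-cores repo name, paths, and version argument are all hypothetical):

```sh
#!/bin/bash
# hypothetical helper: push the cores bundled with a new RALibretro release
# to the dedicated binaries repo
version="$1"
cd RALibretro-cores || exit 1
cp ../RALibretro/bin/*_libretro.dll .
git add -A
git commit -m "Update cores for RALibretro ${version}"
git push origin master
```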

Can't we have a RALibretro.zip file available at RetroAchievements with the executable, the necessary DLLs, and all supported cores bundled?

Yeah, we already do, and we'll continue doing this.

rzumer commented on May 24, 2024

As I mentioned in my last comment: using a separate git repository does nothing to address the problem.

I agree that git is not a good solution for hosting a set of cores to distribute, but it would make things a little better, since it is possible to specify a depth level for submodules. At least by default it would be possible to fetch the cores only when they are needed, and only the latest versions.
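
For example, assuming the cores were split into a hypothetical bin/cores submodule, contributors who actually need them could fetch just the tip:

```sh
# --depth 1 fetches only the newest core binaries, not their full history
git submodule update --init --depth 1 bin/cores
```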

That said, if all that matters is having that set of cores hosted somewhere that can be updated occasionally, then the RA server seems best.

rzumer commented on May 24, 2024

This is tangentially related, but assuming at least some of the cores are licensed under the GPL, currently we are not following the terms for distribution:

You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following:

a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or,

c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.)

Other licenses may have similar terms, so please check the terms for each binary being distributed when/if a change is made to hosting (including the non-core DLLs in the bin directory).

libretro also seems to be violating those terms, so maybe someone can convince them to bundle an offer with their automated builds as per b) and we can follow c).

Jamiras commented on May 24, 2024

With #103, cores can now be downloaded/updated from the nightly buildbot just like RetroArch. The dlls have been removed from the current develop branch, but still have to exist for current master.

I'm undecided whether or not the history should be rewritten to completely eliminate them from the repository, so I'm going to leave this open.
