Writing a Package Manager (antonz.org)
163 points by celiktom 11 months ago | 101 comments



The author added a lockfile without understanding why they exist.

A lockfile is meant to "freeze" dependency version resolution when package authors can specify dependencies on other packages using version ranges... it also "freezes" choices of transitive packages' versions when different packages depend on the same one, but with different versions.
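Concretely, the split looks something like this (a hypothetical sketch in Go, not the article's actual format; all names are made up):

    package lockfile

    // Spec is what the package author writes: version ranges are allowed.
    type Spec struct {
        Deps map[string]string // e.g. {"leftpad": ">=1.2.0 <2.0.0"}
    }

    // Lock is what resolution actually produced: exact versions plus content
    // hashes, including transitive packages, so a later install is reproducible.
    type Lock struct {
        Resolved map[string]LockedDep
    }

    type LockedDep struct {
        Version string // "1.4.7"
        SHA256  string // hash of the downloaded artifact
    }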

They chose to not handle package dependencies at all, and I believe there's no version ranges either, so I really don't see why they added a lockfile.


There is quite a difference between "the author added a lockfile without understanding why they exist" and "I really don't see why they added a lockfile".

It's better to start with the latter if you seek to understand, not attack.


FWIW, cynically speaking, more often than not "I really don't see why ..." roughly translates to "You don't know what you are doing ..." in business dialogue


The lockfile in this post actually seems a bit more like a manifest, or at least that it's trying to do both things at once.

I'd expect you'd have a human-readable file to list your dependencies and versions in a simplistic way, and then a separate machine-readable file that locks the versions for reproducibility. As it stands, the example in the article does not look pleasant to write by hand, so you'd have to script something yourself.


A version number is just a label, and labels are mutable. A lock-file containing hashes will always resolve to the same packages (or fail).


Depending on a single specific version is wrong. A library can be changed to fix a bug without changing the interface in any breaking way. It should be possible for a user (also their package manager, automatically) to replace the library with the new version in this case.


Depending on how important supply chain security is to your industry/company/team, validating the hash of every package is critical. If an attacker can manage an interception/man-in-the-middle attack on your CI network, the hash check provides protection. If an attacker compromises the server you get packages from, having a hash provides protection as well. Automatically trusting the server's response to be correct, or automatically upgrading to newer versions (even if the package author strictly adheres to SemVer), puts you at risk of attacks similar to what we saw with SolarWinds around 2021.
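For the curious, the check itself is trivial; a minimal Go sketch (the file name and pinned value are made up, and in a real tool the pin comes from the lockfile, never from the server):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    // verify hashes a downloaded archive and compares it against the value
    // pinned in the lockfile, before anything gets unpacked or run.
    func verify(path, pinnedHex string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(h, f); err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got != pinnedHex {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, pinnedHex)
        }
        return nil
    }

    func main() {
        // Hypothetical artifact and pin, purely for illustration.
        if err := verify("pkg-1.4.7.tar.gz", "ab12..."); err != nil {
            fmt.Println("refusing to install:", err)
        }
    }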


Your argument supports the idea of getting rid of the lock file and instead committing the hash in the original dependencies file - so that it's never an automatic process to update the hashes/versions.


Zig's package manager takes this approach.

https://zig.news/edyu/zig-package-manager-wtf-is-zon-558e


No, it isn't.

Just because someone says X does Y, it doesn't mean it does.

For anything serious you absolutely verify checksums. Ideally you also mirror every dependency used so you don't care anymore about what's out there.

The thinking in your comment is what led to Maven version ranges and the general npm atrocities.


Every time I download someone's code I replace all the == requirements with >=s and it works perfectly (I understand there are many cases when it wouldn't).

Every time an old unmaintained Linux app I need fails to start, saying it needs some libsomething.2.3 which isn't in the repos anymore, I just symlink libsomething.2.5 to it and it works great.

Sometimes this has even helped me overcome bugs/vulnerabilities.

Being able to fix a bug and update a library without the program even knowing (without having to get and rebuild the source or contact the author) is why dynamic linking was introduced in the first place, isn't it? Isn't this the "unix way"? Is having a program superglued to an outdated library with known (and already fixed) bugs really what you want?


The Unix way is not gospel, and even if it were, this is Computer Science, not Computer Faith :-)

It depends on what you want to do.

If you're hobby hacking stuff, sure.

Any kind of software that's supposed to come out of software engineering, probably not.


> Is having a program superglued to an outdated library with known (and fixed already) bugs really what you want?

It is not superglued. If you want to update dependencies, just remove the lockfile and reinstall everything. The main reason people pin is that updating a library behind the program's back, by not specifying the exact version, leads to behaviour silently changing, which is terrible (especially on CI!)


You are mad.

I love you.


It's not wrong to depend on a single version, just suboptimal.

Exhaustively testing every combination of dependencies with your project is infeasible for any non-trivial set of dependencies. The "is compatible with" relationship present in some package managers (e.g. ~= in pip) isn't guaranteed to work because packages can "lie" about their compatibility.

Sure, it's better to have some kind of way for a piece of software to specify that it depends on foo 3.x instead of tying it to a specific release, but at least being able to specify some dependency is better than not at all. In the worst case, the user can treat the "required" version as a suggestion/guide, override that version with whatever they want, and see if the code works.
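For reference, the "compatible release" check itself is small; here's a rough sketch of pip's ~= semantics in Go (my own simplification: it ignores pre-releases and assumes the base version has at least two numeric segments, as pip requires):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    func parse(v string) []int {
        parts := strings.Split(v, ".")
        out := make([]int, len(parts))
        for i, p := range parts {
            out[i], _ = strconv.Atoi(p)
        }
        return out
    }

    // cmp returns <0, 0, >0 if a is lower than, equal to, or higher than b.
    func cmp(a, b []int) int {
        for i := 0; i < len(a) && i < len(b); i++ {
            if a[i] != b[i] {
                if a[i] < b[i] {
                    return -1
                }
                return 1
            }
        }
        return len(a) - len(b)
    }

    // compatible reports whether candidate satisfies "~= base",
    // i.e. >= base and < base with its last segment dropped and bumped.
    func compatible(candidate, base string) bool {
        c, b := parse(candidate), parse(base)
        if cmp(c, b) < 0 {
            return false
        }
        upper := append([]int{}, b[:len(b)-1]...)
        upper[len(upper)-1]++
        return cmp(c, upper) < 0
    }

    func main() {
        fmt.Println(compatible("1.4.7", "1.4.2")) // true
        fmt.Println(compatible("1.5.0", "1.4.2")) // false
    }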


Why not just replace the "requires" semantics with "has passed tests with" and "has failed tests with" and let the host decide?


In general: no reason, I quite like your idea and think that it's useful. However, in the specific case of the blog post: because that's more complexity, which requires more effort.

I suspect that the author would have added in a more complex versioning/dependency system if he had more time/energy. But, given that he didn't, he had to make a choice about what features to include, and how much to flesh out the various systems.

A simple "depend on exact version" system is relatively easy to implement, and still (indirectly) supports users overriding the lockfile by manually editing it. A more complex (but flexible) system like pip's or yours would be more ideal from a user's perspective - the developer just needs to put in the time to implement.

However, I don't think that any of the above systems are wrong, unlike e.g. one that only allows you to specify a specific version in the lockfile, but then silently downloads "updates" to those dependencies in the background.


This is basically what I daydream about for Nix some day (Nixpkgs doesn't currently involve any dependency resolution at all). Just recording successful version combinations (both of a package declaration and source version and dependency versions) would let us add versioning metadata and save historical package versions in-repo (or in an ancillary repo) without actually changing how the monorepo is built/used.


If you’re going for guaranteed repeatable builds (for example you’re using Bazel) then you have no choice. Dependencies must have a single resolution.


Honestly pretty strange to write a package manager for sqlite and to use directories + json files to store the data instead of sqlite.


A couple of reasons:

1) I wanted human readable specs and lockfiles.

2) I didn't want to introduce a dependency (working with SQLite in Go is pretty ugly, it either requires GCC or a ton of other dependencies for a pure Go implementation).


SQL not so good for tree structures. Recursive CTEs are pretty gnarly.

Graphdb would be ideal.


You want to bring in a dependency to avoid writing one 8 line CTE?


The number of lines needed is not a good measure for gnarliness.

(To be fair, I don't know if gnarliness is measurable at all.)


It isn’t gnarly at all, but recursive CTE are outside the experience of most SQL devs. Here “gnarly” is used to describe “unfamiliar”.


SQLite is a pretty darn good solution for any file(set) that needs atomic / consistent writes


Why not? Just add child/parent ids


local system log "files" and package databases would be an excellent example of using sqlite


Great point


Awesome post! I've worked with a bunch of package managers over the years, and one can see that this design was inspired by the good parts of a few I know.

The only design part I don't really like is the 'latest' version specifier in the spec file. It moves the declaration of what the latest version is to the hosted location (in the example, GitHub via the GitHub API), and the checksums are also fetched rather than being part of the spec. This makes no sense to me. The spec needs to be versioned, or better, the spec should be the actual place to declare a release. I kind of understand where the desire for a floating spec comes from: it makes the publish process easier, since one only creates a GitHub release in this case. But I would still argue that explicitly creating a new version of the spec for each release, with baked-in checksums, is better. One benefit would be that you could, for instance, hash each spec version and use that internally.


Probably the simplest version of a "package manager" is git submodules. Pointing a submodule to `master` is effectively the same thing as pointing a "real" package manager to `latest`. This is trunk-based development, however you implement it.

One could easily argue that floating versions are considered harmful for releases outside of development team and should always be pinned, but on the other hand it is hard to argue against support for floating versions in development. As much as I dislike floating versions, I am not aware of any other way to force changes downstream.


Brew has the --HEAD flag, which tells it to build the latest commit from the repo. But the spec/Formula needs to set a head to pull from.


Sounds like Maven had all this solved many years ago. Yes, it cannot run arbitrary code, like NPM does, it just copies files, but the dependencies and specfiles were there from the beginning.


Not running arbitrary code is a feature that Yarn brings back to the JS ecosystem.

It sounds nice in theory, but it impacts the soundness of the whole system.


Maven didn’t support transitive dependencies from the beginning.


It's surprising to me that no one has built an asdf-style package manager. (I'm not talking about system package managers; their language packages are always out of date and get installed globally instead of locally to a project.)

Having a unified interface to a package manager per language that will use the language's registry could be really nice (I guess you'd have some core dependency management functions and agnostic ways to store and download the packages, and plugins per language that would use these common building blocks to actually do stuff?).

Another advantage of this could be cross language dependencies which often aren't handled well by package managers.


Nix solves that problem.

(While creating some new ones.)


You could argue that Bazel, Nix and Guix solve that problem, but they introduce a lot of complexity as a result because it's not a simple abstraction. Hell, you could probably pull it off somehow in CMake if you placed no value on your sanity. Most standard lib package managers double up as build tools for bundling your code, after all.

It's a totally different problem space than the one asdf aims to solve, which is just managing multiple independent versions of some programming language binaries in a consistent way, so you're not dancing between nvm, rvm/rbenv/chruby, virtualenv, etc. etc.


Something like Meta Package Manager? https://github.com/kdeldycke/meta-package-manager


I think some day rather than the current paradigm, even including declarative package managers and environments and distros, the future will be per-user and even per-app chrooting or jails, or something similar. Apple already uses something like this today. Many people who are smart about information security have one login for shopping and banking and bill pay, one for their business, and one for cruising the web or gaming or whatever. That way a breach of one doesn't necessarily end up pwning the whole system.

I only have two accounts, one "serious" and one "off hours" but I still feel better protected than most people.


> Apple already uses something like this today

Some Linux distros too have things like this but unfortunately there is no buy-in across the ecosystem so "sandboxing" is done in a half-baked way.

The problem is when applications in general aren't written with sandboxing in mind, and when you have to choose between apps not working properly or having a leaky sandbox, you will opt for the latter.

I wish some big corp bit the bullet and ported hundreds of apps to a new, sandboxed environment in Linux, while attempting to upstream the whole effort. This would necessarily involve things like fully migrating to Wayland (X11 security is awful), only granting filesystem access through distro-sanctioned file pickers (so you need some coordination among Gtk, Qt, and other toolkits), and generally having a deny-by-default policy: first you make secure, then you fix what broke.


There are also other problems with the way the common sandboxing systems work. You might need different permissions depending on command-line arguments, environment variables, and user configuration files; they are not designed to work with popen on user-specified commands; they usually assume any text (including file names) is Unicode; and some programs access multiple files whose names are the specified one plus some suffix (SQLite is one program that does this: there is the database file and the journal file). There are some other problems too.

I had wanted to add conditional compilation to one of my programs to work with the sandboxing but there are too many problems with the sandboxing system that it will not work, since my program requires that popen can call programs specified by the user at run time, and that some files it accesses depend on user configuration, and that it uses non-Unicode text, and accesses multiple files whose name are the same base name given on command-line arguments but with different suffixes.

Some programs might work even if they are not designed for the sandboxing, such as if it uses stdin/stdout/stderr only, and not other files. However, many programs will use other files too.


You're describing Red Hat! After spending multiple years helping with the development of Flatpak, which is a sandboxed environment with file pickers just like you describe, they recently announced[1] that they would no longer be contributing to LibreOffice in Fedora and will instead be contributing to a Flatpak version.

Personally, I am not so sure about Flathub (the 'official' repository for Flatpak bundles), but Flatpak itself is a welcome (and large) step towards universal sandboxing for desktop applications.

[1]: https://www.spinics.net/lists/fedora-devel/msg312784.html


That's interesting. So is Flatpak actually secure against malicious code? That is, would you trust running malware if it's packaged as Flatpak?

I'm saying this because we're talking through a platform that is trusted by the majority of people to run malware - the web browser. We don't manually check if the JavaScript or Wasm code is good or bad before we visit a web page. Few people disable scripts altogether. We could have this level of trust in applications running on our system - but does Flatpak deliver it?


I wouldn't say that Flatpak is secure against specifically designed malware - applications can still run machine code directly on the CPU and make Linux system calls, and so could exploit any vulnerabilities (like privilege escalation) that they might have. However, I would certainly trust Flatpak to protect me against excessively snooping applications which are otherwise legitimate, which it can do by limiting access to specific filesystems or devices.

For JavaScript, web browsers have good sandboxing, but arguably also have a smaller attack surface than Flatpak because the page cannot run system calls directly. I don't yet know enough about WASM to know if that tangibly changes the situation.


Firejail proves you can sandbox most anything, OpenBSD has their pledge and unveil too. I guess there's a gradient there but each program should be written or constrained to only being able to access what it needs ideally. Per-task groups could go a long way toward solving this, they researched it in the '80s even. Just create and destroy groups on the fly to enable processes to access only what they need. Unix is flexible enough to permit experimenting here.


Hard pass. I don't need Microsoft securing my calculator from me, thank you very much.

If I need it that badly, I'll build it myself.


My calculator is bc -l in a terminal but I'd bet that there are calculator apps with network access to display ads, sync to the same app on other devices, save past calculations to the cloud, and get plugins.

I found one with some of those features and some more, with only one minute of googling https://apps.apple.com/us/app/graphcalcpro2go/id1091870099

That's for iOS but why not on Windows or Android? I'd be surprised to find one like that on Linux.


Sandboxing is a nice backstop no matter what you're running, because it can help protect you from vulnerable software as well as malicious software.

But it's much more urgent in ecosystems where the norm is to install untrusted, proprietary software supplied directly from developers/publishers with minimal real human oversight from anyone.

A lot of Linux users rely just on the distro itself for their software, which is way, way safer than the way people install software on macOS, iOS, Windows, or Android. This is probably part of why desktop Linux has lacked these facilities as a default for so long, and also why Flatpak sandboxing is seen by so many users as 'about' proprietary software.

It's definitely needed, by now, though. There's still a fair bit of proprietary software that has strong network effects which compel even some users of libre operating systems, like Zoom, Slack, and Discord. It's way better to install those with sandboxing than give them access to the normal packaging mechanisms whose design assumes a level of trust and social oversight that's just not there for third-party, vendor-provided packages and repos.


...Why not use Jitsi?

I'm also in the process of whipping through getting prosody (a subcomponent on which jitsi is built) set up in such a way as to also be able to handle most of what people would use Slack or Discord for.

The primitives for much of the modern corpo-ware environments have been available for a while. The best part is that those you build from scratch don't even require extra firewall config to nuke the telemetry of. Just leave that part out!


Jitsi meet is a pretty easy one to convince others to use, since it works in the browser and requires no accounts.

For the others, it can be more work.

Either way, the reality is that many people install those untrustworthy proprietary apps insecurely today, using vendor-provided DEBs that have a history of huge misbehavior and serious security vulnerabilities.


Won't argue with you there, but I've gotten tired of turning into the old man shouting "If you have not read what you are preparing to deploy at least once, you don't know what it does! You are not engineering! You're doing a ritual!" at the clouds.


That resonates deeply with me— even though, of course, I run lots of software I have not read.

Trying to understand what's going on in a computer or a network can be overwhelming. I understand the need to simplify, to take for granted, to abstract away— to ritualize, even.

But sometimes I do feel frustrated that engineers (whether in application development or infrastructure or networking, whatever) can be frustratingly uncurious about the tools they're using. 'How it works' should never mean 'how to operate it', but a lot of people use those terms interchangeably, even within tech.

For me, though, using F/OSS isn't about reading code. It's about feelings and values like trust, respect, control, and peace. When you manage to avoid proprietary software entirely, you can recover those things in your computing life in a way that is totally opposite to the adversarial relationships most people have with the software they run today. It's easy to 'not know what you're missing', especially because you don't really get it from just using a few pieces of F/OSS on a proprietary platform.

But the real reason to use F/OSS is to take refuge and let go of that tension of the posture you have to take with software whose authors don't have your interests at heart, that alertness and readiness to swat away nags, to dodge traps, to dig up the checkboxes and registry hacks and configuration files you need to disable an endless onslaught of individually small but nonetheless malicious behaviors.

Imo, that makes it worth it to try to convince a few of your friends to explore Jitsi or Matrix or Revolt with you instead of trying to make room for whatever proprietary social media apps are trendy right now, even if you have never read a line of code in your life.


This idea is as confused as people who claim that they "don't need package manager because they have Docker".

So what if you have containers / jails? You still need to install multiple components into the same container / jail... because you need them to work together. It's not solving the problem at all. Of course, containers are useful, but not for solving the problem of installing software made up of multiple components.


Isn't this already how Android handles app permissions? Each application runs as its own user for the sake of security. The Application Sandbox is a pretty cool interpretation of the long existing Unix user/group paradigm.


How do you build the app with its dependencies without a package manager?


Like silverblue with toolbox?


Something like Silverblue will only ever work with a rigid set in stone ROM-friendly base. You can't even do fixes with that, so Linux is right out.

Lisp on bare metal, FORTH, anything. BASIC. It has to be small.


Er, what? SB handles updates fine. The base/host OS is separate from what goes on in the containers anyways.


Bit of a tangent here, but what's a pip/npm/cargo-like package manager for C++? For example, 'pip install boost'? I've never worked it out for hobby projects and never worked with one commercially.


The closest thing we have at the moment is Conan[1]. It's a cross-platform package manager that attempts to implement "integrations", whereby different build systems can consume the packages[2]. This is a big problem with package management in C/C++: there's no single, standardised build system that most projects use. There isn't even a standardised compiler! So when hosting your own packages using Conan, you often need to make sure you build your application for three different compilers, for three different platforms. Sometimes (for modern macOS) also for two different architectures each.

If you control the compiler AND the build system you can get away with just one package for most cases. This is true for Microsoft's C/C++ package manager, NuGet[3].

Historically, the convention has been to use the package manager of the underlying system to install packages, as there are so many different build configurations to worry about when packaging the libraries. The other advantage of using the system package manager is that dependencies (shared libraries) that are common can be shared between many applications, saving space.

[1] https://conan.io/

[2] https://docs.conan.io/1/creating_packages/toolchains.html

[3] https://devblogs.microsoft.com/cppblog/nuget-for-c/


The C++ ecosystem is insane. I don't know how it got so out of hand. At the end of the day you are just running a bunch of clang/gcc CLI commands. It's really, really, really simple. But all these commands are generated, and the user becomes so detached from what they are actually doing that when they are left with an error about one of these commands not working, they need to dive into this huge monstrosity to figure out what is making a command do something.

Declarative build systems obfuscate so much without providing the proper debugging and error-handling capabilities.

Build systems should be imperative and type-checked. A simple script that a user can step through and observe what is happening.

Makefiles suck because they are not type-checked.

The only abstraction you need is some kind of dependency graph. But then it should be completely transparent to the user so they can easily understand it.


> Build systems should be imperative and type-checked.

Perhaps Meson fits the bill for you?


vcpkg is also an option


FYI pip is specifically not a package manager, it's a package installer.

Pip does not attempt to resolve dependency conflicts of already-installed packages, only of the ones it is currently trying to install. Nor does it try to manage the life cycle of installed packages, such as removing transitive dependencies that were never specified or are no longer needed, or creating consistent environments with a lock file.

As package specifications have become better defined (rather than "whatever setup.py does") and are being better enforced (e.g. Pip will soon reject version numbers that are non-compliant) there are several attempts at writing full-fledged package managers (Rye, Huak, Pixi, etc.) and I'm sure once one of them gains critical mass it will replace Pip quickly.


Thanks for the info! That would explain why I spend so much time fighting dependency errors when I upgrade something ML related ...


There isn't any. There are a few partial ones like conan, etc.

And many people in the community are against it, you'll hear stuff ranging from "just use the distro package manager" to "don't use any, make it a single file library".

Personally, I'd say all hope is lost.


Just because I’m curious, in what respect is Conan a “partial” package manager? Within the constraints of existing C++ projects and the variance of their build systems, I can’t imagine how to do it differently.

In my experience, most of the value of Conan is in creating packages yourself when needed. You can then self-host a Conan remote and have pre-compiled binaries ready for development. Having a Conan recipes repository with CI that produces binaries for each required build configuration has become a de-facto standard for the projects I have worked on.


Well, I'll expand.

For 95% of Java devs, JavaScript devs, Python devs, if it's Open Source and it isn't on Maven Central, NPM, PyPi, it might as well not exist.

The reverse is true, if you have any library or tool worth a damn, it's there.


Use Bazel.


I've heard a lot about Bazel, but are there any good beginner tutorials? When I last looked it felt a little Nix-like (a high learning curve, which I prefer to avoid for my tooling where possible).


The issue with bazel is having to roll out your own build definitions for every package you want to use that doesn't already provide bazel files.


Wonderful post! I've written a couple of package-manager-like things over the years, although they've all been internal-only, which gives some flexibility.

> I like the idea of allowing both project and global scope

Yikes! Hard no from me. Globals are pure evil. Avoid like the plague. Environment variables too. Kill them all with fire.

> what if the user wants to reinstall the packages on another machine or CI server? That’s where the lockfile comes in

I've typically bypassed the need for a lockfile by simply checking in the dependencies. Dependencies belong in version control! That's a rant for another day.

Treating the packages folder as source of truth is basically the equivalent I think?

> I understand that dropping dependencies altogether may not be something you are ready to accept.

Nice. A corollary to this is that if all your packages are internal then you can simply disallow dependency graphs that want different versions of the same package. Simply bump and fix-up all version number dependencies for all packages.

That basically means "all packages play nicely on the latest version", which is perfectly plausible when all the packages live in a monorepo. It totally doesn't work for public package managers, but that's a different story.


> I've typically bypassed the need for a lockfile by simply checking in the dependencies. Dependencies belong in version control!

These are binary dependencies. You're checking shared object files into source control? Note that he's also doing this to add functionality to a binary installation of sqlite, which is presumably running somewhere like /usr/bin/sqlite. This isn't a custom application he's developing. The extensions are in his home directory. "Checking them into version control" would entail making his entire filesystem a Git repo. If you're willing to pay whatever IBM charges these days for ClearCase, something like that might actually work. Regular source control like Git, though? I don't think so.


Binaries absolutely should go in version control. The fact that Git is incapable of efficiently supporting that workflow is a separate topic.


I think you are right. Tarballs with version strings as filenames served from an HTTPD are effectively a poor man's version control. Git commits are immutable and efficient (storing only the delta) for text, but those same benefits could apply to binary files too if only the tooling was better. You may be interested to know that Debian actually does use Git to store binary data, although only when it is not possible to use the source files to generate the binary data directly.


> Environment variables too. Kill them all with fire.

I feel this way about host file entries.

It is shocking how common they remain, and not just in QA/Dev.


It totally works for Go with MVS, which bumps a lot of versions behind the scenes for you.


Source control is my package manager. Package managers as we usually think of them are syntax sugar and abstraction for the sake of abstraction

Working on a Linux distro that is one unified/generalized/normalized code base (with the help of AI/ML) and a model to sample and establish correct state from memory of the initial code base.

One way to think of it is like a game engine with action plans to allocate resources to recreate Firefox, for example. Not compile and run Firefox, but using *alloc() and free(), etc to establish the correct machine state to browse websites after learning what that state is in the abstract from Firefox’s code.

My thinking is many of our “truisms” in IT are outdated given modern machine performance and network reliability relative to the 80s and 90s when many of those “truisms” were defined.


Most package managers work with version control already, they are not solving the same problems. Package managers deal with the building and dependency graph along with delivery of working executables. Version control solves zero of these problems.


If you haven't already researched them, you may want to investigate Plan9 and Erlang, both early systems embodying a distributed and networked philosophy of computing.


So, how does it handle diamonds in the dependency graph? How does it handle version compatibility? IMO these are the most interesting aspects of a package manager; the other stuff is implementation detail.

Although there is a mention for semver at the end:

> Decide what versioning scheme to use (Probably semver, or something like it/enhancing it with a total order).

I wonder if total order is appropriate (assuming the relation <= means compatibility), but it sure simplifies things. Basically throw away the major version, embed that in the name of the package if you have to.


I could be wrong here, but given what sqlite's documentation includes and the fact his spec files don't even include dependencies, I believe sqlite extensions don't depend upon anything except sqlite itself (he says here "most" don't).

You'd need to ensure the extensions and sqlite were both compiled against the same libc, but I don't see what this package manager can do about this given the metadata doesn't seem to be available in these Github releases. In fact, going to the repo for the example in his spec file, the reason for the release was to downgrade the build host OS from Ubuntu 22.04 to 20.04 to resolve an issue with some user not being able to run this because of a missing GLIBC symbol.

This is an underappreciated problem and why actual distros include everything. If you're going to distribute binary files, you need to ensure they're compatible in many more ways than just the architecture and kernel matching. A complete package manager would include a build system and public registry with builds matching a complete machine triplet (as in, what you'd get by running gcc -dumpmachine). If you consume upstream Github releases as this is doing, they may not have that level of granularity, and in fact, we can see they do not.
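To illustrate how coarse the usual metadata is: the platform key most release-asset pickers can construct only captures OS and architecture, and says nothing about the libc (or minimum glibc symbol version) the binary was linked against, which is exactly the failure mode above. A tiny Go sketch:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // What a naive asset picker would key on, e.g. "linux-amd64".
        // Note there is no glibc/musl or OS-release dimension in here.
        fmt.Printf("%s-%s\n", runtime.GOOS, runtime.GOARCH)
    }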


Doesn't the article say that it doesn't deal with dependencies at all?

Precisely, I wouldn't call this a "package manager", it's a "package installer" (same as pip). Managing implies some form of interaction after components are installed (esp. ensuring that all installed components can work together). This project doesn't seem to manage anything, it's more of a fire-and-forget kind of thing.


Agreed. Version resolution is the interesting problem.

Most package managers use a SAT solver to resolve dependencies. The Dart team has a detailed write up on their SAT-based approach which is worth a read [1]. For contrast, Russ Cox presents an algorithm that doesn't use a SAT solver (intended for Go) [2].

[1] https://github.com/dart-lang/pub/blob/master/doc/solver.md

[2] https://research.swtch.com/vgo-mvs
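To give a flavour of [2]: minimal version selection boils down to "take the highest of the minimum versions anyone asks for", which needs no SAT solver at all. A very stripped-down Go sketch (not Russ Cox's actual code; versions are plain ints here, and real MVS also walks the requirement lists of the chosen versions to a fixed point):

    package main

    import "fmt"

    // mvs picks, for each module, the maximum of the minimum versions
    // requested by its importers.
    func mvs(reqs map[string][]int) map[string]int {
        picked := make(map[string]int)
        for mod, mins := range reqs {
            for _, v := range mins {
                if v > picked[mod] {
                    picked[mod] = v
                }
            }
        }
        return picked
    }

    func main() {
        // A needs C >= 2, B needs C >= 3: MVS selects C 3.
        fmt.Println(mvs(map[string][]int{"C": {2, 3}})) // map[C:3]
    }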


Man, I really wish frontend package management could be this simple, but with a system that ultimately ships bytes over the wire, we have to not only massage the dough with chopsticks (insofar as we try to reduce our dependency tree with weak, feckless tools), but also, with the sheer number of transitive dependencies each package addition brings, it's almost impossible to really know what's going on with your dependencies unless you write plug-in after plug-in.


Not a package manager, but I made this somewhat relevant "app" if anyone's interested:

https://sr.ht/~tpapastylianou/misc-updater/

It has solved my own pain points elegantly, so happy to share.


Why do we need a separate package manager for every programming language and every extensible library/app?


We don't, but it is the easiest way to ensure the package manager is adapted to the language and the library ecosystem. It is also easier to add features, remove misfeatures, and fix bugs when the package manager only has to handle a single language.

Once we understand package management better -- we don't understand it well yet -- it will likely be feasible to consolidate many package managers into a few omnimanagers.


In theory Nix can solve that; in practice it is more ergonomic to leverage a language's native package manager even in Nix.


Does using nix still make the sense it's meant to make if you use a language's native package manager?


"Need" is the wrong word; sometimes it is simply necessary. Look at npm: for a long time it didn't add new features and made little progress. That's when Yarn was born. pnpm and others followed, because npm lacked the features, security solutions, and other things that people wanted.

In theory we need only one for all, but in reality, that is impossible.


Isn't this missing the SQLite version?

Are SQLite extensions guaranteed to work across different SQLite installs?


SQLite is quite good at staying backwards compatible, and this includes the APIs exposed to extensions, but extensions could definitely have a minimum SQLite version.
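If a spec did carry a minimum SQLite version, the check would be cheap: SQLite encodes version X.Y.Z as the integer X*1000000 + Y*1000 + Z (exposed via sqlite3_libversion_number(), or "select sqlite_version()" for the string), so something like this hypothetical Go sketch would do:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // versionNumber converts "3.45.1" into 3045001, matching SQLite's scheme.
    func versionNumber(v string) int {
        parts := strings.SplitN(v, ".", 3)
        n := 0
        for i := 0; i < 3; i++ {
            x := 0
            if i < len(parts) {
                x, _ = strconv.Atoi(parts[i])
            }
            n = n*1000 + x
        }
        return n
    }

    func main() {
        host, required := versionNumber("3.45.1"), versionNumber("3.38.0")
        fmt.Println(host >= required) // true: extension may be installed
    }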


Wonderful post, how about using standard OCI to unify products?

We have also been working hard recently on a package manager for the KCL configuration language, which currently supports Git, OCI, and more.


What do you mean by OCI and KCL?


Writing a package manager is not one of the most common programming tasks. After all, there are many out-of-the-box ones available.


These copy-pasted one-liners aren't useful. Skip them.


Imagine what we'd be using for package management today if the author of every package manager >= #2 had said "you know what, one already exists".


This is the package manager paradox. The problem is multiple package managers but to address it we build more package managers.


I think we'd all be using 'tar'.



