
The Future of CI/CD For Platform Teams?

I recently read this article about the Future of CI/CD for platform teams, discussing the Dagger tool.

I agree with the article that we should encourage a culture where the maintenance and success of CI/CD pipelines are a collective responsibility. Simplifying the workflow lets developers concentrate on building new features rather than learning complex pipeline syntax.

However, I also wonder whether we are not just abstracting complexity once again, building more and more layers and tools that ultimately give developers very little context about what they are really doing at the "foundation level".

Enhancing the developer experience, enabling rapid local development cycles, fostering a collaborative environment... these are all great considerations. But in the end, by using so many tools, aren't we widening the knowledge gap between the development team and the operations/SRE/DevOps teams by always trying to simplify the CI/CD process? Where will this end?

4 replies

78316264

I haven't tried Dagger, but if I understood it correctly, the main idea is to have the CI/CD environment available locally. Maybe I'm looking at this a bit superficially, but it doesn't seem like an additional layer of complexity to me. It's more of a replacement for a layer we already have (the remote CI/CD environment), one that lets devs investigate what's going on in their CI. I've worked on quite a few projects that failed on CI while the build and test suite passed locally without issues, and that was always a pain to debug...

In my ideal world, I'd like to be able to run my dev environment with one click (hey there, Joel Test) and be confident it runs just as well in any other environment I run my project in. I get that there will always be slight differences between environments, but working to minimize them doesn't seem to widen the knowledge gap between teams; if anything, it brings those teams closer together.
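One low-tech way to approximate that kind of parity today, without any dedicated tool, is to run the build inside the same container image the CI uses. A minimal sketch, assuming a Docker-based setup; the image name, mount path, and npm commands are placeholders, not from any project mentioned in this thread:

```shell
# Sketch of running "the CI" locally: execute the build and test commands
# inside the same container image the remote pipeline uses, so a CI failure
# reproduces in the same filesystem and toolchain as the remote runner.
run_ci_locally() {
  CI_IMAGE="node:20-alpine"            # assumed: the image your CI pipeline uses
  docker run --rm \
    -v "$(pwd):/workspace" \
    -w /workspace \
    "$CI_IMAGE" \
    sh -c "npm ci && npm test"         # assumed: your CI's build/test steps
}
# Defined but not invoked here; call run_ci_locally from the repo root.
```

This doesn't give you Dagger's programmable pipelines, but it removes the most common source of "passes locally, fails on CI": a different environment.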

78316599
Author

Thanks for sharing your thoughts! I may have been a bit dramatic when talking about a new layer. What I observe is that we constantly create new tools to simplify current processes, with new abstractions (which require new knowledge) to perform operations we are already doing, just without good practices.

I wonder if the issue lies more in the way we work than in the tools we use: tools have an impact on performance, but most of the time the problem is at the culture/practices level.

78320159

We got good usability by shifting things toward the development environment, after a period of adding complexity remotely and then running the abstractions more "naked" on bare metal.

The complexity also feels reduced because there is no remote round-trip to pay for, and builds are much faster.

We have been doing this for roughly the last 5-6 years with something similar to how you describe Dagger; some people were voicing the tendency towards local builds even earlier and building the tooling for it.

On the culture/practices level, we consider fast feedback cycles and "single click" solutions among the best guidelines. "Single click" shouldn't be taken literally: it's fine, for example, to run a first fast build (a fast feedback build, or fast forgiving build; FFB) on the staging area after the user has submitted the commit message. If the build fails, the commit subject is tagged with the step and exit status (e.g. [make: 2] original subject) and the log is inserted into the message.
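The subject-tagging step can be sketched as a small shell helper. The `[step: status]` format follows the example above; everything else, including the function name and the idea of stripping a previous tag before re-tagging, is my assumption about how such a hook would work:

```shell
# tag_subject STEP STATUS SUBJECT
# Prefix a commit subject with "[step: status]" when the fast build failed,
# first stripping any tag left by a previous run, so that an amended,
# re-run build replaces the tag (or removes it entirely on success).
# A git hook or CI step would feed in the real step name and exit status.
tag_subject() {
  step=$1; status=$2; subject=$3
  # drop an existing "[name: N] " prefix from an earlier failed build
  clean=$(printf '%s' "$subject" | sed 's/^\[[^]]*: [0-9]*] //')
  if [ "$status" -eq 0 ]; then
    printf '%s\n' "$clean"
  else
    printf '[%s: %s] %s\n' "$step" "$status" "$clean"
  fi
}
```

For example, `tag_subject make 2 'fix parser'` yields `[make: 2] fix parser`, and a later successful run (`tag_subject make 0 '[make: 2] fix parser'`) restores the plain subject.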

Because local builds have raw performance, incremental compilation, shared artifacts and caches (100 ms up to ~1 s is the norm), and can be forgiving (commit and push are not blocked), the fastest feedback shows up in the amended message. Amending the commit re-runs the build, replaces or removes the build log, and clears the exit-status tag.

This is often already transparently compatible with IDE integrations that run their own checks on commit and show the (pushed) commit message in a popup, again providing fast feedback with at least the beginning of the subject.

In general, the same build manager and procedures should be used remotely as locally, so that they are easy to create, run and maintain from the local environment, which we see as the leading one. Running them only remotely robs you of most of the other benefits remote builds offer (this depends on the project: if the development environment is "lost", the remote build is pure win).
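A sketch of what "same build manager locally and remotely" can look like in practice: a single entry-point script that both the developer and the (assumed) remote CI job invoke, so there is only one build definition to maintain. The step names and `make` targets are illustrative assumptions, not from this thread:

```shell
# One build entry point for local and remote use. A remote CI job would
# simply execute this same script; developers run it before pushing.
# "make all" / "make check" are placeholder build-manager targets.
step() {
  name=$1; shift
  echo "== $name =="
  "$@"                                 # run the step, returning its status
}

run_pipeline() {
  step build make all &&               # assumed compile target
  step test  make check                # assumed fast test target
}
# run_pipeline is defined but not invoked here; call it from a CI job or
# from the shell to get identical behavior in both places.
```

Because the remote job and the developer run the exact same script, a remote failure can always be reproduced locally with one command.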

This is also why we consider "look for fast feedback" a good guideline: it applies throughout the maintenance lifecycle.

78326949

@GuyFalourd You're probably right about the culture/practices part. I think the "oh, shiny new stuff" factor of new tools is sometimes a heavy contributor to that. While it's very important in our profession as software engineers to keep an open mind and follow recent developments, chasing the newest hype tools is often detrimental to progress. Most of the time, the learning curve for the new abstractions you mentioned is steeper than anticipated, so it usually pays off to master your current tools rather than jump to the next one. That saves you the overhead of switching to different tools and learning them.

I've probably been guilty of jumping to shiny new stuff more often than I'd like to admit...