Today we noticed an issue in Val Town’s TypeScript support.
The ‘bug’ was that source code like:

const x = <X>(arg: X) => {};

was getting formatted to this:

const x = <X,>(arg: X) => {};
This is an arrow function with a generic argument called X.
Interesting! Why would dprint want to insert a trailing comma into a generic? Digging in, I discovered a fascinating issue.
Let’s say that the input was this instead:
export const SelectRow = <T>(row: Row<T>) => false;
Some people have probably guessed the issue already: this is colliding with TSX syntax. If that line of code were like
export const SelectRow = <T>something</T>;
Then you’d expect this to be an element with a tag name T, rather than T as a generic. The syntaxes collide! How is TypeScript to know, as it is parsing the code, whether it’s looking at JSX or a generic argument?
The comma lets TypeScript know that it’s a generic argument, not a JSX element!
So, if you’re using generics with arrow functions in TypeScript and your file is a TSX file, that comma is pretty important, and intentional!
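To make the collision concrete, here’s a sketch of the two parses (illustrative code, not from the actual bug report):

```typescript
// In a .tsx file, `<T>` opens a JSX element, so this line is a
// parse error — the parser goes looking for a closing </T> tag:
//   const identity = <T>(x: T) => x;

// The trailing comma unambiguously marks a type parameter list,
// so this parses as a generic arrow function:
const identity = <T,>(x: T) => x;

console.log(identity(42)); // 42
```

The trailing comma is also valid in plain .ts files, so it’s a safe habit either way.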
For us, the issue was that we were relying on CodeMirror’s syntax tree to check for the presence of JSX syntax, and it was incorrectly parsing arrow functions with generic arguments and the trailing comma as JSX. One of CodeMirror’s dependencies, @lezer/javascript, was out of date: they had fixed this bug late last year. Fixing that squashed the false warning in the UI and let the <T,> pattern work as intended.
Looking at this and the history of the lezer package is a reminder of how TypeScript is quite a complicated language and it adds features at a sort of alarming rate. It’s fantastic to use as a developer, but building ‘meta’ tools around it is pretty challenging, because there are a lot of tricky details like this one.
What makes this even a bit trickier is that if you are writing a .ts file for TypeScript, with no JSX support, then a generic without the extra comma is valid syntax:
const x = <X>(y: X) => true;
For what it’s worth, I don’t use arrow functions that often - I like function declarations more because of hoisting, support for generators, and their kludgy-but-explicit syntax. For example, arrow functions also have a little syntax collision even if you don’t consider TypeScript: you can generally omit the {} brackets around the arrow function return value, like
const x = () => 1;
But what if it’s an object that you’re trying to return? If you write
const x = () => { y: 1 };
Then you may be surprised to learn that you aren’t returning an object like { y: 1 }, but you’ve defined a labeled statement, and the function x will, in fact, return undefined. You can return an object from an arrow function, but you have to parenthesize it:
const x = () => ({ y: 1 });
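A quick runnable sketch of the difference:

```typescript
// `{ y: 1 }` here parses as a block body: `y:` is a label and `1`
// is an expression statement, so nothing is returned.
const labeled: () => unknown = () => { y: 1 };

// Wrapping the braces in parentheses makes them an object literal.
const object = () => ({ y: 1 });

console.log(labeled()); // undefined
console.log(object()); // { y: 1 }
```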
So I kind of like function declarations because I just don’t have to think about these things that often.
Thanks David Siegel for reporting this issue!
For many years now, I’ve considered “democratizing programming” as a major personal goal of mine. This has taken many forms - a long era of idolizing Bret Victor and a bit of following Alan Kay. The goal of making computing accessible was a big part of why I joined Observable. Years before joining Observable, I was making similar tools attempting to achieve similar goals. I taught a class on making games on computers. At Mapbox, we tried to keep a fluid boundary between engineers & non-engineers, as well as many resources to learn how to code, and many people successfully made the jump from non-coding jobs to coding jobs, and vice-versa.
But what does it really mean to “democratize programming?” It’s a narrowly technical goal that its adopters imbue with their own values. Most commonly, I think the intents are:
Those are the intents that I can think of impromptu – maybe there are others; let me know. For me, I came to believe most strongly in the last one: it’s useful to make programming jobs accessible to more people.
Basically I think that material circumstances matter more than ever: the rent is too damn high, salaries are too damn low. Despite the bootcamps churning out new programmers and every college grad choosing a CS/AI major, salaries in the tech industry are still pretty decent. It’s much harder to land a job right now, but that is probably, partially, an interest-rate thing.
The salaries for societally-useful things are pretty bad! Cities pay their public defenders and teachers poverty wages. Nonprofit wages at the non-executive levels are tough. And these aren’t easy jobs, and don’t even earn the kind of respect they used to.
So, a big part of my hope that more people have the ability to become programmers is just that my friends can afford their rent. Maybe they can buy houses eventually, and we can all live in the same neighborhood, and maybe some folks can afford to have kids. Achieving that kind of basic financial security right now is, somehow, a novelty, a rarity, a stretch goal.
Which brings me to the… ugh… LLM thing. Everyone is so jazzed about using Cursor or Claude Workspaces and never writing code, only writing prompts. Maybe it’s good. Maybe it’s unavoidable. Old man yells at cloud.
But it does make me wonder whether the adoption of these tools will lead to a form of de-skilling. Not even that programmers will be less skilled, but that the job will drift from the perception and dynamics of a skilled trade to an unskilled trade, with the attendant change - decrease - in pay. Instead of hiring a team of engineers who try to write something of quality and try to load the mental model of what they’re building into their heads, companies will just hire a lot of prompt engineers and, who knows, generate 5 versions of the application and A/B test them all across their users.
But one of the things you learn in studying the history of politics is that power is consolidated by eliminating intermediate structures of authority, often under the banner of liberation from those authorities. In his book The Ancien Régime and the Revolution, Tocqueville gives an account of this process in the case of France in the century preceding the Revolution. He shows that the idea of “absolute sovereignty” was not an ancient concept, but an invention of the eighteenth century that was made possible by the monarch’s weakening of the “independent orders” of society—self-governing bodies such as professional guilds and universities. - The World Beyond Your Head
Will the adoption of LLMs weaken the semi-independent power of the skilled craftspeople who call themselves “programmers”? And, in exchange for making it easy for anyone to become a programmer, will we drastically reduce the rewards associated with that title?
Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place. - Mira Murati
Is the blasé attitude of AI executives toward the destruction of skilled jobs anything other than an attempt at weakening skilled labor and leaving roughly two levels of society intact, only executives and interchangeable laborers? Of course, you can ask what do executives do to keep their jobs, but at some point that is kind of like asking what do kings do to keep their jobs in the ancien régime: the king is the king because he’s the king, not because he’s the best at being the king.
Anyway, I don’t want my friends to move to the suburbs because the rent is too high and the pay is too low. Who is working on this?
In 2018, I wanted to create a book identification code translator to help me manage my self-hosted reading log. I built it at bookish.tech, and it would translate between ISBN, OCLC, LCCN, Goodreads, and OLID identifiers.
It broke recently because OCLC’s Worldcat redesigned their website, and bookish relied on scraping their website for well-formed microdata in order to pull information from it.
The new Worldcat website is a Next.js-powered piece of junk. It’s a good example of a website that completely flunks basic HTML semantics. Things that should be <details> elements are instead inaccessible React components. It doesn’t work at all without JavaScript. Bad job, folks!
I don’t like to link to Goodreads because of its epic moderation problems and quiet Amazon ownership. Linking to OpenLibrary is nice but they rarely have the books that I read, and it’s easy to look up ISBN codes on that website.
So, from now on, sadly: my books will probably just be referenced by ISBN codes. bookish.tech is offline and I abandoned an effort to turn it into a CLI. OCLC sucks.
I continue to hop between browsers. Chrome is undoubtedly spying on me and nags me to log in - why do I want to log in to a browser!? Arc is neat but vertical tabs waste space and tab sync is weird and I always end up with 3 Arc windows without ever intending to open a new one. Plus, it wants me to log in to the browser - why would I want to log in to a browser!? Orion has great vibes but the devtools are Safari’s, which might be fine, but I am very locked in to Chromium-style devtools. Firefox… I want to believe, but… I think it’s just dead, man. Safari is fine but again, the devtools lockin.
For now, I just want Chrome without the spying, so I’m trying out ungoogled-chromium. Seems fine so far.
🎉 Programming pro tip I guess 🎉
If you’re writing JavaScript, Rust, or a bunch of other languages, then you’ll typically have dependencies, tracked by something like package.json or Cargo.toml. When you install those dependencies, they require sub-dependencies, which are then frozen in place using another file - package-lock.json, or Cargo.lock, or yarn.lock, or pnpm-lock.yaml, depending on nitpicky details of your development environment.
Here’s the tip: read that file. Maybe don’t read it top-to-bottom, but when you install a new module, look at it. When you’re figuring out what dependency to add to your project for some new problem you have, look at the file, because that dependency might already be in there! And if you’re using modules from people you respect in package.json, reading the lockfile gives you a sense of what dependencies and people those people respect. You can build a little social graph in your head.
Now, dependencies, I know: it’s better to have fewer of them. Managing them is grunt-work. But few projects can do without them, and learning good habits around managing and prioritizing them is pretty beneficial. They’re also a reflection of a real social element of programming that you are part of if you’re a programmer.
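If reading raw lockfile JSON feels tedious, a tiny script can summarize it. Here’s a minimal sketch that pulls the pinned package names out of an npm-lockfile-shaped object (assuming the v2/v3 layout, where "packages" is keyed by node_modules paths):

```typescript
// Sketch only: in practice you'd JSON.parse the real package-lock.json
// from disk rather than use an inline sample.
type Lockfile = { packages?: Record<string, unknown> };

function lockedPackages(lock: Lockfile): string[] {
  return Object.keys(lock.packages ?? {})
    .filter((p) => p.startsWith("node_modules/"))
    // Nested dependencies keep their full path, e.g. "a/node_modules/b".
    .map((p) => p.slice("node_modules/".length))
    .sort();
}

// A tiny hand-written lockfile fragment, standing in for the real file:
const sample: Lockfile = {
  packages: {
    "": {},
    "node_modules/react": {},
    "node_modules/zod": {},
  },
};

console.log(lockedPackages(sample)); // ["react", "zod"]
```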
Here’s a conundrum that I’ve been running into:
On one hand, a company like Val Town really needs well-maintained REST APIs and SDKs. Users want to extend the product, and APIs are the way to do it.
The tools for building APIs are things like Fastify, with an OpenAPI specification that it generates from typed routes, with SDKs generated by things like Stainless or so.
APIs are versioned - that’s basically what makes them APIs, that they provide guarantees and they don’t change overnight without warning.
On the other hand, we have frameworks like Remix and Next that let us iterate fast, and are increasingly using traditional forms and FormData to build interfaces. They have server actions and are increasingly abstracting-away things like the URLs that you hit to submit or receive data.
Though we have the skew problem (new frontends talking to old versions of backends), there is still a greater degree of freedom that we feel when building these “application endpoints” - we’re more comfortable updating them without keeping backwards-compatibility for other API consumers.
So how do you solve for this? How do you end up with a REST API with nice versions and predictable updates, and a nimble way to build application features that touch the database? Is it always going to be a REST API for REST API consumers, and an application API for the application?
In short, I’m not sure! We’re taking a some-of-each approach to this problem for now.
I wrote a little bit about how I published this microblog with Obsidian, and I recently published an Obsidian plugin. I’m a fan: I’ve used a lot of note-taking systems, but the only ones that really stuck were Notational Velocity and The Archive. And now, Obsidian.
Versus Notational Velocity / The Archive:
Anyway, to my Obsidian setup. It’s not fancy, because I have never found a structured system that makes sense to me.
There are a lot of features that I don’t use:
I’ve turned off the graph view and tab view using combinations of the Hider and Minimal Theme settings plugins.
This is what my Obsidian setup looks like right now, as I’m writing this. I followed a lot of advice in Everyday Obsidian to get it there, so I don’t want to share everything that got me there, but anyway: it’s the Minimal theme, and the iA Quattro font. Like I said, the graph view, canvas, and tags aren’t useful to me, so I hide them. I usually hide the right bar too. I’ve been trying out vertical tabs, and am on the fence as to whether I’ll keep them around.
The Minimal Theme is a godsend. Obsidian by default looks okay-ish - much better than Roam Research or LogSeq, but with the Minimal Theme it looks really nice, on par with some native apps.
I use pretty rough folders with broad categories - notes on meetings, trips, finance, and so on. There are some limited subfolders, but I’m not trying to get that precise. It is impossible to mirror the brain’s structure in this kind of organization, and…
I think, importantly, that simple organizational structures are good for thinking. I see all of these companies pitching like hierarchical nested colored tags that are also folders and spatially arranged, and I think it’s essentially bad. Forcing notes into a simple set of folders is useful. Having to write linear text and choose what to prioritize is useful.
Think back to writing essays in high school: that forced us to produce structure from our thoughts. That is a good skill to have, it helps us think. It is strictly better than an English class in which all of the term papers are word clouds of vibes with arrows pointing everywhere.
Every day I’m computering, I’ll create a day note in Obsidian and use it as the anchor for everything else I write or am working on. Basically there are three shortcuts I rely on all the time that I’ve customized:
The daily note template has nothing in it. I tried inserting prev and next links into it, but it was more annoying than helpful. I have the “Open Daily Note on Startup” option toggled on.
The daily note is usually a bulleted list with links to pages of what I’m working on or writing that day. The Command-. keybinding is wonderful, because it doesn’t just toggle a task to done or not - if you have plain text on a line, it will toggle between a list bullet, a new todo item, and a finished todo item. I use that keybinding all the time.
The other thing is Command-0 for today’s date. This is something I learned from the Zettelkasten ideology: this is a lazy form of an ID for me. A lot of my ongoing notes will consist of multiple sections where I type ## for a heading and then Command-0 to insert the day’s date, to add updates. Automatic date management is extremely useful to me: I’ve always had a lot of trouble calculating, understanding, and remembering dates. I could probably diagnose or pathologize that, but let’s leave that for another day: thankfully calendars exist.
I use properties a lot. Even if they don’t help that much with ‘retrieval’ or aren’t useful in views, I just like the idea of regularized structures. So many notes have a url property for the URL of that technology, person, place, etc.
For some categories, like “stuff I own,” I use properties for the order number, color, price, and other trivia. Every once in a while this comes in handy when something needs a new battery and I can easily look up what kind it takes. I’m also constantly trying to buy more durable things, and it has been really useful to have a record of how long a pair of jeans lasts (not long!).
All together, I use:
We use Remix at Val Town and I’ve been a pretty big fan of it: it has felt like a very responsible project that is mostly out of the way for us. That said, the way it works with forms is still a real annoyance. I’ve griped about this in an indirect, vague manner, and this document is an attempt to make those gripes more actionable and concrete.
Remix embraces traditional forms, which means that it also embraces the FormData object in JavaScript and the application/x-www-form-urlencoded encoding, which is the default for <form> elements that use POST. This is great for backwards-compatibility but, as we’ve known for years, it just sucks, for well-established reasons:
There’s no way to represent undefined in form encoding.

These limitations trickle into good libraries that try to help, like conform, which says:
Conform already preprocesses empty values to undefined. Add .default() to your schema to define a default value that will be returned instead.
So let’s say you have a form in your application for writing a README. Someone writes their readme, tries to edit it and deletes everything from the textarea, hits submit, and now the value of the input, on the server side, is undefined instead of a string. Not great! You can see why conform would do this, because of how limited FormData is, but still, it requires us to write workarounds and Zod schemas that anticipate Conform, FormData, and our form elements all working in this awkward way.
This is a kind of gripe about “reacting to form submission in Remix,” but it often distills down to useEffect. When you submit a form or a fetcher in Remix, how do you do something after the form is submitted, like closing a modal, showing a toast, resetting some input, or so on?
Now, one answer is that you should ask “how would you do this without JavaScript,” and sometimes there’s a pleasant answer to that thought experiment. But often there isn’t such a nice answer out there, and you really do kind of just want Remix to just tell you that the form has been submitted.
So, the semi-canonical way to react to a form being submitted is something like this code fragment, cribbed from nico.fyi:
function Example() {
  let fetcher = useFetcher<typeof action>();
  useEffect(() => {
    if (fetcher.state === 'idle' && fetcher.data?.ok) {
      // The form has been submitted and has returned
      // successfully
    }
  }, [fetcher]);
  return (
    <fetcher.Form method="post">
      {/* … */}
    </fetcher.Form>
  );
}
The truth is, I will do absolutely anything to avoid using useEffect. Keep it away from me. React now documents its usage as being to “synchronize a component with an external system”, which is almost explicitly a hint to try not to use it ever. Is closing a modal a form of synchronizing with another system? Probably not. The problems with useEffect are many:
- The dependency array conflates two kinds of values. You include fetcher because when fetcher changes, you want the effect to run: it is a trigger. But you also have things that you want to use in the function, like other functions or bits of state - things that you aren’t reacting to, but just using. You have to put both categories of things in the useEffect dependency array, or your eslint rule will yell at you.
- Which values are stable? Does a setState function stay the same? How about one that you get from a third-party library like Jotai? Does the useUtils hook of tRPC guarantee stable identities? Do the parts of a fetcher change? The common answer is “maybe,” and it’s constantly under-documented.
- You have to reason backward from fetcher.state and fetcher.data?.ok, or whatever you’re depending on, figure out where exactly the state you’re looking for lives, and hope that there aren’t unusual ways to enter that state. Let’s say that you’re showing a toast whenever fetcher.state === 'idle' && fetcher.data?.ok. Is there a way to show a toast twice? The docs for fetcher.data say “Once the data is set, it persists on the fetcher even through reloads and resubmissions (like calling fetcher.load() again after having already read the data),” so is it possible for fetcher.state to go from idle to loading to idle again and show another toast despite not performing the action again? It sure seems so!

The “ergonomics” of this API are rough: it’s too easy to write a useEffect hook that fires too few or too many times.
I have a lot of JavaScript experience and I understand object mutability and most of the equality semantics, but when I’m thinking about stable object references in a big system, with many of my variables coming from third-party libraries that don’t explicitly document their stability promises, it’s rough.
Some new APIs I’ve seen floating around:
Popover API - this has gotten a lot of buzz because JavaScript popover libraries are seen as overkill. I have mixed feelings about it: a lot of the code in the JavaScript popover implementations has to do with positioning logic - see floating-ui as a pretty authoritative implementation of that. The DOM API doesn’t help at all with positioning. Using CSS relative or absolute for positioning isn’t enough: flipping popovers around an anchor is really table stakes, and that still will require JavaScript. Plus, just like the <details> element, pretty much every time I use <details> for collapsing or popover for showing things, I eventually want to lazy-render the stuff that’s initially hidden, which brings us back to the scenario of having some JavaScript. It does look like the API will help with restoring focus and showing a backdrop. So: my guess is that this will be possibly useful, but it won’t eliminate the need for JavaScript in all but the most basic cases.
I’m kind of broadly skeptical of these new browser elements - for example, color inputs, date inputs, and progress elements seem to all fall just short of being useful for advanced applications, whether because they don’t support date ranges, or aren’t style-able enough. They just aren’t good enough to displace all of the JavaScript solutions.
As Chris points out, the popover API does provide a better starting point for accessibility. There are aria attributes for popups, but a lot of DIY implementations neglect to use them.
As Chase points out, there’s a separate proposed anchor positioning API that might solve the positioning problem! Very early on that one, though - no Firefox or Safari support.
CSS Custom Highlight API - now this one I can get excited about, for two reasons:

- Syntax highlighting currently means wrapping each token in a <span> or some other element. This really hurts the performance of pages that show source code. Using custom highlighting for code syntax highlighting would be amazing. The only downside is that it would necessarily be client-side only, so no server rendering and probably a flash of unhighlighted code.

Array.with is super cool. Over the long term, basically everything that was introduced in underscore (which was itself heavily inspired by Ruby’s standard library) is becoming part of core JavaScript, and I’m into it. I think one of the takeaways of the npm tiny-modules discourse is that if you have a too-small standard library, then you get a too-big third-party ecosystem with modules for silly things like detecting whether something is a number. There are just too many modules like that, and too many implementations of the same well-defined basic problems. The world is better now that we have String.padStart instead of several left-pad modules, including mine, sadly.
I am still using Obsidian pretty heavily. I use the Daily note plugin to create a daily note which anchors all the links to stuff I’m working on and doing in a particular day. I wanted better navigation between days without having to keep the calendar plugin open, and decided on this little snippet -
[[<% tp.date.now("YYYY-MM-DD", -1) %>|← Yesterday]] - [[<% tp.date.now("YYYY-MM-DD", +1) %>|Tomorrow →]]
This requires the Templater plugin installed and the “Trigger templater on new file creation” option enabled, and the daily note date syntax to be set to YYYY-MM-DD.
I still find Templater a bit intimidating. The only other place where I use it is for this, the microblog, which has this:
<% await tp.file.move("/Microblog/" + tp.file.creation_date("YYYY[-]MM[-]DD")) %>
Since this microblog is published using Jekyll, its filenames need the YYYY-MM-DD date prefix. This command renames a new microblog file according to that syntax, and then I add the slug and title for it manually.
Something I’ve idly wondered about in the geospatial industry is the impact of Esri staying private. Esri is big and important, but it’s hard to say precisely how big, because it’s private: maybe it does $1.3B in revenue; it has somewhere around 40% or 43% market share. I don’t know what market cap that would give it as a public company - that’s a question for some Wall Street intern.
But, given that Esri isn’t public, what public geospatial companies are there? It’s hard to say.
There are a few public satellite imaging companies like Planet Labs, which has had a hard time since going public via SPAC, and Blacksky and Terran (same story), all of which trade at well under $1B, with Blacksky and Terran closer to $200M, which is kind of a mid-lifecycle private startup valuation. Maxar used to be a public company with huge revenue and reach, but went private under a private equity firm in 2023 for $6.4B.
What about software? There are far fewer examples. The best example is TomTom, listed in Amsterdam as TOM2. HERE used to be a subsidiary of Nokia, but was bought by a consortium of automakers and is now owned by a lot of minority stakes from Intel, Mitsubishi, and others. Foursquare is still private, and so are smaller players like Mapbox and CARTO.
Of course Google, Apple, Microsoft, and Meta have their enormous geospatial departments, but they are heavily diversified oligopolies and none of them offer GIS-as-a-service or GIS-as-software-product in any real form - in fact, when they do tinker in geo, it tends to be as open-source work rather than productization, those efforts probably cross-funded by their advertising and data-mining cashflows.
Trimble seems like the closest I can find to a “pure-play” public GIS company - they at least list geospatial as one of their top products on the front page. Architecture/CAD focused companies Autodesk, Bentley, and Hexagon are also publicly-traded - maybe they count, too.
Thanks Martijn for noting that TomTom is public.
With sensors, microphones, and cameras, cars collect way more data than needed to operate the vehicle. They also share and sell that information to third parties, something many Americans don’t realize they’re opting into when they buy these cars.
The irony of privacy with cars is that private industry has an incredibly well-tuned privacy-invading, tracking machine, but license plates as a means of identification have completely failed, with the endorsement of the police.
Cars without license plates are everywhere in New York City, but the Adams administration has been slow to address the problem, allowing tens of thousands of drivers to skirt accountability on the road each year. The city received more than 51,000 complaints about cars parked illegally on the street with no license plates in 2023, according to 311 data analyzed by Streetsblog. From those complaints, the Department of Sanitation removed only 1,821 cars, or just 3.5 percent.
In major cities, it’s a routine occurrence to see ‘ghost cars’ with missing or fake identification. Government interventions into running red lights are opposed by the ACLU until they can sort out the privacy impact. Cars also grant people additional privacy rights against search and seizure that they wouldn’t have riding a bike or pushing a cart.
I am, for the record, pretty infuriated by drivers who obscure the identity of their cars. Cars in cities are dangerous weapons that disproportionately hurt the disadvantaged. We don’t allow people to carry guns with filed-off serial numbers, we shouldn’t allow people to drive cars with no license plates.
There are tradeoffs: increasing surveillance of cars via red lights and license plate enforcement will undoubtedly save pedestrians and also it will definitely push people into further precarity and cost them their jobs when they lose their licenses or can’t pay the ticket. In a perfect world, we’d have Finnish-style income-based ticket pricing and street infrastructure that prevented bad behavior from occurring in the first place. But in the meantime there are some hard tradeoffs between reducing traffic deaths and further surveilling and criminalizing drivers.
So: cars are a surveillance-laden panopticon for private industry, but governments don’t have enough data, or enough motivation, to prosecute even the most brazen cases of people driving illegally and anonymously. And private industry isn’t doing much to combat road deaths.
The good

- A sentryVitePlugin that works much better than Sentry’s Remix integration and is reassuringly general-purpose.

The bad

- Building with sourcemap: true now runs out of memory on Render, consistently. The servers that build Val Town from source are plenty powerful and this is pretty surprising.

In summary: I love the idea of Vite, but the fairly extensive pull request I wrote for the switch is going on the backburner. I’ll revisit when Remix makes Vite the default compiler, at which point hopefully Vite is also a little lighter on its memory usage and they’ve fixed the bug around source map error locations.
This is all a somewhat recent realization for me, so there may be angles I don’t see yet.
So I was reading Bailout Nation, Barry Ritholtz’s book about the 2008 financial crisis, and one of its points was that charitable trusts were one of the reasons why CDOs and other dangerous and exotic financial products were adopted with such vigor: the trusts were legally required to pay out a percentage of assets per year, like 5%, and previously they were able to do so using Treasury Bills or other forms of safe debt because interest rates were higher. But interest rates were dropped to encourage growth, which made it harder to find reliable medium-return instruments, which put the attention on things like CDOs that produced a similar ‘fixed’ return, but were backed by much scarier things.
And then I was sitting in a park with a friend talking about how this was pretty crazy, and that these trusts are required to pay at least that percentage. He said (he knows a lot more about this than I do) that the 5% number is an IRS stipulation, but it is often also a cap: so the trust can’t pay more than that. Which - he’s not very online, and this is paraphrasing - is an incredible example of control from the grave.
All in all, why do charitable trusts exist? Given the outcomes of:
The outcomes are simply worse when you use a trust, right? Using trusts gives charities less money, because they have to accept a trickle of yearly donations instead of just receiving the money. In exchange for withholding the full amount from charities, heirs receive the power to direct the trust to one cause or another, and the prestige of the foundation’s name and continuing wealth. This isn’t better for charities, right?
It seems thoroughly wrong that our legal system allows people to exert control long after they’re dead, and to specifically exert the control to withhold money from active uses. Charitable trusts are better understood as ways to mete out little bits of charity in a way designed to benefit wealthy families.
As far as I remember it - Against Charity, a whole anti-charity book that I read, didn’t lay out this argument, even though it is definitely a thing that people are thinking.
I am a pretty faithful neovim user. I’ve been using vim/neovim since around 2011, and tolerate all of its quirks and bugs because the core idea of modal editing is so magical, and it has staying power. VIM is 32 years old and will probably be around in 30 years: compare that to all of the editors that were in vogue before VS Code came around - TextMate, Atom, Brackets, Sublime Text. They’ve all faded in popularity; most of them are still around and used by some people, though Atom is officially done.
But using vim to develop TypeScript is painful at times. I’m relying on a bunch of Lua plugins that individuals have developed, because of a strange situation: Microsoft developed LSP, the standard for editors communicating with language tools, and also developed TypeScript, a language tool - and still hasn’t made the “TypeScript server” speak LSP. So these plugins have to work around little inconsistencies in how TypeScript works, and instead of using Neovim’s built-in LSP support, they ship TypeScript-specific tooling. Microsoft can’t get their wildly successful product to speak the wildly successful specification that they also created.
So, I dabble with Zed, which is a new editor from the same team as Atom, but this time written in Rust and run as a standalone startup instead of a unit inside of GitHub.
Now Zed is pretty good. The latency of VS Code, Atom, and other web-technology-based editors is very annoying to me; Zed is extremely fast, and its TypeScript integration is pretty good. Plus, it’s now open source. Still, I think its chances of being around in 30 years are probably lower than VS Code’s: Zed is built by a pre-revenue startup, and releasing complex code as open source is absolutely no guarantee of longevity - if the startup were ever to go defunct, there is no skilled community waiting to swoop in and maintain its work.
So, on days when neovim is annoying me, I use Zed. Today I’m using it and I’ll take some notes.
Zed’s vim mode is not half-assed. It’s pretty darn good. It’s not actually vim - it’s an emulation layer, and not a complete one, but enough that I’m pretty comfortable with navigation and editing right off the bat.
That said, it needs a little extra to be usable for me. I used the little snippet of extra keybindings in their docs to add a way to switch between panes in a more vim-like fashion:
[
  {
    "context": "Dock",
    "bindings": {
      "ctrl-w h": ["workspace::ActivatePaneInDirection", "Left"],
      "ctrl-w l": ["workspace::ActivatePaneInDirection", "Right"],
      "ctrl-w k": ["workspace::ActivatePaneInDirection", "Up"],
      "ctrl-w j": ["workspace::ActivatePaneInDirection", "Down"]
    }
  }
]
Still looking for a way to bind ctrl-j and ctrl-k to “previous tab” and “next tab”. The docs for keybindings are okay, but it’s hard to tell what’s possible: I don’t get autocomplete hints when I type workspace:: into a new entry in the keybindings file.
Another vim habit: to save the current file under a new name, I type :tabnew %, hit enter - which expands % into the current filename - and then edit the filename to produce a new name. This workflow doesn’t work in Zed: % doesn’t expand to the current filename. The only way I can find to save a new file is through the file dialog, which is super tedious. Without this tool, I’m left with less efficient ways to do things: to delete the currently-open file, I’d usually do ! rm %, but I can’t.

Overall, Zed’s TypeScript integration is great, and it’s a very fast, pretty well-polished experience. When I tested it a few months ago, I was encountering pretty frequent crashes, but I didn’t encounter any crashes or bugs today. Eventually I might switch, but this trial made me realize how custom-fit my neovim setup is, and an editor that makes me reach for a mouse occasionally is a non-starter. I’ll need to find Zed-equivalent replacements for some of my habits.
For the last few years, I’ve had Aranet 4 and AirGradient sensors in my apartment. They’re fairly expensive gadgets that I have no regrets purchasing – I love a little more awareness of things like temperature, humidity, and air quality, it’s ‘grounding’ in a cyberpunk way. But most people shouldn’t purchase them: the insights are not worth that much.
So here’s the main insight for free: use your stove’s exhaust fan. Gas stoves throw a ton of carbon dioxide into your living space and destroy the air quality.
This is assuming you’re using gas: get an electric range if you can, but if you live in a luxury New York apartment(1) like me, you don’t have a choice.
I used to run the exhaust fan only when there was smoke or smell from cooking. This was a foolish decision in hindsight: run the exhaust fan whenever you’re cooking anything. Your lungs and brain will thank you.
There are a lot of companies pitching new kinds of wallets, with lots of ads - Ridge is one of the most famous. An option that never seems to come up in internet listicles but I’ve sworn by for years is the Hawbuck wallet.
My personal preferences for this kind of thing are:
Hawbuck checks all the boxes. I used mighty wallets before this and got a year or two out of them, but Hawbuck wallets wear much, much slower than that. Dyneema is a pretty magical material.
I’m happy it’s also not leather. I still buy and use leather things, despite eating vegan, simply because the vegan alternatives for things like shoes and belts tend to be harder to get and they don’t last as long.
(this is not “sponsored” or anything)
We’ve been using Linear for a month or two at Val Town, and I think it has ‘stuck’ and we’ll keep using it. Here are some notes about it:
Its editor supports some Markdown syntax - putting _ around a string will make it italicized - but not all of it: Markdown links don’t become links, unlike Notion’s Markdown-ish editor. I get that it’s built to be friendly for both managers and engineers, thus creating an interface between the people who know Markdown and the ones who don’t. But it’s an awkward middle ground that forces me to use a mouse more than I’d like.

My last big project at Mapbox was working on Mapbox Studio. We launched it in 2015.
For the web stack, we considered a few other options. We had used d3 to build iD, which worked out great, but we were practically the only people on the internet using d3 to build HTML UIs - I wrote about this in “D3 for HTML” in 2013. The predecessor to Mapbox Studio, TileMill, was written with Backbone, which was cool but is almost never used today. So, anyway, we decided on React when we started it, which was around 2014.
So it’s been a decade. Mapbox Studio is still around, still on React, albeit many refactors later. If I were to build something new like it, I’d be tempted to use Svelte or Solid, but React would definitely be in the running.
Which is to say: wow, this is not churn. What is the opposite of churn? Starting a codebase and still having the same tech be dominant ten years later? Stability? For all of the words spilled about trend-chasing, and the people talking about how one of these days React will be out of style - well… it’s been in style for longer than a two-term president.
When I make tech decisions, the rubric is usually that if it lasts three years, then it’s a roaring success. I know, I’d love for the world to be different, but this is what the industry is and a lot of decisions don’t last a year. A decision that seems reasonable ten years later? Pretty good.
Anyway, maybe next year or the year after there’ll be a real React successor. I welcome it. React has had a good, very long, run.
At Val Town, we recently introduced a command-k menu, that “omni” menu that sites have. It’s pretty neat. One thing that I thought would be cool to include in it would be search results from our documentation site, which is authored using Astro Starlight. Our main application is React, so how do we wire these things together?
It’s totally undocumented and this is basically a big hack but it works great:
Starlight uses Pagefind for its built-in search engine, which is a separate, very impressive, mildly documented open source project. So we just load the pagefind.js file that Starlight bakes, using an ES import, across domains, and then use the Pagefind API. So we’re loading both the search algorithms and the content straight from the documentation website.

Here’s an illustrative component, lightly adapted from our codebase. This assumes that you’ve got your search query passed to the component as search.
import { Command } from "cmdk";
import { useDebounce } from "app/hooks/useDebounce";
import { useQuery } from "@tanstack/react-query";

function DocsSearch({ search }: { search: string }) {
  const debouncedSearch = useDebounce(search, 100);
  // Use react-query to dynamically and lazily load the module
  // from the different host
  const pf = useQuery(
    ["page-search-module"],
    // @ts-expect-error
    () => import("https://docs.val.town/pagefind/pagefind.js")
  );
  // Use react-query again to actually run a search query
  const results = useQuery(
    ["page-search", debouncedSearch],
    async () => {
      const { results }: { results: Result[] } =
        await pf.data.search(debouncedSearch);
      return Promise.all(
        results.slice(0, 5).map((r) => {
          return r.data();
        })
      );
    },
    {
      enabled: !!(pf.isSuccess && pf.data),
    }
  );
  if (!pf.isSuccess || !results.isSuccess || !results.data.length) {
    return null;
  }
  return results.data.map((res) => {
    return (
      <Command.Item
        forceMount
        key={res.url}
        value={res.url}
        onSelect={() => {
          window.open(res.url);
        }}
      >
        <a
          href={res.url}
          onClick={(e) => {
            e.preventDefault();
          }}
        >
          <div dangerouslySetInnerHTML={{ __html: res.excerpt }} />
        </a>
      </Command.Item>
    );
  });
}
Pretty neat, right? This isn’t documented anywhere for Astro Starlight, because it’s definitely relying on some implementation details. But the same technique would presumably work just as well in any other web framework, not just React.
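The useDebounce hook imported above is a small app-local helper that isn’t shown here. The underlying idea is ordinary debouncing; here’s a framework-free sketch (the function name and signature are mine, not from our codebase):

```typescript
// Generic debounce: the wrapped function only fires after `ms`
// milliseconds have passed without another call. A React hook like
// useDebounce applies the same idea to a changing state value.
function debounce<Args extends unknown[]>(
  fn: (...args: Args) => void,
  ms: number
): (...args: Args) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Args) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), ms);
  };
}
```

Debouncing the query keeps the component from hammering the Pagefind index on every keystroke.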
I see the S&P 500 referenced pretty frequently as a vanilla index for people investing. This isn’t totally wrong, which is why this post is short. But, if you have the goal of just “investing in the market,” there’s a better option for doing that: a total market index. For Vanguard, instead of VOO, it’d be VTI. For Schwab, it’s SCHB. Virtually every provider has an option. For background, here’s Jack Bogle discussing the topic.
The S&P 500 is not a quantitative index of the top 500 companies: it has both selection criteria and a committee that takes a role in selection. In contrast, total market indices are typically fully passive and quantitative, and they own more than 500 companies.
So if you want to “own the market,” you can just do that. Not investment advice.
An evergreen topic is something like “why are websites so big and slow and hard and video games are so amazing and fast?” I’ve thought about it more than I’d like. Anyway, here are some reasons:
Someone asked over email about why I stopped building Placemark as a SaaS and made it an open source project. I have no qualms with sharing the answers publicly, so here they are:
I stopped Placemark because it didn’t generate enough revenue and I lost faith that there was a product that could be implemented simply enough, for enough people, who wanted to pay enough money, for it to be worthwhile. There are many mapping applications out there and I think that collectively there are some big problems in that field:
I do want to emphasize that I knew most of this stuff going into it, and it’s, like, good that geo has a lot of open source projects - I don’t have any ill feelings toward really any of the players in the field. It’s a hard field!
As I’ve said a bunch of times, the biggest problem with competition in the world of geospatial companies is that there aren’t many big winners. We would all have a way different perspective on geospatial startups if even one of them had a successful IPO in the last decade or two, or even if a geospatial startup entered the mainstream in the same way as a startup like Notion or Figma did. Esri being a private company is definitely part of this - they’re enormous, but nobody outside of the industry talks about them because there’s no stock and little transparency into their business.
Also, frankly, it was a nerve-wracking experience building a bootstrapped startup, in an expensive city, with no real in-person community of people doing similar kinds of stuff. The emotional ups and downs were really, really hard: every time someone signed up and cancelled, or found a bug, or the servers went down while I was on vacation and I had to plug back in.
You have to be really, really motivated to run a startup, and you have to have a very specific attitude. I’ve learned a lot about that attitude - about trying to get positivity and resilience, after ending Placemark. It comes naturally to some people, who are just inherently optimistic, but not all of us are like that.
Val Town switched to Remix as a web framework a little over a year ago. Here are some reflections:
In summary: Remix mostly gets out of the way and lets us do our work. It’s not a silver bullet, and I wish that it was more obvious how to do complex actions, and that it had a better solution to the FormData-boilerplate-typed-actions quandary. But overall I’m really happy with the decision.
Things that have worked to get me back on a running regimen and might work for you:
My friend Forest has been making some good thoughts about open source and incentives. Coincidentally, this month saw a new wave of open source spam because of the tea.xyz project, which encouraged people to try and claim ‘ownership’ of existing open source projects, to get crypto tokens.
The creator of tea.xyz, Max Howell, originally created Homebrew, the package manager for macOS which I use every day. He has put in the hours and days, and been on the other side of the most entitled users around. So I give him a lot of leeway with tea.xyz’s stumbles, even though they’re big stumbles.
Anyway, I think my idea is that murky incentives are kind of good. The incentives for contributing to open source right now, as I do often, are so hard to pin down. Sure, it’s improving the ecosystem, which satisfies my deep sense of duty. It’s maintaining my reputation and network, which is both social and career value. Contributing to open source is a way to learn, too: I’ve had one mentor early in my career, but besides that I’ve learned the most from people I barely know.
The fact that the incentives behind open source are so convoluted is what makes them sustainable and so hard to exploit. The web is an adversarial medium, is what I tell myself pretty often: every reward structure and application architecture is eventually abused, and that abuse will destroy everything if unchecked: whether it’s SEO spam, or trolling, or disinformation, no system maintains its own steady state without intentional intervention and design.
To bring it back around: tea.xyz created a simple, automatic incentive structure where there was previously a complex, intermediated one. And, like every crypto project that has tried that before, it appealed to scammers and produced the opposite of a community benefit.
If I got paid $5 for every upstream contribution to an open source project, I’d make a little money. It would be an additional benefit. But I’m afraid that the simplicity of that deal - the expectations that it would create, the new community participants that it would invite - would make me less likely to contribute, not more.
This came up for Val Town - we implemented code folding in our default editor which uses CodeMirror, but wanted it to work with JSX elements, not just functions and control flow statements. It’s not enough to justify a module of its own because CodeMirror’s API is unbelievably well-designed:
import { tsxLanguage } from "@codemirror/lang-javascript";
import { foldInside, foldNodeProp } from "@codemirror/language";

/** tsxLanguage, with code folding for JSX elements */
tsxLanguage.configure({
  props: [
    foldNodeProp.add({
      JSXElement: foldInside,
    }),
  ],
});
Then you can plug that into a LanguageSupport instance and use it. Amazing.
I’ve been writing some CSS. My satisfaction with CSS ebbs and flows: sometimes I’m happy with its new features like :has, but on the other hand, CSS is one of the areas where you really often get bitten by browser incompatibilities. I remember the old days of JavaScript, in which a stray trailing comma in an array would break Internet Explorer: we’re mostly past that now. But in CSS, chaos still reigns - mostly in Safari. Anyway, some notes as I go:
<details> elements

I’ve been using details more instead of Radix Collapsible, for performance reasons. Using the platform! Feels nice, except for CSS. That silly caret icon shows up in Safari and not in Chrome, and breaks layout there. I thought the solution would involve list-style-type: none or appearance, but no, it’s something worse:
For Tailwind:
[&::-webkit-details-marker]:hidden
In CSS:
details::-webkit-details-marker {
  display: none;
}
I’ve hit this bug so many times in Val Town. Anything that could be user-supplied and might bust out of a flex layout should absolutely have min-width: 0, so that it can shrink.
There’s a variant of this issue in grids, and I’ve been defaulting to using minmax(0, 1fr) instead of 1fr to make sure that grid columns will shrink when they should.
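A sketch of both cases - the class names here are hypothetical, not from our codebase:

```css
/* Flex: without min-width: 0, a long unbroken string in .cell
   forces the whole row wider than its container, because flex
   items default to min-width: auto. */
.row {
  display: flex;
}
.row .cell {
  min-width: 0;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}

/* Grid: minmax(0, 1fr) lets the column shrink below its content
   size; a bare 1fr (which implies an auto minimum) will not. */
.grid {
  display: grid;
  grid-template-columns: 12rem minmax(0, 1fr);
}
```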
I’ve been using just for a lot of my projects. It helps a bunch with the context-switching: I can open most project directories and run just dev, and it’ll boot up the server that I need. For example, the blog’s justfile has:
dev:
bundle exec jekyll serve --watch --live --future
I used to use Makefiles a bit, but there’s a ton of tricky complexity in them, and they really weren’t made as cheat-sheets to run commands - Makefiles and make are really intended to build (make) programs, like run the C compiler. Just scripts are a lot easier to write.
A brief and silly life-hack: headlamps are better flashlights. Most of the time when you are using a flashlight, you need to use your hands too. Headlamps solve that problem. They’re bright enough for most purposes and are usually smaller than flashlights too. There are very few reasons to get a flashlight. Just get a headlamp.
With all love to the maintainers, who are good people and are to some extent bound by their obligation to maintain compatibility, I just have to put it out there: if you have a new JavaScript/TypeScript project and you need to parse or render Markdown, why are you using marked?
In my mind, there are a few high priorities for Markdown parsers:
So, yeah. Marked is pretty performant, but it’s not secure and it doesn’t follow a standard - we can do better!
Use instead:
marked is really popular, and it used to be the best option. But there are better options now: use them!
I’ve been trying to preserve as much of Placemark as I can now that it’s open-source. This has been a mixed experience: some products were really easy to move away from, like Northwest and Earth Class Mail. Webflow was harder to quit. But replayweb.page came to the rescue, thanks to Henry Wilkinson at Webrecorder.
Now placemark.io is archived - nearly complete and at feature-parity - but it costs next to nothing to maintain. The magic is the wacz format, which is a specific flavor of ZIP file that is readable with range requests. From the geospatial world, I’ve been thinking about range requests for a long time: they’re the special sauce in Protomaps and Cloud Optimized GeoTIFFs. They let you store big files cheaply on object storage like Amazon S3 or Cloudflare R2, while letting browsers read those files incrementally - saving the browser time & memory, and saving you transfer bandwidth & money.
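Range requests are plain HTTP: the client asks for a byte slice and a supporting server (S3, R2, and most static hosts) answers 206 Partial Content. A minimal TypeScript sketch of the idea - the function names and URL handling here are mine, not from any of these libraries:

```typescript
// Build the header value for a byte-range request, e.g. "bytes=0-1023".
function rangeHeader(start: number, end: number): string {
  return `bytes=${start}-${end}`;
}

// Fetch only a slice of a large remote file. The server must support
// range requests; it signals success with status 206 Partial Content.
async function readSlice(
  url: string,
  start: number,
  end: number
): Promise<ArrayBuffer> {
  const res = await fetch(url, {
    headers: { Range: rangeHeader(start, end) },
  });
  if (res.status !== 206) throw new Error(`expected 206, got ${res.status}`);
  return res.arrayBuffer();
}
```

Formats like wacz, PMTiles, and Cloud Optimized GeoTIFF are designed so that a reader can find what it needs with a handful of these small slices instead of downloading the whole file.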
So, the placemark.io web archive is on R2, the website is now on Cloudflare Pages, and the archive is little more than this one custom element:
<replay-web-page source="https://archive.placemark.io/placemark%202024-01-19.wacz" url="https://www.placemark.io/"></replay-web-page>
This is embedding replayweb.page. Cool stuff!
God, it’s another post about Web Components and stuff, who am I to write this, who are you to read it
Carlana Johnson’s “Alternate Futures for Web Components” had me nodding all the way. There’s just this assumption that now that React is potentially on its way out (after a decade-long reign! not bad), the natural answer is Web Components. And I just don’t get it. I don’t get it. I think I’m a pretty open-minded guy, and I’m more than happy to test everything out, from Rails to Svelte to htmx to Elm. It’s all cool and good.
But “the problems I want to solve” and “the problems that Web Components solve” are like two distinct circles. What do they do for me? Binding JavaScript-driven behavior to elements automatically thanks to customElements? Sure - but that isn’t rocket science: you can get nice declarative behavior with htmx or hyperscript or alpine or stimulus. Isolating styles with Shadow DOM is super useful for embed-like components, but not for parts of an application where you want to have shared style elements. I shouldn’t sloppily restate the article: just read Carlana.
Anyway, I just don’t get it. And I find it so suspicious that everyone points to Web Components as a clear path forward, to create beautiful, fast applications, and yet… where are those applications? Sure, there’s “Photoshop on the Web”, but that’s surely a small slice of even Photoshop’s market, which is niche in itself. GitHub used to use Web Components but their new UIs are using React.
So where is it? Why hasn’t Netflix rebuilt itself on Web Components and boosted their user numbers by dumping the heavy framework? Why are interactive visualizations on the New York Times built with Svelte and not Web Components? Where’s the juice? If you have been using Web Components and winning, day after day, why not write about that and spread the word?
People don’t just use Rails because dhh is a convincing writer: they use it because Basecamp was a spectacular web application, and so was Ta-Da List, and so are Instacart, GitHub, and Shopify. They don’t just use React because it’s from Facebook and some brain-virus took them over: they use it because they’ve used Twitter and GitHub and Reddit and Mastodon and countless other sites that use React to create amazing interfaces.
Of course there’s hype and bullying and all the other social dynamics. React fans have had some Spectacularly Bad takes, and, boy, the Web Components crowd have as well. When I write a tender post about complexity and it gets summed up as “going to bat for React” and characterized in bizarre overstatement, I feel like the advocates are working hard to alienate their potential allies. We are supposed to get people to believe in our ideas, not just try to get them to lose faith in their own ideas!
I don’t know. I want to believe. I always want to believe. I want to use an application that genuinely rocks, and to find out that it’s WC all the way, and to look at the GitHub repo and think this is it, this is the right way to build applications. I want to be excited to use this technology because I see what’s possible using it. When is that going to happen?
“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.” - Antoine de Saint Exupéry
How do you set a Cache-Control header on an object in R2 when you’re using rclone to upload?
I burned a lot of time figuring this out. There are a lot of options that look like they’ll do it, but here it is:
--header-upload='Cache-Control: max-age=604800,public,immutable'
That’s the flag you want to use with rclone copy to set a Cache-Control header with Cloudflare R2. Whew.
Reason: sure, you can set cache rules at like 5 levels of the Cloudflare stack - Cache Rules, etc. But it’s really hard to get the right caching behavior for static JavaScript bundles, which is:
This does it. Phew.
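Putting it together, a full invocation might look like this - the remote name and bucket path are placeholders for your own rclone config, not real values from this setup:

```shell
# Upload a build directory to R2, stamping every object with a
# long-lived, immutable Cache-Control header. "r2:assets/static"
# is a hypothetical remote:bucket/path.
rclone copy ./dist r2:assets/static \
  --header-upload='Cache-Control: max-age=604800,public,immutable'
```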
This is a Devtools feature that you will only need once in a while, but it is a life-saver.
Some frontend libraries, like CodeMirror, have UIs like autocompletion, tools, or popovers, that are triggered by typing text or hovering your mouse cursor, and disappear when that interaction stops. This can make them extremely hard to debug: if you’re trying to design the UI of the CodeMirror autocomplete widget, every time that you right-click on the menu to click “Inspect”, or you click away from the page to use the Chrome Devtools, it disappears.
Learn to love Emulate a focused page. It’s under the Rendering tab in the second row of tabs in Devtools - next to things like Console, Issues, Quick source, Animation.
Click the Rendering tab, find the Emulate a focused page checkbox, and check it. This will keep Chrome from firing blur events and letting CodeMirror or your given library from knowing that you’ve clicked out of the page. And now you can debug! Phew.
I’ve been thinking about this. Placemark is going open source in 10 days and I’m probably not founding another geo startup anytime soon. I’d love to found another bootstrapped startup eventually, but geospatial is hard.
Anyway, geospatial data is big, which does not combine well with real-time collaboration. Products end up either sacrificing some data-scalability (like Placemark) or sacrificing some editability by making some layers read-only “base layers” and focusing more on visualization instead. So web tools end up being mostly data-consumers, and most of the big work - like buffering huge polygons or processing raster GeoTIFFs - stays in QGIS, Esri, or Python scripts.
All of the new realtime-web-application stuff and the CRDT stuff is amazing - but I really cannot emphasize enough how geospatial data is a harder problem than text editing or drawing programs. The default assumption of GIS users is that it should be possible to upload and edit a 2 gigabyte file containing vector information. And unlike spreadsheets or lists or many other UIs, it’s also expected that we should be able to see all the data at once by zooming out: you can’t just window a subset of the data. GIS users are accustomed to seeing progress bars - progress bars are fine. But if you throw GIS data into most realtime systems, the system breaks.
One way of slicing this problem is to pre-process the data into a tiled format. Then you can map-reduce, or only do data transformation or editing on a subset of the data as a ‘preview’. However, this doesn’t work with all datasets and it tends to make assumptions about your data model.
I was thinking, if I were to do it again, and I won’t, but if I did:
I’d probably use driftingin.space or similar to run a session backend and use SQLite with litestream to load the dataset into the backend and stream out changes. So, when you click on a “map” to open it, we boot up a server and download database files from S3 or Cloudflare R2. That server runs for as long as you’re editing the map, it makes changes to its local in-memory database, and then streams those out to S3 using litestream. When you close the tab, the server shuts down.
The editing UI - the map - would be fully server-rendered and I’d build just enough client-side interaction to make interactions like point-dragging feel native. But the client, in general, would never download the full dataset. So, ideally the server runs WebGL or perhaps everything involved in WebGL except for the final rendering step - it would quickly generate tiles, even triangulate them, apply styles and remove data, so that it can send as few bytes as possible.
This would have the tradeoff that loading a map would take a while - maybe it’d take 10 seconds or more to load a map. But once you had, you could do geospatial operations really quickly because they’re in memory on the server. It’s pretty similar to Figma’s system, but with the exception that the client would be a lot lighter and the server would be heavier.
It would also have the tradeoff of not working offline, even temporarily. I unfortunately don’t see ‘offline-first’ becoming a real thing for a lot of use-cases for a long time: it’s too niche a requirement, and it is incredibly difficult to implement in a way that is fast, consistent, and not too complex.
Wrote and released codemirror-continue today. When you’re writing a block comment in TypeScript and you hit “Enter”, this intelligently adds a * on the next line.
Most likely, your good editor (Neovim, VS Code) already has this behavior and you miss it in CodeMirror. So I wrote an extension that adds that behavior. Hooray!
Every database ID scheme that I’ve used has had pretty serious downsides, and I wish there was a better option.
The perfect ID would:
Almost nothing checks all these boxes:
So for the time being, what are we to do? I don’t have a good answer. Cross our fingers and wait for uuid v7.
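To illustrate why a timestamp-prefixed scheme like UUID v7 sorts well: putting the creation time in the high bits makes lexicographic order approximate chronological order. A toy sketch of the idea - this is illustrative only, not the exact v7 bit layout, and sketchV7 is a made-up name:

```typescript
// Toy timestamp-prefixed ID: 48 bits of millisecond timestamp up
// front, random hex behind. Because the hex timestamp is zero-padded
// to a fixed width, string order roughly matches creation order.
function sketchV7(now: number = Date.now()): string {
  const ts = now.toString(16).padStart(12, "0"); // 48-bit ms timestamp
  const rand = () =>
    Math.floor(Math.random() * 0x10000)
      .toString(16)
      .padStart(4, "0");
  return `${ts.slice(0, 8)}-${ts.slice(8)}-7${rand().slice(1)}-${rand()}-${rand()}${rand()}${rand()}`;
}
```

The real spec nails down version and variant bits and some monotonicity guarantees, but the sortability property comes entirely from that fixed-width timestamp prefix.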
I am, relative to many, a sort of React apologist. Even though I’ve written at length about how it’s not good for every problem, I think React is a good solution to many problems, and I think the React team has good intentions. Even though React has been overused and has flaws, I don’t think the team is evil. Bask in my equanimity, etc.
However,
The state of React releases right now is bad. There are two major competing React frameworks: Remix, funded by Shopify, and Next.js, funded by Vercel. Vercel has hired many members of the React team.
It has been one and a half years since the last React release, far longer than any previous release took.
Next.js is heavily using and promoting features that are in the next release. They vendor a version of the next release of React and use some trickery to make it seem like you’re using React 18.2.0 when in fact you’re using a canary release. These “canary releases” are used for incremental upgrades at Meta, too, where other React core developers work.
On the other hand, the non-Vercel and non-Facebook ecosystems don’t have these advantages. Remix suffers from an error in React that is fixed, but not released. People trying to use React 18.3.0 canary releases will have to use npm install --force or overrides in their package.json files to tie it all together.
This strategy, of using Canary releases for a year and a half and then doing some big-bang upgrade to React 19.0.0: I don’t like it. Sure, there are workarounds to use “current” Canary React. But they’re hacks, and the Canary releases are not stable and can quietly include breaking changes. All in all, it has the outward appearance of Vercel having bought an unfair year headstart by bringing part of the React team in-house.
An evergreen blog topic is “writing my own blogging engine because the ones out there are too complicated.” With the risk of stating the obvious:
Writing a blog engine, with one customer, yourself, is the most luxuriously simple web application possible. Complexity lies in:
Which is all to say, when I read some rant about how React or Svelte or XYZ is complicated and then I see the author builds marketing websites or blogs or is a Java programmer who tinkers with websites but hates it – it all stinks of narrow-mindedness. Or saying that even Jekyll is complicated, so they want to build their own thing. And, go ahead - build your own static site generator, do your own thing. But the obvious reason why things are complicated isn’t because people like complexity - it’s because things like Jekyll have users with different needs.
Yes: JavaScript frameworks are overkill for many shopping websites. It’s definitely overkill for blogs and marketing sites. It’s misused, just like every technology is misused. But not being exposed to the problems that it solves does not mean that those problems don’t exist.
HTML-maximalists arguing that it’s the best way probably haven’t worked on a hard enough problem to notice how insufficient SELECT boxes are. Or how the dialog element just doesn’t help much. Complaining about ARIA accessibility based on out-of-date notions when the accessibility of modern UI libraries is nothing short of fantastic. And what about dealing with complex state? Keybindings with different behaviors based on UI state. Actions that re-render parts of the page - if you update “units” from miles to meters and you want the map scale, the title element, and the measuring tools to all update seamlessly. HTML has no native solution for client-side state management, and some applications genuinely require it.
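To make the “units” example concrete, here’s a tiny observable store in TypeScript - a toy sketch of what client-side state management does for you, not any particular library:

```typescript
// A miniature observable store: setting a value notifies every
// subscriber. This is the kind of cross-cutting update ("units"
// changed, so the scale bar, title, and measuring tools all
// re-render) that HTML alone has no answer for.
type Listener<T> = (value: T) => void;

function createStore<T>(initial: T) {
  let value = initial;
  const listeners = new Set<Listener<T>>();
  return {
    get: () => value,
    set(next: T) {
      value = next;
      for (const l of listeners) l(next);
    },
    subscribe(l: Listener<T>) {
      listeners.add(l);
      return () => listeners.delete(l); // unsubscribe handle
    },
  };
}
```

Frameworks layer scheduling, batching, and component lifecycles on top, but the core problem - one change fanning out to many views - is exactly this.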
And my blog is an example of the luxury of simplicity – it’s incredibly simple! I designed the problem to make it simple so that the solution could be simple. If I needed to edit blog posts on my phone, it’d be more complicated. Or if there was a site search function. Those are normal things I’d need to do if I had a customer. And if I had big enough requirements, I’d probably use more advanced technology, because the problem would be different: I wouldn’t indignantly insist on still using some particular technology.
Not everyone, or every project, has the luxury of simplicity. If you think that it’s possible to solve complicated problems with simpler tools, go for it: maybe there’s incidental complexity to solve. If you can solve it in a convincing way, the web developers will love you and you might hit the jackpot and be able to live off of GitHub sponsors alone.
See also a new start for the web, where I wrote about “the document web” versus “the application web.”
I’ve been working on oldfashioned.tech, which is sort of a testbed to learn about htmx and the other paths: vanilla CSS instead of Tailwind, server-rendering for as much as possible.
How are tooltips and modals supposed to work outside of the framework world? What the Web Platform provides is insufficient:

- The `title` attribute is unstyled and only shows up after hovering for a long time over an element.
- The `dialog` element is bizarrely unusable without JavaScript and basically doesn't give you much: great libraries like Radix don't use it and use `role="dialog"` instead.

So, what to do? There's:
I think it’s kind of a bummer that there just aren’t clear options for this kind of thing.
I wrote about this pattern years ago, and wrote an update, and then classes became broadly available in JavaScript. I was kind of skeptical of class syntax when it came out, but now there really isn't any reason to use any other kind of "class" style than the ES6 syntax. The module pattern used to have a few advantages:

- It gave you truly private state, hidden inside the closure.
- It was always clear what `this` was referring to - before arrow functions this was a really confusing question.

Well, now that classes can use arrow functions to simplify the meaning of `this`, and private properties are supported everywhere, we can basically declare the practice of using closures as pseudo-classes officially legacy.
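As a minimal sketch of the two styles side by side (the counter is a hypothetical example, not from the original posts): the closure-based module pattern next to its ES6 class equivalent, which gets real privacy from a `#` field and a stable `this` from arrow-function properties.

```javascript
// Legacy module pattern: privacy comes from the closure.
function makeCounter() {
  let count = 0; // "private" only because it's closed over
  return {
    increment: () => { count += 1; },
    value: () => count,
  };
}

// ES6 class: a real private field, plus arrow-function
// properties so `this` stays bound even when detached.
class Counter {
  #count = 0;
  increment = () => { this.#count += 1; };
  value = () => this.#count;
}

const modular = makeCounter();
modular.increment();

const classy = new Counter();
const { increment } = classy; // detached - `this` still works
increment();

console.log(modular.value(), classy.value()); // 1 1
```

Note that destructuring `increment` off the class instance still works, which is exactly the `this` headache that arrow-function properties solve.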
Let’s say you’re running some web application and suddenly you hit a bug in one of your dependencies. It’s all deployed, lots of people are seeing the downtime, but you can’t just push an update because the bug is in something you’ve installed from npm.
Remember patch-package. It's an npm module that you can install, in which you:

- fix the bug directly in the package's source under `node_modules`
- run `npx patch-package some-package`
- add `"postinstall": "patch-package"` to your scripts

And from now on when `npm install` runs, it tweaks and fixes the package with a bug. Obviously submit a pull request and fix the problem at its source later, but in times of desperation, this is a way to fix the problem in a few minutes rather than an hour. This is from experience… experience from earlier today.
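For reference, a rough sketch of what the relevant parts of `package.json` look like after those steps (the version number here is illustrative, not prescribed by the post):

```json
{
  "scripts": {
    "postinstall": "patch-package"
  },
  "devDependencies": {
    "patch-package": "^8.0.0"
  }
}
```

patch-package also writes the diff itself into a `patches/` folder, which you commit so that teammates and CI get the same fix.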
I’ve been moving things for Placemark’s shutdown as a company and noting some of the exit experiences:
The hot new thing in CSS is `:has()` and Firefox finally supports it, starting today - so the compatibility table is pretty decent (89% at this writing). I already used `has()` in a previous post - that Strava CSS hack - but I'm finding it useful in so many places.

For example, in Val Town we have some UI that shows up on hover and disappears when you hover out - but we also want it to stay visible if you've opened a menu within the UI. The previous solution required using React state and passing it through components. The new solution is so much simpler - it just takes advantage of Radix's excellent attention to accessibility - so if something in the UI has `aria-expanded="true"`, we show the parent element:
.valtown-pin-visible:has([aria-expanded="true"]) {
opacity: 1;
}
Some Postgres schema tips:

- Keep user preferences as columns on the `users` table. Don't get clever with a json column or hstore. When you introduce new preferences, the power of types and default values is worth the headache of managing columns.
- Store email addresses as `citext`, case-insensitive text. But don't count on that to prevent people from signing up multiple times - there are many ways to do that.
- Use `TEXT`. The char-limited versions like `varchar` aren't any faster or better on Postgres.
- Don't use `json` or `jsonb`, ever. Having a schema is so useful. I have regretted every time that I used these things.
- As much `NOT NULL` as possible. Basically the same as "don't use `json`" - if you don't enforce null checks at the database level, null values will probably sneak in eventually.
- Use an `enum` instead of a `boolean`. There is usually a third value beyond true & false that you'll realize you need.
- Add a `createdAt` column that defaults to `NOW()`. Chances are, you'll need it eventually.

I love Strava, and a lot of my friends do too. And some of them do most of their workouts with Peloton, Zwift, and other "integrations." It's great for them, but the activities just look like ads for Peloton and don't have any of the things that I like about Strava's community.
Strava doesn’t provide the option to hide these, so I wrote a user style that I use with Stylus - also published to userstyles.org. This hides Peloton workouts.
@-moz-document url-prefix("https://www.strava.com/dashboard") {
.feed-ui > div:has([data-testid="partner_tag"]) {
display: none;
}
}
This microblog, by the way… I felt like real blog posts on macwright.com were becoming too “official” feeling to post little notes-to-self and tech tricks and whatnot.
The setup is intentionally pretty boring. I have been using Obsidian for notetaking, and I store microblog posts in a folder in Obsidian called `Microblog`. The blog posts have YAML frontmatter that's compatible with Jekyll, so I can just show them in my existing, boring site, and deploy them the same way as I do the site - with Netlify.
I use the Templater plugin, which is powerful but unintuitive, to create new Microblog posts; the key line is
<% await tp.file.move("/Microblog/" + tp.file.creation_date("YYYY[-]MM[-]DD")) %>
This moves a newly-created Template file to the Microblog directory with a Jekyll-friendly date prefix. Then I just have a command in the main macwright.com repo that copies over the folder:
microblog:
rm -f _posts/micro/*.md
cp ~/obsidian/Documents/Microblog/* _posts/micro
This is using Just, which I use as a simpler alternative to Makefiles, but… it's just, you know, a `cp` command. Could be done with anything.
So, anyway - I considered Obsidian Publish but I don’t want to build a digital garden. I have indulged in some of the fun linking-stuff-to-stuff patterns that Obsidian-heads love, but ultimately I think it’s usually pointless for me.
I started another “awesome” GitHub repo (a list of resources), for CodeMirror, called awesome-codemirror. CodeMirror has a community page but I wanted a freewheeling easy-to-contribute-to alternative. Who knows if it’ll grow to the size of awesome-geojson - 2.1k stars as of this writing!
`ViewPlugin.fromClass` only allows the class constructor to take a single argument with the CodeMirror view - so how do you pass configuration to a plugin? You use a Facet. There's a great example in JupyterLab. Like everything in CodeMirror, this lets you be super flexible with how configuration works - it is designed with multiple reconfigurations in mind.
Example defining the facet:
export const suggestionConfigFacet = Facet.define<
{ acceptOnClick: boolean },
{ acceptOnClick: boolean }
>({
combine(value) {
return { acceptOnClick: !!value.at(-1)?.acceptOnClick };
},
});
Initializing the facet:
suggestionConfigFacet.of({ acceptOnClick: true });
Reading the facet:
const config = view.state.facet(suggestionConfigFacet);
I heavily use the `~/tmp` directory of my computer and have the habit of moving to it, creating a new temporary directory, moving into that, and creating a short-lived project. Finally I automated that and have been actually using the automation:

I wrote this tiny zsh function called `tt` that creates a new directory in `~/tmp/` and cd's to it:
tt() {
RANDOM_DIR=$(date +%Y%m%d%H%M%S)-$RANDOM
mkdir -p ~/tmp/"$RANDOM_DIR"
cd ~/tmp/"$RANDOM_DIR" || return
}
This goes in my `.zshrc`.
This took way too long to figure out.
The `File` polyfill in Remix has the fresh new `.stream()` and `.arrayBuffer()` methods, which aren't mentioned on MDN. So, assuming you're in an action and the argument is `args`, you can get the body like:
const body = await unstable_parseMultipartFormData(
args.request,
unstable_createMemoryUploadHandler()
);
Then, get the file and get its text with the `.text()` method. The useful methods are the ones inherited from Blob.
const file = body.get("envfile");
if (file instanceof File) {
const text = await file.text();
console.log(text);
}
And you’re done! I wish this didn’t take me so long.