
novalis

Cosmopolitan. Brooklyn-based indie game developer best known for Semantle.
@NovalisDMT on Twitter

novalis honked back 19 Apr 2026 01:21 +0000
in reply to: https://fedi.copyleft.org/users/bkuhn/statuses/116427131005370154

@bkuhn @zacchiro @cwebber @ossguy @richardfontana Actually, I just thought of one proprietary software company that would be much happier not to have LLMs around: Salesforce. Nobody's going to buy their overpriced shit when the alternative is to vibecode something that works exactly with your business process and that you can change, yourself, any time you want at the cost of a couple hundred bucks of Claude and a few hours of work.

novalis honked back 18 Apr 2026 17:22 +0000
in reply to: https://mastodon.xyz/users/zacchiro/statuses/116426787052879205

@zacchiro @cwebber @bkuhn @ossguy @richardfontana I would say it's dramatically less safe. First, there's very little incentive to go after some OSS project over an unauthorized inbound=outbound contribution. Second, if someone did, the damage would likely be a small part of a single project. Third, only a small number of parties (the employer, or maybe some other single party whose code was copied) have the ability to sue.

With LLMs, it's different. When the authors sued Anthropic, they all sued. Is a shell script that Claude generated a derivative work of, say, the romantasy novel A Court of Thorns and Roses (to pick a random thing included in Anthropic's training set)? Well, it's hard to show that it's not, in the sense that that novel is one of the zillion things that went into generating the weights that generated the shell script.

Now it happens that the authors sued Anthropic (and settled). But I don't know if their settlement covers users of Claude (and even if it did, there are two other big models). And that's only the book authors -- there's still all of the code authors in the world.

So yes, I think the risk is high. I mean, in some sense -- in another sense, it seems unlikely that Congress would say, "sorry, LLMs as code generators are toast because of some century-old laws". At most, they would set up a statutory licensing scheme for LLM providers which covers LLM outputs. Of course, Europe might go a different way, but I think they would probably do the same. Under this hypothetical scheme, if your code were used to train Claude, you would get a buck or two in the mail every year. Authors got I think $3k per book as a one-time payment, but that was a funny case because of how Anthropic got access to the books.

Still, there's a risk that Congress wouldn't act (due to standard US government dysfunction).

It seems like most people are willing to take this risk, which I think says something interesting about most people's moral intuitions.

novalis honked back 17 Apr 2026 22:39 +0000
in reply to: https://snug.moe/notes/al70v4bsvc5v3nzr

re: genai. ethical harms. bit rambly

@lumi @bkuhn @ossguy @mastodonmigration I have always been in favor of a narrow definition of Free Software -- that is, I think it means software that respects the four freedoms. A piece of Free Software could be bad for other reasons. Bitcoin comes to mind as being unnecessarily bad for the environment. Perhaps software useful only to send spam. Or (hypothetically) software made with enslaved labor.

novalis honked back 17 Apr 2026 20:29 +0000
in reply to: https://snug.moe/notes/al6wz4zole5336nc

re: genai. ethical harms

@lumi @ossguy @bkuhn @mastodonmigration Right, that's the car analogy: cars aren't sustainable.

(If you're asking whether it genuinely helps, I would encourage you to look at what other experienced programmers you respect are saying -- in particular, I think @mjd is worth listening to, as he is one of the best programmers I personally know).

But also, unfortunately, it seems really unlikely that we will manage to outlaw either cars or LLMs.

novalis honked back 17 Apr 2026 19:57 +0000
in reply to: https://snug.moe/notes/al6vtsz0dcqbqemt

re: genai. ethical harms

@lumi @bkuhn @mastodonmigration @ossguy GenAI has a case where it's useful: producing small software when you don't know how to write code. To my mind, this is a software freedom issue: what use is a pile of source code that you don't know how to modify? Sure, you could hire someone (if you're rich).

It also seems (since November) to sometimes be able to help experienced practitioners produce software faster than they otherwise would -- especially in areas where they are unfamiliar with the ecosystem. You may or may not believe that this justifies the harm, but it is a use-case.

Finally, one weird-ass use-case which I admit is niche: I use it to remove ads from podcasts. Imagine doing that like 90s spam filtering, with a pile of regexps. Yuck. LLMs (while not perfect at the job) make it straightforward. My kid is much happier not listening to ads.
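For the curious, the 90s-spam-filtering approach dismissed above would look something like this minimal sketch (the patterns and the segment format are made up for illustration; a real ad stripper would transcribe the audio and ask an LLM which segments are ads instead of pattern-matching):

```python
import re

# Hypothetical ad-marker patterns, in the spirit of 90s spam filtering.
# Real ad reads vary too much for a fixed list like this to work well.
AD_PATTERNS = [
    re.compile(r"use (promo|offer) code", re.IGNORECASE),
    re.compile(r"this episode is (brought to you|sponsored) by", re.IGNORECASE),
    re.compile(r"\d+% off your first (order|month)", re.IGNORECASE),
]

def looks_like_ad(segment_text: str) -> bool:
    """Flag a transcript segment as an ad if any pattern matches."""
    return any(p.search(segment_text) for p in AD_PATTERNS)

def keep_segments(segments):
    """Keep the (start_sec, end_sec, text) segments not flagged as ads."""
    return [s for s in segments if not looks_like_ad(s[2])]
```

The regexp version breaks the moment a host paraphrases the ad read, which is exactly why an LLM (imperfect as it is) makes the job straightforward.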

novalis bonked 15 Apr 2026 22:15 +0000
original: mhoye@cosocial.ca

Age verification is a deliberate attack on system sovereignty, both for individuals and countries. There’s no “age verification”, there is only “identity verification that includes age”, and the system doing that verification is not just a privacy-invasive user tracking system but a remotely controlled off switch for anyone of any age.

novalis bonked 15 Apr 2026 13:52 +0000
original: suricrasia@lethargic.talkative.fish

docker for qualia. gone are the days of "it works on my subjectivity." now you can easily deploy and manage experience itself. it's admittedly not perfect—there's been a long running issue where the sky's blue and the grass's green might be different depending on the platform. it's a linux permissions issue.

novalis honked 17 Mar 2026 00:54 +0000

In sort of an inverse Bay Area House Party move, I just updated my vibecoded podcast ad stripper to automatically remove land acknowledgements.

novalis honked 16 Mar 2026 17:18 +0000

Just had an automated system read out my phone number as if it were an integer. As in, "Is your phone number six hundred seventeen million, four hundred forty-one thousand..."
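The fix is trivial, which makes the bug funnier: a phone number is a digit string, not an integer. A minimal sketch (the function name and the example number are mine, not the offending system's):

```python
# Spoken words for each digit character.
DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}

def speak_phone_number(phone: str) -> str:
    """Read a phone number digit by digit, skipping punctuation."""
    return " ".join(DIGIT_WORDS[c] for c in phone if c.isdigit())
```

Somewhere, a system did the equivalent of `int(phone)` and handed the result to a number-to-words routine instead.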

novalis honked 13 Mar 2026 21:11 +0000

Just described Wingspan to a friend as a "ludonarrative Superfund site". (Doesn't make it a bad game -- but, like, you're competing to watch birds, but you can also force them to lay eggs, and also you get extra points when a bird kills another bird).

novalis bonked 28 Feb 2026 14:21 +0000
original: jmac@masto.nyc

If I were a paying OpenAI customer, I'd feel as proud of that fact today as a Tesla owner did one year ago.

novalis honked 08 Feb 2026 03:22 +0000

Kai Huang's 2012 Functions was dramatically more elegant than this year's, because of the constraint. Functions (2026) is less about the sequential ahas of figuring out the functions, and more about grinding through to figure out what X and Y can go into g(X, a(Y)). Several of Kai's functions were really fun, while these were rather straightforward. The solution page for the 2026 puzzle doesn't mention Kai's as an inspiration, which seems odd, given that the puzzle has exactly the same name and a very similar conceit.

At least when Allie Goertz covers Nine Inch Nails, missing the point is the point.