From george at ceetonetechnology.com Wed Apr 1 09:47:57 2026 From: george at ceetonetechnology.com (George Rosamond) Date: Wed, 1 Apr 2026 09:47:57 -0400 Subject: NYC*BUG Tonight: Michael W Lucas Message-ID: Note: MWL will be remote, so the meeting will be streamed from https://www.nycbug.org/streaming.html, and some local NYC people will be watching at the Brass Monkey. What's Changed Since The Last Time I Came this Way - a talk that was supposed to be about OpenZFS, Michael W Lucas 2026-04-01 @ 18:45 local (22:45 UTC) - Backroom of Brass Monkey 55 Little West 12th St Remote participation: Plans are to stream via the NYC*BUG website. Q&A will be via IRC on the libera.chat channel #nycbug - please preface your questions with '[Q]'. Michael W Lucas and Allan Jude are busy working on a new OpenZFS book, which means not only documenting everything that's changed in the last 12 years but discovering everything that they got wrong the first time. The quest for accuracy has taken Lucas deep into mailing list archives, Usenet, VAX installation manuals, the Kremlin's first Internet connection, the United Nations' effort to merge the BSD projects, and the ULTRIX and S51K filesystems, and left MWL more convinced than ever that filesystems are nothing but an April Fools' prank. This hurriedly conceived and hastily assembled talk will update you on new OpenZFS features, but will also try to determine whether it's a good prank or not. Michael W Lucas' name may ring a bell for some in the BSD community. He's written several shelves of books. But for anyone who has seen him speak in public during ante-COVID days, it was clear that the books are mere transcriptions of his rambling presentations. For this NYC*BUG meeting, he is unlikely to edit out any of the corny jokes we endure during his conference presentations. More likely, you know his name from his grotesque horror fiction. 
In the same way his technical books are just transcriptions of his presentations, his fictional horror is just a simple reflection of someone who lives in a haunted house filled with (pet) rats in Detroit. The nearest NYC subway is the 14th Street/Eighth Avenue station (L, A, C, E). To get to the backroom, enter the front door, follow the long bar on your left, and walk all the way to the back. At the rear of the Brass Monkey, you will see an alcove for the 3 bathrooms; our room is off to your right. From mwl at mwl.io Wed Apr 1 14:56:29 2026 From: mwl at mwl.io (Michael W. Lucas) Date: Wed, 1 Apr 2026 14:56:29 -0400 Subject: NYC*BUG Tonight: Michael W Lucas In-Reply-To: References: Message-ID: I feel compelled to say that Patrick came to me and said, "We need a talk. Have you got anything? ANYTHING AT ALL??" Me: "No, dude. Sorry. Not a thing." Patrick: "It's on April Fool's Day!" Me: "Uh... well, given that, I can do something." I already launched one prank today. Consider yourselves warned. ==ml -- Michael W.(Warren) Lucas https://mwl.link/ From rac at conpocococo.org Wed Apr 1 17:03:31 2026 From: rac at conpocococo.org (Raúl Cuza) Date: Wed, 01 Apr 2026 17:03:31 -0400 Subject: NYC*BUG Tonight: Michael W Lucas In-Reply-To: References: Message-ID: On Wed, Apr 1, 2026, at 14:56, Michael W. Lucas wrote: > I feel compelled to say that Patrick came to me and said, "We need a > talk. Have you got anything? ANYTHING AT ALL??" > > Me: "No, dude. Sorry. Not a thing." > > Patrick: "It's on April Fool's Day!" > > Me: "Uh... well, given that, I can do something." > > I already launched one prank today. Consider yourselves warned. > > ==ml I thought your prank launch wasn't for another 2 hours (19:00 EDT)? 
- r From jkeenan at pobox.com Wed Apr 1 17:43:41 2026 From: jkeenan at pobox.com (James E Keenan) Date: Wed, 1 Apr 2026 17:43:41 -0400 Subject: NYC*BUG Tonight: Michael W Lucas In-Reply-To: References: Message-ID: On 4/1/26 09:47, George Rosamond wrote: > Note: MWL will be remote, so the meeting will be streamed from > https://www.nycbug.org/streaming.html, and some local NYC people will be > watching at the Brass Monkey. When I go to that URL right now, I see an error message starting, "Could not play video." I have never watched a video from this website (and can't travel tonight due to a medical problem). Will I have to do anything in particular come 6:45 pm? From mwl at mwl.io Wed Apr 1 18:20:11 2026 From: mwl at mwl.io (Michael W. Lucas) Date: Wed, 1 Apr 2026 18:20:11 -0400 Subject: NYC*BUG Tonight: Michael W Lucas In-Reply-To: References: Message-ID: On Wed, Apr 01, 2026 at 05:43:41PM -0400, James E Keenan wrote: > On 4/1/26 09:47, George Rosamond wrote: > > Note: MWL will be remote, so the meeting will be streamed from > > https://www.nycbug.org/streaming.html, and some local NYC people will be > > watching at the Brass Monkey. > > When I go to that URL right now, I see an error message starting, "Could not > play video." I have never watched a video from this website (and can't > travel tonight due to medical problem). Will I have to do anything in > particular come 6:45 pm? In the past, I've just clicked the link and the window shows up once they start streaming. Often a few minutes before the official start. -- Michael W.(Warren) Lucas https://mwl.link/ From rac at conpocococo.org Wed Apr 1 18:44:30 2026 From: rac at conpocococo.org (Raúl Cuza) Date: Wed, 01 Apr 2026 18:44:30 -0400 Subject: NYC*BUG Tonight: Michael W Lucas In-Reply-To: References: Message-ID: <43a99abb-bf23-49cf-9537-30ca1fed0f7d@app.fastmail.com> On Wed, Apr 1, 2026, at 18:20, Michael W. 
Lucas wrote: > On Wed, Apr 01, 2026 at 05:43:41PM -0400, James E Keenan wrote: >> On 4/1/26 09:47, George Rosamond wrote: >> > Note: MWL will be remote, so the meeting will be streamed from >> > https://www.nycbug.org/streaming.html, and some local NYC people will be >> > watching at the Brass Monkey. >> >> When I go to that URL right now, I see an error message starting, "Could not >> play video." I have never watched a video from this website (and can't >> travel tonight due to medical problem). Will I have to do anything in >> particular come 6:45 pm? > > In the past, I've just clicked the link and the window shows up once > they start streaming. > > Often a few minutes before the official start. > > > -- > Michael W.(Warren) Lucas https://mwl.link/ Exactly what George tasked me with saying. - r From george at ceetonetechnology.com Thu Apr 2 09:14:11 2026 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 2 Apr 2026 09:14:11 -0400 Subject: the BSDs in the AI Age Message-ID: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> I want to initiate a thread on the "BSDs and AI today." A few things first. There are many levels to this discussion, and for the sake of clarity and sanity, please no top posting. All replies should be inline. This is useful: https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying I'm looking to do a presentation on this in the summer for NYC*BUG. There hasn't been anything in our community which provides a high-level overview of the impact of AI, covering things from the impact on the BSD operating systems to the impact on $job, etc. Hopefully this thread can provide some raw materials, and become an outlet for individual experiences and more general views. I initiated a similar fruitful (but private) discussion for another open-source project, and think it's high time for us to do the same on a public list. *** There are a few layers to this discussion. 
Note these are discussion points, not "Yes" or "No" surveys. * How are LLMs (big tech or otherwise) impacting $job now? Are you using Claude Code or similar tools for day-to-day work? Was it required or was it your choice? Were there expectations from these tools in terms of productivity, etc.? This question raises the impact of AWS Bedrock/Kiro... * Should BSD projects have explicit LLM-focused policies? Look at the 2nd point in the NetBSD "Commit Guidelines" at https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security already discussed the issue with alleged CVEs discovered by people with LLMs trying to stack their resumes with credentials. * How should the BSD projects themselves be using LLMs? Integration in the shell (oh, please no...)? Porting of APIs for big tech LLMs? Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities? * How should individual developers and users consider LLMs as tools for contributing to the BSDs and other open-source projects? I happily used a big tech LLM to deal with an rc file for some very Linuxey software wrapped up in systemd clutter. Other relevant questions added to this thread are welcomed, including references to other relevant public mailing list discussions. g From justin at shiningsilence.com Thu Apr 2 11:53:35 2026 From: justin at shiningsilence.com (Justin Sherrill) Date: Thu, 2 Apr 2026 11:53:35 -0400 Subject: the BSDs in the AI Age In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: The two things I can think of for quantifiable tests: - What models and software run on BSDs? There's all sorts of tooling for accessing LLMs, but how much has made it to BSD? - How well do LLMs answer questions about BSD-specific technology? Or how exact are they when answering questions that could also be for Linux systems? 
This one might be enraging, as in "check your systemd settings to tune your ZFS pools..." or some such. On Thu, Apr 2, 2026 at 9:15?AM George Rosamond wrote: > I want to initiate a thread on the "BSDs and AI today." > > A few things first. > > There are many levels to this discussion, and for the sake of clarity > and sanity, please top posting. All replies should be inline. > > This is useful: > https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying > > I'm looking to do a presentation on this in the summer for NYC*BUG. > There hasn't been anything in our community which provides the > high-level overview of the impact of AI, covering things from the impact > on the BSD operating systems to the impact on $job, etc. Hopefully this > thread can provide some raw materials, and become an outlet for > individual experiences and more general views. > > I initiated a similar fruitful (but private) discussion for another > open-source project, and think it's high-time for us to do the same on > a public list. > > *** > > There's a few layers to this discussions. Note these are discussions > points, not "Yes" or "No" surveys. > > * How are LLMs (big tech or otherwise) impacting $job now? Are you using > Claude Code or similar tools for day to day? Was it required or was it > your choice? Was there expectations from this tools in terms of > productivity, etc? This question raises the impact of AWS Bedrock/Kiro... > > * Should BSD projects have explicit LLM-focused policies? Look at the > 2nd point in the NetBSD "Commit Guidelines" at > https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security > already discussed the issue with alleged CVEs discovered by people with > LLMs trying to stack their resume with credentials. > > * How should the BSD projects themselves be using LLMs? Integration in > the shell (oh, please no...)? Porting of APIs for big tech LLMs? > Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities? 
> > * How should individual developers and users consider LLMs as tools for > contributing to the BSDs and other open-source projects? I happily used > a big tech LLM to deal with an rc file for some very Linuxey software > wrapped up in systemd clutter. > > Other relevant questions added to this thread are welcomed, including > references to other relevant public mailing list discussions. > > g > -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at ceetonetechnology.com Thu Apr 2 12:03:46 2026 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 2 Apr 2026 12:03:46 -0400 Subject: the BSDs in the AI Age In-Reply-To: References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: Ha! >> This is useful: >> https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying Please reply to the original thread inline... and please no one reply to Justin's top post. g On 4/2/26 11:53, Justin Sherrill wrote: > The two things I can think of for quantifiable tests: > > - What models and software run on BSDs? There's all sorts of tooling for > accessing LLMs, but how much have made it to BSD? > > - How well do LLMs answer questions about BSD specific technology? Or how > exact are they when answering questions that could also be for Linux > systems? This one might be enraging, as in "check your systemd settings to > tune your ZFS pools..." or some such. > > On Thu, Apr 2, 2026 at 9:15?AM George Rosamond > wrote: > >> I want to initiate a thread on the "BSDs and AI today." >> >> A few things first. >> >> There are many levels to this discussion, and for the sake of clarity >> and sanity, please top posting. All replies should be inline. >> >> This is useful: >> https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying >> >> I'm looking to do a presentation on this in the summer for NYC*BUG. 
>> There hasn't been anything in our community which provides the >> high-level overview of the impact of AI, covering things from the impact >> on the BSD operating systems to the impact on $job, etc. Hopefully this >> thread can provide some raw materials, and become an outlet for >> individual experiences and more general views. >> >> I initiated a similar fruitful (but private) discussion for another >> open-source project, and think it's high-time for us to do the same on >> a public list. >> >> *** >> >> There's a few layers to this discussions. Note these are discussions >> points, not "Yes" or "No" surveys. >> >> * How are LLMs (big tech or otherwise) impacting $job now? Are you using >> Claude Code or similar tools for day to day? Was it required or was it >> your choice? Was there expectations from this tools in terms of >> productivity, etc? This question raises the impact of AWS Bedrock/Kiro... >> >> * Should BSD projects have explicit LLM-focused policies? Look at the >> 2nd point in the NetBSD "Commit Guidelines" at >> https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security >> already discussed the issue with alleged CVEs discovered by people with >> LLMs trying to stack their resume with credentials. >> >> * How should the BSD projects themselves be using LLMs? Integration in >> the shell (oh, please no...)? Porting of APIs for big tech LLMs? >> Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities? >> >> * How should individual developers and users consider LLMs as tools for >> contributing to the BSDs and other open-source projects? I happily used >> a big tech LLM to deal with an rc file for some very Linuxey software >> wrapped up in systemd clutter. >> >> Other relevant questions added to this thread are welcomed, including >> references to other relevant public mailing list discussions. 
>> >> g >> > From justin at shiningsilence.com Thu Apr 2 13:16:13 2026 From: justin at shiningsilence.com (Justin Sherrill) Date: Thu, 2 Apr 2026 13:16:13 -0400 Subject: the BSDs in the AI Age In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: On Thu, Apr 2, 2026 at 9:15 AM George Rosamond wrote: > I want to initiate a thread on the "BSDs and AI today." > > I'm looking to do a presentation on this in the summer for NYC*BUG. > Two quantifiable measures, though they will change by the time you are doing a summer presentation: - What models and software run on BSDs? There's all sorts of tooling for accessing LLMs, but how much has made it to BSD? - How well do LLMs answer questions about BSD-specific technology? Or how exact are they when answering questions that could also be for Linux systems? This one might be enraging, as in "check your systemd settings to tune your ZFS pools..." or some such. > * Should BSD projects have explicit LLM-focused policies? > LLM policies right now appear to be a stand-in for other problems. For example, LLM bug reports are high volume and low quality so far, but I imagine if they get better, the objection would go away: https://lwn.net/Articles/1065620/ There's probably also something that needs to be settled with copyright and assignment for generated code, but I am out of my depth beyond feeling like it's undefined. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cracauer at cons.org Thu Apr 2 13:33:02 2026 From: cracauer at cons.org (Martin Cracauer) Date: Thu, 2 Apr 2026 13:33:02 -0400 Subject: the BSDs in the AI Age In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: Here is a proposed Developer's Guide to Generative AI in FreeBSD: https://www.delphij.net/temp/ai-guide.html As for LLMs knowing about FreeBSD, the two LLMs I use most have pretty good knowledge of FreeBSD and don't mix it up with Linux. Local: bartowski/Qwen_Qwen3.5-27B-GGUF:Q6_K_L Remote: Claude Code -- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Martin Cracauer http://www.cons.org/cracauer/ From imp at bsdimp.com Thu Apr 2 13:42:21 2026 From: imp at bsdimp.com (Warner Losh) Date: Thu, 2 Apr 2026 11:42:21 -0600 Subject: the BSDs in the AI Age In-Reply-To: References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: On Thu, Apr 2, 2026 at 10:18?AM Justin Sherrill wrote: > On Thu, Apr 2, 2026 at 9:15?AM George Rosamond < > george at ceetonetechnology.com> wrote: > >> I want to initiate a thread on the "BSDs and AI today." >> >> I'm looking to do a presentation on this in the summer for NYC*BUG. >> > > Two quantifiable measures, though they will change by the time you are > doing a summer presentation: > > - What models and software run on BSDs? There's all sorts of tooling for > accessing LLMs, but how much have made it to BSD? > > - How well do LLMs answer questions about BSD specific technology? Or how > exact are they when answering questions that could also be for Linux > systems? This one might be enraging, as in "check your systemd settings to > tune your ZFS pools..." or some such. > > >> * Should BSD projects have explicit LLM-focused policies? >> > > LLM policies right now appear to be a stand-in for other problems. 
For > example, LLM bug reports are high volume and low quality so far, but I > imagine if they get better, the objection would go away: > > https://lwn.net/Articles/1065620/ > > There's probably also something that needs to be settled with copyright > and assignment with generated code, but I am out of my depth beyond feeling > like it's undefined. > Copyright is an interesting issue. It brings to light several issues that the Open Source community is generally unaware of. Copyright law doesn't stop all copying. There are elements of programs that are not copyrightable because they embody facts, or because there's only one way to express things. In addition, boilerplate items that are part of the interface also likely don't enjoy copyright protection. These details usually don't matter for open source: if there's no copyright you can copy it freely, and if there is, you can copy it freely (though maybe with a restriction or two). They only come up when, say, a table that initializes a device's registers is copied, or something similar that has no creative content. However, AI-generated code brings these issues back. So if I have Claude generate some code for me, and don't edit it, that likely has no copyright protection. It also almost certainly doesn't have any copyright violations in it, at least for the domains that I deal with. Since LLMs train on thousands of examples, looking for patterns and using those patterns to generate the code, there's no direct copying. Other domains with fewer examples may not be so lucky. And there are tools online to look for copying, but you'll still have to be cautious about interpreting the results (e.g., some copying is OK, like inline copies of the BSD license). But almost nobody uses unmodified code in production. For the BSDs, Claude's generated code today is unsuitable w/o modification, or a lot of prompt refinement. 
As the code is tweaked to work and handle the rigors of the BSD quality floor, it becomes a combination of the author's work and Claude's. The author's creative content is copyrightable, even if embedded in what started out life as AI-generated, much like my copyright exists if I modify works in the public domain. In other contexts, there'd be questions about the extent to which you could protect the code, but since open source "freely" gives the code away, you either have code in the public domain, which can be freely copied, or you have code that has a copyright that you can license to "freely" give it away. So the copyright risk analysis here suggests the risks would be low for BSD-licensed open source projects. There are other risks, but that's the copyright risk. I personally favor policies that allow AI-generated code, but require the developer to be able to explain every line, as well as making them responsible for the whole thing. It's just a tool, and like any other tool you have to use it correctly. Warner -------------- next part -------------- An HTML attachment was scrubbed... URL: From george at ceetonetechnology.com Thu Apr 2 13:47:43 2026 From: george at ceetonetechnology.com (George Rosamond) Date: Thu, 2 Apr 2026 13:47:43 -0400 Subject: the BSDs in the AI Age In-Reply-To: References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: <2494b737-9f6d-4afe-9610-0690385c5b1d@ceetonetechnology.com> There goes the clean thread! Inline below... On 4/2/26 13:33, Martin Cracauer wrote: > Here is a proposed Developer's Guide to Generative AI in FreeBSD: > https://www.delphij.net/temp/ai-guide.html > Very useful. > As for LLMs knowing about FreeBSD, the two LLMs I use most have pretty > good knowledge of FreeBSD and don't mix it up with Linux. > > Local: > bartowski/Qwen_Qwen3.5-27B-GGUF:Q6_K_L > > Remote: > Claude Code > So I see self-hosted llama.cpp and ollama in FreeBSD ports, but not Qwen or anything else... 
and Claude Code is obviously talking to Anthropic. Am I missing any others in FreeBSD outside of specialized biology/gemma? g From jkeenan at pobox.com Thu Apr 2 16:33:18 2026 From: jkeenan at pobox.com (James E Keenan) Date: Thu, 2 Apr 2026 16:33:18 -0400 Subject: the BSDs in the AI Age In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: On 4/2/26 09:14, George Rosamond wrote: > I want to initiate a thread on the "BSDs and AI today." > > [snip] > > There's a few layers to this discussions. Note these are discussions > points, not "Yes" or "No" surveys. > > * How are LLMs (big tech or otherwise) impacting $job now? Are you using > Claude Code or similar tools for day to day? Was it required or was it > your choice? Was there expectations from this tools in terms of > productivity, etc? This question raises the impact of AWS Bedrock/Kiro... > I am no longer employed, so there's no "$job" per se. I do, however, spend many hours each week working on the Perl 5 core distribution, the impact of changes in that distribution on Perl libraries on CPAN, etc. Until last month I saw no impact of AI on the scope of my work (except for one friend who is still in the tech labor market and is preparing a paper on Perl and Claude for this year's Perl conference). But in March of this year, the AI wave hit us ... from within! By "within" I mean that we began to get pull requests created by bots using Claude under the instruction of 3 different humans, each of whom has been a major contributor to Perl and CPAN in the past. * The p.r. submitted to the Perl core distribution had a diff many thousands of lines long -- much too long for thorough code review. The p.r. was challenged by one of the project leaders on the copyright issue which other posters to this thread have mentioned. One of our best (and bluntest) C programmers dismissed the p.r. as "A.I. 
slop"; the human behind the bot conceded that he was not an expert on the C code found in Perl's guts and that he was relying on hundreds of new tests, also written by Claude, to guarantee the correctness of the code. * Another bot pushed 15 pull requests to Perl's main testing library within the space of a few hours, swamping the library's maintainer's capacity to review them. * Another bot published 4 new versions of another, heavily used Perl library on CPAN. All of that library's tests passed, but neither the bot nor the human at first noticed that those changes broke 3 *other* CPAN libraries which depended on that first library. The maintainer of one of those 3 libraries made changes in his own code to accommodate the new changes. The bot published more new versions of the parent library; the human claimed that fixed the problems in the two dependent libraries, but I subsequently demonstrated that one of the two was still failing its tests. What shocks me is that the people behind these bots should, IMO, have known better than to submit these p.r.s. before the recipients of the p.r.s had established policies with respect to AI and Claude. Claude is proving to be very seductive to people still in the business. I don't know enough about the BSDs to express an informed opinion about their future development *in general*, but I would say that you should be very, very, very skeptical of any submissions to your codebase for core and ports. Thank you very much. Jim Keenan From crossd at gmail.com Mon Apr 6 14:19:55 2026 From: crossd at gmail.com (Dan Cross) Date: Mon, 6 Apr 2026 14:19:55 -0400 Subject: the BSDs in the AI Age In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com> Message-ID: On Thu, Apr 2, 2026 at 9:16?AM George Rosamond wrote: > I want to initiate a thread on the "BSDs and AI today." > > A few things first. 
> > There are many levels to this discussion, and for the sake of clarity > and sanity, please top posting. All replies should be inline. > > This is useful: > https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying > > I'm looking to do a presentation on this in the summer for NYC*BUG. > There hasn't been anything in our community which provides the > high-level overview of the impact of AI, covering things from the impact > on the BSD operating systems to the impact on $job, etc. Hopefully this > thread can provide some raw materials, and become an outlet for > individual experiences and more general views. Hopefully this will be an interesting discussion; at any rate, thanks for initiating it. > I initiated a similar fruitful (but private) discussion for another > open-source project, and think it's high-time for us to do the same on > a public list. > > *** > > There's a few layers to this discussions. Note these are discussions > points, not "Yes" or "No" surveys. > > * How are LLMs (big tech or otherwise) impacting $job now? Are you using > Claude Code or similar tools for day to day? Was it required or was it > your choice? Was there expectations from this tools in terms of > productivity, etc? This question raises the impact of AWS Bedrock/Kiro... Personally, LLMs are both influencing my job and not influencing my job. The dichotomy is that the surrounding ecosystems are being fundamentally shaped by them, but I have not incorporated their output directly in my own work. However, given that every Google web search these days more or less includes an AI Mode summary, I'm finding it inescapable; furthermore, many of the tools that I routinely use are similarly incorporating LLMs, either directly in their construction (for example, the Zed editor) or indirectly by hooking into their use (again, text editors and so on). Further, some of my colleagues are making heavy use of LLMs, albeit with significant human supervision. 
I suspect this is a trend that will only increase: the quality of output has increased substantially in the last few months, and the genie is out of the bottle. There _is_ a "there" there, though whether it's worth it is a question that needs to be grappled with. > * Should BSD projects have explicit LLM-focused policies? Look at the > 2nd point in the NetBSD "Commit Guidelines" at > https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security > already discussed the issue with alleged CVEs discovered by people with > LLMs trying to stack their resume with credentials. Probably! That unsatisfying one-word answer is about the best I suspect can be done at the moment. These tools are in their infancy, and collectively we're all grappling with how best to use them, or not use them at all, if that's still possible. I understand that discussions like this one are meant to iterate on that as part of the overall process. > * How should the BSD projects themselves be using LLMs? Integration in > the shell (oh, please no...)? Porting of APIs for big tech LLMs? > Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities? Speaking from my own experience with them.... I decided about six weeks ago that I needed to understand these things better, so I went through a few exercises messing around with Anthropic's Claude Code. What I discovered is that the output is not (yet?) good enough for direct incorporation into, e.g., an operating system. Where I have found that they work best is in either interactively exploring a code base ("explain to me how this code uses interface X...."), or in building bespoke tooling that I might use to better approach whatever I'm actually working on. 
For example, I recently used Claude to write a tool that extracts machine-readable register definitions for a particular vendor's CPUs from PDF documents; given around ten volumes, each containing many thousands of pages of text, the tool pulls those definitions and writes them into JSON files, which can then be queried with a tool like `jq`. Instead of ^F'ing through a multi-volume set of PDF files, I have a shell script that can show me the relevant details directly. I also had it generate tools to show me what the fields of a populated value mean, and did some editor integration so I can "hover" over a field and see what it means, what an accessor is changing, and so on. This is very handy, but more importantly, the process of building it was instructive. The first draft had all sorts of problems: page footers inside of field definitions, for example. The LLM kept wanting to add ad hoc heuristics to fix individual instances of such problems; I finally realized that the best ways to constrain it to reality included a) asking it to explain to me what it was doing, in the form of a written "design" document, up front; b) forcing it to use test-driven development (to the extent I could force it to do anything), so that there was a known metric by which to judge the output of a change; and c) making it frame the problem as building a grammar describing the register definitions I cared about, and then implementing a parser for that grammar: page footers could then be recognized as lexical tokens and treated like whitespace, solving that problem generally. This last point was key: by forcing it to frame its output in terms of a much smaller thing that was a) formally defined, like an EBNF grammar, and b) small enough that I could examine and verify myself, I could have reasonable confidence in the fidelity of its output. Still, it always biases towards taking the simplest action to effect an outcome, often with poor results. 
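A minimal sketch of the kind of field decoder described above. The register name, bit ranges, and JSON layout here are invented for illustration; the actual files the tool emits are not shown in this thread.

```python
import json

# Hypothetical register definition, shaped like the JSON files such an
# extraction tool might emit. Names and bit positions are made up.
REG_DEF = {
    "name": "CTRL",
    "fields": [
        {"name": "ENABLE",   "lo": 0, "hi": 0},
        {"name": "MODE",     "lo": 1, "hi": 2},
        {"name": "PRESCALE", "lo": 3, "hi": 7},
    ],
}

def decode(reg_def, value):
    """Return {field_name: extracted_value} for a populated register value."""
    out = {}
    for f in reg_def["fields"]:
        width = f["hi"] - f["lo"] + 1
        mask = (1 << width) - 1
        out[f["name"]] = (value >> f["lo"]) & mask
    return out

# The same JSON file could equally be queried from the shell with jq;
# here we just pretty-print the decoded fields.
print(json.dumps(decode(REG_DEF, 0b10101101)))
```

With definitions in this shape, "what does this populated value mean" becomes a dictionary lookup rather than a search through a PDF.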
Regardless, I kept at it and eventually got it to the point where I was reasonably happy with the output. Afterward, it occurred to me that if I didn't have the level of experience I do, I wouldn't have been able to successfully direct the LLM to build the tool I wanted. This led me to coin my own little "Dan's Law": an LLM can only write a program that is as good as the human driving it could have written.

The corollary is that these things really are tools for senior engineers, who have the requisite experience to analyze their output. In the hands of less experienced folks, they're dangerous. I presented that tool at a little internal demo the other day and a colleague asked, "how much time do you estimate that Claude saved you?" I think this is the wrong question, and my response was that I wasn't sure that Claude really saved me any time: oh sure, it could emit text faster than I could type it all in, but I had to continually correct it and tell it to go back and start over, and in that sense, it wasted a lot of time by doing things that I would have, I hope, thought better of doing myself.

Finally on this point, applicability of LLMs to a problem domain likely follows a power law: 90-99% of the training data for software is probably doing more or less the same thing, and the LLM is pretty good here. On the other hand, if you're working in the problem space covering the last 1-10%, the LLM is much worse. You can get it to generate a simple web UI, no problem; but verifiably correct implementations of lock-free concurrent data structures? Eh, not so much.

> * How should individual developers and users consider LLMs as tools for
> contributing to the BSDs and other open-source projects? I happily used
> a big tech LLM to deal with an rc file for some very Linuxey software
> wrapped up in systemd clutter.

This needs to be prefaced by asking, what does it mean to use an LLM?
If essentially every web search is now using one indirectly, it seems inescapable; but I suspect that's not what you mean: rather, I think you're referring to direct use by an individual, and incorporating the output of that use into one's work.

But still, this definitional issue is important. Suppose I point an LLM at a program and say, "explain what this does to me" and it points out a bug, which I then fix and produce a patch for; how does one characterize that? Suppose I verified and developed the patch _without_ use of an LLM; would sending the resulting patch upstream violate a project's "no AI" clause, given that the LLM pointed it out to me in the first place? What if I do a web search for some random technical term and the unasked-for AI summary is actually useful?

Where does one draw the line? That seems like an urgent and immediate question.

Anyway, to address what I suspect is the actual question, I think as a way to augment a human developer's abilities, basically being a gofer and search engine++, it's not outright awful. As a way to explore and ideate, they're ok. As a replacement for human output (and importantly human judgement) the things are nowhere near capable enough for that. As with the tool I mentioned above, I've found that they work _best_ when constrained by something else that can be formally verified. I have had good luck asking the LLM to generate a formal model of a thing using something like TLA+, Promela, or Alloy, and proving that the model matches code (usually by showing me the correspondence between the generated model and the base code). I can then verify the model using its tools (SPIN, TLC, etc), and use it to generate property-based tests for a system, which gives me a baseline of behavior that the LLM has to meet in whatever it's doing.
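As a toy sketch of that "baseline of behavior" idea (the ring-buffer FIFO and every name here are invented for illustration, not from any real model or system): drive the implementation and a trivially simple executable model with the same stream of random operations, and require that they agree at every step. Anything the LLM later changes must still pass this check.

```python
import random

class RingFifo:
    """Implementation under test: a fixed-capacity FIFO over a ring buffer."""
    def __init__(self, cap):
        self.buf = [None] * cap
        self.head = self.count = 0
    def push(self, x):
        if self.count == len(self.buf):
            return False                 # full: reject
        self.buf[(self.head + self.count) % len(self.buf)] = x
        self.count += 1
        return True
    def pop(self):
        if self.count == 0:
            return None                  # empty
        x = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.count -= 1
        return x

def check(seed, ops=1000, cap=4):
    """Property-based check: a plain Python list is the 'model'; the
    implementation must agree with it on every operation."""
    rng = random.Random(seed)
    model, fifo = [], RingFifo(cap)
    for _ in range(ops):
        if rng.random() < 0.5:
            v = rng.randrange(100)
            # push succeeds exactly when the model says there is room
            assert fifo.push(v) == (len(model) < cap)
            if len(model) < cap:
                model.append(v)
        else:
            expect = model.pop(0) if model else None
            assert fifo.pop() == expect
        assert fifo.count == len(model)  # invariant after every op
    return True

print(all(check(seed) for seed in range(20)))  # prints: True
```

A real property-based framework (or tests generated from a verified TLA+/Promela model) does this more systematically, with shrinking and generated inputs, but the shape is the same: the model is small enough to trust by inspection, and it pins down the behavior the implementation has to meet.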
I strongly suspect that formal methods, aggressively applying the type systems of strongly- and statically-typed languages to a problem domain, and solid understandings of complexity theory and formal language design, are going to take on a much greater role for practitioners over the next few years. I never thought I'd say this as an OS person, but I suspect that theorem provers are going to take on a pretty big role for me over the remainder of my career.

In fact, I found a bug using TLA+ on Friday; notably, that bug snuck past testing and human review. I think this is less an LLM win and more a formal methods win, but I used an LLM to generate the model that revealed the bug, so they're related in that sense.

> Other relevant questions added to this thread are welcomed, including
> references to other relevant public mailing list discussions.

I mentioned that these tools are still in their infancy, and that feels very true even in how we interact with them: take Claude Code, for example. One can run their CLI, and it feels like playing Adventure or Zork or something. And yes, there _are_ other interfaces, including say a VS Code plugin, but features get released into the CLI first. Anyway, we're still in the "GET LAMP" era of working with these things, and still a long ways from Rogue, let alone something one of my kids would consider playing.

It is also important to acknowledge the ethics here. There are three main things that keep me up at night:

1. We're re-centralizing the means of producing software. If these things are going to take on a larger role (and by every indication they are), then it's deeply concerning to me that a very small handful of big players are effectively controlling the show. Honestly, that should concern us all. Furthermore, I think that the true cost of LLM usage is much higher than what we're currently paying.
Using Claude Code with the latest model effectively requires paying Anthropic for the Max subscription, which isn't exactly cheap. What do we do when the firehose of VC money shuts off and the cost increases 2x, 5x, or 10x?

2. There's the issue of the provenance and ownership of the data used for training models. We're starting to see supply chain attacks in this area, and people have been pointing out that there are legitimate questions about the legality of sourcing that data in the first place, and its fair use, for some time. Some folks will dismiss this by saying that most of us learn from others or by looking at existing references, so why is this different? I reply that there is a massive difference in scale: it was one thing for me to learn about linked lists as a kid reading a book on data structures; it's entirely different when a machine sucks in the content of every book on data structures and reproduces it on demand. As Warner and others have pointed out, the courts haven't caught up and it's all _really_ uncertain right now. And did the authors of those books agree to having their content used thus? If the incentive to read those references goes away, since the LLM gives me the information anyway, and there's correspondingly no financial incentive to write new books, how do we move new ideas out of the research domain and into mainstream practice? Do LLMs just pull everything towards the median? (Maybe the "Singularity" will end up being "aggressively mid.")

3. There's the environmental impact. The amount of energy required to build a new model is growing super-linearly (it appears to have gone from exponential to "merely" quadratic relative to the previous generation model), and we're running out of physics for Moore's Law to keep it reasonable (it's axiomatic that you can only halve the size of a thing so many times until you start running into fundamental physical limitations, and we're starting to edge up against that).
Dedicated accelerator hardware and so forth may be able to help, but at some point, we will run out of the ability to train a bigger model. What then? Moreover, in their present form, these things are grotesquely inefficient: everything is free-form text. The whole thing really smacks of the sort of thing where the big players created a machine for generating simulacra of plausible text, and then realized they could apply that to all kinds of stuff---like software. But the amounts of energy (and water!!) required to do so are unsustainable. Honestly, this seems like the worst of the three; one could imagine running a local model at home, or even a small cluster at a job, but if we're sucking the water table dry to train the model required to do that, that's not great. Most of the AI boosters I've seen seem to be banking on these problems being solved before they become really serious, or on gains in efficiency due to AI use offsetting the increase in energy costs, but I'm skeptical: I've seen no concrete plans for how to address this challenge, in particular.

Ultimately, there don't seem like a lot of easy answers, and I suspect we're in for a pretty wild ride over the next few years.

- Dan C.

From george at ceetonetechnology.com Mon Apr 6 17:41:39 2026
From: george at ceetonetechnology.com (George Rosamond)
Date: Mon, 6 Apr 2026 17:41:39 -0400
Subject: the BSDs in the AI Age
In-Reply-To:
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <81f8b318-7e7b-41f7-8976-9762d152c10e@ceetonetechnology.com>

On 4/6/26 14:19, Dan Cross wrote:
> On Thu, Apr 2, 2026 at 9:16 AM George Rosamond
> wrote:
>> I want to initiate a thread on the "BSDs and AI today."
>>
>> A few things first.
>>
>> There are many levels to this discussion, and for the sake of clarity
>> and sanity, please no top posting. All replies should be inline.
>>
>> This is useful:
>> https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying
>>
>> I'm looking to do a presentation on this in the summer for NYC*BUG.
>> There hasn't been anything in our community which provides the
>> high-level overview of the impact of AI, covering things from the impact
>> on the BSD operating systems to the impact on $job, etc. Hopefully this
>> thread can provide some raw materials, and become an outlet for
>> individual experiences and more general views.
>
> Hopefully this will be an interesting discussion; at any rate, thanks
> for initiating it.

And thank you for replying inline.

>> I initiated a similar fruitful (but private) discussion for another
>> open-source project, and think it's high time for us to do the same on
>> a public list.
>>
>> ***
>>
>> There are a few layers to this discussion. Note these are discussion
>> points, not "Yes" or "No" surveys.
>>
>> * How are LLMs (big tech or otherwise) impacting $job now? Are you using
>> Claude Code or similar tools for day to day? Was it required or was it
>> your choice? Were there expectations from these tools in terms of
>> productivity, etc? This question raises the impact of AWS Bedrock/Kiro...
>
> Personally, LLMs are both influencing my job and not influencing my
> job. The dichotomy is that the surrounding ecosystems are being
> fundamentally shaped by them, but I have not incorporated their output
> directly in my own work. However, given that every Google web search
> these days more or less includes an AI Mode summary, I'm finding it
> inescapable; furthermore, many of the tools that I routinely use are
> similarly incorporating LLMs, either directly in their construction
> (for example, the Zed editor) or indirectly by hooking into their use
> (again, text editors and so on).

Yes, the usual "I'm not using LLMs" rarely means that neither you nor your providers are using LLMs.
I tend to use search engines with Tor Browser without JavaScript (which limits the search engines I use), so I don't have much insight into the explicit impact on internet searches...

> Further, some of my colleagues are making heavy use of LLMs, albeit
> with significant human supervision. I suspect this is a trend that will
> only increase: the quality of output has increased substantially in
> the last few months, and the genie is out of the bottle. There _is_ a
> "there" there, though whether it's worth it is a question that needs
> to be grappled with.

Very true. While I think there's been a change in output for myself, I wonder how much humans getting better at prompt engineering matters (therefore feeding the models I use).

I also wonder how much the "humanization" of the LLM interactions impacts users when they are acting as end users to the LLMs. I mean you at the prompt, versus backend API calls for systems. Subconscious speaking: "Wow, the LLM thinks I'm really smart. I'm going to spend more time here!"

>
>> * Should BSD projects have explicit LLM-focused policies? Look at the
>> 2nd point in the NetBSD "Commit Guidelines" at
>> https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security
>> already discussed the issue with alleged CVEs discovered by people with
>> LLMs trying to stack their resume with credentials.
>
> Probably!
>
> That unsatisfying one-word answer is about the best I suspect can be
> done at the moment. These tools are in their infancy, and
> collectively we're all grappling with how best to use them, or not use
> them at all, if that's still possible. I understand that discussions
> like this one are meant to iterate on that as part of the overall
> process.
>
>> * How should the BSD projects themselves be using LLMs? Integration in
>> the shell (oh, please no...)? Porting of APIs for big tech LLMs?
>> Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities?
>
> Speaking from my own experience with them....
> I decided about six
> weeks ago that I needed to understand these things better, so I went
> through a few exercises messing around with Anthropic's Claude Code.
>
> What I discovered is that the output is not (yet?) good enough for
> direct incorporation into e.g., an operating system. Where I have
> found that they work best is in either interactively exploring a code
> base ("explain to me how this code uses interface X...."), or in
> building bespoke tooling that I might use to better approach whatever
> I'm actually working on.
>
> For example, I recently used Claude to write a tool that extracts
> machine-readable register definitions for a particular vendor's CPUs
> from PDF documents; given around ten volumes, each containing
> many thousands of pages of text, the tool pulls those definitions and
> writes them into JSON files, which can then be queried with a tool
> like `jq`. Instead of ^F'ing through a multi-volume set of PDF files, I
> have a shell script that can show me the relevant details directly. I
> also had it generate tools to show me what the fields of a populated
> value mean, and did some editor integration so I can "hover" over a
> field and see what it means, what an accessor is changing, and so on.

And I wonder if you even need jq at that point to output or grep the data.

I think this is one of the great advantages of generative AI for a lot of people in technology, despite many issues, hallucinations, blah blah. "How soon do we have to contact customers after we have a security incident?" I can eyeball it quickly, looking for "24", "48", "72" (i.e., hours), but an LLM is better at it, and I can confirm if a citation is provided.

> This is very handy, but more importantly, the process of building it
> was instructive. The first draft had all sorts of problems: page
> footers inside of field definitions, for example.
> The LLM kept wanting
> to add ad hoc heuristics to fix individual instances of such problems;
> I finally realized that among the best ways to constrain it to reality
> included a) asking it to explain to me what it was doing, in the form
> of a written "design" document, up front; b) forcing it to use
> test-driven development (to the extent I could force it to do
> anything), so that there was a known metric by which to judge the
> output of a change, c) making it frame the problem as building a
> grammar describing the register definitions I cared about, and then
> implementing a parser for that grammar: page footers could then be
> recognized as lexical tokens and treated like whitespace, solving that
> problem generally.

Yes, and for a long while, prompt engineering at all its levels has been a critical tool in using an LLM productively. My foundational prompt includes things like:

"2 paragraphs maximum replies."
"POSIX shell not bash"
"stop the flattery in replies I'm not in 5th grade"

>
> This last point was key: forcing it to frame its output in terms of a
> much smaller thing that was a) formally defined, like an EBNF grammar,
> and b) small enough that I could examine and verify myself, I could
> have reasonable confidence in the fidelity of its output. Still, it
> always biases towards taking the simplest action to effect an outcome,
> often with poor results. Regardless, I kept at it and eventually got
> it to the point where I was reasonably happy with the output. Afterward,
> it occurred to me that if I didn't have the level of experience I do,
> I wouldn't have been able to successfully direct the LLM to build the
> tool I wanted. This led me to coin my own little "Dan's Law": an LLM
> can only write a program that is as good as the human driving it could
> have written.
>
> The corollary is that these things really are tools for senior
> engineers, who have the requisite experience to analyze their output.
> In the hands of less experienced folks, they're dangerous. I
> presented that tool at a little internal demo the other day and a
> colleague asked, "how much time do you estimate that Claude saved
> you?" I think this is the wrong question, and my response was that I
> wasn't sure that Claude really saved me any time: oh sure, it could
> emit text faster than I could type it all in, but I had to continually
> correct it and tell it to go back and start over, and in that sense,
> it wasted a lot of time by doing things that I would have, I hope,
> thought better of doing myself.

So, so true. That point goes well with the idea that if you don't know the question to ask, you won't get the right answer, and you won't understand it either.

> Finally on this point, applicability of LLMs to a problem domain
> likely follows a power law: 90-99% of the training data for software
> is probably doing more or less the same thing, and the LLM is pretty
> good here. On the other hand, if you're working in the problem space
> covering the last 1-10%, the LLM is much worse. You can get it to
> generate a simple web UI, no problem; but verifiably correct
> implementations of lock-free concurrent data structures? Eh, not so
> much.
>
>> * How should individual developers and users consider LLMs as tools for
>> contributing to the BSDs and other open-source projects? I happily used
>> a big tech LLM to deal with an rc file for some very Linuxey software
>> wrapped up in systemd clutter.
>
> This needs to be prefaced by asking, what does it mean to use an LLM?
> If essentially every web search is now using one indirectly, it seems
> inescapable; but I suspect that's not what you mean: rather, I think
> you're referring to direct use by an individual, and incorporating the
> output of that use into one's work.

Useful clarification, as I distinguished earlier above. But yes, I mean the latter, not the former.

> But still, this definitional issue is important.
> Suppose I point an
> LLM at a program and say, "explain what this does to me" and it points
> out a bug, which I then fix and produce a patch for; how does one
> characterize that? Suppose I verified and developed the patch
> _without_ use of an LLM, would sending the resulting patch upstream
> violate a project's "no AI" clause, given that the LLM pointed it out
> to me in the first place? What if I do a web search for some random
> technical term and the unasked-for AI summary is actually useful?
>
> Where does one draw the line? That seems like an urgent and immediate question.
>
> Anyway, to address what I suspect is the actual question, I think as a
> way to augment a human developer's abilities, basically being a gofer
> and search engine++, it's not outright awful. As a way to explore
> and ideate, they're ok. As a replacement for human output (and
> importantly human judgement) the things are nowhere near capable
> enough for that. As with the tool I mentioned above, I've found that
> they work _best_ when constrained by something else that can be
> formally verified. I have had good luck asking the LLM to generate a
> formal model of a thing using something like TLA+, Promela, or Alloy,
> and proving that the model matches code (usually by showing me the
> correspondence between the generated model and the base code). I can
> then verify the model using its tools (SPIN, TLC, etc), and use
> it to generate property-based tests for a system, which gives me a
> baseline of behavior that the LLM has to meet in whatever it's doing.
>
> I strongly suspect that formal methods, aggressively applying
> the type systems of strongly- and statically-typed languages to a
> problem domain, and solid understandings of complexity theory and
> formal language design, are going to take on a much greater role for
> practitioners over the next few years.
> I never thought I'd say this
> as an OS person, but I suspect that theorem provers are going to take
> on a pretty big role for me over the remainder of my career.
>
> In fact, I found a bug using TLA+ on Friday; notably, that bug snuck
> past testing and human review. I think this is less an LLM win and
> more a formal methods win, but I used an LLM to generate the model
> that revealed the bug, so they're related in that sense.

I read the above paragraphs a few times and will read again later. These are precisely the nuances of usage that I think need to be explored and appreciated, although I suspect it might just be a fleeting moment here.

Think about, say, the Goog with golang or more likely Python. I'm willing to bet, for the obvious reasons, that code generation, bug finding, etc. will be incredibly accurate and useful... and their view of the ecosystem will become "we maintain the core libraries, and humans dumb and smart will build stuff with a peripheral role in the process." Maybe that bazaar known as pypi will look different in a year.

>
>> Other relevant questions added to this thread are welcomed, including
>> references to other relevant public mailing list discussions.
>
> I mentioned that these tools are still in their infancy, and that
> feels very true even in how we interact with them: take Claude Code,
> for example. One can run their CLI, and it feels like playing
> Adventure or Zork or something. And yes, there _are_ other
> interfaces, including say a VS Code plugin, but features get released
> into the CLI first. Anyway, we're still in the "GET LAMP" era of
> working with these things, and still a long ways from Rogue, let alone
> something one of my kids would consider playing.

Yes, nothing controversial there, but useful metaphors.

> It is also important to acknowledge the ethics here. There are three
> main things that keep me up at night:
>
> 1. We're re-centralizing the means of producing software.
> If these
> things are going to take on a larger role (and by every indication
> they are), then it's deeply concerning to me that a very small handful
> of big players are effectively controlling the show. Honestly, that
> should concern us all. Furthermore, I think that the true cost of LLM
> usage is much higher than what we're currently paying. Using Claude
> Code with the latest model effectively requires paying
> Anthropic for the Max subscription, which isn't exactly cheap. What
> do we do when the firehose of VC money shuts off and the cost
> increases 2x, 5x, or 10x?

So right; I cover that issue from another angle in my presentation. The captive market of big tech, with their LLMs and capex for hardware, data centers, etc., drives vicious competition among them, but essentially makes us beholden to their fees.

OTOH, the firms employing LLMs for SaaS, etc. over API all fall into crisis. $10k for a pentest twice a year or monthly? Er, how about I give an LLM confidential data about my applications and pay a teeny fraction of that, plus the LLM gives me the exact remediation. Oh, and no staff needs to implement it, since it's all integrated with agents. We just need some deskilled devs to review it.

That collapse in value, in the amount of labor in technology operations, is devastating and will make the impact of the cloud on sysadmins look trivial.

> 2. There's the issue of the provenance and ownership of the data used
> for training models. We're starting to see supply chain attacks in
> this area, and people have been pointing out that there are legitimate
> questions about the legality of sourcing that data in the first place,
> and its fair use, for some time. Some folks will dismiss this by
> saying that most of us learn from others or by looking at existing
> references, so why is this different?
> I reply that there is a massive
> difference in scale: it was one thing for me to learn about linked
> lists as a kid reading a book on data structures; it's entirely
> different when a machine sucks in the content of every book on data
> structures and reproduces it on demand. As Warner and others have
> pointed out, the courts haven't caught up and it's all _really_
> uncertain right now. And did the authors of those books agree to
> having their content used thus? If the incentive to read those
> references goes away, since the LLM gives me the information anyway,
> and there's correspondingly no financial incentive to write new books,
> how do we move new ideas out of the research domain and into
> mainstream practice? Do LLMs just pull everything towards the median?
> (Maybe the "Singularity" will end up being "aggressively mid.")

Distressing, but yes... particularly when an expected increase in productivity goes hand-in-hand with deskilling instead of freeing up people to attack hard questions, imagine new things, etc.

And it's worth looking at OWASP's top ten on LLMs/genai more generally in terms of the supply chain issues... if anyone can't visualize it:

https://genai.owasp.org/llm-top-10/

> 3. There's the environmental impact. The amount of energy required to
> build a new model is growing super-linearly (it appears to have gone
> from exponential to "merely" quadratic relative to the previous
> generation model), and we're running out of physics for Moore's Law to
> keep it reasonable (it's axiomatic that you can only halve the size of
> a thing so many times until you start running into fundamental
> physical limitations, and we're starting to edge up against that).
> Dedicated accelerator hardware and so forth may be able to help, but
> at some point, we will run out of the ability to train a bigger model.
> What then? Moreover, in their present form, these things are
> grotesquely inefficient: everything is free-form text.
> The whole
> thing really smacks of the sort of thing where the big players created
> a machine for generating simulacra of plausible text, and then
> realized they could apply that to all kinds of stuff---like software.
> But the amounts of energy (and water!!) required to do so are
> unsustainable. Honestly, this seems like the worst of the three; one
> could imagine running a local model at home, or even a small cluster
> at a job, but if we're sucking the water table dry to train the model
> required to do that, that's not great. Most of the AI boosters I've
> seen seem to be banking on these problems being solved before they
> become really serious, or on gains in efficiency due to AI
> use offsetting the increase in energy costs, but I'm skeptical: I've
> seen no concrete plans for how to address this challenge, in particular.

It's hard to say whether the deskilling and job losses will be worse than the environmental costs. The better answer is that they go hand-in-hand. I realize that's posed as "ethics" but it seems more existential. Industry doesn't care about leaking oil tankers if oil is at $200 a barrel and they're making money hand over fist. "Give me your fines and hand slaps... it doesn't matter."

Data centers could be environmentally safe, sane, local, effective, etc., but they won't be when they're a core aspect of capex in a feverish era of competition.

> Ultimately, there don't seem like a lot of easy answers, and I suspect
> we're in for a pretty wild ride over the next few years.
>
> - Dan C.

Very much agree, and appreciate your thoughtful answers.

g

From jklowden at schemamania.org Mon Apr 6 18:49:45 2026
From: jklowden at schemamania.org (James K. Lowden)
Date: Mon, 6 Apr 2026 18:49:45 -0400
Subject: the BSDs in the AI Age
In-Reply-To:
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <20260406184945.374aa6c62c142b0526e4b0f6@schemamania.org>

On Mon, 6 Apr 2026 14:19:55 -0400
Dan Cross wrote:

> Most of the AI boosters I've seen seem to be banking on these problems
> being solved before they become really serious, or on gains
> in efficiency due to AI use offsetting the increase in energy costs

I guarantee no AI booster has an answer to:

	Socialize the risk
	Privatize the reward

We are not fully funding the accounted-for cost of these models, and they are not funding the unaccounted-for costs, otherwise known as externalities. The environmental damage, social harm, and political harm have scarcely entered the "national conversation", as it's called.

A few at the fringe argue for some kind of moratorium until we get our arms around what is happening, and that simple idea is too radical for most. But it's not radical enough. Either

1. this whole AI thing is another dot-com bust after which all that will remain are some depleted VC funds and hulking half-built data centers and power plants, or

2. the investors are right, and a world-changing powerful tool will wind up in the hands of a fortunate few to whom the rest of us will owe fortune and favor, because our very ability to earn a living and communicate will rest on their benevolence.

There is a theory that the rate of change in any person's lifetime has been accelerating since the Industrial Revolution. I don't know if turn-by-turn directions and instantaneous free global communication is "more change" than the Brooklyn Bridge or subways or public sanitation or social security.
But if I measure my childhood against my grandparents', if I ask how relevant their experience was to my life, and ask the same about my grandchildren, I'm inclined to say things are changing much faster than the political system is adapting, and probably faster than it can adapt.

For example, we have had social media for a couple decades. It was not crucial to Obama's election, but it was to Trump's. The bulk of the adult population came of age when media was mediated: when editors decided what to print, and publishers were liable for libel. Consequently, most of us never saw in the media outright lies and faked images unless we had Enquiring minds. We knew not to take alien abductions too seriously.

Now, a majority of those same Americans are subjected to manipulated media, whose editing algorithms are attuned to each reader's proclivities, and whose publisher is legally immunized by Section 230 of the Internet whatever act. So Ivermectin maybe is a cure and red meat the new bran. Worse, there is no law against promulgating fake and defamatory images, or against publishing private information to create a target for anonymous abuse.

What has congress done about that? What has any legislature done, or considered doing? Taking cell phones out of schools and insisting on age limits for Facebook, I guess that's something. I guess it's a finger in the dike while the whole town floods.

An *obvious* remedy for sorting out AI-produced images and content from the rest would be a *law* that requires all such content to be watermarked. Yes, that law would be ignored by some criminals, as laws always are. But we wouldn't have to ask if the latest press release (quaint term, see?) from the Republican Party was real or faked. Criminality has consequences.

Just as obvious: that hasn't happened and isn't happening. The power of these media has captured the institutions that would corral them.

The list of unanswered undiscussed social problems is long and lengthening. Our stolen privacy.
The concentration of "old" media in the hands of the new tech aristocracy, whether Bezos or Ellison. The rank privatization of healthcare and, increasingly, education. The lack of social mobility in the country that invented it. The ubiquity of toxins like PFAS in our environment. A president promising war crimes in our name, with nary a nod to Congress.

It took 10 years for Unsafe at Any Speed and the rising carnage of automotive death to result in seatbelts in new cars, and another decade for their use to become mandatory. In about the same span of time, the citizens of New York in an earlier era introduced public sanitation and fire protection. Cigarettes went from Bogart-cool to Bloomberg-banned. Our parents and grandparents did those things. Those social problems were met by an intact civil society and body politic. ISTM the problems we face now are a parasite on that body itself, and due to the accelerating rate of change we haven't had time (as a body) to recognize what it's doing.

If AI is alternative #2, a political system so inept that it couldn't hold Microsoft or Facebook to account will be no match for our new overlords, to whom AI, and we, will belong.

--jkl

From pete at nomadlogic.org Mon Apr 6 20:41:17 2026
From: pete at nomadlogic.org (Pete Wright)
Date: Mon, 6 Apr 2026 17:41:17 -0700
Subject: the BSDs in the AI Age
In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <555d65bd-2a21-4cde-aaa9-06150e5a796f@nomadlogic.org>

On 4/2/26 06:14, George Rosamond wrote:
> I want to initiate a thread on the "BSDs and AI today."
>
> There are a few layers to this discussion. Note these are discussion
> points, not "Yes" or "No" surveys.
>
> * How are LLMs (big tech or otherwise) impacting $job now? Are you using
> Claude Code or similar tools for day to day? Was it required or was it
> your choice?
Were there expectations from these tools in terms of
> productivity, etc? This question raises the impact of AWS Bedrock/Kiro...

I work for a small startup that focuses on the sales side of things. when i joined years ago we leveraged machine learning and brute force to pattern-match trends in our data in an effort to surface useful insights to our customers. since then we've extended that to near real-time analysis using mostly the same mechanisms. we've branded this as "AI" in the past, for marketing purposes, and heck...we are selling to sales people so...

in the past several years our company has put quite a bit of effort into working with LLMs. honestly, if you are a small shop looking for funding or getting acquired you *need* to work with them at some level just to get your foot in the door. i would say parts of our company have fully bought in, and others are still really skeptical (I'm a sysadmin, and since i like to know what my computers are doing i'm in the skeptical camp). i've got a few observations based on this experience:

1. a surprising amount of support and engineering staff are really happy to offload critical thinking to LLMs. this makes me sad, but i don't think everyone wants to be detective Columbo like i do. long term this will have negative consequences for individual career growth, not to mention harm to companies.

2. less-than-technical people love LLMs because it looks like they are doing lots of work. they fall for the lines-of-code == productivity trap.
--> *but* they also are able to create pretty functional mockups of applications without any ceremony/project-planning/etc.
----> this should be an eye opener; it reminds me of Alan Kay trying to democratize computing. i just wish LLM implementers had the same discipline and wisdom as Alan.

3. we did quite a bit of work running models internally, using deepseek and things like aws redshift.
unless you are building an AI Goldrush company i really don't think it's worth it in terms of resource utilization. you are probably better off creating an abstraction layer internally where you can plug-and-play LLM providers. at the least you can chase the most up-to-date model for your use-case, and ideally you can also optimize your spend with whoever is giving you the best price-per-token performance with minimal disruption.

> * Should BSD projects have explicit LLM-focused policies? Look at the
> 2nd point in the NetBSD "Commit Guidelines" at
> https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security
> already discussed the issue with alleged CVEs discovered by people with
> LLMs trying to stack their resume with credentials.

Yes, there needs to be a clear, well-reasoned policy. I also think it is fine to adjust policies based on experience gained. But if I were to build a product around FreeBSD today, for example, I would need a policy I could refer to as I do my due diligence. I don't know what the policy should be, but based on what I've seen first hand I think they should *not* be used. Humans are just much better at understanding context and intent. Additionally, the tendency of LLMs to generate word-salad analysis or PRs increases the burden on humans in a non-trivial way. The burn-out in my world is real in those regards.

-pete

--
Pete Wright
pete at nomadlogic.org

From pete at nomadlogic.org Mon Apr 6 20:48:21 2026
From: pete at nomadlogic.org (Pete Wright)
Date: Mon, 6 Apr 2026 17:48:21 -0700
Subject: the BSDs in the AI Age
In-Reply-To:
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <3e6e5d2a-45a4-41d5-b46c-fc363e4e5e51@nomadlogic.org>

On 4/2/26 10:16, Justin Sherrill wrote:
> On Thu, Apr 2, 2026 at 9:15 AM George Rosamond wrote:
> > I want to initiate a thread on the "BSDs and AI today."
> > I'm looking to do a presentation on this in the summer for NYC*BUG.
> Two quantifiable measures, though they will change by the time you are
> doing a summer presentation:
>
> - What models and software run on BSDs? There's all sorts of tooling
> for accessing LLMs, but how much has made it to BSD?

it's not the model per se but driver and library support between freebsd and (most likely) the expensive Nvidia GPU you can't afford that performs the math against the model. i would not recommend trying to do this with any bsd at this point; while it may be possible, you'll be swimming upstream in the best-case scenario.

regarding the tooling and ecosystem in general, there is plenty of opportunity there. most people use python to interact with models - i use freebsd and hugging face as well as other popular libraries to interact with LLM APIs. they work mostly ok but require more effort to maintain than on linux; this is mostly due to developers integrating rust into python, so you'll need to do lots of compiling. then there are things like vector databases that may be well suited to freebsd (these tend to be opensearch-based things, but can also be hosted in postgresql, for example). at the end of the day it's just computers once you remove the hype...and lots of immature code lol.

-pete

--
Pete Wright
pete at nomadlogic.org

From george at ceetonetechnology.com Tue Apr 7 12:29:24 2026
From: george at ceetonetechnology.com (George Rosamond)
Date: Tue, 7 Apr 2026 12:29:24 -0400
Subject: another AI question
Message-ID: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>

There is frequent chatter about the "AI apocalypse"... but the first question I ask is "what does that mean?"

To some it means Nvidia stock collapses.

To others it means "singularity" or AGI to the point of, say, an "I, Robot" world.

I'm curious what others think "AI apocalypse" means. Is it a concern, and if so, what does it mean in practice?

I have my own opinion, but I'll hold it in my pocket until the summer...
g

From nonesuch at longcount.org Tue Apr 7 13:31:31 2026
From: nonesuch at longcount.org (Mark Saad)
Date: Tue, 7 Apr 2026 13:31:31 -0400
Subject: another AI question
In-Reply-To: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
References: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
Message-ID:

> On Apr 7, 2026, at 12:29 PM, George Rosamond wrote:
>
> There is frequent chatter about the "AI apocalypse"... but the first
> question I ask is "what does that mean?"
>
> To some it means Nvidia stock collapses.

This is "AI winter": the market fizzles out. I would also put Iran bombing the AI data centers in their area in the same bucket, only because few will have the funds to rebuild.

> To others it means "singularity" or AGI to the point of, say, an "I, Robot"
> world.

AI apocalypse for me is when people value AI over people. I.e., why hire people to do X when I can make AI do it and "save money"? We are not there yet, but in some markets we are getting closer.

> I'm curious what others think "AI apocalypse" means. Is it a concern,
> and if so, what does it mean in practice?
>
> I have my own opinion, but I'll hold it in my pocket until the summer...
>
> g

To me the biggest question is "then what?". The powerful bank makes all of its decisions with AI and fires the humans who did the job; then what happens? The hospital makes a new billing system that's all automated, and fires all of the staff and contractors who processed bills; then what? You cannot keep removing people and automating their jobs. Eventually enough people will be unemployed that life will become hard, or maybe too hard.

What needs to happen is some sort of change in the "tech elite" attitude: money over everything, fear of missing out, us vs. them, "we are smarter than everyone else", everything needs to be uber, conformity, bland, boring, more!

Personally, I think AI winter is coming before AGI; the cracks are almost here.
---
Mark Saad | nonesuch at longcount.org

From cracauer at cons.org Tue Apr 7 17:50:21 2026
From: cracauer at cons.org (Martin Cracauer)
Date: Tue, 7 Apr 2026 17:50:21 -0400
Subject: another AI question
In-Reply-To:
References: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
Message-ID:

The situation with LLMs on FreeBSD is not totally catastrophic.

The NVidia drivers are currently broken on my 5090, so I cannot compare Vulkan/FreeBSD to Linux/CUDA.

But they work on my 2080ti with Vulkan and run both ollama and llama.cpp, accelerated.

My laptop with "AMD Ryzen 7 PRO 4750U with Radeon Graphics" also runs Vulkan and accelerates ollama (although only by a factor of 3 compared to CPU). This combo does not run llama.cpp.

Now that NVidia drivers are running on at least one of my cards I'll give it another go to run CUDA through the Linuxulator.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer http://www.cons.org/cracauer/

From cmacgreg at gmail.com Tue Apr 7 19:01:29 2026
From: cmacgreg at gmail.com (Craig MacGregor)
Date: Tue, 7 Apr 2026 19:01:29 -0400
Subject: the BSDs in the AI Age
In-Reply-To:
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <9531fb1d-3f91-40fc-b532-6e5946256648@gmail.com>

"Claude, remind me how to configure Thunderbird so I don't top-post... it's been a while since I posted to the NYCBUG list"

>> - How well do LLMs answer questions about BSD specific technology? Or how
>> exact are they when answering questions that could also be for Linux
>> systems? This one might be enraging, as in "check your systemd settings to
>> tune your ZFS pools..." or some such.

This January, cleaning out some old junk from my mom's basement, since I'm funemployed at the moment... I found an old DLT tape drive and some tapes I probably picked up at a thrift store in the early 2000s... got a SCSI PCI card working in my oldest 64-bit machine, and was able to read the tapes.
The only interesting one is a tape backup of an ISP's admin server circa 1996. I extracted everything, tried to run an ancient BSD/386 installer on QEMU, but gave up pretty quickly and just left it unfinished.

To be clear, I was a bona fide AI hater; I still don't think that LLMs will ever achieve "AGI", and other than (and often including) code, most of what they create is "slop"... but I'll save that for the "AI apocalypse" thread.

But I bit the bullet and finally tried Claude at the urging of a college friend, during the blizzard in February. When I pointed it at the tape dumps and restored fs, within like an hour I had a modern NetBSD QEMU guest running, a custom kernel with EXEC_AOUT and COMPAT_NOMID enabled, and I was able to chroot into the vintage system and run most binaries. Could I have figured this out on my own? Sure, but it is unlikely I would have.

I haven't been working on any other BSD-specific projects lately, and I really wasn't working on much at all before I got bit by the Claude bug, but since mid-Feb:

- a Linux kernel driver for an old USB modem/skype adapter, so that I can use an old phone as a headset
- a bluetooth app which presents the linux->usb->phone setup as a bluetooth headset, so that I can pick up the receiver, hear a dial tone, and use it like a landline
- various crazy TTF font projects (trying to get old-timey win2k non-anti-aliased fonts working 100% in modern Linux, and particularly Chrome/Firefox)
- a tongue-in-cheek "National Food Days" notifier for Slack
- various Arduino/RPi hacking, as well as rooting a cheap wireless carplay adapter
- installing/configuring old routers as openwrt devices
- porting the old SGI "fsn" 3d file system viewer (famous from Jurassic Park) from just the binaries (work in progress)
- an asynchronous socket multiplexer using pipes for single-threaded scripts (a project for which I wrote a perl proof-of-concept over a decade ago)

As for Claude confusing Linux/BSD...
it gets confused on Linux with itself all the time, so I doubt it is any different.

>>> * How should the BSD projects themselves be using LLMs? Integration in
>>> the shell (oh, please no...)? Porting of APIs for big tech LLMs?
>>> Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities?

Yeah, even the big companies seem to be backing off on "AI in every button" already; this stuff is great when 90% right is good enough, iterative code development in particular. But I'm not letting it drive *anything* other than dev/testing (including a car, heh).

>>> * How should individual developers and users consider LLMs as tools for
>>> contributing to the BSDs and other open-source projects? I happily used
>>> a big tech LLM to deal with an rc file for some very Linuxey software
>>> wrapped up in systemd clutter.

Do I feel like I wrote all this code? Yes and no. Some of these projects I audit every line, others I read none... the line between "toy" and "real development" is pretty much the same as it always was: do you understand the code, or are you just plugging in what works? So even though I had it write a Linux kernel module, the usb->phone adapter is decidedly a "toy", and I've tested it to that extent... and for old hardware, I think that is enough to release it (not in the kernel itself, of course).

If you work with Claude like a fellow engineer, force it to consider the how and not just the what, review what it's doing, and constantly test, you can get some truly amazing results.
-craig

From rac at conpocococo.org Tue Apr 7 22:13:57 2026
From: rac at conpocococo.org (=?UTF-8?Q?Ra=C3=BAl_Cuza?=)
Date: Tue, 07 Apr 2026 22:13:57 -0400
Subject: the BSDs in the AI Age
In-Reply-To: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
References: <450cc852-6528-4e04-a4dc-25a1a76062da@ceetonetechnology.com>
Message-ID: <563f25b8-c6e7-4090-a6ac-9e90af4af093@app.fastmail.com>

On Thu, Apr 2, 2026, at 09:14, George Rosamond wrote:
> I want to initiate a thread on the "BSDs and AI today."
>
> A few things first.
>
> There are many levels to this discussion, and for the sake of clarity
> and sanity, please no top posting. All replies should be inline.
>
> This is useful:
> https://subspace.kernel.org/etiquette.html#do-not-top-post-when-replying
>
> I'm looking to do a presentation on this in the summer for NYC*BUG.
> There hasn't been anything in our community which provides the
> high-level overview of the impact of AI, covering things from the impact
> on the BSD operating systems to the impact on $job, etc. Hopefully this
> thread can provide some raw materials, and become an outlet for
> individual experiences and more general views.
>
> I initiated a similar fruitful (but private) discussion for another
> open-source project, and think it's high time for us to do the same on
> a public list.
>
> ***
>
> There are a few layers to this discussion. Note these are discussion
> points, not "Yes" or "No" surveys.
>
> * How are LLMs (big tech or otherwise) impacting $job now? Are you using
> Claude Code or similar tools for day to day? Was it required or was it
> your choice? Were there expectations from these tools in terms of
> productivity, etc? This question raises the impact of AWS Bedrock/Kiro...

I have heard of companies talking about these tools as enabling 1000x developers.
The wise ones still see LLM use as experimental, but enough people have produced production-ready code in record time that every developer is expected to join the experiment.

To go on a tangent about people becoming unnecessary... I don't think technical people will become unnecessary. As stated in other answers, coding agents need people to give them the feedback that what they are doing is what is required. Whether it is in Rust or in a series of precise project specifications and test requirements, someone who understands the problem needs to be involved. As a 2016 cartoon put it, "Do you know what the industry term is for a project specification that is comprehensive and precise enough to generate a program? Code." [https://www.commitstrip.com/en/2016/08/25/a-very-comprehensive-and-precise-spec/]

So, to answer your question, the expectation is to use LLMs but also to still get things done. This is stressful, but similar to the stresses of being a SysAdmin to tens of thousands of servers as opposed to tens of servers.

> * Should BSD projects have explicit LLM-focused policies? Look at the
> 2nd point in the NetBSD "Commit Guidelines" at
> https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security
> already discussed the issue with alleged CVEs discovered by people with
> LLMs trying to stack their resume with credentials.

I don't agree that code from LLMs is tainted, in the licensing sense of the word. I think it is completely public domain, but that is my opinion. What comes from an LLM is a generalization of all the different things that were input into the models. It is very rare that the input comes out unaltered and unprocessed. That is why I think it is something new. Now, the legality and morality of how the LLMs were built is another matter, one I will opt to not discuss at this time.

I do agree that each BSD should adjust their policies insofar as LLMs will change the volume and nature of code submitted to them.
Existing policies will probably be challenged by these changes. The policies should get ahead of problems as much as they can.

> * How should the BSD projects themselves be using LLMs? Integration in
> the shell (oh, please no...)? Porting of APIs for big tech LLMs?
> Utilizing LLMs to discover bad code, CVEs, undiscovered vulnerabilities?

I think LLMs can offer hackers relatively inexpensive ways of finding novel bugs, zero-days, and chained vulnerabilities in any code base. BSD projects should do this work themselves and fix the problems as best they can.

> * How should individual developers and users consider LLMs as tools for
> contributing to the BSDs and other open-source projects? I happily used
> a big tech LLM to deal with an rc file for some very Linuxey software
> wrapped up in systemd clutter.

LLMs are great at "talking to a code base". I think LLMs will make it possible for individual developers to hack on BSD and other open-source projects in novel ways. I don't think BSD projects or any developer should become 100% reliant on LLMs, though. As others have stated, these models are dependent on large corporations that have nothing FREE about them. I would love to see local models that run on BSD on a PC get to the point that they can create the sense of coding in the same room as Guido van Rossum or Stephen Bourne. But I have not dived that deep into AI tooling to know how realistic that is.

- r

From cmacgreg at gmail.com Tue Apr 7 23:09:45 2026
From: cmacgreg at gmail.com (Craig MacGregor)
Date: Tue, 7 Apr 2026 23:09:45 -0400
Subject: another AI question
In-Reply-To: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
References: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
Message-ID:

> I'm curious what others think "AI apocalypse" means. Is it a concern,
> and if so, what does it mean in practice?

I think the AI companies are panicking because the coding use case is growing in popularity, and has paying customers...
but they are losing money on both training and inference, and at the same time, local models are catching up quickly. The window of time that anybody has the "best" models is short, and technical users are also the ones that will use the most resources (openclaw and junk like that), run local models, and jump from provider to provider, depending on the price that month... the other uses, like cheating on homework, reflecting the user's neuroses back to them, and generating slop images/video, are unprofitable at best, and fraught with so many social/legal issues... and every other awful use case similar to chatbots is already essentially a commodity, too (I just got an email for the "Wegmans AI Assistant" as I am writing this, haha).

The fact that they can now quantify who will pay for their services, and how much, is why the sky is falling... they're not going to be able to replace every job with AI; at most it's a few hundred dollars per developer, per month. Every non-developer use of ChatGPT and Copilot that I've seen or heard of seems like a waste of time and money (OK, maybe image/audio/video generation isn't useless, but it looks awful and is mostly harmful). I figure OpenAI and Oracle will be hit the hardest (the Ellisons seem to have other interests these days; maybe they see the writing on the wall, too). Nvidia will probably be OK; Anthropic has the most to gain (right now anyway). Microsoft will probably also be hit by openai/oracle fallout, but likely minimized (and they will probably absorb openai).

Regarding BSDs and other free software... I think forking is going to become a lot more common. It's a lot easier to fork than submit a patch when you've had claude hacking away at some project for a few hours, or there is some sort of disagreement. So there's likely to be a glut of garbage free software projects (not that this is really anything new)... AI-generated "bug fixes" are certainly already a well-known issue.
I think the for-profit open source companies are cooked, as they say; it's hard to justify paying for extended features when you can just have Claude extend software to mimic the paid features... their entire business model has to be "AI can't be trusted, pay us instead".

-craig

From edlinuxguru at gmail.com Wed Apr 8 08:26:01 2026
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Wed, 8 Apr 2026 08:26:01 -0400
Subject: another AI question
In-Reply-To:
References: <3153f3cb-b40e-4d6a-b34d-5ea273a10f7f@ceetonetechnology.com>
Message-ID:

I spent a lot of time and heartache dealing with vLLM and ollama. Here are the issues. vLLM: CPU is barely supported, it takes 60 GB of docker layers to build it, and then it runs super slow. Ollama is much better on CPU; community-wise they take months to merge common-sense features. Still better than vLLM, which is a black hole of denial and then never merging anyway. The problem I have with ollama is similar to vLLM: the install itself is about 14GB, as it downloads every C and blas library in existence.

Enter deliverance: https://github.com/edwardcapriolo/deliverance
- Written for CPU
- Written in Java with selected C modules for some heavy lifting
- A binary of < 55MB! (not 20 GB of python, c, and tensor libraries)
- compiles in < 5 minutes (including tests)
- Available on docker hub: https://hub.docker.com/r/ecapriolo/deliveranc

It does nice work with quantized models for qwen, gemma, and llama. I do most of my dev work on a core i5 that is 8 years old.

"Aw, but Ed, I hate da java." Well, the tensor library does some math operations in SIMD in C:
https://github.com/edwardcapriolo/deliverance/blob/main/native/src/main/c/simd/vector_simd.c

And it even has web_dawn support (confession: I don't have a GPU to test on):
https://github.com/edwardcapriolo/deliverance/blob/main/native/src/main/c/gpu/vector_gpu.c

If anyone wants to loan me access to a BSD system, or a BSD system with a GPU, I can run some tests there and we can have some fun.
The dependencies are very light on the C side; my alpine system that i test on looks like this:

doas apk add maven
doas apk add git
doas apk add curl
doas apk add docker-compose
doas apk add openjdk25
doas apk add gpg
doas apk add bash
doas apk add clang20-libclang-20.1.8-r0
doas apk add llvm clang lld

That will build the SIMD C module, which as I mentioned is significantly less effort than the 12 GB of blas libraries ollama installs and the 4GB of tensorflow stuff transformers will install.

Thanks,
Edward

On Tue, Apr 7, 2026 at 5:50 PM Martin Cracauer wrote:
> The situation with LLMs on FreeBSD is not totally catastrophic.
>
> The NVidia drivers are currently broken on my 5090, so I cannot
> compare Vulkan/FreeBSD to Linux/Cuda.
Made them work; you need this in loader.conf:

hw.nvidia.registry.EnableGpuFirmware=17

Performance on bartowski/Qwen_Qwen3.5-27B-GGUF:Q6_K_L in llama.cpp is:
- FreeBSD Vulkan: 49 tokens/second
- Linux CUDA: 56 tokens/second

Will get Linux/Vulkan numbers when I have a chance. But this is encouraging. Windows was also 10% slower than Linux.

Martin

> But they work on my 2080ti with Vulkan and run both ollama and
> llama.cpp, accelerated.
>
> On my laptop with "AMD Ryzen 7 PRO 4750U with Radeon Graphics" also
> runs Vulkan and accelerates ollama (although only by a factor of 3
> compared to CPU). This combo does not run llama.cpp
>
> Now that NVidia drivers are running on at least one of my cards I'll
> give it another go to run CUDA through Linuxulator.

That go failed. No CUDA on the Linuxulator still.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer http://www.cons.org/cracauer/

From rac at conpocococo.org Wed Apr 8 15:18:04 2026
From: rac at conpocococo.org (=?UTF-8?Q?Ra=C3=BAl_Cuza?=)
Date: Wed, 08 Apr 2026 15:18:04 -0400
Subject: =?UTF-8?Q?BSD=E2=80=99s_Cannot_Ignore_LLMs?=
Message-ID:

BSD projects cannot ignore LLMs because people using them are not ignoring BSDs.

https://red.anthropic.com/2026/mythos-preview/ targets OpenBSD for the resale value of finding a vulnerability on "an operating system known primarily for security."

This article is effectively an advertisement for the unreleased next model from an AI company, but that doesn't reduce the seriousness of the problem emerging for ALL maintainers of software, open or otherwise.

The number of people who will be able to find vulnerabilities and build exploits is growing as LLMs progress. BSD projects must adjust to the speed that reacting to these findings will require.

The number of people who can patch vulnerabilities will also grow, if projects can accept their patches.
- r

From cracauer at cons.org Wed Apr 8 16:16:04 2026
From: cracauer at cons.org (Martin Cracauer)
Date: Wed, 8 Apr 2026 16:16:04 -0400
Subject: BSD's Cannot Ignore LLMs
In-Reply-To:
References:
Message-ID:

Raúl Cuza wrote on Wed, Apr 08, 2026 at 03:18:04PM -0400:
>
> The number of people who can patch vulnerabilities will also grow, if projects can accept their patches.

If you can review them with enough throughput.

I think there is an obvious imbalance between the number of independents coming up with holes, exploits and patches and the people who are trusted by the project to judge whether those patches are correct, don't break anything unrelated, and are not secretly malicious.

Martin
--
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
Martin Cracauer http://www.cons.org/cracauer/

From pete at nomadlogic.org Wed Apr 8 16:55:12 2026
From: pete at nomadlogic.org (Pete Wright)
Date: Wed, 8 Apr 2026 13:55:12 -0700
Subject: BSD's Cannot Ignore LLMs
In-Reply-To:
References:
Message-ID:

On 4/8/26 13:16, Martin Cracauer wrote:
> Raúl Cuza wrote on Wed, Apr 08, 2026 at 03:18:04PM -0400:
>>
>> The number of people who can patch vulnerabilities will also grow, if projects can accept their patches.
>
> If you can review them with enough throughput.
>
> I think there is an obvious imbalance between the number of
> independents coming up with holes, exploits and patches and people who
> are trusted by the project to judge whether those patches are correct,
> don't break anything unrelated and are not secretly malicious.

not to mention making sure someone a) understands the intent and real impact of a given patch and b) how it logically fits into the wider system. in my experience claude is so overly verbose that most engineers' eyes glaze over by the 2nd or 3rd patch it submits, and they just blindly accept them.
-p

--
Pete Wright
pete at nomadlogic.org

From edlinuxguru at gmail.com Wed Apr 8 17:43:37 2026
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Wed, 8 Apr 2026 17:43:37 -0400
Subject: BSD's Cannot Ignore LLMs
In-Reply-To:
References:
Message-ID:

On Wed, Apr 8, 2026 at 4:16 PM Martin Cracauer wrote:
> Raúl Cuza wrote on Wed, Apr 08, 2026 at 03:18:04PM -0400:
> >
> > The number of people who can patch vulnerabilities will also grow, if
> projects can accept their patches.
>
> If you can review them with enough throughput.
>
> I think there is an obvious imbalance between the number of
> independents coming up with holes, exploits and patches and people who
> are trusted by the project to judge whether those patches are correct,
> don't break anything unrelated and are not secretly malicious.
>
> Martin
> --
> %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> Martin Cracauer http://www.cons.org/cracauer/

I am guessing this is a reaction to the proclamation by the company that leaked all their source code through typescript that they now have a tool that finds all the bugs.

The world already had insurmountable tech debt: https://arxiv.org/pdf/1908.00827 The AI is making it grow so fast that they have to shift the conversation. So the last market blitz (we can write cobol) has now moved to (we can find all the bugs in ffmpeg).

https://thenewstack.io/ffmpeg-to-google-fund-us-or-stop-sending-bugs/

Great, all three volunteer FFmpeg committers are tired of the bug reports. It is amazing how companies with, say, 13-200 billion dollars can tell you how their GPUs find all the bugs. The problem is they sell all their services at a loss. On Reddit folks are on fire every day about how the AI providers are capping them :)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From edlinuxguru at gmail.com  Wed Apr  8 18:08:01 2026
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Wed, 8 Apr 2026 18:08:01 -0400
Subject: Re: BSD’s Cannot Ignore LLMs
In-Reply-To: 
References: 
Message-ID: 

On Wed, Apr 8, 2026, 6:05 PM Raúl Cuza wrote:

> BSD projects cannot ignore LLMs because people using them are not
> ignoring BSDs.
>
> https://red.anthropic.com/2026/mythos-preview/ targets OpenBSD for the
> resale value of finding a vulnerability on “an operating system known
> primarily for security.”
>
> This article is effectively an advertisement for the unreleased next
> model from an AI company, but that doesn’t reduce the seriousness of the
> problem emerging for ALL maintainers of software, open or otherwise.
>
> The number of people who will be able to find vulnerabilities and build
> exploits is growing as LLMs progress. BSD projects must adjust to the
> speed that reacting to these findings will require.
>
> The number of people who can patch vulnerabilities will also grow, if
> projects can accept their patches.
>
> - r

Yet I have sent 4 patches to vllm; they have 5k open bugs now. And some
went right to Anthropic about their MCP server... big surprise, not
merged. They joined the Linux Foundation and 9 months later someone
tried to close my stale PR.

From edlinuxguru at gmail.com  Wed Apr  8 18:12:58 2026
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Wed, 8 Apr 2026 18:12:58 -0400
Subject: Re: BSD’s Cannot Ignore LLMs
In-Reply-To: 
References: 
Message-ID: 

On Wednesday, April 8, 2026, Edward Capriolo wrote:

> On Wed, Apr 8, 2026, 6:05 PM Raúl Cuza wrote:
>
>> BSD projects cannot ignore LLMs because people using them are not
>> ignoring BSDs.
>>
>> https://red.anthropic.com/2026/mythos-preview/ targets OpenBSD for the
>> resale value of finding a vulnerability on “an operating system known
>> primarily for security.”
>>
>> This article is effectively an advertisement for the unreleased next
>> model from an AI company, but that doesn’t reduce the seriousness of the
>> problem emerging for ALL maintainers of software, open or otherwise.
>>
>> The number of people who will be able to find vulnerabilities and build
>> exploits is growing as LLMs progress. BSD projects must adjust to the
>> speed that reacting to these findings will require.
>>
>> The number of people who can patch vulnerabilities will also grow, if
>> projects can accept their patches.
>>
>> - r
>
> Yet I have sent 4 patches to vllm; they have 5k open bugs now. And some
> went right to Anthropic about their MCP server... big surprise, not
> merged. They joined the Linux Foundation and 9 months later someone
> tried to close my stale PR.

Here is one for the bug-killing "experts":

String -> Object -> String

9 months before a review... the guy asks "what does this do? I'm gonna
close it."

Why don't they try it on their own repos... lol

-- 
Sorry this was sent from mobile. Will do less grammar and spell check
than usual.

From edlinuxguru at gmail.com  Wed Apr  8 18:13:54 2026
From: edlinuxguru at gmail.com (Edward Capriolo)
Date: Wed, 8 Apr 2026 18:13:54 -0400
Subject: Re: BSD’s Cannot Ignore LLMs
In-Reply-To: 
References: 
Message-ID: 

On Wednesday, April 8, 2026, Edward Capriolo wrote:

> On Wednesday, April 8, 2026, Edward Capriolo wrote:
>
>> On Wed, Apr 8, 2026, 6:05 PM Raúl Cuza wrote:
>>
>>> BSD projects cannot ignore LLMs because people using them are not
>>> ignoring BSDs.
>>>
>>> https://red.anthropic.com/2026/mythos-preview/ targets OpenBSD for the
>>> resale value of finding a vulnerability on “an operating system known
>>> primarily for security.”
>>>
>>> This article is effectively an advertisement for the unreleased next
>>> model from an AI company, but that doesn’t reduce the seriousness of the
>>> problem emerging for ALL maintainers of software, open or otherwise.
>>>
>>> The number of people who will be able to find vulnerabilities and build
>>> exploits is growing as LLMs progress. BSD projects must adjust to the
>>> speed that reacting to these findings will require.
>>>
>>> The number of people who can patch vulnerabilities will also grow, if
>>> projects can accept their patches.
>>>
>>> - r
>>
>> Yet I have sent 4 patches to vllm; they have 5k open bugs now. And some
>> went right to Anthropic about their MCP server... big surprise, not
>> merged. They joined the Linux Foundation and 9 months later someone
>> tried to close my stale PR.
>
> Here is one for the bug-killing "experts":
>
> String -> Object -> String
>
> 9 months before a review... the guy asks "what does this do? I'm gonna
> close it."
>
> Why don't they try it on their own repos... lol

https://github.com/modelcontextprotocol/java-sdk/issues/156

-- 
Sorry this was sent from mobile. Will do less grammar and spell check
than usual.

From njt at ayvali.org  Tue Apr 14 23:28:15 2026
From: njt at ayvali.org (N.J. Thomas)
Date: Tue, 14 Apr 2026 20:28:15 -0700
Subject: 20 Years on AWS and Never Not My Job
Message-ID: 

On IRC we were discussing Colin Percival's recent blog post reminiscing
on 20 years of work porting BSD to AWS. Fascinating read:

https://www.daemonology.net/blog/2026-04-11-20-years-on-AWS-and-never-not-my-job.html

Thomas