The BSDs in the AI Age

Pete Wright pete at nomadlogic.org
Mon Apr 6 20:41:17 EDT 2026



On 4/2/26 06:14, George Rosamond wrote:
> I want to initiate a thread on the "BSDs and AI today."
<snip>
> 
> There are a few layers to this discussion. Note these are discussion
> points, not "Yes" or "No" surveys.
> 
> * How are LLMs (big tech or otherwise) impacting $job now? Are you using
> Claude Code or similar tools day to day? Was it required or was it
> your choice? Were there expectations from these tools in terms of
> productivity, etc.? This question raises the impact of AWS Bedrock/Kiro...
> 

I work for a small startup that focuses on the sales side of things. 
When I joined years ago we leveraged machine learning and brute force to 
pattern-match trends in our data in an effort to surface useful insights 
for our customers.  Since then we've extended that to near-real-time 
analysis using mostly the same mechanisms.  We've branded this as "AI" 
in the past, for marketing purposes, and heck...we are selling to sales 
people so...

In the past several years our company has put quite a bit of effort into 
working with LLMs.  Honestly, if you are a small shop looking for 
funding or hoping to get acquired, you *need* to work with them at some 
level just to get your foot in the door.  I would say parts of our 
company have fully bought in, and others are still really skeptical (I'm 
a sysadmin, and since I like to know what my computers are doing, I'm in 
the skeptical camp).

I've got a few observations based on this experience:

1. A surprising number of support and engineering staff are really happy 
to offload critical thinking to LLMs.  This makes me sad, but I don't 
think everyone wants to be Detective Columbo like I do.  Long term this 
will have negative consequences for individual career growth, not to 
mention harm to companies.

2. Less-technical people love LLMs because it looks like they are 
doing lots of work.  They fall for the lines-of-code == productivity trap.
--> *But* they are also able to create pretty functional mockups of 
applications without any ceremony/project planning/etc.
----> This should be an eye-opener; it reminds me of Alan Kay trying to 
democratize computing.  I just wish LLM implementers had the same 
discipline and wisdom as Alan.

3. We did quite a bit of work running models internally, using DeepSeek 
and things like AWS Redshift.  Unless you are building an AI Goldrush 
company, I really don't think it's worth it in terms of resource 
utilization.  You are probably better off creating an abstraction layer 
internally where you can plug-and-play LLM providers.  At the least you 
can chase the most up-to-date model for your use case, and ideally you 
can also optimize your spend with whoever is giving you the best 
price-per-token performance, with minimal disruption.
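A rough sketch of what that plug-and-play layer might look like (the
interface, provider names, and registry here are hypothetical stand-ins,
not our actual implementation or any vendor's real API):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so callers never touch a vendor SDK directly."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return a completion for the given prompt."""


class EchoProvider(LLMProvider):
    """Stand-in backend for testing; a real one would wrap a vendor API."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


class ReverseProvider(LLMProvider):
    """Another toy backend, just to show two implementations coexisting."""

    def complete(self, prompt: str) -> str:
        return prompt[::-1]


# Registry keyed by name: switching providers becomes a config change,
# not a code change, which is what lets you chase price-per-token.
PROVIDERS: dict[str, type[LLMProvider]] = {
    "echo": EchoProvider,
    "reverse": ReverseProvider,
}


def get_provider(name: str) -> LLMProvider:
    """Look up and instantiate the configured backend."""
    return PROVIDERS[name]()


if __name__ == "__main__":
    llm = get_provider("echo")
    print(llm.complete("hello"))
```

Application code only ever sees `LLMProvider`, so swapping the backing
model is a one-line config edit rather than a refactor.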



> * Should BSD projects have explicit LLM-focused policies? Look at the
> 2nd point in the NetBSD "Commit Guidelines" at
> https://www.netbsd.org/developers/commit-guidelines.html. OSS-Security
> already discussed the issue with alleged CVEs discovered by people with
> LLMs trying to stack their resume with credentials.

Yes, there needs to be a clear, well-reasoned policy.  I also think it is 
fine to adjust policies based on experience gained.  But if I were to 
build a product around FreeBSD today, for example, I would need a policy 
I could refer to as I do my due diligence.

I don't know what the policy should be, but based on what I've seen 
first hand I think they should *not* be used.  Humans are just much 
better at understanding context and intent.

Additionally, the tendency of LLMs to generate word-salad analyses or PRs 
increases the burden on humans in a non-trivial way.  The burnout in my 
world is real in that regard.

-pete

-- 
Pete Wright
pete at nomadlogic.org


