[talk] Suggest meeting topic: role of BSD in response to ransomware

Thomas Levine _ at thomaslevine.com
Tue Jul 11 16:03:49 EDT 2017


On Tue, Jul 11, 2017, at 01:31 PM, James E Keenan wrote:
> Is there a role for the BSDs in response to massive ransomware attacks?

I suppose it should be a comparison of how a neglected network of
FreeBSD and OpenBSD computers would fare against a neglected network of
Cisco and Windows computers when attacked by software designed
specifically for each respective network. I would love to see
that. I imagine that the *BSD network would do better, but I don't
really know why. For example, I don't have a good sense of how release
cycles, documentation, good defaults, general software correctness, user
permissions, and freedom of source code contribute to resistance against
contemporary software attacks.

But, while I do think pursuing this inquiry would be interesting,
I must note that I find your motives to be disingenuous.

On Tue, Jul 11, 2017, at 01:31 PM, James E Keenan wrote:
> In the last few months ransomware attacks such as WannaCry 
> (https://en.wikipedia.org/wiki/WannaCry_ransomware_attack) have had a 
> devastating effect on large organizations.  Organizations affected 
> include one of the largest law firms in the country and one of the 
> world's largest advertising agency networks.  Such organizations are, 
> typically, "Windows shops."
> 
> Suppose that you are a sysadmin or other, non-executive-level techie in 
> such an organization.  You've heard about FreeBSD and OpenBSD and you 
> wonder, "Would using these OSes have helped us either resist a 
> ransomware attack?  Could they help us recover better from such an
> attack?"

I suspect that the use of OpenBSD could help in resisting some software
attacks, but I believe that such resistance has no place in business.
OpenBSD is more correct and simple than Windows, and this is problematic
for business; a "techie" in a large organization is better off using
complex software that breaks a lot. This way, the techie can get
stressed out enough to make the bosses happy, can plausibly blame a
vendor when things go wrong, can feel challenged, and can more easily
convince co-workers, investors, and customers that the company is doing
something magic, difficult, and novel.

What's more, even if these various requirements were to be removed, I
think there would still be no benefit to using correct software in the
context of a large organization because the individual worker is likely
to have been fired or at least moved to a different project before the
value of the correctness would be realized. I find the benefits of
correct software to come mostly from avoiding bugs rather than from
speeding up the initial writing of the software, so correct software has
no place in large organizations. And, on the other hand, the
consequences of incorrect software are not so bad in the context of
business, especially in the case of ransomware; the techie probably
won't lose any of his or her own data during the incident, so the worst
that might reasonably happen is that he or she will get fired.

Resisting such an attack would probably jeopardize the techie's career,
with no potential upside. I'm fine with people doing this if they are
aware of the risks, so the problem is only that they often are not. I
thus feel that it is important, ethically, to make the techie aware of
the danger of resisting attacks against his or her company. In case you
do not share my concern for the techie's career, perhaps you at least
share my interest in not losing valuable data: If someone is interested
in improving data security in society, I think that implementing it
within a large organization is a bad approach; resisting attacks is
probably against the interests of anyone in the organization, so there
are probably other places where one can more usefully apply his or her
knowledge.

And in case you do not share either ethical concern, here is a more
practical corollary: I think it would be far less interesting to
discuss how such systems could be implemented in large organizations in
particular, as such a discussion would have to focus on how to
write good software while maintaining the illusion of stress,
complexity, and normalcy that comes with using popular bad software; it
would be more interesting to consider situations where good software is
in fact beneficial and thus does not have to be hidden.



When I consider all of the different ideas that we have presented on
this topic, it sounds more like a doctoral thesis than a NYCBUG
talk.
