Delivery-Date: Sun, 28 Feb 2016 23:30:39 -0500
Return-Path: <tor-talk-bounces@lists.torproject.org>
X-Spam-Checker-Version: SpamAssassin 3.3.1 (2010-03-16) on moria.seul.org
X-Spam-Level: 
X-Spam-Status: No, score=-4.2 required=5.0 tests=BAYES_00,RCVD_IN_DNSWL_MED,
	T_RP_MATCHES_RCVD autolearn=ham version=3.3.1
X-Original-To: archiver@seul.org
Delivered-To: archiver@seul.org
Received: from eugeni.torproject.org (eugeni.torproject.org [38.229.72.13])
	(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))
	(No client certificate requested)
	by khazad-dum.seul.org (Postfix) with ESMTPS id BE7061E06F1;
	Sun, 28 Feb 2016 23:30:16 -0500 (EST)
Received: from eugeni.torproject.org (localhost [127.0.0.1])
	by eugeni.torproject.org (Postfix) with ESMTP id 9E43F3A024;
	Mon, 29 Feb 2016 04:30:09 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by eugeni.torproject.org (Postfix) with ESMTP id D042139FFE
 for <tor-talk@lists.torproject.org>; Mon, 29 Feb 2016 04:30:05 +0000 (UTC)
X-Virus-Scanned: Debian amavisd-new at 
Received: from eugeni.torproject.org ([127.0.0.1])
 by localhost (eugeni.torproject.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 3GRvWvaSn8QP for <tor-talk@lists.torproject.org>;
 Mon, 29 Feb 2016 04:30:05 +0000 (UTC)
Received: from ccs.nrl.navy.mil (mx0.ccs.nrl.navy.mil
 [IPv6:2001:480:20:118:118::211])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate)
 by eugeni.torproject.org (Postfix) with ESMTPS id AD23739FF9
 for <tor-talk@lists.torproject.org>; Mon, 29 Feb 2016 04:30:05 +0000 (UTC)
Received: from vpn212046.nrl.navy.mil (vpn212046.nrl.navy.mil [132.250.212.46])
 by ccs.nrl.navy.mil (8.14.4/8.14.4) with ESMTP id u1T4TwSr025829
 (version=TLSv1/SSLv3 cipher=AES256-GCM-SHA384 bits=256 verify=NOT);
 Sun, 28 Feb 2016 23:30:00 -0500
Date: Sun, 28 Feb 2016 23:29:58 -0500
From: Paul Syverson <paul.syverson@nrl.navy.mil>
To: tor-talk@lists.torproject.org
Message-ID: <20160229042958.GA49953@vpn212046.nrl.navy.mil>
References: <20160116212250.GA14827@ix-293.local> <56D36C49.7090605@witmond.nl>
MIME-Version: 1.0
Content-Disposition: inline
In-Reply-To: <56D36C49.7090605@witmond.nl>
User-Agent: Mutt/1.5.23 (2014-03-12)
X-CCS-MailScanner: No viruses found.
X-CCS-MailScanner-Info: See: http://www.nrl.navy.mil/ccs/support/email
Cc: "rejo@zenger.nl >> Rejo Zenger" <rejo@zenger.nl>
Subject: Re: [tor-talk] trusting .onion services
X-BeenThere: tor-talk@lists.torproject.org
X-Mailman-Version: 2.1.15
Precedence: list
Reply-To: tor-talk@lists.torproject.org
List-Id: "all discussion about theory, design,
 and development of Onion Routing" <tor-talk.lists.torproject.org>
List-Unsubscribe: <https://lists.torproject.org/cgi-bin/mailman/options/tor-talk>, 
 <mailto:tor-talk-request@lists.torproject.org?subject=unsubscribe>
List-Archive: <http://lists.torproject.org/pipermail/tor-talk/>
List-Post: <mailto:tor-talk@lists.torproject.org>
List-Help: <mailto:tor-talk-request@lists.torproject.org?subject=help>
List-Subscribe: <https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk>, 
 <mailto:tor-talk-request@lists.torproject.org?subject=subscribe>
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Errors-To: tor-talk-bounces@lists.torproject.org
Sender: "tor-talk" <tor-talk-bounces@lists.torproject.org>

On Sun, Feb 28, 2016 at 10:53:13PM +0100, Guido Witmond wrote:
> On 01/16/16 22:22, Rejo Zenger wrote:
> > Hi!
> > 
> > I'm wondering... 
> > 
> >  - How can a user reliably determine that some .onion address actually
> >    belongs to the intended owner?
> 
> Hi Rejo,
> 
> I think that in general, .onion addresses are unauthenticated. That is,
> there is no way of determining who an address belongs to.
> 
> All we know of an .onion address is that it's tied to whoever holds the
> private key. And given the risk of disclosure of the private key, all
> bets are off. This is also true of GPG, where an adversary can create a
> clone key bearing my name but their key. Erinn Clark of the Tor Project
> has been a victim of such an attack.

But the whole point of GPG is that there is a web of trust. Yes, anyone
can sign something and say that they're you. But only people who have
met you face-to-face and confirmed your key, e.g., using the
Zimmermann-Sassaman protocol, should be signing your key. Someone can
then trust that your key is bound to you to the extent that they trust
the keys of the people who vouch for you.
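
That vouching logic can be sketched as a toy (this is not GnuPG's
actual trust model, and the names and signature sets below are
hypothetical; it only illustrates "trust a key if someone you already
trust has signed it"):

```python
# Toy web-of-trust check: accept a key as bound to its owner if at
# least one of its signers is a key we have already verified ourselves.
signatures = {
    "carol": {"alice"},   # carol's key is signed by alice
    "dave": {"mallory"},  # dave's key is signed only by mallory
}
trusted = {"alice", "bob"}  # keys we confirmed face-to-face

def is_vouched_for(key: str) -> bool:
    """Return True if at least one signer of `key` is already trusted."""
    return bool(signatures.get(key, set()) & trusted)

print(is_vouched_for("carol"))  # True
print(is_vouched_for("dave"))   # False
```

Real PGP trust is richer (ownertrust levels, required number of
signatures, path lengths), but the transitive idea is the same.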

This is why we suggest the approach in the following paper as an
at-the-moment solution:
https://github.com/saint/w2sp-2015/blob/master/SP_SPSI-2015-09-0170.R1_Syverson.pdf

(Note the final edited version, coming out soon in IEEE Security & Privacy,
is a little different, but the content is basically the same.)

Besides the PGP approach we present, we also give an X.509-style
solution that requires some policy changes but will work with the
usual browser certificate semantics for TLS.

For another example of the PGP signature approach not mentioned in
the paper see
https://blog.patternsinthevoid.net/isis.txt

aloha,
Paul

> 
> In my pet project I'm using Hidden Services as a means for people to
> connect to each other. One person opens an HS and sends the onion
> address to the other in an encrypted message. The other connects to the
> service. Then BOTH people authenticate to each other with their already
> exchanged keys before their software lets the data flow commence.
> Knowledge of the onion address or even having a copy of the private key
> won't get the connection started.
> 
> In short, I built an authentication layer on top of hidden services.
> 
> That authentication layer uses PKI certificates and stuff to distribute
> public keys to each other. And ultimately, the same issue reappears:
> Whom am I talking to? And with the risk of disclosure of the private key,
> all bets are off.
> 
> I believe this to be a fundamental property of cryptography. The eternal
> uncertainty of the identity of the other party. The more anonymous the
> key exchange the higher the uncertainty. In other words: the higher the
> need for secrecy and anonymity, the greater the uncertainty.
> 
> The answer you are looking for is to determine how much of a risk there
> is with plain onion addresses, or what extra authentication and
> repudiation you need to build on top. And how much deanonymisation you
> are willing to accept.
> 
> I believe it's ultimately a design trade off.
> 
> 
> With regards, Guido Witmond.
> 
> 
> >  - How is the provider of .onion service supposed to deal with a lost or
> >    compromised private key, especially from the point of view from the
> >    user of this service? How does the user know a .onion address has
> >    had its key revoked?
> > 
> > Let me explain...
> > 
> > 
> > One of the advantages of using a .onion address to identify the service
> > you are connecting to, is that you don't have to rely on a third party
> > as you would do in a system with Certificate Authorities. By relying on
> > the certificate signed by a trusted CA, the user can be sure the site he
> > is connecting to actually belongs to a particular entity. With a
> > .onion address that is no longer needed, since those addresses are
> > self-authenticating. Sounds good.
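
For concreteness, "self-authenticating" here means the hostname itself
commits to the service's key: a (v2) onion address is the lowercased
base32 encoding of the first 80 bits of the SHA-1 digest of the
service's DER-encoded RSA public key. A minimal sketch (the key bytes
below are a stand-in for illustration, not a real service key):

```python
import base64
import hashlib

def onion_address(der_public_key: bytes) -> str:
    """Derive a v2 onion hostname from a DER-encoded RSA public key.

    The address is the lowercased base32 encoding of the first 80 bits
    (10 bytes) of the SHA-1 digest of the key, plus the ".onion" suffix.
    """
    digest = hashlib.sha1(der_public_key).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Stand-in bytes; a real service would pass the DER encoding of its
# 1024-bit RSA public key.
print(onion_address(b"not-a-real-key"))  # 16 base32 chars + ".onion"
```

So anyone who knows the address can check that the key a service
presents actually hashes to that address; what the address does not
tell you is who holds the key.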
> > 
> > Now, the problem I have is that the user doesn't have a reliable way to
> > determine whether a given address actually belongs to the site he wants
> > to visit. As far as I can tell, Facebook has two solutions to this: it
> > mentions the correct address in presentations, blogs and press coverage
> > wherever it can and its TLS-certificate mentions both the .onion address
> > as well as its regular address (as Subject Alt Names).
> > 
> > So, the first solution can't be done by everyone; not everyone has that
> > much coverage. The second solution is nice, but falls back to the CA
> > system. Ironic, isn't it? [1]
> > 
> > Or, to rephrase it: how can a user reliably determine the .onion address
> > for a given entity without relying on the flawed CA system and without
> > the entity having a lot of visibility?
> > 
> > 
> > Given the fact that the hostname is a derivative of the private key used
> > to encrypt the connection to that hostname, there is a bigger issue when
> > the private key is stolen or lost (or any other case where the key needs
> > to be replaced.)
> > 
> > When the key is lost (yes, shouldn't happen, but shit happens), the
> > hostname changes. There is no reliable way for a user to learn what the
> > new key, and therefore the hostname, is.
> > 
> > When the key is stolen (or compromised in any other way), the key should
> > be replaced. This may be even more problematic than the case where the
> > key is lost, which would render the site unreachable. When the key is
> > stolen, the key may be used by a perpetrator. The problem: there is no
> > way to tell the world that a particular key is compromised. [2] The
> > administrator is able to make the site accessible via a new key and new
> > hostname, but the attacker may keep running a modified copy of the site
> > using the stolen key.
> > 
> > 
> > 
> > [1] Ironic, as Roger's blog on this topic makes clear there are all
> > kinds of reasons why we do not want to reinforce this system, partly
> > because it is flawed, partly because it costs money, partly because it
> > undoes the anonymity that some hidden sites need, partly because...
> > 
> > https://blog.torproject.org/blog/facebook-hidden-services-and-https-certs
> > 
> > [2] OK. Not entirely true, maybe. It may be possible to include those
> > keys in some listing of the directory authorities, marking them as bad
> > nodes. This is a manual process.
> > 
> > 
> > 
> > 
> 
> 




-- 
tor-talk mailing list - tor-talk@lists.torproject.org
To unsubscribe or change other settings go to
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-talk

