46

I am working on a web application that lets users communicate via private messages; messaging is just one part of the whole system. The main focus of my development process is protecting my users' privacy, which I believe should be a core responsibility of every software developer, especially given the harmful data leaks of recent years.

I've read a lot of articles about end-to-end encryption and how well-known messaging apps like WhatsApp use it to protect their users' privacy. My goal is for my server and my application to have zero knowledge of the contents of the messages being transmitted.

For applications installed on the user's device, I know in theory how to store and protect the private keys. But in my particular case there will be no executable installed on the user's device: users access the web application through their browsers over a secured HTTPS connection. This gives me quite a headache, because I cannot see how to store a private key permanently and securely in this client-side environment. I also thought about storing the keys securely on the server, but to my mind that is a contradiction: I have to assume a potential adversary could take control of my server, database, and the whole machine while everything is running and the "key vault" is unsealed.

A couple of days ago I opened a question on StackOverflow, "Protecting application secrets like encryption keys and other sensitive data", if you want to read more about my path to desperation. After a long discussion I was referred to this Cryptography site on StackExchange.

So, without focusing on any particular programming language, I want to ask: "Where and how should private keys be stored in web applications for private messaging with web browsers?" Thanks in advance!

  • 3
    If you're bound to the browser, there's not really any good and understandable solution for the user. You could use client certificates, but I doubt you can use them to encrypt / decrypt stuff outside of the TLS handshake (also the UI for client certificates kinda sucks). The best option you possibly have (in terms of security) would be a browser extension (but this would bring other problems). The problem is also: how do you get the encryption code to the user? JS? Then you can just keep the keys as well, as you could in theory just dump the user's secret with malicious JS. – SEJPM May 23 '16 at 19:22
  • BTW: Welcome to Cryptography.SE :) And maybe have a look at a somewhat related project: Mailvelope (PGP in the browser, with similar problems). And of course: +1 for actually asking "I have a browser and want to do E2E, but how could I?", which looks like it has large re-usability for other people. And may I also ask: is this supposed to be real-time only, or also "async" communication? – SEJPM May 23 '16 at 19:23
  • @SEJPM thank you for welcoming me :) I am going crazy thinking about how to solve this problem without creating more system requirements than a browser and HTTPS support. I thought about using browser extensions as well, but this would not be an acceptable solution for most of my users. As you already mentioned, client-side JavaScript code must be certified as unaltered; it is so annoying that achieving good security with standard tools like browsers to protect users' privacy is this hard. There should already exist solutions [..] Maybe there is something I have overlooked...? – May 23 '16 at 19:58
  • @SEJPM Thank you for the reference to Mailvelope, I'll have a look at the project. No, this is not meant to be a real-time chat; users should be able to communicate with the classic "you have a message in your inbox" procedure. But user messages should never, ever get into the wrong hands, even if the adversary gains control over the whole system. And thank you: I wrote my question with other developers in mind, because finding good entry points into this topic is more than difficult, and maybe we can contribute to other developers' projects with the same problems. – May 23 '16 at 20:08
  • 3
    I think the problem you have to solve before even thinking about encryption is ensuring the code that encrypts is delivered safely. This means you have to use server-provided code (as you want to minimize user burden), you have to enforce code validation, you have to ensure the code can't obtain secrets if your server gets breached, and you have to ensure the code can't be tampered with. – SEJPM May 23 '16 at 21:08
  • Store the private keys on a memory chip and keep the chip in your pocket until you need to use it ??? – William Hird May 23 '16 at 21:14
  • @SEJPM Yes, you are absolutely right: if an adversary is able to alter the code that processes the private key while it is in cleartext, all security is lost. I had some ideas during the last hour; I am going to structure my thoughts and provide a conceptual data model as an answer to this question, which I would love to discuss with you and other community members. – May 23 '16 at 21:16
  • @WilliamHird Thank you for your answer. This would be a good solution, but I think it is not feasible for application users and has a huge impact on usability. And the problem mentioned by SEJPM, of malicious code leaking the key while it is read from memory, is still present. But thank you, I had not thought about "external storage devices" until you mentioned them. – May 23 '16 at 21:22
  • @WilliamHird, who would use the chip (the dev / the admin / the user)? How is he protected against physical data exfiltration? What problems would this solve? Which would remain? – SEJPM May 23 '16 at 21:37
  • @SEJPM Everyone in the loop would have their own chip. Think of it this way: the perfect way to securely store a key that absolutely has to remain secure is to memorize the key yourself. Of course no human can be expected to memorize long bit strings, so the next best thing is a "brain extension": a personal device that stores the information and that you always keep with you (of course you have to take steps to make sure it can't be stolen). If there is a better solution to this problem, step right up and let's hear it :-) – William Hird May 23 '16 at 21:53
  • @WilliamHird I think what you're talking about is "smart cards". This solution scales well in controlled environments, but it requires users to invest money in cards and readers (or tokens), which makes it impractical for wider audiences. Also, there's no smartcard-access API for browsers AFAICT, which means the user would have to install a dedicated application. I'll admit, however, that this solution can provide very high security. – SEJPM May 23 '16 at 22:39
  • 2
    The safest way to store keys is not to store them at all; use a different key per message, derived from a different key per conversation, for maximum privacy. – dandavis May 25 '16 at 02:52
  • @dandavis I think your idea is very interesting! How would you achieve this behavior? – May 25 '16 at 17:09
  • To start a conversation, Alice uses ECC to send Bob a public key. Bob uses the pk to send Alice an AES key, which he remembers. Now A+B have a shared AES key that no one else knows. SJCL supports both ECC and AES; https://github.com/bitwiseshiftleft/sjcl/wiki/Getting-Started, adding --with-ecc to ./configure as the only change, then taking core.js. There are examples of both generating ECC keys and using AES on the SJCL GitHub wiki. It's easier than you might think. – dandavis May 25 '16 at 20:58
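
A minimal sketch of the exchange dandavis describes, assuming SJCL was built with --with-ecc and following the key-encapsulation (KEM) example from the SJCL wiki; variable names are illustrative:

    // Alice generates an ECC (ElGamal) keypair and sends pair.pub to Bob.
    var pair = sjcl.ecc.elGamal.generateKeys(256);

    // Bob performs key encapsulation against Alice's public key:
    // kem.key is a fresh shared AES key, kem.tag is its encapsulated form.
    var kem = pair.pub.kem();
    // Bob keeps kem.key and sends kem.tag back to Alice.

    // Alice recovers the same AES key from the tag with her secret key.
    var sharedKey = pair.sec.unkem(kem.tag);

    // Either party can now exchange AES-encrypted messages; per-message
    // keys could be derived from sharedKey for the per-message scheme
    // dandavis mentions above.
    var ct = sjcl.encrypt(sharedKey, "hello Bob");
    var pt = sjcl.decrypt(sharedKey, ct);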

3 Answers

40

You may want to consider using the Web Cryptography API for client-side cryptography in the web browser. You can create a keypair with the WebCrypto API, and store the CryptoKey object containing the user's private key, with its .extractable property set to false, in IndexedDB storage. This way, the private key can only be used for decrypting and/or signing messages within the browser, but it cannot be read, even by client-side scripting in the browser.
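
For illustration, a minimal sketch of that approach; the database and store names are hypothetical:

    // Generate a keypair whose private key can never be exported by script,
    // then persist the CryptoKey objects in IndexedDB.
    async function createAndStoreKeyPair() {
      const keyPair = await crypto.subtle.generateKey(
        {
          name: "RSA-OAEP",
          modulusLength: 2048,
          publicExponent: new Uint8Array([1, 0, 1]),
          hash: "SHA-256"
        },
        false,                  // extractable = false for the private key
        ["encrypt", "decrypt"]
      );

      const db = await new Promise(function (resolve, reject) {
        const req = indexedDB.open("keystore", 1);
        req.onupgradeneeded = function () { req.result.createObjectStore("keys"); };
        req.onsuccess = function () { resolve(req.result); };
        req.onerror = function () { reject(req.error); };
      });

      // CryptoKey objects are structured-cloneable, so IndexedDB can store
      // them directly; the raw key bytes are never visible to JavaScript.
      db.transaction("keys", "readwrite")
        .objectStore("keys")
        .put(keyPair, "messaging-keypair");
      return keyPair;
    }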

For more info, see:

Squeamish Ossifrage
mti2935
  • 1
    Another link for the list: https://github.com/fission-codes/keystore-idb "In-browser key management with IndexedDB and the Web Crypto API." – gotjosh Oct 27 '22 at 18:02
4

Referring to the question I asked, "Where and how to store private keys in web applications for private messaging with web browsers?": I wanted to find a bulletproof mechanism to permanently store and protect private keys in the web browser for end-to-end encrypted messaging, requiring nothing more than:

  • a web browser
  • HTTPS support
  • JavaScript
  • permanent storage within the browser

I sadly had to accept that there is no perfectly secure answer to my question.

I want to justify this statement by summarizing the discussion in the comments section and by referring to some external sources that underline the weak points of the target environment.

  1. Storing the private key in a cookie - This approach is very straightforward, but it involves several security issues. First of all, a cookie is sent with every HTTP request whose "URL requested is within the same domain and path defined in the cookie" (StackOverflow), which means the private key is transmitted to the server with every HTTP request made by the user who owns it. The client-server communication is protected by HTTPS, but if the server is compromised, the private key is leaked. The next problem with storing secrets in cookies is that if the cookie expires or is removed, through browser settings or by the user himself, the private key is lost forever.

What about storing an encrypted private key in the cookie, and a backup of this encrypted private key on a secured client-side storage device?

Well, that is an option. Let's assume the user encrypts the private key with AES, using a strong password kept only in the user's mind. Then, even if the cookie is transmitted over a secured line to a server running a compromised application, the adversary cannot accomplish anything with the encrypted key; and if the cookie expires or is removed from the browser, the user can restore it from the external backup. At this point you might think we have solved the problem. Unfortunately, we have not. Before I explain why, here is a vivid example I found on a website during my research (I have unfortunately lost the URI) of why transferring the private key, even encrypted and over a secure line, is not rational at all. Imagine you have taken some very private pictures of yourself and encrypted them with a strong password and a secure crypto mechanism, such that nobody could decrypt them within the next millions and millions of years. You would still not send these encrypted files to your friends, because you would feel uncomfortable about the risk of one of them decrypting your pictures. You should feel equally uncomfortable about transferring encrypted private keys to a server.
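
As an illustration of that encryption step, here is a minimal sketch assuming SJCL is loaded; sjcl.encrypt derives the AES key from the password via salted PBKDF2, so only the resulting ciphertext would ever be placed in the cookie or the backup (names are illustrative):

    // Hypothetical serialized private key; in practice this would be
    // the user's actual key material.
    var privateKeyHex = "...";

    // Encrypt under a strong password that exists only in the user's mind.
    var armored = sjcl.encrypt("correct horse battery staple", privateKeyHex);

    // Later, after reading the ciphertext back from the cookie or backup:
    var recovered = sjcl.decrypt("correct horse battery staple", armored);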

  2. Client-side JavaScript encryption - At the time of writing this answer, there are several JavaScript encryption libraries; one of the most advanced is the Stanford Javascript Crypto Library (SJCL), which can be used to encrypt data such as, in our case, the private key. The problem with client-side encryption in JavaScript is that an adversary could inject malicious JavaScript code through a compromised server or a cross-site-scripting attack. You cannot sign your JavaScript files (at least not within the question's system requirements), and you cannot verify that the entire document is trustworthy. So an adversary could steal users' private keys before they are encrypted, or when they are decrypted with the user's password. For further explanation I refer to the URI provided by @ArtjomB: Javascript Cryptography Considered Harmful.

At this point I have presented enough serious security problems that any further discussion of the question, with the requirements listed above, is pointless. But for completeness I will look at the other approaches discussed in the comments.

  3. Browser plugins - Browser plugins do not match the system requirements listed above. For further explanation about browser plugins, have a look at Javascript Cryptography Considered Harmful.

  4. Web Storage API - A better solution than storing in cookies, because the stored values are kept offline and are not transferred with each HTTP request, but it is still vulnerable to malicious JavaScript and XSS (see the sketch below).
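
A minimal sketch of that variant, assuming SJCL is loaded (names are illustrative). Unlike a cookie, localStorage is never transmitted with HTTP requests, but any script running on the page can still read it, so XSS can at most steal the ciphertext here:

    // Store only the password-encrypted private key in Web Storage.
    function storeKey(password, privateKeyHex) {
      localStorage.setItem("enc_priv_key", sjcl.encrypt(password, privateKeyHex));
    }

    // Recover it on the next visit; fails without the user's password.
    function loadKey(password) {
      return sjcl.decrypt(password, localStorage.getItem("enc_priv_key"));
    }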

Conclusion: with the system requirements listed above, there is no secure way to accomplish what I asked for. JavaScript is not a suitable language for client-side encryption (see Javascript Cryptography Considered Harmful), and browsers are not designed for these purposes at the time of this writing.

  • 13
    The article is way out of date. You can use TLS, window.crypto, and newer browser features like CSP and subresource integrity to verify scripts. You can also use an IIFE to generate the keys and define an interface not available to later-added scripts (see the sketch after these comments), or use a sealed Worker to prevent arbitrary code from altering the SJCL code. Lastly, if you use a user-prompted password (+b/scrypt) to feed AES, you can store the ciphertext in localStorage without worry. – dandavis May 26 '16 at 21:08
  • 1
    I added a few links to the comment, and can zoom in on any problem areas, but there's a lot to cover. Unless you show users content from other users, XSS is impossible. To protect chat messages, use a sanitize function and HTTPS, and 90% of the attack vectors go away. Then add a CSP HTTP header to the page, and integrity attributes to any off-site <script> tags to stop the really clever attacks. Ask me for specific details if needed. Lastly, I agreed with JCCH back in the day; it's just been outmoded since it was written... – dandavis May 26 '16 at 21:22
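
A minimal sketch of the IIFE idea from the comment above, assuming SJCL with ECC support is loaded: the keypair lives only inside the closure, so scripts added to the page later can call the narrow interface but never touch the key material (names are illustrative):

    var SecureBox = (function () {
      // Generated inside the closure; unreachable from outside code.
      var keys = sjcl.ecc.elGamal.generateKeys(256);
      return Object.freeze({
        // Peers encrypt to this user with: sjcl.encrypt(SecureBox.publicKey, msg)
        publicKey: keys.pub,
        decrypt: function (ct) { return sjcl.decrypt(keys.sec, ct); }
      });
    })();
    // Later-added scripts can call SecureBox.decrypt(ct),
    // but cannot extract the secret key itself.
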
0

For me, there are still some problematic aspects in the described mix of the Web Cryptography API and IndexedDB plus a signed script (with sub-resource integrity) or, especially, a web Worker.

Neither sub-resource integrity nor the IndexedDB and web worker code itself can be tied to a site-specific user id.

For example, if you sign a user out, you probably must wipe the database associated with that user id, and every time a user signs in on a device, a new keypair and a new IndexedDB are created (assuming the roster of public keys is stored on the server with the associated user ids, and messages are encrypted with AES-CBC or whatever symmetric cipher is available today; otherwise you would have to upload N copies of the data, one per user public key, for cross-device access by the same user: the PC at work or uni/school, the phone in the pocket, and the laptop or tablet at home on the weekend).

But even if you keep these databases between logins/logouts in the same browser instance, you may need to name them based on the user id, like app_name_user_${id}.

In the case of web workers and the Notifications API, they are not user-specific either; they are origin-specific as well. So even if Alice signed out and Bob signed in in the same browser window, code running for Bob may still receive push notifications meant for Alice. So, on sign-out, you may need to unsubscribe Alice's context from notifications, or queue those notifications for Alice in a special IndexedDB table while Bob is using the app in this browser.
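
A minimal sketch of that sign-out cleanup, using the standard Push and IndexedDB APIs (the database naming scheme is the hypothetical one from above):

    // On sign-out: stop Alice's push subscription and wipe her local data,
    // so a subsequent user of this browser cannot receive or read anything.
    async function signOutCleanup(userId) {
      const reg = await navigator.serviceWorker.ready;
      const sub = await reg.pushManager.getSubscription();
      if (sub) {
        await sub.unsubscribe();                         // stop notifications
      }
      indexedDB.deleteDatabase("app_name_user_" + userId); // wipe keys/messages
    }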

Kote Isaev
  • For two users to share the same browser, they must share the same computer. I think if Alice gave Bob access to her whole laptop, the fact that Bob can in theory reach Alice's IndexedDB is a bit of a stretch to worry about. – Stijn de Witt Mar 25 '24 at 14:07