
lavabit / magma


The magma server daemon is an encrypted email system with support for SMTP, POP, IMAP, HTTP, and MOLTEN. Additional support for DMTP and DMAP is currently in active development.

License: GNU Affero General Public License v3.0

Makefile 0.04% C 92.87% C++ 0.09% Shell 0.37% HTML 0.52% M4 0.02% Perl 0.01% Python 0.04% PHP 0.03% CSS 0.29% JavaScript 5.73%
magma webmail imap smtp pop http encryption encrypted-message-protocol communications dark-mail

magma's People

Contributors

andpeterson, ba0f3, fabianfrz, gwoplock, jadkins, jpadkins, kenthawk, l2dy, ladar, lbiv, mikezackles, tmuratore


magma's Issues

Docker Magma

Hi Guys,

Is there any way to run this in a Docker container?
I would be really interested in deploying it on a test server using Docker.

Thanks

Document and fix fletcher hashing algorithm

  • Document where to find the specification
  • Write a unit test that validates some test vectors from that specification
  • Fix the code so that it doesn't depend on the endianness of the platform (hint: uint16_t *); a portable sketch follows after this list
  • Since the algorithm doesn't modify the given data, make the pointer to it const
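
A minimal portable sketch, assuming the current code walks the input as 16-bit words (the Fletcher-32 variant); the initial sums and reduction cadence should be checked against whichever specification gets documented:

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch only: 16-bit blocks are assembled from bytes in a fixed
 * (little-endian) order instead of dereferencing a uint16_t *, so the result
 * no longer depends on the host byte order. The data pointer is const because
 * the algorithm never modifies its input. */
uint32_t fletcher32_portable(const unsigned char *data, size_t len) {

	uint32_t sum1 = 0xffff, sum2 = 0xffff;

	while (len > 1) {
		// Compose each 16-bit block explicitly; the byte order is now fixed.
		sum1 += (uint32_t)data[0] | ((uint32_t)data[1] << 8);
		sum2 += sum1;
		sum1 = (sum1 & 0xffff) + (sum1 >> 16);
		sum2 = (sum2 & 0xffff) + (sum2 >> 16);
		data += 2;
		len -= 2;
	}

	// A trailing odd byte is folded in as its own block.
	if (len) {
		sum1 += data[0];
		sum2 += sum1;
		sum1 = (sum1 & 0xffff) + (sum1 >> 16);
		sum2 = (sum2 & 0xffff) + (sum2 >> 16);
	}

	return (sum2 << 16) | sum1;
}
```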

Incorporate Code Beautification Tool

The Programming Style Guide covers both the code structure approach and code style. To manage code style effectively, anything and everything that can be handled by a beautification tool will be. The coding standards doc will be the canonical source for how the configuration of the code beautification tool is defined.

After evaluation (uncrustify is floating to the top of the list), a tool needs to be selected and tested, and a configuration file created based on the rules outlined in the coding docs.

Encryption Engine for end-to-end encryption with symmetric encryption

Dear Ladar,

Having followed the Lavabit case and many other cases, I would like to have a private conversation with you, outside of this forum.

My prospective contribution:
I have developed an encryption engine which, upon use, creates an encryption algorithm from a password; that algorithm then uses another password to encrypt with. The generator algorithm is pretty swift, and it creates algorithms so strong that the outcome is FIPS 140-2 conformant.

Ever worked with added redundancy, deliberate distortion, and Viterbi? Now imagine that you encrypt the Viterbi part, which is used to recover the lost data. You can say that if you distort a signal to the rim of recognition and reconstruction, and then you distort the way back by encryption, then you are establishing the worst possible scenario for a brute force attack. Add to that the fact that the algorithm is arbitrarily deep. When you see it, you will, after some analysis, agree that it can in seconds establish encryption so rough and tough that it is virtually unbreakable.

Enough about the techie stuff; that we can always discuss.

Similarly, the generated algorithm (made by the generator) is pretty swift, and it can likely be optimized quite a lot, but the principle remains the same: the algorithm is one-way unique to the password, like a hash is one-way unique.

The point is this: when the algorithm is determined by the password, and the cipher-code depends on the algorithm, then a brute force attack becomes a bit of a task. Due to the way of encoding, we add redundancy to the plain-code, and the amount of code added is also determined by the password.

It means that, even at gunpoint, it is impossible to break the code, because the author cannot tell anything about which algorithm has been used without having the password.

Now, I have worked with such encoding technologies throughout my 34-year software career.

**I want to discuss with you how we can implement such hard encryption and end-to-end security in such a way that it cannot be exploited in any unethical manner.

Therefore: how can we protect law-abiding, or ethically "correctly" working, persons against being listened to by big brother? And how can we protect this code from being used by criminals or terrorists, or from other unethical use?**

Finally, I would also like to address the issue of security at the endpoints, that is, the peers. These will be the vulnerable point. I have thought out how we can protect them against, for instance, a court order to "replace" the code at the client side, which would be similar to the Lavabit order. The way to deal with this is to create the software such that the encryption algorithms and their generation are dealt with at the end-user level, so that our Dark Matter code will be like an agent-host, and the choice of algorithm will ultimately be the work of the end-user.

We will then create a system where developers all over the world can contribute their own encryption algorithms, get them numbered, and have them distributed. This will ensure that it will be virtually impossible to follow such a court order, as the software which is literally key (no pun intended) will not be produced by our Dark Matter team, but rather be produced around the world.

Therefore, a user's choice of encryption algorithms will be independent of our Dark Matter, and we can therefore not do a thing...

We shall then also let the encryption generator remain in the public domain just as you have started it.

Now, a court order could render the entire project illegal; however, I guess that would just make the entire security community go underground and start using steganography, etc., which would be detrimental.

But back to the topic. What can we do to ensure that our pretty neutral encryption technology will not fall into the hands of the enemy, be it terrorists or snoopy governments?

Please email me on [email protected]

Sincerely
David Svarrer

Continuous integration

Now that Zach has made progress on incorporating autotools to handle the make dependencies, previously handled in Eclipse, the project is closer to implementing continuous integration tools.

Identify any remaining issues that must be resolved prior to implementing continuous integration technology.

  1. The huge, all-in-one magmad.so build (~380 MB) complicates continuous integration. Sites like GitHub don't allow files that large, so in many (all?) cases, the magmad.so output may have to be recompiled on the fly. Is this enough to drive a need to split up magmad.so? Or better yet, is it time to consider building a release that depends on supporting packages being installed, versus building everything into one supporting .so file?

[Webmail] Captcha render failed

All libs were built with the build.lib.sh script.

xxx: captcha hvf value = [Cth@TsusRU]

Could not initialize the rectangle: libgd was not built with FreeType font support

   magmad(print_backtrace+0x2d)[0x466257]
   magmad(log_internal+0x3da)[0x46678d]
   magmad(http_print_500+0x35)[0x4b499a]
   magmad(register_print_captcha+0xc2)[0x47bcb0]
   magmad(register_process+0x19e)[0x47c4e2]
   magmad(http_response+0x351)[0x4b7e0c]
   magmad(dequeue+0xbd)[0x492143]
   /lib64/libpthread.so.0(+0x7aa1)[0x7fe85b0bfaa1]
   /lib64/libc.so.6(clone+0x6d)[0x7fe85ae0caad]

Improve MySQL NULL handling

The facilities for dealing with NULL database values appear to be lacking. Most of the res_field family of functions return values that are indistinguishable from valid results when the column is NULL.

Example

In this case, res_field_uint8_t returns 0, which is a valid uint8_t.

It is possible to check for NULL using res_field_generic, but to me this is not at all clear.

Note that the existing magma MySQL interface appears to copy all result data into its own buffer that is accessed using a hand-rolled interface. In the future we might consider using the MySQL prepared statement interface directly.
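
For reference, a rough sketch of how the prepared statement interface reports NULL separately from a legitimate zero. The table and column names below are made up for illustration, and my_bool is plain bool on newer client libraries:

```c
#include <mysql/mysql.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

// Sketch: fetch a single TINYINT column, reporting NULL separately from 0.
bool fetch_locked_flag(MYSQL *mysql, uint64_t usernum, uint8_t *locked, bool *is_null) {

	const char *query = "SELECT locked FROM Users WHERE usernum = ?";
	MYSQL_STMT *stmt = mysql_stmt_init(mysql);
	MYSQL_BIND param, result;
	my_bool null_flag = 0;
	bool ok = false;

	if (!stmt) {
		return false;
	}

	memset(&param, 0, sizeof(param));
	memset(&result, 0, sizeof(result));

	param.buffer_type = MYSQL_TYPE_LONGLONG;
	param.buffer = &usernum;
	param.is_unsigned = 1;

	result.buffer_type = MYSQL_TYPE_TINY;
	result.buffer = locked;
	result.is_unsigned = 1;
	result.is_null = &null_flag;   // this is what res_field_uint8_t can't express

	if (!mysql_stmt_prepare(stmt, query, strlen(query)) &&
		!mysql_stmt_bind_param(stmt, &param) &&
		!mysql_stmt_execute(stmt) &&
		!mysql_stmt_bind_result(stmt, &result) &&
		!mysql_stmt_fetch(stmt)) {
		*is_null = (null_flag != 0);
		ok = true;
	}

	mysql_stmt_close(stmt);
	return ok;
}
```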

[DESIGN] credentials_t and meta_user_t unification.

Originally, the major difference between the purposes of these two was that one accessed the database and therefore needed to be cached (meta_user_t), while the other was used before the database needed to be accessed and didn't need to be cached (credentials_t).

Now we need to access the database for credentials_t, but we still don't cache it, and the two are starting to step on each other's toes.

So here are some questions to think about:

Should we aim towards keeping both? Should both be cached? Should we be caching anything that comes from the database? What non-intersecting sets of uses should we assign to these two objects if we intend to keep both of them?

Here are some possibilities:

1) After initially being created and authenticated, credential_t gets attached to its corresponding meta_user_t (singly linked: meta_user_t -> credential_t). That way we don't need to independently cache credentials. A struct sketch follows below.

2) Cache credential_t and meta_user_t separately. credential_t will then probably take the brunt of the memory pressure and can be refreshed aggressively, allowing us to refresh meta_user_t more conservatively.
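
A minimal sketch of option 1, with illustrative placeholder fields rather than magma's actual definitions:

```c
#include <stdint.h>

typedef struct credential_t credential_t;   /* opaque here */

/* Sketch: the authenticated credential hangs off the cached user object, so
 * only meta_user_t needs its own cache entry and the credentials ride along. */
typedef struct {
	uint64_t usernum;            /* stand-in for the existing user state */
	credential_t *credential;    /* attached after authentication, else NULL */
} meta_user_t;
```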

Build: List current dependencies on eclipse IDE

In the process of removing dependencies on the Eclipse IDE for the Lavabit Dev environment, we need more information on what those dependencies are.

  1. Other build directives? autotools is running the entire suite of check tests. But there's more to uncover. Is the debug target a separate autotools build?

  2. Run configurations? Not much is understood about the run configurations in Eclipse. Are these wrappers around what's otherwise available using scripts from the ~/bin directory? Or independent magma run configurations?

  3. Is configuring/running valgrind something that's controlled via Eclipse?

  4. Other dependencies?

Note: Any change to the build configuration or any reliance on the Eclipse IDE will need Ladar's approval before committing this update to the "develop" branch.

More info: After evaluation, autotools has been selected as our new build system (#26), thereby addressing the most glaring dependency on the Eclipse IDE. While autotools builds the code without a dependency on Eclipse in Zach's wip branch, referencing autotools from Eclipse hasn't been wired in yet, so there's more work to do if we don't want to break the ability to use Eclipse. Also, all dependencies on Eclipse must be removed if we're to support a continuous build/integration environment.

[SMTP] Wrong user lookup

  • Mail sent by an authenticated user to a local mailbox goes directly to the relay server (postfix)
  • An anonymous user can send email to a local mailbox using a local email address (authentication is not required)
  • An external user (gmail) can send email to a local mailbox normally

Docs: Magma scripts need supporting documentation

There are ~50 scripts in the magma project that are delivered in bulk to the ~/bin directory with the help of the linkup.sh script. There needs to be a description, for the development team as well as the public, of what these scripts are used for. Some scripts are used for building, most are for testing, and some may not function correctly and are being deprecated. All of these issues, as well as usage, should be explained for all to see. Manpage format for each of the scripts would be in order.

Code: Signet Server Prototype

Rapid prototyping of signet server functionality is in order to support other coding efforts in the upcoming libDIME incorporation. Details TBD.

Unused columns in Users table

Users table definition

Registration query

I don't see where these fields are used:

  • advertising (boolean flag)
  • email (boolean flag)
  • chat (boolean flag)
  • timezone (int)

Note that email is used in some queries, but they just check for the default value. The others don't appear to be used in either queries.h or daily.sql. I don't see any reference to them in src either.

We should remove them if they're not doing anything.

Docs: Coding Standards

Publish a list of Coding Standards for all new development for this project, with a link made available from the top README.md page. The audience is the development team as well as public contributors.

Use proper commit messages

@ladar, I really love the work you and your team are doing, but why do you use commit messages like 1a55128:

Moved the unreliable autotools based build system into the dev tree, but am intentionally leaving it out of this commit. I also removed the auto generated Makefiles Eclipse was creating, and replaced them both with a single, hand crafted Makefile. Only magma is done so far, and its still just building the archive library, not the executable, but its a big step in the right direction. Lots left to do before the new Makefile has everything it needs, but at least in the interim we finally have a clean directory tree.

Too long! Please do not see this as a rant or as busting your balls; I just mean that reading the commit history gets more difficult for other developers and may even be a reason why you have too few contributors.

When writing a commit message via the GitHub frontend, this is usually shown when it is too long:

ProTip! Great commit summaries are 50 characters or less. Place extra information in the extended description.

If you push changes from your computer, I recommend using SmartGit, which features this as well. Thanks!

SMTP wildcard email address search

src/servers/smtp/datatier.c lines 135-151:

The comment says that if no result was found for the given address, we should check whether wildcards are enabled and then perform a wildcard search; however, this code seems to do the same thing twice. The intended fallback control flow is sketched below.
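
For illustration only, here is the control flow the comment appears to describe; the helper names are placeholders, not magma's actual data-tier functions:

```c
typedef struct mailbox_t mailbox_t;                            /* placeholder */
int lookup_exact(const char *address, mailbox_t *result);      /* placeholder */
int lookup_wildcard(const char *address, mailbox_t *result);   /* placeholder */
int wildcard_enabled(void);                                    /* placeholder */

// Sketch: the wildcard query should only run when the exact lookup finds nothing.
int smtp_lookup_mailbox(const char *address, mailbox_t *result) {

	// First try the exact address.
	if (lookup_exact(address, result) == 1) {
		return 1;
	}

	// Only if that found nothing, and wildcards are enabled, try the pattern query.
	if (wildcard_enabled() && lookup_wildcard(address, result) == 1) {
		return 1;
	}

	return 0;
}
```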

Docs: Building magma.classic from scratch

Complete and publish Markdown instructions sufficient for anyone in the public to build the magma.classic project from scratch. A link to the instructions will be present on the project's main README.md.

Evaluate latest check testing framework features and implement as appropriate

The check testing framework has been updated since the version currently used in the magma.classic project. Examine the differences, install the update, and incorporate the newer capabilities of the framework into the TDD process in magma.classic. The ultimate goal is to make use of the newer facilities and to have one digestible output for all check tests on the project.
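
For reference, a minimal self-contained check suite (the test itself is a placeholder); newer check releases also offer machine-readable output modes, which could supply the single digestible result stream mentioned above:

```c
#include <check.h>
#include <stdlib.h>

// Illustrative test case; real suites would exercise magma's units instead.
START_TEST(test_sanity) {
	ck_assert_int_eq(2 + 2, 4);
}
END_TEST

int main(void) {

	Suite *s = suite_create("example");
	TCase *tc = tcase_create("core");
	SRunner *sr;
	int failed;

	tcase_add_test(tc, test_sanity);
	suite_add_tcase(s, tc);

	sr = srunner_create(s);
	srunner_run_all(sr, CK_NORMAL);
	failed = srunner_ntests_failed(sr);
	srunner_free(sr);

	return failed ? EXIT_FAILURE : EXIT_SUCCESS;
}
```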

Rename project to simply magma...

The sooner we take care of this, the better. It was awkward telling people to help out with magma at DEF CON, but then directing them to a repo called "magma.classic." Either that, or go with the original plan and duplicate the repo as magma.dark, and remove the DIME related code from the classic repo... although I believe the latter plan has been abandoned for good reason.

P.S. Schult, I think you're the man to take care of this, since you have admin rights...

magmad.so symlinks confuse ctags

The magmad.so build creates some symlinks in the zlib and openssl sources as a workaround for configure scripts that expect a particular directory structure. They should be removed after a successful magmad.so build.

Docs: Methodologies Used

Summarize the methodologies we have agreed on and discovered in the course of working on this project. Close this issue if this subject is already going to be covered by the "Coding Standards" effort, and reopen the Coding Standards issue to track that there's more to add. I'm referring to the list of lessons learned about magma.classic's coding structure that everyone needs to understand and that is documented nowhere. Those coding rules are to be listed as 1) required, 2) required with exceptions via Ladar's agreement, and 3) strongly recommended.

Understanding programmatic flow of user credentials

I'm going to start this as an issue and I'll continue adding more text to it, so that it can help others get through this stuff as fast as possible.

Input is parsed from a connection:
It's a bit of a mess at this point, but here's where/how it works.
SMTP:
src/servers/smtp/parse.c line 118, function stringer_t * smtp_parse_auth(stringer_t *data)

This function is poorly commented. The comment claims that it takes a connection_t as a parameter, but instead it accepts a stringer. In all instances where this function is called, it seems to be passed a member of the connection_t structure, namely con->network.line. It returns a new stringer, which contains the "authentication response" string (RFC 2554, RFC 4954).

src/servers/smtp/smtp.c line 246 void smtp_auth_plain(connection_t *con)

Lines 264 and 271 call the smtp_parse_auth() function, but further parsing is needed in order to separate the username from the password.

281: We decode the parsed string from base64
292: We tokenize the string into authorize_id, username, and password (a standalone sketch of this split follows below).
314: We call credential_alloc_auth with username and password to create the credentials_t structure.
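
To make the tokenizing step concrete, here is a standalone sketch of the RFC 4954 AUTH PLAIN split, using plain C strings instead of magma's stringer_t and leaving out the base64 decoding:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Sketch: a decoded AUTH PLAIN response is "[authzid] NUL authcid NUL passwd".
 * Splitting on the two NUL separators yields the username (authcid) and
 * password that get handed to credential allocation. */
bool split_auth_plain(const char *decoded, size_t len,
	const char **authzid, const char **authcid, const char **passwd) {

	// Locate the first NUL separator.
	const char *first = memchr(decoded, '\0', len);
	if (!first || (size_t)(first - decoded) + 1 >= len) {
		return false;
	}

	// Locate the second NUL separator.
	const char *second = memchr(first + 1, '\0', len - (first - decoded) - 1);
	if (!second) {
		return false;
	}

	*authzid = decoded;       // may be empty
	*authcid = first + 1;     // username
	*passwd = second + 1;     // password; not necessarily NUL terminated within len
	return true;
}
```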

POP:
src/servers/pop/parse.c line 196, function stringer_t * pop_pass_parse(connection_t *con)
src/servers/pop/parse.c line 257, function stringer_t * pop_user_parse(connection_t *con)

Each function parses the data in con->network.line into a stringer containing password or username.

src/servers/pop/pop.c line 136, function void pop_user(connection_t *con)
src/servers/pop/pop.c line 175, function void pop_pass(connection_t *con)

pop_user:
146: calls pop_user_parse and parses it for the address using credentials_address(stringer_t *s)
157: saves the parsed address to the username member in pop_session_t inside the provided connection_t.

pop_pass:
187: ensures that there is already a set username inside the pop_session_t member of connection_t,
193: calls pop_parse to parse out the password.
209: calls credentials_alloc_auth with username and password to create a credentials structure.

pop_session_t is a member of an anonymous union inside connection_t, which also contains:
imap_session_t
smtp_session_t
http_session_t

smtp_session_t doesn't store login information
pop_session_t stores the username
imap_session_t stores both the username and the password, since IMAP allows multiple requests per session.

IMAP:
Similar to POP, but lots of parsing.
It saves both the username and password inside imap_session_t.
src/servers/imap/imap.c line 102 function void imap_login(connection_t *con)

122: calls credentials_alloc_auth using username and password stored inside the connection imap_session_t.


At this point, each protocol has successfully created a credentials_t object using credentials_alloc_auth().

src/objects/neue/credentials.c line 207 function credentials_t * credentials_alloc_auth(stringer_t *username, stringer_t *password)

uses the username and password to generate the credentials object, which contains both the authentication token and the key that decrypts the private key needed to access protected user information.

@kenthawk @mikezackles @jfnixon

Document and fix MurmurHash hashing algorithm

  • Document where to find the specification
  • Write a unit test that validates some test vectors from that specification
  • Fix the code so that it doesn't depend on the endianness of the platform (hint: uint32_t *); a portable load helper is sketched after this list
  • Since the algorithm doesn't modify the given data, make the pointer to it const
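
A sketch of the load fix hinted at above, assuming the current implementation casts the input to uint32_t * and reads blocks directly:

```c
#include <stdint.h>

/* Sketch: assemble each 32-bit block from bytes in a fixed (little-endian)
 * order rather than dereferencing a uint32_t *, so the hash value is the
 * same on big- and little-endian hosts. The pointer stays const since the
 * hash never modifies its input. */
static inline uint32_t murmur_load_block(const unsigned char *p) {
	return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
		((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```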

Docs: Running the Magma Daemon

This takes up after #12 "Docs: Building magma.classic from scratch" leaves off.

The audience for this initial effort is internal and public contributors to the code base, so they have a set of instructions for how to configure and run the magma daemon for testing purposes. At least one example that should be covered in this reference is the extended messages test that was covered in Ladar's tech week videos. This is also an opportunity to explain, by example, how some of the testing scripts are used to test the magma server.

A later document will cover the system administration issues of configuring and running the daemon and all the supporting services necessary to run a complete system.

[Long term] Performance: Caching the salt?

This depends on whether incorrect login credentials terminate the session with the user. If they don't, perhaps we should cache a username-salt pair and attach it to the session so that consecutive login attempts don't require repeated salt queries (a rough sketch follows below).

edit:

Although this may be something that's already optimized by the database itself... It probably is.
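
A rough sketch of the idea, using plain C strings as stand-ins for magma's stringer_t; whether it is worth doing depends on measuring the database behaviour mentioned above:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical per-session cache: remember the salt fetched for the first
 * authentication attempt so retries on the same connection skip the query. */
typedef struct {
	char *username;   /* owned by the session */
	char *salt;       /* cached salt for username, or NULL if not yet fetched */
} salt_cache_t;

/* Return the cached salt when the username matches; otherwise the caller
 * falls back to the database query and stores the result here afterwards. */
static const char *salt_cache_lookup(const salt_cache_t *cache, const char *username) {
	if (cache && cache->username && cache->salt && !strcmp(cache->username, username)) {
		return cache->salt;
	}
	return NULL;
}
```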

Docs: Technologies Used

Summarize a list, complete with reference links, of the various technologies used on this project. This should provide an easy resource for the public to understand which technologies the project depends on, rather than having contributors discover them by accident while trying to contribute.

Lack of header dependency structure

Currently nearly every source file includes magma.h, which in turn includes nearly every header file in the project. This introduces dependencies on the order of header inclusion, gets in the way of dependency analysis, and triggers a global rebuild when any header file is changed. The way things are now, a necessary prototype/define etc. can easily live in pretty much any header file and the project will still build.

To be fair, a global rebuild isn't that slow, but we should at least consider making header dependencies more explicit (one possible direction is sketched below).

Here is more discussion regarding this topic.
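
One possible direction, sketched with hypothetical names: each header carries its own guard and includes only what its declarations need, so touching an unrelated header no longer forces a global rebuild.

```c
/* providers/checksum.h -- hypothetical example, not an existing magma header. */
#ifndef MAGMA_PROVIDERS_CHECKSUM_H
#define MAGMA_PROVIDERS_CHECKSUM_H

#include <stddef.h>   /* size_t */
#include <stdint.h>   /* uint32_t */

/* The prototype depends only on standard types, so no other project header
 * (and certainly not magma.h) needs to be pulled in. */
uint32_t checksum_fletcher32(const void *buffer, size_t length);

#endif /* MAGMA_PROVIDERS_CHECKSUM_H */
```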
