Browse source code

Add thread local allocators, BRK locks, platform independency, lock reuse, local allocators, more tests, abort-as-panic, additional benches, provide an Allocator struct, memleak detection, fix bugs related to the is_empty function, micro caches (implemented, but not wired up), a prelude, allocator specific OOM handler, fix bound checks, Leak trait, fix bug in Mutex.

ticki 8 years ago
Parent
Commit
146a5db98f
30 changed files with 854 additions and 433 deletions
  1. Cargo.toml (+26, -3)
  2. README.md (+100, -16)
  3. TODO.md (+8, -2)
  4. benches/box.rs (+1, -1)
  5. benches/mpsc.rs (+32, -0)
  6. benches/no_lock.rs (+21, -0)
  7. benches/vec.rs (+1, -1)
  8. benches/vec_box.rs (+1, -1)
  9. src/allocator.rs (+91, -45)
  10. src/block.rs (+56, -36)
  11. src/bookkeeper.rs (+190, -52)
  12. src/fail.rs (+0, -48)
  13. src/leak.rs (+17, -0)
  14. src/lib.rs (+13, -15)
  15. src/micro.rs (+93, -0)
  16. src/prelude.rs (+5, -0)
  17. src/ptr.rs (+15, -18)
  18. src/sync.rs (+22, -14)
  19. src/sys.rs (+27, -84)
  20. src/vec.rs (+16, -37)
  21. tests/box.rs (+17, -13)
  22. tests/btreemap.rs (+14, -10)
  23. tests/join.rs (+17, -0)
  24. tests/mpsc.rs (+25, -21)
  25. tests/multithreading.rs (+11, -4)
  26. tests/realloc.rs (+15, -10)
  27. tests/send.rs (+10, -2)
  28. tests/string.rs (+2, -0)
  29. tests/vec.rs (+4, -0)
  30. tests/vec_box.rs (+4, -0)

+ 26 - 3
Cargo.toml

@@ -3,9 +3,32 @@ name = "ralloc"
 version = "0.1.0"
 authors = ["ticki <ticki@users.noreply.github.com>"]
 
-[target.'cfg(unix)'.dependencies]
-syscall = "0.2.1"
+# URLs and paths
+description = "An efficient alternative platform-agnostic allocator."
+repository = "https://github.com/redox-os/ralloc"
+readme = "README.md"
+
+# Metadata
+keywords = ["alloc", "malloc", "allocator", "ralloc", "redox"]
+license = "MIT"
+
+[dependencies.clippy]
+git = "https://github.com/Manishearth/rust-clippy.git"
+optional = true
+
+[profile.release]
+panic = "abort"
+opt-level = 3
+debug = false
+rpath = false
+lto = true
+debug-assertions = false
+codegen-units = 1
 
 [features]
-default = ["allocator"]
+default = ["allocator", "clippy"]
 allocator = []
+debug_tools = []
+security = []
+unsafe_no_brk_lock = []
+unsafe_no_mutex_lock = []

+ 100 - 16
README.md

@@ -25,7 +25,7 @@ then import it in your main file:
 extern crate ralloc;
 ```
 
-`ralloc` is now ready to roll!
+ralloc is now ready to roll!
 
 Note that ralloc cannot coexist with another allocator, unless they're deliberately compatible.
 
@@ -37,24 +37,25 @@ You can set custom OOM handlers, by:
 
 ```rust
 extern crate ralloc;
-use fail::set_oom_handler;
 
 fn my_handler() -> ! {
     println!("Oh no. Blame somebody.");
     loop {}
 }
 
 fn main() {
-    set_oom_handler(my_handler);
+    ralloc::lock().set_oom_handler(my_handler);
     // Do some stuff...
 }
 ```
 
 ### Debug check: double free
 
-Ooh, this one is a cool one. `ralloc` detects various memory bugs when compiled
-with `debug_assertions`. These checks include double free checks:
+Ooh, this one is a cool one. ralloc detects various memory bugs when compiled
+with the `debug_tools` feature. These checks include double free checks:
 
 ```rust
+extern crate ralloc;
+
 fn main() {
     // We start by allocating some stuff.
     let a = Box::new(500u32);
@@ -62,7 +63,7 @@ fn main() {
     let b = unsafe { Box::from_raw(&*a as *const u32 as *mut u32) };
     // Now both destructors are called. First a, then b, which is a double
     // free. Luckily, ralloc provides a nice message for you, when in debug
-    // mode:
+    // tools mode:
     //    Assertion failed: Double free.
 
     // Setting RUST_BACKTRACE allows you to get a stack backtrace, so that you
@@ -70,13 +71,41 @@ fn main() {
 }
 ```
 
+### Debug check: memory leaks
+
+ralloc has memory leak detection superpowers too! Enable `debug_tools` and do:
+
+```rust
+extern crate ralloc;
+
+use std::mem;
+
+fn main() {
+    {
+        // We start by allocating some stuff.
+        let a = Box::new(500u32);
+        // We then leak `a`.
+        mem::forget(a);
+    }
+    // The box is now leaked, and the destructor won't be called.
+
+    // To debug this, we insert a memory leak check at the end of our program.
+    // This will panic if a memory leak is found (and will be a NOOP without
+    // `debug_tools`).
+    ralloc::lock().debug_assert_no_leak();
+}
+```
+
 ### Partial deallocation
 
 Many allocators limit deallocations to the originally allocated block, that is, you cannot
-perform arithmetics or split it. `ralloc` does not have such a limitation:
+perform arithmetic on it or split it. ralloc does not have such a limitation:
 
 ```rust
+extern crate ralloc;
+
 use std::mem;
+
 fn main() {
     // We allocate 200 bytes.
     let vec = vec![0u8; 200];
@@ -95,7 +124,7 @@ fn main() {
 }
 ```
 
-### Seperate deallocation
+### Separate deallocation
 
 Another cool feature is that you can deallocate things that weren't even
 allocated buffers in the first place!
@@ -111,7 +140,7 @@ static mut BUFFER: [u8; 256] = [2; 256];
 fn main() {
     // Throw `BUFFER` into the memory pool.
     unsafe {
-        ralloc::free(&mut BUFFER as *mut u8, 256);
+        ralloc::lock().free(&mut BUFFER as *mut u8, 256);
     }
 
     // Do some allocation.
@@ -119,18 +148,73 @@ fn main() {
 }
 ```
 
-### Thread local allocator
+### Top notch security
 
-TODO
+If you are willing to trade a little performance for extra security, you can
+compile ralloc with the `security` flag. This will, among other things, zero
+the memory of freed blocks.
 
-### Safe SBRK
-
-TODO
+In other words, an attacker cannot, for example, plant malicious code or data
+in freed memory and exploit it when you forget to initialize the data you
+allocate.
 
 ### Lock reuse
 
-TODO
+Acquiring a lock multiple times in a row can be expensive. Therefore, ralloc
+allows you to lock the allocator once and reuse that lock:
+
+```rust
+extern crate ralloc;
+
+fn main() {
+    // Get that lock!
+    let mut lock = ralloc::lock();
+
+    // All in one:
+    let _ = lock.alloc(4, 2);
+    let _ = lock.alloc(4, 2);
+    let _ = lock.alloc(4, 2);
+
+    // It is automatically released through its destructor.
+}
+```
+
+### Security through the type system
+
+ralloc makes heavy use of Rust's type system to enforce safety guarantees.
+Internally, ralloc has a primitive named `Block`. This is fairly simple,
+denoting a contiguous segment of memory, but what is interesting is how it is
+checked at compile time to be unique. This is done through the affine type
+system.
+
+This is just one of many examples.
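
A minimal sketch of the idea (this `Block` is a hypothetical stand-in, not ralloc's actual definition): since the type is neither `Copy` nor `Clone`, each block has exactly one owner, and consuming it twice is a compile-time error.

```rust
/// A hypothetical stand-in for ralloc's `Block`: a pointer plus a size.
/// It is deliberately neither `Copy` nor `Clone`, so the compiler tracks
/// its single owner.
struct Block {
    ptr: *mut u8,
    size: usize,
}

/// Consumes the block; the caller cannot touch it afterwards.
fn free(block: Block) {
    let _ = (block.ptr, block.size);
}

fn main() {
    let block = Block { ptr: 0x1000 as *mut u8, size: 64 };
    free(block);
    // free(block); // error[E0382]: use of moved value: `block`
}
```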
 
 ### Platform agnostic
 
-TODO
+ralloc is platform independent; the only requirement is the following symbols (sketched as bindings right after this list):
+
+1. `sbrk`: For extending the data segment size.
+2. `sched_yield`: For the spinlock.
+3. `memcpy`, `memcmp`, `memset`: Core memory routines.
+4. `rust_begin_unwind`: For panicking.
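
For illustration, a sketch of bindings for the first two symbols (these mirror the `symbols` module added to src/sys.rs below; the remaining symbols are resolved by the compiler builtins and the panic runtime rather than declared by ralloc):

```rust
extern {
    /// Move the end of the data segment by `diff` bytes.
    pub fn sbrk(diff: isize) -> *mut u8;
    /// Cooperatively give up the current timeslice.
    pub fn sched_yield() -> isize;
}
```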
+
+### Local allocators
+
+ralloc allows you to create non-global allocators, e.g. for thread-specific purposes:
+
+```rust
+extern crate ralloc;
+
+fn main() {
+    // We create an allocator.
+    let mut my_alloc = ralloc::Allocator::new();
+
+    // Allocate some stuff through our local allocator.
+    let _ = my_alloc.alloc(4, 2);
+    let _ = my_alloc.alloc(4, 2);
+    let _ = my_alloc.alloc(4, 2);
+}
+```
+
+### Safe SBRK
+
+ralloc provides an `sbrk`, which can be used safely without breaking the allocator.
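
A minimal usage sketch (assuming the `ralloc::sbrk` re-export and its `Result<*mut u8, ()>` signature from src/lib.rs and src/sys.rs below):

```rust
extern crate ralloc;

fn main() {
    // Extend the data segment by a page. The internal BRK lock keeps this
    // from racing with the allocator's own BRK calls.
    let ptr: *mut u8 = ralloc::sbrk(4096).expect("out of memory");
    let _ = ptr;
}
```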

+ 8 - 2
TODO.md

@@ -1,6 +1,12 @@
-- [ ] Thread local allocator.
-- [ ] Lock reuse
+- [x] Thread local allocator.
+- [x] Lock reuse
 - [ ] Checkpoints
 - [ ] Fast `calloc`
+- [ ] Microcaches.
+- [ ] Skip blocks.
+- [ ] Flattening
+- [ ] Deallocation cache.
+- [ ] Static assumptions.
+- [ ] Thread local storage.
 - [x] Check `checks` handling of null overlaps.
 - [x] `insert` (and probably `free_inplace`) is possibly due to null overlaps.

+ 1 - 1
benches/box.rs

@@ -12,5 +12,5 @@ fn bench(b: &mut Bencher) {
         let _bx2 = Box::new(0xF0002);
 
         "abc".to_owned().into_boxed_str()
-    })
+    });
 }

+ 32 - 0
benches/mpsc.rs

@@ -0,0 +1,32 @@
+#![feature(test)]
+
+extern crate ralloc;
+extern crate test;
+
+use std::thread;
+use std::sync::mpsc;
+
+use test::Bencher;
+
+#[bench]
+fn bench(b: &mut Bencher) {
+    b.iter(|| {
+        let (tx, rx) = mpsc::channel::<Box<u64>>();
+        thread::spawn(move || {
+            tx.send(Box::new(0xBABAFBABAF)).unwrap();
+            tx.send(Box::new(0xDEADBEAF)).unwrap();
+            tx.send(Box::new(0xDECEA5E)).unwrap();
+            tx.send(Box::new(0xDEC1A551F1E5)).unwrap();
+        });
+
+        let (ty, ry) = mpsc::channel();
+        for _ in 0..0xFF {
+            let ty = ty.clone();
+            thread::spawn(move || {
+                ty.send(Box::new(0xFA11BAD)).unwrap();
+            });
+        }
+
+        (rx, ry)
+    });
+}

+ 21 - 0
benches/no_lock.rs

@@ -0,0 +1,21 @@
+#![feature(test)]
+
+extern crate ralloc;
+extern crate test;
+
+use test::Bencher;
+
+#[bench]
+fn bench(b: &mut Bencher) {
+    b.iter(|| {
+        let mut lock = ralloc::lock();
+
+        for _ in 0..100000 {
+            let a = lock.alloc(200, 2);
+            unsafe {
+                let a = lock.realloc(a, 200, 300, 2);
+                lock.free(a, 300);
+            }
+        }
+    });
+}

+ 1 - 1
benches/vec.rs

@@ -15,5 +15,5 @@ fn bench(b: &mut Bencher) {
         stuff.reserve(100000);
 
         stuff
-    })
+    });
 }

+ 1 - 1
benches/vec_box.rs

@@ -17,5 +17,5 @@ fn bench(b: &mut Bencher) {
         stuff.reserve(100000);
 
         stuff
-    })
+    });
 }

+ 91 - 45
src/allocator.rs

@@ -1,60 +1,106 @@
 //! The global allocator.
 //!
 //! This contains primitives for the cross-thread allocator.
-use block::Block;
+
+use prelude::*;
+
 use bookkeeper::Bookkeeper;
-use ptr::Pointer;
 use sync;
 
-/// The bookkeeper.
-///
-/// This is the associated bookkeeper of this allocator.
-static BOOKKEEPER: sync::Mutex<Bookkeeper> = sync::Mutex::new(Bookkeeper::new());
+/// The global default allocator.
+static ALLOCATOR: sync::Mutex<Allocator> = sync::Mutex::new(Allocator::new());
 
-/// Allocate a block of memory.
-#[inline]
-pub fn alloc(size: usize, align: usize) -> *mut u8 {
-    *BOOKKEEPER.lock().alloc(size, align).into_ptr()
+/// Lock the allocator.
+pub fn lock<'a>() -> sync::MutexGuard<'a, Allocator> {
+    ALLOCATOR.lock()
 }
 
-/// Free a buffer.
+/// An allocator.
 ///
-/// Note that this do not have to be a buffer allocated through ralloc. The only requirement is
-/// that it is not used after the free.
-#[inline]
-pub unsafe fn free(ptr: *mut u8, size: usize) {
-    // Lock the bookkeeper, and do a `free`.
-    BOOKKEEPER.lock().free(Block::from_raw_parts(Pointer::new(ptr), size));
+/// This keeps metadata and relevant information about the allocated blocks. All allocation,
+/// deallocation, and reallocation happens through this.
+pub struct Allocator {
+    /// The inner bookkeeper.
+    inner: Bookkeeper,
 }
 
-/// Reallocate memory.
-///
-/// Reallocate the buffer starting at `ptr` with size `old_size`, to a buffer starting at the
-/// returned pointer with size `size`.
-#[inline]
-pub unsafe fn realloc(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
-    // Lock the bookkeeper, and do a `realloc`.
-    *BOOKKEEPER.lock().realloc(
-        Block::from_raw_parts(Pointer::new(ptr), old_size),
-        size,
-        align
-    ).into_ptr()
-}
+impl Allocator {
+    /// Create a new, empty allocator.
+    pub const fn new() -> Allocator {
+        Allocator {
+            inner: Bookkeeper::new(),
+        }
+    }
 
-/// Try to reallocate the buffer _inplace_.
-///
-/// In case of success, return the new buffer's size. On failure, return the old size.
-///
-/// This can be used to shrink (truncate) a buffer as well.
-#[inline]
-pub unsafe fn realloc_inplace(ptr: *mut u8, old_size: usize, size: usize) -> Result<(), ()> {
-    // Lock the bookkeeper, and do a `realloc_inplace`.
-    if BOOKKEEPER.lock().realloc_inplace(
-        Block::from_raw_parts(Pointer::new(ptr), old_size),
-        size
-    ).is_ok() {
-        Ok(())
-    } else {
-        Err(())
+    /// Allocate a block of memory.
+    #[inline]
+    pub fn alloc(&mut self, size: usize, align: usize) -> *mut u8 {
+        *Pointer::from(self.inner.alloc(size, align))
+    }
+
+    /// Free a buffer.
+    ///
+    /// Note that this does not have to be a buffer allocated through ralloc. The only requirement is
+    /// that it is not used after the free.
+    #[inline]
+    pub unsafe fn free(&mut self, ptr: *mut u8, size: usize) {
+        // Rebuild the block from its raw parts.
+        let mut block = Block::from_raw_parts(Pointer::new(ptr), size);
+
+        // When compiled with `security`, we zero this block.
+        #[cfg(feature = "security")]
+        block.zero();
+
+        // Hand the block over to the bookkeeper.
+        self.inner.free(block);
+    }
+
+    /// Reallocate memory.
+    ///
+    /// Reallocate the buffer starting at `ptr` with size `old_size`, to a buffer starting at the
+    /// returned pointer with size `size`.
+    #[inline]
+    pub unsafe fn realloc(&mut self, ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
+        // Lock the bookkeeper, and do a `realloc`.
+        *Pointer::from(self.inner.realloc(
+            Block::from_raw_parts(Pointer::new(ptr), old_size),
+            size,
+            align
+        ))
+    }
+
+    /// Try to reallocate the buffer _inplace_.
+    ///
+    /// In case of success, return the new buffer's size. On failure, return the old size.
+    ///
+    /// This can be used to shrink (truncate) a buffer as well.
+    #[inline]
+    pub unsafe fn realloc_inplace(&mut self, ptr: *mut u8, old_size: usize, size: usize) -> Result<(), ()> {
+        // Lock the bookkeeper, and do a `realloc_inplace`.
+        if self.inner.realloc_inplace(
+            Block::from_raw_parts(Pointer::new(ptr), old_size),
+            size
+        ).is_ok() {
+            Ok(())
+        } else {
+            Err(())
+        }
+    }
+
+    /// Set the OOM handler.
+    ///
+    /// This handler is called when the process runs out of memory.
+    pub fn set_oom_handler(&mut self, handler: fn() -> !) {
+        self.inner.set_oom_handler(handler);
+    }
+
+    /// Assert that no leaks are done.
+    ///
+    /// This should be run at the end of your program, after destructors have been run. It will then
+    /// panic if some item is not freed.
+    ///
+    /// Without the `debug_tools` feature, this is a NOOP.
+    pub fn debug_assert_no_leak(&self) {
+        #[cfg(feature = "debug_tools")]
+        self.inner.assert_no_leak();
     }
 }
+
+impl Leak for Allocator {}

+ 56 - 36
src/block.rs

@@ -3,11 +3,12 @@
 //! Blocks are the main unit for the memory bookkeeping. A block is a simple construct with a
 //! `Pointer` pointer and a size. Occupied (non-free) blocks are represented by a zero-sized block.
 
-use core::{ptr, cmp, mem, fmt};
+use prelude::*;
 
-use ptr::Pointer;
 use sys;
 
+use core::{ptr, cmp, mem, fmt};
+
 /// A contiguous memory block.
 ///
 /// This provides a number of guarantees,
@@ -25,16 +26,6 @@ pub struct Block {
 }
 
 impl Block {
-    /// Create an empty block starting at `ptr`.
-    #[inline]
-    pub fn empty(ptr: &Pointer<u8>) -> Block {
-        Block {
-            size: 0,
-            // This won't alias `ptr`, since the block is empty.
-            ptr: unsafe { Pointer::new(**ptr) },
-        }
-    }
-
     /// Construct a block from its raw parts (pointer and size).
     #[inline]
     pub unsafe fn from_raw_parts(ptr: Pointer<u8>, size: usize) ->  Block {
@@ -44,19 +35,24 @@ impl Block {
         }
     }
 
-    /// Get the size of the block.
-    pub fn size(&self) -> usize {
-        self.size
+    /// BRK allocate a block.
+    #[inline]
+    pub fn brk(size: usize) -> Result<Block, ()> {
+        Ok(Block {
+            size: size,
+            ptr: unsafe {
+                Pointer::new(try!(sys::sbrk(size as isize)))
+            },
+        })
     }
 
-    /// BRK allocate a block.
-    ///
-    /// This is unsafe due to the allocator assuming that only it makes use of BRK.
+    /// Create an empty block starting at `ptr`.
     #[inline]
-   pub unsafe fn brk(size: usize) -> Block {
+    pub fn empty(ptr: Pointer<u8>) -> Block {
         Block {
-            size: size,
-            ptr: sys::inc_brk(size).unwrap_or_else(|x| x.handle()),
+            size: 0,
+            // This won't alias `ptr`, since the block is empty.
+            ptr: unsafe { Pointer::new(*ptr) },
         }
     }
 
@@ -65,13 +61,18 @@ impl Block {
     /// This will simply extend the block, adding the size of the block, and then set the size to
     /// zero. The return value is `Ok(())` on success, and `Err(())` on failure (e.g., the blocks
     /// are not adjacent).
+    ///
+    /// If you merge with a zero-sized block, it will succeed even if the two are not adjacent.
     #[inline]
     pub fn merge_right(&mut self, block: &mut Block) -> Result<(), ()> {
-        if self.left_to(&block) {
+        if block.is_empty() {
+            Ok(())
+        } else if self.left_to(block) {
             // Since the end of `block` is bounded by the address space, adding them cannot
             // overflow.
             self.size += block.pop().size;
             // We pop it to make sure it isn't aliased.
+
             Ok(())
         } else { Err(()) }
     }
@@ -79,7 +80,12 @@ impl Block {
     /// Is this block empty/free?
     #[inline]
     pub fn is_empty(&self) -> bool {
-        self.size != 0
+        self.size == 0
+    }
+
+    /// Get the size of the block.
+    pub fn size(&self) -> usize {
+        self.size
     }
 
     /// Is this block aligned to `align`?
@@ -88,12 +94,6 @@ impl Block {
         *self.ptr as usize % align == 0
     }
 
-    /// Get the inner pointer.
-    #[inline]
-    pub fn into_ptr(self) -> Pointer<u8> {
-        self.ptr
-    }
-
     /// memcpy the block to another pointer.
     ///
     /// # Panics
@@ -109,12 +109,22 @@ impl Block {
         }
     }
 
+    /// Volatile zero this memory.
+    #[cfg(feature = "security")]
+    pub fn zero(&mut self) {
+        use core::intrinsics;
+
+        unsafe {
+            intrinsics::volatile_set_memory(*self.ptr, 0, self.size);
+        }
+    }
+
     /// "Pop" this block.
     ///
     /// This marks it as free, and returns the old value.
     #[inline]
     pub fn pop(&mut self) -> Block {
-        let empty = Block::empty(&self.ptr);
+        let empty = Block::empty(self.ptr.clone());
         mem::replace(self, empty)
     }
 
@@ -137,7 +147,7 @@ impl Block {
         (
             Block {
                 size: pos,
-                ptr: self.ptr.duplicate(),
+                ptr: self.ptr.clone(),
             },
             Block {
                 size: self.size - pos,
@@ -161,7 +171,7 @@ impl Block {
             Some((
                 Block {
                     size: aligner,
-                    ptr: old.ptr.duplicate(),
+                    ptr: old.ptr.clone(),
                 },
                 Block {
                     size: old.size - aligner,
@@ -172,6 +182,14 @@ impl Block {
     }
 }
 
+impl !Sync for Block {}
+
+impl From<Block> for Pointer<u8> {
+    fn from(from: Block) -> Pointer<u8> {
+        from.ptr
+    }
+}
+
 impl PartialOrd for Block {
     #[inline]
     fn partial_cmp(&self, other: &Block) -> Option<cmp::Ordering> {
@@ -204,9 +222,7 @@ impl fmt::Debug for Block {
 
 #[cfg(test)]
 mod test {
-    use super::*;
-
-    use ptr::Pointer;
+    use prelude::*;
 
     #[test]
     fn test_array() {
@@ -231,7 +247,7 @@ mod test {
         assert!(rest.is_empty());
         assert!(lorem.align(2).unwrap().1.aligned_to(2));
         assert!(rest.align(16).unwrap().1.aligned_to(16));
-        assert_eq!(*lorem.into_ptr() as usize + 5, *rest.into_ptr() as usize);
+        assert_eq!(*Pointer::from(lorem) as usize + 5, *Pointer::from(rest) as usize);
     }
 
     #[test]
@@ -243,6 +259,10 @@ mod test {
 
         let (mut lorem, mut rest) = block.split(5);
         lorem.merge_right(&mut rest).unwrap();
+
+        let mut tmp = rest.split(0).0;
+        assert!(tmp.is_empty());
+        lorem.split(2).0.merge_right(&mut tmp).unwrap();
     }
 
     #[test]

+ 190 - 52
src/bookkeeper.rs

@@ -1,11 +1,10 @@
 //! Memory bookkeeping.
 
-use block::Block;
+use prelude::*;
+
 use vec::Vec;
-use fail;
 
-use core::{ptr, cmp, mem};
-use core::mem::{align_of, size_of};
+use core::{ptr, cmp, mem, intrinsics};
 
 /// Canonicalize a BRK request.
 ///
@@ -19,7 +18,12 @@ use core::mem::{align_of, size_of};
 /// The return value is always greater than or equals to the argument.
 #[inline]
 fn canonicalize_brk(min: usize) -> usize {
+    /// The BRK multiplier.
+    ///
+    /// The factor determining the linear dependence between the minimum segment and the
+    /// acquired segment.
     const BRK_MULTIPLIER: usize = 2;
+    /// The minimum size to be BRK'd.
     const BRK_MIN: usize = 65536;
     /// The maximal amount of _extra_ elements.
     const BRK_MAX_EXTRA: usize = 4 * 65536;
@@ -32,6 +36,15 @@ fn canonicalize_brk(min: usize) -> usize {
     res
 }
 
+/// The default OOM handler.
+///
+/// This will simply abort the process.
+fn default_oom_handler() -> ! {
+    unsafe {
+        intrinsics::abort();
+    }
+}
+
 /// The memory bookkeeper.
 ///
 /// This is the main component of ralloc. Its job is to keep track of the free blocks in a
@@ -58,6 +71,11 @@ pub struct Bookkeeper {
     ///
     /// These are invariants assuming that only the public methods are used.
     pool: Vec<Block>,
+    /// The inner OOM handler.
+    oom_handler: fn() -> !,
+    /// The number of bytes currently allocated.
+    #[cfg(feature = "debug_tools")]
+    allocated: usize,
 }
 
 impl Bookkeeper {
@@ -65,9 +83,22 @@ impl Bookkeeper {
     ///
     /// This will make no allocations or BRKs.
     #[inline]
+    #[cfg(feature = "debug_tools")]
+    pub const fn new() -> Bookkeeper {
+        Bookkeeper {
+            pool: Vec::new(),
+            oom_handler: default_oom_handler,
+            allocated: 0,
+        }
+    }
+
+    #[inline]
+    #[cfg(not(feature = "debug_tools"))]
     pub const fn new() -> Bookkeeper {
         Bookkeeper {
             pool: Vec::new(),
+            oom_handler: default_oom_handler,
         }
     }
 
@@ -119,6 +150,7 @@ impl Bookkeeper {
         if let Some((n, b)) = self.pool.iter_mut().enumerate().filter_map(|(n, i)| {
             // Try to split at the aligner.
             i.align(align).map(|(a, b)| {
+                // Override the old block.
                 *i = a;
                 (n, b)
             })
@@ -136,10 +168,12 @@ impl Bookkeeper {
             debug_assert!(res.size() == size, "Requested space does not match with the returned \
                           block.");
 
-            res
+            self.leave(res)
         } else {
             // No fitting block found. Allocate a new block.
-            self.alloc_fresh(size, align)
+            let res = self.alloc_fresh(size, align);
+            // "Leave" the allocator.
+            self.leave(res)
         }
     }
 
@@ -187,7 +221,11 @@ impl Bookkeeper {
     /// See [`insert`](#method.insert) for details.
     #[inline]
     pub fn free(&mut self, block: Block) {
+        // "Enter" the allocator.
+        let block = self.enter(block);
+
         let ind = self.find(&block);
+
         self.free_ind(ind, block);
     }
 
@@ -227,9 +265,11 @@ impl Bookkeeper {
         // Find the index.
         let ind = self.find(&block);
 
+        // "Leave" the allocator.
+        let block = self.enter(block);
         // Try to do an inplace reallocation.
         match self.realloc_inplace_ind(ind, block, new_size) {
-            Ok(block) => block,
+            Ok(block) => self.leave(block),
             Err(block) => {
                 // Reallocation cannot be done inplace.
 
@@ -245,10 +285,10 @@ impl Bookkeeper {
                 // Check consistency.
                 self.check();
                 debug_assert!(res.aligned_to(align), "Alignment failed.");
-                debug_assert!(res.size() == new_size, "Requested space does not match with the \
+                debug_assert!(res.size() >= new_size, "Requested space does not match with the \
                               returned block.");
 
-                res
+                self.leave(res)
             },
         }
     }
@@ -280,23 +320,11 @@ impl Bookkeeper {
     /// "Fresh" means that the space is allocated through a BRK call to the kernel.
     ///
     /// The returned pointer is guaranteed to be aligned to `align`.
+    #[inline]
     fn alloc_fresh(&mut self, size: usize, align: usize) -> Block {
-        // Calculate the canonical size (extra space is allocated to limit the number of system calls).
-        let brk_size = canonicalize_brk(size).checked_add(align).unwrap_or_else(|| fail::oom());
-
-        // Use SYSBRK to allocate extra data segment. The alignment is used as precursor for our
-        // allocated block. This ensures that it is properly memory aligned to the requested value.
-        let (alignment_block, rest) = unsafe {
-            Block::brk(brk_size)
-        }.align(align).unwrap();
+        // BRK what you need.
+        let (alignment_block, res, excessive) = self.brk(size, align);
 
-        // Split the block to leave the excessive space.
-        let (res, excessive) = rest.split(size);
-
-        // Make some assertions.
-        debug_assert!(res.aligned_to(align), "Alignment failed.");
-        debug_assert!(res.size() + alignment_block.size() + excessive.size() == brk_size, "BRK memory \
-                      leak in fresh allocation.");
         // Add it to the list. This will not change the order, since the pointer is higher than all
         // the previous blocks.
         self.push(alignment_block);
@@ -314,6 +342,9 @@ impl Bookkeeper {
     ///
     /// See [`realloc_inplace_ind`](#method.realloc_inplace.html) for more information.
     fn realloc_inplace_ind(&mut self, ind: usize, mut block: Block, new_size: usize) -> Result<Block, Block> {
+        // Assertions...
+        debug_assert!(self.find(&block) == ind, "Block is not inserted at the appropriate index.");
+
         if new_size <= block.size() {
             // Shrink the block.
 
@@ -358,21 +389,26 @@ impl Bookkeeper {
 
     /// Free a block placed on some index.
     ///
+    /// This will at maximum insert one element.
+    ///
     /// See [`free`](#method.free) for more information.
+    #[inline]
     fn free_ind(&mut self, ind: usize, mut block: Block) {
-        // Do a lazy shortcut.
-        if block.is_empty() { return; }
+        // Assertions...
+        debug_assert!(self.find(&block) == ind, "Block is not inserted at the appropriate index.");
 
         // Try to merge left, and then right.
-        if {
+        if self.pool.is_empty() || {
             // To avoid double bound checking and other shenanigans, we declare a variable holding our
             // entry's pointer.
             let entry = &mut self.pool[ind];
 
             // Make some handy assertions.
-            debug_assert!(entry != &mut block, "Double free.");
+            #[cfg(feature = "debug_tools")]
+            assert!(entry != &mut block, "Double free.");
+
             entry.merge_right(&mut block).is_err()
-        } | (ind == 0 || self.pool[ind - 1].merge_right(&mut block).is_err()) {
+        } || ind == 0 || self.pool[ind - 1].merge_right(&mut block).is_err() {
             // Since merge failed, we will have to insert it in a normal manner.
             self.insert(ind, block);
         }
@@ -381,23 +417,42 @@ impl Bookkeeper {
         self.check();
     }
 
+    /// Extend the data segment.
+    #[inline]
+    fn brk(&self, size: usize, align: usize) -> (Block, Block, Block) {
+        // Calculate the canonical size (extra space is allocated to limit the number of system calls).
+        let brk_size = canonicalize_brk(size).checked_add(align).unwrap_or_else(|| self.oom());
+
+        // Use SBRK to allocate extra data segment. The alignment is used as precursor for our
+        // allocated block. This ensures that it is properly memory aligned to the requested value.
+        let (alignment_block, rest) = Block::brk(brk_size)
+            .unwrap_or_else(|_| self.oom())
+            .align(align)
+            .unwrap();
+
+        // Split the block to leave the excessive space.
+        let (res, excessive) = rest.split(size);
+
+        // Make some assertions.
+        debug_assert!(res.aligned_to(align), "Alignment failed.");
+        debug_assert!(res.size() + alignment_block.size() + excessive.size() == brk_size, "BRK memory leak");
+
+        (alignment_block, res, excessive)
+    }
+
     /// Push to the block pool.
     ///
     /// This will append a block entry to the end of the block pool. Make sure that this entry has
     /// a value higher than any of the elements in the list, to keep it sorted.
     #[inline]
     fn push(&mut self, mut block: Block) {
-        // First, we will do a shortcut in case that the block is empty.
-        if block.is_empty() {
-            return;
-        }
-
         // We will try to simply merge it with the last block.
         if let Some(x) = self.pool.last_mut() {
             if x.merge_right(&mut block).is_ok() {
                 return;
             }
-        }
+        } else if block.is_empty() { return; }
+
         // Merging failed. Note that trailing empty blocks are not allowed, hence the last block is
         // the only non-empty candidate which may be adjacent to `block`.
 
@@ -427,7 +482,7 @@ impl Bookkeeper {
             let len = self.pool.len();
 
             // Calculate the index.
-            let ind = self.find(&Block::empty(&self.pool.ptr().duplicate().cast()));
+            let ind = self.find(&Block::empty(Pointer::from(&*self.pool).cast()));
             // Temporarily steal the block, placing an empty vector in its place.
             let block = Block::from(mem::replace(&mut self.pool, Vec::new()));
             // TODO allow BRK-free non-inplace reservations.
@@ -435,9 +490,9 @@ impl Bookkeeper {
             // Reallocate the block pool.
 
             // We first try do it inplace.
-            match self.realloc_inplace_ind(ind, block, needed * size_of::<Block>()) {
+            match self.realloc_inplace_ind(ind, block, needed * mem::size_of::<Block>()) {
                 Ok(succ) => {
-                    // Set the extend block back.
+                    // Inplace reallocation succeeded; place the block back as the pool.
                     self.pool = unsafe { Vec::from_raw_parts(succ, len) };
                 },
                 Err(block) => {
@@ -449,12 +504,24 @@ impl Bookkeeper {
                     // Make a fresh allocation.
                     let size = needed.saturating_add(
                         cmp::min(self.pool.capacity(), 200 + self.pool.capacity() / 2)
-                    ) * size_of::<Block>();
-                    let alloc = self.alloc_fresh(size, align_of::<Block>());
-
-                    // Inplace reallocation suceeeded, place the block back as the pool.
-                    let refilled = self.pool.refill(alloc);
-                    self.free_ind(ind, refilled);
+                        // We add:
+                        + 1 // block for the alignment block.
+                        + 1 // block for the freed vector.
+                        + 1 // block for the excessive space.
+                    ) * mem::size_of::<Block>();
+                    let (alignment_block, alloc, excessive) = self.brk(size, mem::align_of::<Block>());
+
+                    // Refill the pool.
+                    let old = self.pool.refill(alloc);
+
+                    // Push the alignment block (note that it is in fact in the end of the pool,
+                    // due to BRK _extending_ the segment).
+                    self.push(alignment_block);
+                    // The excessive space.
+                    self.push(excessive);
+
+                    // Free the old vector.
+                    self.free_ind(ind, old);
                 },
             }
 
@@ -537,21 +604,32 @@ impl Bookkeeper {
     /// The insertion is now completed.
     #[inline]
     fn insert(&mut self, ind: usize, block: Block) {
+        // Bound check.
+        assert!(self.pool.len() > ind, "Insertion out of bounds.");
+
         // Some assertions...
-        debug_assert!(block >= self.pool[ind + 1], "Inserting at {} will make the list unsorted.", ind);
+        debug_assert!(self.pool.is_empty() || block >= self.pool[ind + 1], "Inserting at {} will \
+                      make the list unsorted.", ind);
         debug_assert!(self.find(&block) == ind, "Block is not inserted at the appropriate index.");
 
         // TODO consider moving right before searching left.
 
         // Find the next gap, where a used block were.
         if let Some((n, _)) = self.pool.iter().skip(ind).enumerate().filter(|&(_, x)| !x.is_empty()).next() {
-            // Memmove the blocks to close in that gap.
-            unsafe {
-                ptr::copy(self.pool[ind..].as_ptr(), self.pool[ind + 1..].as_mut_ptr(), self.pool.len() - n);
+            // Reserve capacity.
+            {
+                let new_len = self.pool.len() + 1;
+                self.reserve(new_len);
             }
 
-            // Place the block left to the moved line.
-            self.pool[ind] = block;
+            unsafe {
+                // Memmove the elements.
+                ptr::copy(self.pool.get_unchecked(ind) as *const Block,
+                          self.pool.get_unchecked_mut(ind + 1) as *mut Block, self.pool.len() - n);
+
+                // Set the element.
+                *self.pool.get_unchecked_mut(ind) = block;
+            }
         } else {
             self.push(block);
         }
@@ -560,8 +638,56 @@ impl Bookkeeper {
         self.check();
     }
 
+    /// Call the OOM handler.
+    ///
+    /// This is used on out-of-memory errors, and will never return. Usually, it simply consists
+    /// of aborting the process.
+    fn oom(&self) -> ! {
+        (self.oom_handler)()
+    }
+
+    /// Set the OOM handler.
+    ///
+    /// This handler is called when the process runs out of memory.
+    #[inline]
+    pub fn set_oom_handler(&mut self, handler: fn() -> !) {
+        self.oom_handler = handler;
+    }
+
+    /// Leave the allocator.
+    ///
+    /// A block should be "registered" through this function when it leaves the allocator (e.g., is
+    /// returned). This is used to keep track of the current heap usage and to detect memory leaks.
+    #[inline]
+    fn leave(&mut self, block: Block) -> Block {
+        // Update the number of bytes allocated.
+        #[cfg(feature = "debug_tools")]
+        {
+            self.allocated += block.size();
+        }
+
+        block
+    }
+
+    /// Enter the allocator.
+    ///
+    /// A block should be "registered" through this function when it enters the allocator (e.g., is
+    /// given as an argument). This is used to keep track of the current heap usage and to detect
+    /// memory leaks.
+    #[inline]
+    fn enter(&mut self, block: Block) -> Block {
+        // Update the number of bytes allocated.
+        #[cfg(feature = "debug_tools")]
+        {
+            self.allocated -= block.size();
+        }
+
+        block
+    }
+
     /// No-op in release mode.
     #[cfg(not(debug_assertions))]
+    #[inline]
     fn check(&self) {}
 
     /// Perform consistency checks.
@@ -576,13 +702,25 @@ impl Bookkeeper {
             let mut prev = x;
             for (n, i) in self.pool.iter().enumerate().skip(1) {
                 // Check if sorted.
-                assert!(i >= prev, "The block pool is not sorted at index, {} ({:?} < {:?})", n, i, prev);
+                assert!(i >= prev, "The block pool is not sorted at index {} ({:?} < {:?}).", n, i,
+                        prev);
                 // Make sure no blocks are adjacent.
-                assert!(!prev.left_to(i) || i.is_empty(), "Adjacent blocks at index, {} ({:?} and {:?})", n, i, prev);
+                assert!(!prev.left_to(i) || i.is_empty(), "Adjacent blocks at index {} ({:?} and \
+                        {:?}).", n, i, prev);
 
                 // Set the variable tracking the previous block.
                 prev = i;
             }
         }
     }
+
+    /// Check for memory leaks.
+    ///
+    /// This will make sure that all the allocated blocks have been freed.
+    #[cfg(feature = "debug_tools")]
+    pub fn assert_no_leak(&self) {
+        assert!(self.allocated == self.pool.capacity() * mem::size_of::<Block>(), "Not all blocks \
+                freed. Total allocated space is {} ({} free blocks).", self.allocated,
+                self.pool.len());
+    }
 }

+ 0 - 48
src/fail.rs

@@ -1,48 +0,0 @@
-//! Primitives for allocator failures.
-
-use core::sync::atomic::{self, AtomicPtr};
-use core::{mem, intrinsics};
-
-/// The OOM handler.
-static OOM_HANDLER: AtomicPtr<()> = AtomicPtr::new(default_oom_handler as *mut ());
-
-/// The default OOM handler.
-///
-/// This will simply abort the process.
-fn default_oom_handler() -> ! {
-    unsafe {
-        intrinsics::abort();
-    }
-}
-
-/// Call the OOM handler.
-#[cold]
-#[inline(never)]
-pub fn oom() -> ! {
-    let value = OOM_HANDLER.load(atomic::Ordering::SeqCst);
-    let handler: fn() -> ! = unsafe { mem::transmute(value) };
-    handler();
-}
-
-/// Set the OOM handler.
-///
-/// This allows for overwriting the default OOM handler with a custom one.
-pub fn set_oom_handler(handler: fn() -> !) {
-    OOM_HANDLER.store(handler as *mut (), atomic::Ordering::SeqCst);
-}
-
-#[cfg(test)]
-mod test {
-    use super::*;
-
-    #[test]
-    #[should_panic]
-    fn test_handler() {
-        fn panic() -> ! {
-            panic!("blame canada for the OOM.");
-        }
-
-        set_oom_handler(panic);
-        oom();
-    }
-}

+ 17 - 0
src/leak.rs

@@ -0,0 +1,17 @@
+//! Traits for leakable types.
+//!
+//! In the context of writing a memory allocator, leaks are never ideal. To avoid these, we have a
+//! trait for types which are "leakable".
+
+use prelude::*;
+
+/// Types that have no destructor.
+///
+/// This trait acts as a simple static assertion, catching dumb logic errors and memory leaks.
+///
+/// Since one cannot define mutually exclusive traits, we have this as a temporary hack.
+pub trait Leak {}
+
+impl Leak for () {}
+impl Leak for Block {}
+impl Leak for u8 {}
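
As a usage sketch (the `Slot` type and `store_forever` helper are hypothetical, not part of the commit): opting a destructor-free type into `Leak` is one line, and a bound like `T: Leak` then rejects types whose destructors would silently be skipped.

```rust
use prelude::*;

use core::mem;

/// A hypothetical plain-old-data type: it has no destructor, so leaking it
/// is harmless and we can safely mark it as leakable.
struct Slot(u8);
impl Leak for Slot {}

/// Containers that never run destructors can demand `T: Leak`, turning a
/// would-be memory leak into a compile-time error.
fn store_forever<T: Leak>(x: T) {
    mem::forget(x);
}
```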

+ 13 - 15
src/lib.rs

@@ -4,36 +4,34 @@
 //! efficiency.
 
 #![cfg_attr(feature = "allocator", allocator)]
+#![cfg_attr(feature="clippy", feature(plugin))]
+#![cfg_attr(feature="clippy", plugin(clippy))]
+
 #![no_std]
 
 #![feature(allocator, const_fn, core_intrinsics, stmt_expr_attributes, drop_types_in_const,
-           nonzero)]
-
+           nonzero, optin_builtin_traits, type_ascription)]
 #![warn(missing_docs)]
 
-#[cfg(target_os = "redox")]
-extern crate system;
-#[cfg(not(target_os = "redox"))]
-#[macro_use]
-extern crate syscall;
-
-mod allocator;
 mod block;
 mod bookkeeper;
+mod leak;
+mod prelude;
 mod ptr;
 mod sync;
 mod sys;
 mod vec;
-pub mod fail;
+mod allocator;
 
-pub use allocator::{free, alloc, realloc, realloc_inplace};
+pub use allocator::{lock, Allocator};
+pub use sys::sbrk;
 
 /// Rust allocation symbol.
 #[no_mangle]
 #[inline]
 #[cfg(feature = "allocator")]
 pub extern fn __rust_allocate(size: usize, align: usize) -> *mut u8 {
-    alloc(size, align)
+    lock().alloc(size, align)
 }
 
 /// Rust deallocation symbol.
@@ -41,7 +39,7 @@ pub extern fn __rust_allocate(size: usize, align: usize) -> *mut u8 {
 #[inline]
 #[cfg(feature = "allocator")]
 pub unsafe extern fn __rust_deallocate(ptr: *mut u8, size: usize, _align: usize) {
-    free(ptr, size);
+    lock().free(ptr, size);
 }
 
 /// Rust reallocation symbol.
@@ -49,7 +47,7 @@ pub unsafe extern fn __rust_deallocate(ptr: *mut u8, size: usize, _align: usize)
 #[inline]
 #[cfg(feature = "allocator")]
 pub unsafe extern fn __rust_reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
-    realloc(ptr, old_size, size, align)
+    lock().realloc(ptr, old_size, size, align)
 }
 
 /// Rust reallocation inplace symbol.
@@ -57,7 +55,7 @@ pub unsafe extern fn __rust_reallocate(ptr: *mut u8, old_size: usize, size: usiz
 #[inline]
 #[cfg(feature = "allocator")]
 pub unsafe extern fn __rust_reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize, _align: usize) -> usize {
-    if realloc_inplace(ptr, old_size, size).is_ok() {
+    if lock().realloc_inplace(ptr, old_size, size).is_ok() {
         size
     } else {
         old_size

+ 93 - 0
src/micro.rs

@@ -0,0 +1,93 @@
+//! Micro slots for caching small allocations.
+
+use prelude::*;
+
+use core::{marker, mem};
+
+const CACHE_LINE_SIZE: usize = 128;
+const CACHE_LINES: usize = 32;
+
+/// A "microcache".
+///
+/// A microcache consists of some number of equal-sized slots, whose free/used state is stored as bitflags.
+pub struct MicroCache {
+    free: u32,
+    lines: [CacheLine; CACHE_LINES],
+}
+
+impl MicroCache {
+    pub const fn new() -> MicroCache {
+        MicroCache {
+            free: !0,
+            lines: [CacheLine::new(); CACHE_LINES],
+        }
+    }
+
+    pub fn alloc(&mut self, size: usize, align: usize) -> Result<Block, ()> {
+        if size <= CACHE_LINE_SIZE && self.free != 0 {
+            let ind = self.free.trailing_zeros();
+            let line = &mut self.lines[ind as usize];
+            let res = unsafe { line.take(size) };
+
+            if res.aligned_to(align) {
+                self.free ^= 1u32.wrapping_shl(ind);
+
+                return Ok(res);
+            } else {
+                line.reset();
+            }
+        }
+
+        Err(())
+    }
+
+    pub fn free(&mut self, mut block: Block) -> Result<(), Block> {
+        let res = block.pop();
+        let ptr: Pointer<u8> = block.into();
+        let ind = (*ptr as usize - self.lines.as_ptr() as usize) / mem::size_of::<CacheLine>();
+
+        if let Some(line) = self.lines.get_mut(ind) {
+            line.used -= res.size();
+            if line.used == 0 {
+                debug_assert!(self.free & 1u32.wrapping_shl(ind as u32) == 0, "Freeing a block \
+                              already marked as free.");
+                self.free ^= 1u32.wrapping_shl(ind as u32);
+            }
+
+            Ok(())
+        } else {
+            Err(res)
+        }
+    }
+}
+
+#[derive(Clone, Copy)]
+struct CacheLine {
+    /// The cache line's data.
+    ///
+    /// We use `u32` as a hack to be able to derive `Copy`.
+    data: [u32; CACHE_LINE_SIZE / 4],
+    used: usize,
+    _static: marker::PhantomData<&'static mut [u8]>,
+}
+
+impl CacheLine {
+    pub const fn new() -> CacheLine {
+        CacheLine {
+            data: [0; CACHE_LINE_SIZE / 4],
+            used: 0,
+            _static: marker::PhantomData,
+        }
+    }
+
+    fn reset(&mut self) {
+        self.used = 0;
+    }
+
+    unsafe fn take(&mut self, size: usize) -> Block {
+        debug_assert!(self.used == 0, "Block not freed!");
+
+        self.used = size;
+        Block::from_raw_parts(Pointer::new(&mut self.data[0] as *mut u32 as *mut u8), size)
+    }
+}
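
Since the microcache is not wired up yet, here is a hypothetical round trip through its API (relying only on the `alloc` and `free` signatures above):

```rust
fn roundtrip() {
    let mut cache = MicroCache::new();

    // Requests of up to CACHE_LINE_SIZE bytes are served from a free line.
    if let Ok(block) = cache.alloc(64, 4) {
        // ... use `block` here ...

        // Freeing decrements the line's usage counter and flips the line's
        // bitflag back to free once it is fully unused.
        cache.free(block).expect("block did not belong to the cache");
    }
}
```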

+ 5 - 0
src/prelude.rs

@@ -0,0 +1,5 @@
+//! Frequently used imports.
+
+pub use block::Block;
+pub use leak::Leak;
+pub use ptr::Pointer;

+ 15 - 18
src/ptr.rs

@@ -7,7 +7,7 @@ use core::{ops, marker};
 ///
 /// A wrapper around a raw non-null `*mut T` that indicates that the possessor of this wrapper owns
 /// the referent.
-#[derive(PartialEq, Debug)]
+#[derive(PartialEq, Debug, Clone)]
 pub struct Pointer<T> {
     /// The internal pointer.
     ptr: NonZero<*mut T>,
@@ -35,14 +35,13 @@ impl<T> Pointer<T> {
         }
     }
 
-    /// Duplicate this pointer.
+    /// Create an "empty" `Pointer`.
     ///
-    /// For technical reasons, this is not implemented through the `Clone` trait, although it acts
-    /// similarly.
+    /// This acts as a null pointer, although it is represented by 0x1 instead of 0x0.
     #[inline]
-    pub fn duplicate(&self) -> Pointer<T> {
+    pub const fn empty() -> Pointer<T> {
         Pointer {
-            ptr: self.ptr,
+            ptr: unsafe { NonZero::new(0x1 as *mut T) },
             _phantom: marker::PhantomData,
         }
     }
@@ -58,17 +57,6 @@ impl<T> Pointer<T> {
         }
     }
 
-    /// Create an "empty" `Pointer`.
-    ///
-    /// This acts as a null pointer, although it is represented by 0x1 instead of 0x0.
-    #[inline]
-    pub const fn empty() -> Pointer<T> {
-        Pointer {
-            ptr: unsafe { NonZero::new(0x1 as *mut T) },
-            _phantom: marker::PhantomData,
-        }
-    }
-
     /// Offset this pointer.
     ///
     /// This will add some value multiplied by the size of T to the pointer.
@@ -82,6 +70,15 @@ impl<T> Pointer<T> {
     }
 }
 
+unsafe impl<T: Send> Send for Pointer<T> {}
+unsafe impl<T: Sync> Sync for Pointer<T> {}
+
+impl<'a, T> From<&'a [T]> for Pointer<T> {
+    fn from(from: &[T]) -> Pointer<T> {
+        unsafe { Pointer::new(from.as_ptr() as *mut T) }
+    }
+}
+
 impl<T> ops::Deref for Pointer<T> {
     type Target = *mut T;
 
@@ -102,7 +99,7 @@ mod test {
         unsafe {
             let ptr = Pointer::new(&mut x[0] as *mut u8);
             assert_eq!(**ptr, b'a');
-            assert_eq!(**ptr.duplicate().cast::<[u8; 1]>(), [b'a']);
+            assert_eq!(**ptr.clone().cast::<[u8; 1]>(), [b'a']);
             assert_eq!(**ptr.offset(1), b'b');
         }
     }

+ 22 - 14
src/sync.rs

@@ -1,19 +1,21 @@
 //! Synchronization primitives.
 
-use core::sync::atomic::{self, AtomicBool};
-use core::ops;
+use prelude::*;
 
 use sys;
 
+use core::cell::UnsafeCell;
+use core::sync::atomic::{self, AtomicBool};
+use core::ops;
+
 /// A mutual exclusive container.
 ///
 /// This assures that only one holds mutability of the inner value. To get the inner value, you
 /// need acquire the "lock". If you try to lock it while a lock is already held elsewhere, it will
 /// block the thread until the lock is released.
-// TODO soundness issue when T: Drop?
-pub struct Mutex<T> {
+pub struct Mutex<T: Leak> {
     /// The inner value.
-    inner: T,
+    inner: UnsafeCell<T>,
     /// The lock boolean.
     ///
     /// This is true, if and only if the lock is currently held.
@@ -23,39 +25,43 @@ pub struct Mutex<T> {
 /// A mutex guard.
 ///
 /// This acts as the lock.
-pub struct MutexGuard<'a, T: 'a> {
+pub struct MutexGuard<'a, T: 'a + Leak> {
     mutex: &'a Mutex<T>,
 }
 
 /// Release the mutex.
-impl<'a, T> Drop for MutexGuard<'a, T> {
+impl<'a, T: Leak> Drop for MutexGuard<'a, T> {
     #[inline]
     fn drop(&mut self) {
         self.mutex.locked.store(false, atomic::Ordering::SeqCst);
     }
 }
 
-impl<'a, T> ops::Deref for MutexGuard<'a, T> {
+impl<'a, T: Leak> ops::Deref for MutexGuard<'a, T> {
     type Target = T;
 
     #[inline]
     fn deref(&self) -> &T {
-        &self.mutex.inner
+        unsafe {
+            &*self.mutex.inner.get()
+        }
     }
 }
 
-impl<'a, T> ops::DerefMut for MutexGuard<'a, T> {
+impl<'a, T: Leak> ops::DerefMut for MutexGuard<'a, T> {
     fn deref_mut(&mut self) -> &mut T {
-        unsafe { &mut *(&self.mutex.inner as *const T as *mut T) }
+        unsafe {
+            &mut *self.mutex.inner.get()
+        }
     }
 }
 
-impl<T> Mutex<T> {
+impl<T: Leak> Mutex<T> {
     /// Create a new mutex with some inner value.
     #[inline]
     pub const fn new(inner: T) -> Mutex<T> {
         Mutex {
-            inner: inner,
+            inner: UnsafeCell::new(inner),
             locked: AtomicBool::new(false),
         }
     }
@@ -66,6 +72,7 @@ impl<T> Mutex<T> {
     #[inline]
     pub fn lock(&self) -> MutexGuard<T> {
         // Lock the mutex.
+        #[cfg(not(feature = "unsafe_no_mutex_lock"))]
         while self.locked.compare_and_swap(false, true, atomic::Ordering::SeqCst) {
             // ,___,
             // {O,o}
@@ -80,7 +87,8 @@ impl<T> Mutex<T> {
     }
 }
 
-unsafe impl<T> Sync for Mutex<T> {}
+unsafe impl<T: Send + Leak> Send for Mutex<T> {}
+unsafe impl<T: Send + Leak> Sync for Mutex<T> {}
 
 #[cfg(test)]
 mod test {
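
For reference, a minimal usage sketch of this primitive (`u8` implements `Leak` via src/leak.rs, so it satisfies the new bound, and `Mutex::new` being a `const fn` allows the static):

```rust
static COUNTER: Mutex<u8> = Mutex::new(0);

fn bump() -> u8 {
    // Spins (yielding the timeslice) until the lock is acquired.
    let mut guard = COUNTER.lock();
    *guard += 1;
    *guard
    // The lock is released when `guard` is dropped.
}
```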

+ 27 - 84
src/sys.rs

@@ -1,97 +1,44 @@
 //! System primitives.
 
-use ptr::Pointer;
-use fail;
+#[cfg(not(feature = "unsafe_no_brk_lock"))]
+use sync;
 
-/// A system call error.
-#[derive(Copy, Clone, Debug, PartialEq, Eq)]
-pub enum Error {
-    /// Sir, we're running outta memory!
-    OutOfMemory,
-    /// An OS error occurred.
-    Os,
-}
-
-impl Error {
-    /// Handle this error with the appropriate method.
-    pub fn handle(self) -> ! {
-        match self {
-            Error::OutOfMemory => fail::oom(),
-            Error::Os => panic!("Unknown OS error.")
-        }
+mod symbols {
+    extern {
+        pub fn sched_yield() -> isize;
+        pub fn sbrk(diff: isize) -> *mut u8;
     }
 }
 
-/// Cooperatively gives up a timeslice to the OS scheduler.
-pub fn yield_now() {
-    unsafe {
-        #[cfg(not(target_os = "redox"))]
-        syscall!(SCHED_YIELD);
-
-        #[cfg(target_os = "redox")]
-        ::system::syscall::unix::sys_yield();
-    }
-}
-
-/// Retrieve the end of the current data segment.
+/// The BRK mutex.
 ///
-/// This will not change the state of the process in any way, and is thus safe.
-    #[inline]
-pub fn segment_end() -> Result<*const u8, Error> {
-    unsafe {
-        sys_brk(0)
-    }.map(|x| x as *const _)
-}
+/// This is used to avoid data races when multiple allocators BRK at once.
+#[cfg(not(feature = "unsafe_no_brk_lock"))]
+static BRK_MUTEX: sync::Mutex<()> = sync::Mutex::new(());
 
 /// Increment the data segment of this process by some _n_, returning a pointer to the new data segment
 /// start.
 ///
 /// This uses the system call BRK as backend.
-///
-/// This is unsafe for multiple reasons. Most importantly, it can create an inconsistent state,
-/// because it is not atomic. Thus, it can be used to create Undefined Behavior.
 #[inline]
-pub unsafe fn inc_brk(n: usize) -> Result<Pointer<u8>, Error> {
-    let orig_seg_end = try!(segment_end()) as usize;
-    if n == 0 { return Ok(Pointer::new(orig_seg_end as *mut u8)) }
-
-    let expected_end = try!(orig_seg_end.checked_add(n).ok_or(Error::OutOfMemory));
-    let new_seg_end = try!(sys_brk(expected_end));
-
-    if new_seg_end != expected_end {
-        // Reset the break.
-        try!(sys_brk(orig_seg_end));
+pub fn sbrk(n: isize) -> Result<*mut u8, ()> {
+    // Lock the BRK mutex.
+    #[cfg(not(feature = "unsafe_no_brk_lock"))]
+    let _guard = BRK_MUTEX.lock();
 
-        Err(Error::OutOfMemory)
-    } else {
-        Ok(Pointer::new(orig_seg_end as *mut u8))
-    }
-}
-
-/// Redox syscall, BRK.
-#[inline]
-#[cfg(target_os = "redox")]
-unsafe fn sys_brk(n: usize) -> Result<usize, Error> {
-    use system::syscall;
-
-    if let Ok(ret) = syscall::sys_brk(n) {
-        Ok(ret)
-    } else {
-        Err(Error::Os)
+    unsafe {
+        let brk = symbols::sbrk(n);
+        if brk as usize == !0 {
+            Err(())
+        } else {
+            Ok(brk)
+        }
     }
 }
 
-/// Unix syscall, BRK.
-#[inline]
-#[cfg(not(target_os = "redox"))]
-unsafe fn sys_brk(n: usize) -> Result<usize, Error> {
-    let ret = syscall!(BRK, n);
-
-    if ret == !0 {
-        Err(Error::Os)
-    } else {
-        Ok(ret)
-    }
+/// Cooperatively gives up a timeslice to the OS scheduler.
+pub fn yield_now() {
+    assert_eq!(unsafe { symbols::sched_yield() }, 0);
 }
 
 #[cfg(test)]
@@ -100,16 +47,12 @@ mod test {
 
     #[test]
     fn test_oom() {
-        unsafe {
-            assert_eq!(inc_brk(9999999999999).err(), Some(Error::OutOfMemory));
-        }
+        assert!(sbrk(9999999999999).is_err());
     }
 
     #[test]
     fn test_overflow() {
-        unsafe {
-            assert_eq!(inc_brk(!0).err(), Some(Error::OutOfMemory));
-            assert_eq!(inc_brk(!0 - 2000).err(), Some(Error::OutOfMemory));
-        }
+        assert!(sbrk(!0).is_err());
+        assert!(sbrk(!0 - 2000).is_err());
     }
 }

+ 16 - 37
src/vec.rs

@@ -1,16 +1,14 @@
 //! Vector primitive.
 
-use core::mem::size_of;
-use core::{slice, ops, ptr, mem};
+use prelude::*;
 
-use block::Block;
-use ptr::Pointer;
+use core::{slice, ops, mem, ptr};
 
 /// A low-level vector primitive.
 ///
 /// This does not perform allocation nor reallocation, thus these have to be done manually.
 /// Moreover, no destructors are called, making it possible to leak memory.
-pub struct Vec<T: NoDrop> {
+pub struct Vec<T: Leak> {
     /// A pointer to the start of the buffer.
     ptr: Pointer<T>,
     /// The capacity of the buffer.
@@ -23,7 +21,7 @@ pub struct Vec<T: NoDrop> {
     len: usize,
 }
 
-impl<T: NoDrop> Vec<T> {
+impl<T: Leak> Vec<T> {
     /// Create a new empty vector.
     ///
     /// This won't allocate a buffer, thus it will have a capacity of zero.
@@ -45,12 +43,12 @@ impl<T: NoDrop> Vec<T> {
     #[inline]
     pub unsafe fn from_raw_parts(block: Block, len: usize) -> Vec<T> {
         // Make some handy assertions.
-        debug_assert!(block.size() % size_of::<T>() == 0, "The size of T does not divide the \
+        debug_assert!(block.size() % mem::size_of::<T>() == 0, "The size of T does not divide the \
                       block's size.");
 
         Vec {
-            cap: block.size() / size_of::<T>(),
-            ptr: Pointer::new(*block.into_ptr() as *mut T),
+            cap: block.size() / mem::size_of::<T>(),
+            ptr: Pointer::new(*Pointer::from(block) as *mut T),
             len: len,
         }
     }
@@ -66,11 +64,11 @@ impl<T: NoDrop> Vec<T> {
     /// debug mode.
     pub fn refill(&mut self, block: Block) -> Block {
         // Calculate the new capacity.
-        let new_cap = block.size() / size_of::<T>();
+        let new_cap = block.size() / mem::size_of::<T>();
 
         // Make some assertions.
         assert!(self.len <= new_cap, "Block not large enough to cover the vector.");
-        debug_assert!(new_cap * size_of::<T>() == block.size(), "The size of T does not divide the \
+        debug_assert!(new_cap * mem::size_of::<T>() == block.size(), "The size of T does not divide the \
                       block's size.");
 
         let old = mem::replace(self, Vec::new());
@@ -80,21 +78,12 @@ impl<T: NoDrop> Vec<T> {
 
         // Update the fields of `self`.
         self.cap = new_cap;
-        self.ptr = unsafe { Pointer::new(*block.into_ptr() as *mut T) };
+        self.ptr = unsafe { Pointer::new(*Pointer::from(block) as *mut T) };
         self.len = old.len;
 
         Block::from(old)
     }
 
-    /// Get the inner pointer.
-    ///
-    /// Do not perform mutation or any form of manipulation through this pointer, since doing so
-    /// might break invariants.
-    #[inline]
-    pub fn ptr(&self) -> &Pointer<T> {
-        &self.ptr
-    }
-
     /// Get the capacity of this vector.
     #[inline]
     pub fn capacity(&self) -> usize {
@@ -122,13 +111,13 @@ impl<T: NoDrop> Vec<T> {
 }
 
 /// Cast this vector to the respective block.
-impl<T: NoDrop> From<Vec<T>> for Block {
+impl<T: Leak> From<Vec<T>> for Block {
     fn from(from: Vec<T>) -> Block {
-        unsafe { Block::from_raw_parts(from.ptr.cast(), from.cap * size_of::<T>()) }
+        unsafe { Block::from_raw_parts(from.ptr.cast(), from.cap * mem::size_of::<T>()) }
     }
 }
 
-impl<T: NoDrop> ops::Deref for Vec<T> {
+impl<T: Leak> ops::Deref for Vec<T> {
     #[inline]
     type Target = [T];
 
@@ -139,7 +128,7 @@ impl<T: NoDrop> ops::Deref for Vec<T> {
     }
 }
 
-impl<T: NoDrop> ops::DerefMut for Vec<T> {
+impl<T: Leak> ops::DerefMut for Vec<T> {
     #[inline]
     fn deref_mut(&mut self) -> &mut [T] {
         unsafe {
@@ -148,21 +137,11 @@ impl<T: NoDrop> ops::DerefMut for Vec<T> {
     }
 }
 
-/// Types that have no destructor.
-///
-/// This trait act as a simple static assertions catching dumb logic errors and memory leaks.
-///
-/// Since one cannot define mutually exclusive traits, we have this as a temporary hack.
-pub trait NoDrop {}
-
-impl NoDrop for Block {}
-impl NoDrop for u8 {}
-
 #[cfg(test)]
 mod test {
     use super::*;
-    use block::Block;
-    use ptr::Pointer;
+
+    use prelude::*;
 
     #[test]
     fn test_vec() {

+ 17 - 13
tests/box.rs

@@ -6,19 +6,23 @@ fn alloc_box() -> Box<u32> {
 
 #[test]
 fn test() {
-    let mut a = Box::new(1);
-    let mut b = Box::new(2);
-    let mut c = Box::new(3);
+    {
+        let mut a = Box::new(1);
+        let mut b = Box::new(2);
+        let mut c = Box::new(3);
 
-    assert_eq!(*a, 1);
-    assert_eq!(*b, 2);
-    assert_eq!(*c, 3);
-    assert_eq!(*alloc_box(), 0xDEADBEAF);
+        assert_eq!(*a, 1);
+        assert_eq!(*b, 2);
+        assert_eq!(*c, 3);
+        assert_eq!(*alloc_box(), 0xDEADBEAF);
 
-    *a = 0;
-    *b = 0;
-    *c = 0;
-    assert_eq!(*a, 0);
-    assert_eq!(*b, 0);
-    assert_eq!(*c, 0);
+        *a = 0;
+        *b = 0;
+        *c = 0;
+        assert_eq!(*a, 0);
+        assert_eq!(*b, 0);
+        assert_eq!(*c, 0);
+    }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 14 - 10
tests/btreemap.rs

@@ -4,16 +4,20 @@ use std::collections::BTreeMap;
 
 #[test]
 fn test() {
-    let mut map = BTreeMap::new();
+    {
+        let mut map = BTreeMap::new();
 
-    map.insert("Nicolas", "Cage");
-    map.insert("is", "God");
-    map.insert("according", "to");
-    map.insert("ca1ek", ".");
+        map.insert("Nicolas", "Cage");
+        map.insert("is", "God");
+        map.insert("according", "to");
+        map.insert("ca1ek", ".");
 
-    assert_eq!(map.get("Nicolas"), Some(&"Cage"));
-    assert_eq!(map.get("is"), Some(&"God"));
-    assert_eq!(map.get("according"), Some(&"to"));
-    assert_eq!(map.get("ca1ek"), Some(&"."));
-    assert_eq!(map.get("This doesn't exist."), None);
+        assert_eq!(map.get("Nicolas"), Some(&"Cage"));
+        assert_eq!(map.get("is"), Some(&"God"));
+        assert_eq!(map.get("according"), Some(&"to"));
+        assert_eq!(map.get("ca1ek"), Some(&"."));
+        assert_eq!(map.get("This doesn't exist."), None);
+    }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 17 - 0
tests/join.rs

@@ -0,0 +1,17 @@
+extern crate ralloc;
+
+use std::thread;
+
+#[test]
+fn test() {
+    for i in 0..0xFFFF {
+        let bx = Box::new("frakkkko");
+        let join = thread::spawn(move || Box::new(!i));
+        drop(bx);
+        let bx = Box::new("frakkkko");
+        join.join().unwrap();
+        drop(bx);
+    }
+
+    ralloc::lock().debug_assert_no_leak();
+}

+ 25 - 21
tests/mpsc.rs

@@ -6,27 +6,31 @@ use std::sync::mpsc;
 #[test]
 fn test() {
     {
-        let (tx, rx) = mpsc::channel::<Box<u64>>();
-        thread::spawn(move || {
-            tx.send(Box::new(0xBABAFBABAF)).unwrap();
-            tx.send(Box::new(0xDEADBEAF)).unwrap();
-            tx.send(Box::new(0xDECEA5E)).unwrap();
-            tx.send(Box::new(0xDEC1A551F1E5)).unwrap();
-        });
-        assert_eq!(*rx.recv().unwrap(), 0xBABAFBABAF);
-        assert_eq!(*rx.recv().unwrap(), 0xDEADBEAF);
-        assert_eq!(*rx.recv().unwrap(), 0xDECEA5E);
-        assert_eq!(*rx.recv().unwrap(), 0xDEC1A551F1E5);
-    }
+        {
+            let (tx, rx) = mpsc::channel::<Box<u64>>();
+            thread::spawn(move || {
+                tx.send(Box::new(0xBABAFBABAF)).unwrap();
+                tx.send(Box::new(0xDEADBEAF)).unwrap();
+                tx.send(Box::new(0xDECEA5E)).unwrap();
+                tx.send(Box::new(0xDEC1A551F1E5)).unwrap();
+            });
+            assert_eq!(*rx.recv().unwrap(), 0xBABAFBABAF);
+            assert_eq!(*rx.recv().unwrap(), 0xDEADBEAF);
+            assert_eq!(*rx.recv().unwrap(), 0xDECEA5E);
+            assert_eq!(*rx.recv().unwrap(), 0xDEC1A551F1E5);
+        }
 
-    let (tx, rx) = mpsc::channel();
-    for _ in 0..0xFFFF {
-        let tx = tx.clone();
-        thread::spawn(move || {
-            tx.send(Box::new(0xFA11BAD)).unwrap();
-        });
-    }
-    for _ in 0..0xFFFF {
-        assert_eq!(*rx.recv().unwrap(), 0xFA11BAD);
+        let (tx, rx) = mpsc::channel();
+        for _ in 0..0xFFFF {
+            let tx = tx.clone();
+            thread::spawn(move || {
+                tx.send(Box::new(0xFA11BAD)).unwrap();
+            });
+        }
+        for _ in 0..0xFFFF {
+            assert_eq!(*rx.recv().unwrap(), 0xFA11BAD);
+        }
     }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 11 - 4
tests/multithreading.rs

@@ -2,7 +2,7 @@ extern crate ralloc;
 
 use std::thread;
 
-fn make_thread() {
+fn make_thread() -> thread::JoinHandle<()> {
     thread::spawn(|| {
         let mut vec = Vec::new();
 
@@ -14,12 +14,19 @@ fn make_thread() {
         for i in 0..0xFFFF {
             assert_eq!(vec[i], i);
         }
-    });
+    })
 }
 
 #[test]
 fn test() {
-    for _ in 0..5 {
-        make_thread();
+    let mut join = Vec::new();
+    for _ in 0..50 {
+        join.push(make_thread());
     }
+
+    for i in join {
+        i.join().unwrap();
+    }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 15 - 10
tests/realloc.rs

@@ -2,15 +2,20 @@ extern crate ralloc;
 
 #[test]
 fn test() {
-    let mut vec = Vec::new();
-    vec.reserve(1);
-    vec.reserve(2);
-    vec.reserve(3);
-    vec.reserve(100);
-    vec.reserve(600);
-    vec.reserve(1000);
-    vec.reserve(2000);
+    {
+        let mut vec = Vec::new();
 
-    vec.push(1);
-    vec.push(2);
+        vec.reserve(1);
+        vec.reserve(2);
+        vec.reserve(3);
+        vec.reserve(100);
+        vec.reserve(600);
+        vec.reserve(1000);
+        vec.reserve(2000);
+
+        vec.push(1);
+        vec.push(2);
+    }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 10 - 2
tests/send.rs

@@ -4,11 +4,19 @@ use std::thread;
 
 #[test]
 fn test() {
+    let mut join = Vec::new();
+
     for _ in 0..0xFFFF {
         let bx: Box<u64> = Box::new(0x11FE15C001);
 
-        thread::spawn(move || {
+        join.push(thread::spawn(move || {
             assert_eq!(*bx, 0x11FE15C001);
-        });
+        }));
+    }
+
+    for i in join {
+        i.join().unwrap();
     }
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 2 - 0
tests/string.rs

@@ -5,4 +5,6 @@ fn test() {
     assert_eq!(&String::from("you only live twice"), "you only live twice");
     assert_eq!(&String::from("wtf have you smoked"), "wtf have you smoked");
     assert_eq!(&String::from("get rekt m8"), "get rekt m8");
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 4 - 0
tests/vec.rs

@@ -24,4 +24,8 @@ fn test() {
         vec[i] = 0;
         assert_eq!(vec[i], 0);
     }
+
+    drop(vec);
+
+    ralloc::lock().debug_assert_no_leak();
 }

+ 4 - 0
tests/vec_box.rs

@@ -22,4 +22,8 @@ fn test() {
         *vec[i] = 0;
         assert_eq!(*vec[i], 0);
     }
+
+    drop(vec);
+
+    ralloc::lock().debug_assert_no_leak();
 }