Ralloc 1.0.0: A trait-based local/global allocator model.

- Use a trait-based approach to local/global allocators. In particular,
  what was previously provided by `Bookkeeper` is turned into a trait in
  which the custom methods are overridden (see the sketch after this
  list).
- Enable ralloc-based block list reallocation. Instead of using BRK for
  this, we allow the allocator itself to manage the bookkeeper's
  capacity. This is done by adding the guarantee of the vector's
  capacity always being greater than or equal two the length plus two.
- Remove the `UniCell` primitive and provide a `MoveCell` primitive
  instead.
- Move `Bookkeeper`-specific methods to `Bookkeeper`'s impl instead of
  the `Allocator` trait.
- Introduce the `LazyInit` primitive, which provides functionality
  similar to the `lazy_static` crate. In particular, it allows for an
  initializer to be executed if initialization is needed. This is used
  for allocating the initial segment.
- Use free-standing functions for the allocation API (abolish `lock`,
  which is no longer relevant under the new model).
- Due to an unsoundness discovered by @nilset, we remove the correctness
  guarantee of the "separate deallocation" example.
- Remove microcaches. This mechanism is obsolete under the new model. In
  the future, it will likely be replaced by a small-allocation LL arena.
- Wrap TLS variables in a newtype that guarantees against cross-thread
  leakage.
- Update the tests.
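
Below is a minimal, self-contained sketch of the trait-based model referred to in the first bullet. It is illustrative only: alignment, real memory, and the thread-local plumbing are omitted, and the toy `alloc` method merely stands in for the shared bookkeeping logic. The actual types live in `src/allocator.rs` and `src/bookkeeper.rs` in the diff below.

```rust
use std::ops;

/// Stand-in for ralloc's `Block` (a pointer/size pair in the real crate).
#[derive(Debug)]
struct Block {
    size: usize,
}

/// The bookkeeper owns the block pool; the shared allocation logic lives in
/// the default methods of the `Allocator` trait below.
#[derive(Default)]
struct Bookkeeper {
    pool: Vec<Block>,
}

/// Allocators are newtypes dereferencing to `Bookkeeper`. They only override
/// how *fresh* memory is acquired (the "breaker").
trait Allocator: ops::DerefMut<Target = Bookkeeper> {
    /// Acquire fresh memory: SBRK for the global allocator, the global
    /// allocator for thread-local ones.
    fn alloc_fresh(&mut self, size: usize) -> Block;

    /// A provided method standing in for the shared bookkeeping logic
    /// (`alloc`, `free`, `realloc`, ... in the real crate).
    fn alloc(&mut self, size: usize) -> Block {
        // Reuse a pooled block if one fits; otherwise fall back to the breaker.
        if let Some(pos) = self.pool.iter().position(|b| b.size >= size) {
            self.pool.swap_remove(pos)
        } else {
            self.alloc_fresh(size)
        }
    }
}

/// A toy "global" allocator whose breaker simply fabricates blocks.
struct GlobalAllocator {
    inner: Bookkeeper,
}

impl ops::Deref for GlobalAllocator {
    type Target = Bookkeeper;
    fn deref(&self) -> &Bookkeeper {
        &self.inner
    }
}

impl ops::DerefMut for GlobalAllocator {
    fn deref_mut(&mut self) -> &mut Bookkeeper {
        &mut self.inner
    }
}

impl Allocator for GlobalAllocator {
    fn alloc_fresh(&mut self, size: usize) -> Block {
        Block { size }
    }
}

fn main() {
    let mut alloc = GlobalAllocator { inner: Bookkeeper::default() };
    let block = alloc.alloc(64);
    println!("Got {:?}", block);
}
```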
ticki committed 8 years ago (commit cdec4b0e08)

24 changed files with 806 additions and 834 deletions
  1. Cargo.toml (+7, -4)
  2. README.md (+20, -55)
  3. benches/no_lock.rs (+0, -21)
  4. shim/Cargo.toml (+4, -0)
  5. shim/src/lib.rs (+2, -2)
  6. src/allocator.rs (+251, -63)
  7. src/block.rs (+9, -7)
  8. src/bookkeeper.rs (+218, -405)
  9. src/brk.rs (+17, -2)
  10. src/cell.rs (+20, -90)
  11. src/fail.rs (+45, -3)
  12. src/lazy_init.rs (+81, -0)
  13. src/lib.rs (+13, -2)
  14. src/micro.rs (+0, -95)
  15. src/prelude.rs (+5, -1)
  16. src/ptr.rs (+6, -0)
  17. src/symbols.rs (+11, -4)
  18. src/sys.rs (+13, -3)
  19. src/tls.rs (+48, -33)
  20. src/vec.rs (+20, -21)
  21. tests/manual.rs (+5, -7)
  22. tests/partial_free.rs (+6, -10)
  23. tests/partial_realloc.rs (+4, -5)
  24. tests/util/mod.rs (+1, -1)

+ 7 - 4
Cargo.toml

@@ -1,6 +1,6 @@
 [package]
 name = "ralloc"
-version = "0.1.0"
+version = "1.0.0"
 authors = ["ticki <ticki@users.noreply.github.com>"]
 
 # URLs and paths
@@ -12,13 +12,16 @@ readme = "README.md"
 keywords = ["alloc", "malloc", "allocator", "ralloc", "redox"]
 license = "MIT"
 
-[dependencies]
-ralloc_shim = { path = "shim" }
+[dependencies.ralloc_shim]
+path = "shim"
 
 [dependencies.clippy]
-git = "https://github.com/Manishearth/rust-clippy.git"
+version = "0.0.80"
 optional = true
 
+[dependencies.unborrow]
+git = "https://github.com/durka/unborrow.git"
+
 [profile.release]
 panic = "abort"
 opt-level = 3

+ 20 - 55
README.md

@@ -40,6 +40,12 @@ Note that `ralloc` cannot coexist with another allocator, unless they're deliber
 
 ## Features
 
+### Thread-local allocation
+
+Ralloc makes use of a global-local model allowing one to allocate or deallocate
+without locks, synchronization, or atomic writes. This provides reasonable
+performance while preserving flexibility and the ability to multithread.
+
 ### Custom out-of-memory handlers
 
 You can set custom OOM handlers, by:
@@ -99,6 +105,8 @@ fn main() {
 
 ### Debug check: memory leaks.
 
+TODO Temporarily disabled.
+
 `ralloc` got memleak superpowers too! Enable `debug_tools` and do:
 
 ```rust
@@ -112,7 +120,7 @@ fn main() {
             // We start by allocating some stuff.
             let a = Box::new(500u32);
             // We then leak `a`.
-            let b = mem::forget(a);
+            mem::forget(a);
         }
         // The box is now leaked, and the destructor won't be called.
 
@@ -149,30 +157,6 @@ fn main() {
 }
 ```
 
-### Separate deallocation
-
-Another cool feature is that you can deallocate things that weren't even
-allocated buffers in the first place!
-
-Consider that you got a unused static variable, that you want to put into the
-allocation pool:
-
-```rust
-extern crate ralloc;
-
-static mut BUFFER: [u8; 256] = [2; 256];
-
-fn main() {
-    // Throw `BUFFER` into the memory pool.
-    unsafe {
-        ralloc::lock().free(&mut BUFFER as *mut u8, 256);
-    }
-
-    // Do some allocation.
-    assert_eq!(*Box::new(0xDEED), 0xDEED);
-}
-```
-
 ### Top notch security
 
 If you are willing to trade a little performance, for extra security you can
@@ -209,27 +193,6 @@ it is important that the code is reviewed and verified.
 6. Manual reviewing. One or more persons reviews patches to ensure high
    security.
 
-### Lock reuse
-
-Acquiring a lock sequentially multiple times can be expensive. Therefore,
-`ralloc` allows you to lock the allocator once, and reuse that:
-
-```rust
-extern crate ralloc;
-
-fn main() {
-    // Get that lock!
-    let lock = ralloc::lock();
-
-    // All in one:
-    let _ = lock.alloc(4, 2);
-    let _ = lock.alloc(4, 2);
-    let _ = lock.alloc(4, 2);
-
-    // The lock is automatically released through its destructor.
-}
-```
-
 ### Security through the type system
 
 `ralloc` makes heavy use of Rust's type system, to make safety guarantees.
@@ -246,21 +209,23 @@ This is just one of many examples.
 interface for platform dependent functions. An default implementation of
 `ralloc_shim` is provided (supporting Mac OS, Linux, and BSD).
 
-### Local allocators
+### Forcing inplace reallocation
+
+Inplace reallocation can be significantly faster than memcpy'ing reallocation.
+A limitation of libc is that you cannot do inplace-only reallocation (a
+fallible method that guarantees the absence of a memcpy of the buffer).
 
-`ralloc` allows you to create non-global allocators, for e.g. thread specific purposes:
+Having a fallible way to do inplace reallocation provides some interesting possibilities.
 
 ```rust
 extern crate ralloc;
 
 fn main() {
-    // We create an allocator.
-    let my_alloc = ralloc::Allocator::new();
+    let buf = ralloc::alloc(40, 1);
+    // Grow the buffer inplace to 45 bytes...
+    let ptr = unsafe { ralloc::inplace_realloc(buf, 40, 45).unwrap() };
 
-    // Allocate some stuff through our local allocator.
-    let _ = my_alloc.alloc(4, 2);
-    let _ = my_alloc.alloc(4, 2);
-    let _ = my_alloc.alloc(4, 2);
+    // The buffer is now 45 bytes long!
 }
 ```
 
@@ -315,7 +280,7 @@ especially true when dealing with very big allocation.
 extern crate ralloc;
 
 fn main() {
-    let buf = ralloc::lock().try_alloc(8, 4);
+    let buf = ralloc::try_alloc(8, 4);
     // `buf` is a Result: It is Err(()) if the allocation failed.
 }
 ```
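
As a rough usage sketch of the new lock-less, free-function API that replaces `ralloc::lock()` (function signatures taken from the `src/allocator.rs` diff further down; buffer sizes are arbitrary):

```rust
extern crate ralloc;

use std::ptr;

fn main() {
    // Allocate 64 bytes with an alignment of 8.
    let buf = ralloc::alloc(64, 8);

    unsafe {
        // Use the buffer.
        ptr::write_bytes(buf, 0, 64);

        // Grow it to 128 bytes; the buffer may be moved.
        let buf = ralloc::realloc(buf, 64, 128, 8);

        // Give it back. No lock guard is held at any point.
        ralloc::free(buf, 128);
    }
}
```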

+ 0 - 21
benches/no_lock.rs

@@ -1,21 +0,0 @@
-#![feature(test)]
-
-extern crate ralloc;
-extern crate test;
-
-use test::Bencher;
-
-#[bench]
-fn bench(b: &mut Bencher) {
-    b.iter(|| {
-        let mut lock = ralloc::lock();
-
-        for _ in 0..100000 {
-            let a = lock.alloc(200, 2);
-            unsafe {
-                let a = lock.realloc(a, 200, 300, 2);
-                lock.free(a, 300);
-            }
-        }
-    });
-}

+ 4 - 0
shim/Cargo.toml

@@ -11,3 +11,7 @@ rpath = false
 lto = true
 debug-assertions = false
 codegen-units = 1
+
+[dependencies.libc]
+version = "0.2"
+default-features = false

+ 2 - 2
shim/src/lib.rs

@@ -2,7 +2,7 @@
 //!
 //! This crate provides implementation/import of these in Linux, BSD, and Mac OS.
 
-#![feature(lang_items, linkage)]
+#![feature(linkage)]
 #![no_std]
 #![warn(missing_docs)]
 
@@ -12,7 +12,7 @@ pub use libc::sched_yield;
 
 extern {
     /// Change the data segment. See `man sbrk`.
-    pub fn sbrk(libc::intptr_t) -> *const libc::c_void;
+    pub fn sbrk(_: libc::intptr_t) -> *const libc::c_void;
 }
 
 /// Thread destructors for Linux.

+ 251 - 63
src/allocator.rs

@@ -4,98 +4,286 @@
 
 use prelude::*;
 
-use {sync, breaker};
-use bookkeeper::Bookkeeper;
+use core::{ops, mem, ptr};
+
+use {brk, tls, sys};
+use bookkeeper::{self, Bookkeeper, Allocator};
+use sync::Mutex;
 
 /// The global default allocator.
-static GLOBAL_ALLOCATOR: sync::Mutex<Allocator<breaker::Sbrk>> = sync::Mutex::new(Allocator::new());
+// TODO remove these filthy function pointers.
+static GLOBAL_ALLOCATOR: Mutex<LazyInit<fn() -> GlobalAllocator, GlobalAllocator>> =
+    Mutex::new(LazyInit::new(global_init));
 tls! {
     /// The thread-local allocator.
-    static ALLOCATOR: Option<UniCell<Allocator<breaker::Global>>> = None;
+    static THREAD_ALLOCATOR: MoveCell<LazyInit<fn() -> LocalAllocator, LocalAllocator>> =
+        MoveCell::new(LazyInit::new(local_init));
 }
 
-/// Get the allocator.
-#[inline]
-pub fn get() -> Result<Allocator<breaker::Global>, ()> {
-    if ALLOCATOR.is_none() {
-        // Create the new allocator.
-        let mut alloc = Allocator::new();
+/// Initialize the global allocator.
+fn global_init() -> GlobalAllocator {
+    // The initial acquired segment.
+    let (aligner, initial_segment, excessive) =
+        brk::get(bookkeeper::EXTRA_ELEMENTS * 4, mem::align_of::<Block>());
+
+    // Initialize the new allocator.
+    let mut res = GlobalAllocator {
+        inner: Bookkeeper::new(unsafe {
+            Vec::from_raw_parts(initial_segment, 0)
+        }),
+    };
+
+    // Free the secondary space.
+    res.push(aligner);
+    res.push(excessive);
+
+    res
+}
+
+/// Initialize the local allocator.
+fn local_init() -> LocalAllocator {
+    // The initial acquired segment.
+    let initial_segment = GLOBAL_ALLOCATOR
+        .lock()
+        .get()
+        .alloc(bookkeeper::EXTRA_ELEMENTS * 4, mem::align_of::<Block>());
+
+    unsafe {
+        // Initialize the new allocator.
+        let mut res = LocalAllocator {
+            inner: Bookkeeper::new(Vec::from_raw_parts(initial_segment, 0)),
+        };
         // Attach the allocator to the current thread.
-        alloc.attach();
+        res.attach();
 
-        // To get mutable access, we wrap it in an `UniCell`.
-        ALLOCATOR = Some(UniCell::new(alloc));
+        res
+    }
+}
 
-        &ALLOCATOR
+/// Temporarily get the allocator.
+///
+/// This is simply to avoid repeating ourselves, so we let this take care of the hairy stuff.
+fn get_allocator<T, F: FnOnce(&mut LocalAllocator) -> T>(f: F) -> T {
+    /// A dummy used as placeholder for the temporary initializer.
+    fn dummy() -> LocalAllocator {
+        unreachable!();
     }
+
+    // Get the thread allocator.
+    let thread_alloc = THREAD_ALLOCATOR.get();
+    // Just dump a placeholder initializer in place of the TLA.
+    let mut thread_alloc = thread_alloc.replace(LazyInit::new(dummy));
+
+    // Call the closure involved.
+    let res = f(thread_alloc.get());
+
+    // Put back the original allocator.
+    THREAD_ALLOCATOR.get().replace(thread_alloc);
+
+    res
 }
 
-/// An allocator.
+/// Derives `Deref` and `DerefMut` to the `inner` field.
+macro_rules! derive_deref {
+    ($imp:ty, $target:ty) => {
+        impl ops::Deref for $imp {
+            type Target = $target;
+
+            fn deref(&self) -> &$target {
+                &self.inner
+            }
+        }
+
+        impl ops::DerefMut for $imp {
+            fn deref_mut(&mut self) -> &mut $target {
+                &mut self.inner
+            }
+        }
+    };
+}
+
+/// Global SBRK-based allocator.
 ///
-/// This keeps metadata and relevant information about the allocated blocks. All allocation,
-/// deallocation, and reallocation happens through this.
-pub struct Allocator {
-    /// The inner bookkeeper.
+/// This will extend the data segment whenever new memory is needed. Since this includes leaving
+/// userspace, this shouldn't be used when other allocators are available (i.e. the bookkeeper is
+/// local).
+struct GlobalAllocator {
+    // The inner bookkeeper.
     inner: Bookkeeper,
 }
 
-impl Allocator {
-    /// Create a new, empty allocator.
+derive_deref!(GlobalAllocator, Bookkeeper);
+
+impl Allocator for GlobalAllocator {
     #[inline]
-    pub const fn new() -> Allocator {
-        Allocator {
-            inner: Bookkeeper::new(),
-        }
+    fn alloc_fresh(&mut self, size: usize, align: usize) -> Block {
+        // Obtain what you need.
+        let (alignment_block, res, excessive) = brk::get(size, align);
+
+        // Add it to the list. This will not change the order, since the pointer is higher than all
+        // the previous blocks (BRK extends the data segment). It is worth noting, though, that
+        // the stack is higher than the program break.
+        self.push(alignment_block);
+        self.push(excessive);
+
+        res
     }
+}
 
-    /// Allocate a block of memory.
-    ///
-    /// # Errors
+/// A local allocator.
+///
+/// This acquires memory from the upstream (global) allocator, which is protected by a `Mutex`.
+pub struct LocalAllocator {
+    // The inner bookkeeper.
+    inner: Bookkeeper,
+}
+
+derive_deref!(LocalAllocator, Bookkeeper);
+
+impl LocalAllocator {
+    /// Attach this allocator to the current thread.
     ///
-    /// The OOM handler handles out-of-memory conditions.
-    #[inline]
-    pub fn alloc(&mut self, size: usize, align: usize) -> *mut u8 {
-        *Pointer::from(self.inner.alloc(size, align))
+    /// This will make sure this allocator's data is freed to the global allocator when the thread exits.
+    pub unsafe fn attach(&mut self) {
+        extern fn dtor(ptr: *mut LocalAllocator) {
+            let alloc = unsafe { ptr::read(ptr) };
+
+            // Lock the global allocator.
+            // TODO dumb borrowck
+            let mut global_alloc = GLOBAL_ALLOCATOR.lock();
+            let global_alloc = global_alloc.get();
+
+            // Gotta' make sure no memleaks are here.
+            #[cfg(feature = "debug_tools")]
+            alloc.assert_no_leak();
+
+            // TODO: we know this is sorted, so we could abuse that fact for faster insertion into the
+            // global allocator.
+
+            alloc.inner.for_each(move |block| global_alloc.free(block));
+        }
+
+        sys::register_thread_destructor(self as *mut LocalAllocator, dtor).unwrap();
     }
+}
 
-    /// Free a buffer.
-    ///
-    /// Note that this do not have to be a buffer allocated through ralloc. The only requirement is
-    /// that it is not used after the free.
-    ///
-    /// # Errors
-    ///
-    /// The OOM handler handles out-of-memory conditions.
+impl Allocator for LocalAllocator {
     #[inline]
-    pub unsafe fn free(&mut self, ptr: *mut u8, size: usize) {
-        self.inner.free(Block::from_raw_parts(Pointer::new(ptr), size))
+    fn alloc_fresh(&mut self, size: usize, align: usize) -> Block {
+        /// Canonicalize the requested space.
+        ///
+        /// We request excessive space to the upstream allocator to avoid repeated requests and
+        /// lock contentions.
+        #[inline]
+        fn canonicalize_space(min: usize) -> usize {
+            // TODO tweak this.
+
+            // To avoid having mega-allocations allocate way too much space, we
+            // have a maximal extra space limit.
+            if min > 8192 { min } else {
+                // To avoid paying for short-lived or little-allocating threads, we have no minimum.
+                // Instead we multiply.
+                min * 4
+                // This won't overflow due to the condition of this branch.
+            }
+        }
+
+        // Get the block from the global allocator.
+        let (res, excessive) = GLOBAL_ALLOCATOR.lock()
+            .get()
+            .alloc(canonicalize_space(size), align)
+            .split(size);
+
+        // Free the excessive space to the current allocator. Note that you cannot simply push
+        // (which is the case for SBRK), due to the block not necessarily being above all the other
+        // blocks in the pool. For this reason, we let `free` handle the search and so on.
+        self.free(excessive);
+
+        res
     }
+}
 
-    /// Reallocate memory.
-    ///
-    /// Reallocate the buffer starting at `ptr` with size `old_size`, to a buffer starting at the
-    /// returned pointer with size `size`.
-    ///
-    /// # Errors
-    ///
-    /// The OOM handler handles out-of-memory conditions.
-    #[inline]
-    pub unsafe fn realloc(&mut self, ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
-        *Pointer::from(self.inner.realloc(
+/// Allocate a block of memory.
+///
+/// # Errors
+///
+/// The OOM handler handles out-of-memory conditions.
+#[inline]
+pub fn alloc(size: usize, align: usize) -> *mut u8 {
+    get_allocator(|alloc| {
+        *Pointer::from(alloc.alloc(size, align))
+    })
+}
+
+/// Free a buffer.
+///
+/// Note that this does not have to be a buffer allocated through ralloc. The only requirement is
+/// that it is not used after the free.
+///
+/// # Important!
+///
+/// You should only free buffers allocated through `ralloc`. Anything else is considered
+/// invalid.
+///
+/// # Errors
+///
+/// The OOM handler handles out-of-memory conditions.
+///
+/// # Safety
+///
+/// Rust assumes that the allocation symbols return correct values. For this reason, freeing
+/// invalid pointers might introduce memory unsafety.
+///
+/// Secondly, freeing a buffer that is still in use can introduce use-after-free.
+#[inline]
+pub unsafe fn free(ptr: *mut u8, size: usize) {
+    get_allocator(|alloc| {
+        alloc.free(Block::from_raw_parts(Pointer::new(ptr), size))
+    });
+}
+
+/// Reallocate memory.
+///
+/// Reallocate the buffer starting at `ptr` with size `old_size`, to a buffer starting at the
+/// returned pointer with size `size`.
+///
+/// # Important!
+///
+/// You should only reallocate buffers allocated through `ralloc`. Anything else is considered
+/// invalid.
+///
+/// # Errors
+///
+/// The OOM handler handles out-of-memory conditions.
+///
+/// # Safety
+///
+/// Due to being able to potentially memcpy an arbitrary buffer, as well as shrinking a buffer,
+/// this is marked unsafe.
+#[inline]
+pub unsafe fn realloc(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
+    get_allocator(|alloc| {
+        *Pointer::from(alloc.realloc(
             Block::from_raw_parts(Pointer::new(ptr), old_size),
             size,
             align
         ))
-    }
+    })
+}
 
-    /// Try to reallocate the buffer _inplace_.
-    ///
-    /// In case of success, return the new buffer's size. On failure, return the old size.
-    ///
-    /// This can be used to shrink (truncate) a buffer as well.
-    #[inline]
-    pub unsafe fn realloc_inplace(&mut self, ptr: *mut u8, old_size: usize, size: usize) -> Result<(), ()> {
-        if self.inner.realloc_inplace(
+/// Try to reallocate the buffer _inplace_.
+///
+/// In case of success, return the new buffer's size. On failure, return the old size.
+///
+/// This can be used to shrink (truncate) a buffer as well.
+///
+/// # Safety
+///
+/// Due to being able to shrink (and thus free) the buffer, this is marked unsafe.
+#[inline]
+pub unsafe fn realloc_inplace(ptr: *mut u8, old_size: usize, size: usize) -> Result<(), ()> {
+    get_allocator(|alloc| {
+        if alloc.realloc_inplace(
             Block::from_raw_parts(Pointer::new(ptr), old_size),
             size
         ).is_ok() {
@@ -103,5 +291,5 @@ impl Allocator {
         } else {
             Err(())
         }
-    }
+    })
 }

+ 9 - 7
src/block.rs

@@ -157,6 +157,7 @@ impl Block {
     /// This marks it as free, and returns the old value.
     #[inline]
     pub fn pop(&mut self) -> Block {
+        // TODO durka/unborrow#2 is blocking.
         let empty = Block::empty(self.ptr.clone());
         mem::replace(self, empty)
     }
@@ -267,13 +268,6 @@ impl fmt::Debug for Block {
     }
 }
 
-/// Make sure dropped blocks are empty.
-impl Drop for Block {
-    fn drop(&mut self) {
-        debug_assert!(self.is_empty(), "Dropping a non-empty block.");
-    }
-}
-
 #[cfg(test)]
 mod test {
     use prelude::*;
@@ -351,4 +345,12 @@ mod test {
         assert_eq!(*Pointer::from(block.empty_left()) as *const u8, arr.as_ptr());
         assert_eq!(block.empty_right(), block.split(arr.len()).1);
     }
+
+    #[test]
+    fn test_brk_grow_up() {
+        let brk1 = Block::brk(5);
+        let brk2 = Block::brk(100);
+
+        assert!(brk1 < brk2);
+    }
 }

+ 218 - 405
src/bookkeeper.rs

@@ -2,70 +2,212 @@
 
 use prelude::*;
 
-use brk;
-use vec::Vec;
-
 use core::marker::PhantomData;
 use core::ops::Range;
-use core::{ptr, cmp, mem};
+use core::{ptr, cmp, mem, ops};
+
+/// The number of extra elements required as capacity, beyond the length.
+///
+/// See guarantee 4.
+pub const EXTRA_ELEMENTS: usize = 2;
 
 /// The memory bookkeeper.
 ///
-/// This is the main component of ralloc. Its job is to keep track of the free blocks in a
-/// structured manner, such that allocation, reallocation, and deallocation are all efficient.
-/// Particularly, it keeps a list of blocks, commonly called the "block pool". This list is kept.
-/// Entries in the block pool can be "empty", meaning that you can overwrite the entry without
-/// breaking consistency.
+/// This stores data about the state of the allocator, and in particular, the free memory.
 ///
-/// Only making use of only [`alloc`](#method.alloc), [`free`](#method.free),
-/// [`realloc`](#method.realloc) (and following their respective assumptions) guarantee that no
-/// buffer overrun, arithmetic overflow, panic, or otherwise unexpected crash will happen.
-pub struct Bookkeeper<B> {
+/// The actual functionality is provided by [`Allocator`](./trait.Allocator.html).
+#[derive(Default)]
+pub struct Bookkeeper {
     /// The internal block pool.
     ///
-    /// # Guarantees
+    /// Entries in the block pool can be "empty", meaning that you can overwrite the entry without
+    /// breaking consistency.
     ///
-    /// Certain guarantees are made:
+    /// # Assumptions
+    ///
+    /// Certain assumptions are made:
     ///
     /// 1. The list is always sorted with respect to the block's pointers.
     /// 2. No two consecutive or empty block delimited blocks are adjacent, except if the right
     ///    block is empty.
     /// 3. There are no trailing empty blocks.
+    /// 4. The capacity is always at least two blocks more than the length (this is due to reallocation
+    ///    pushing at maximum two elements, so we reserve two extra to allow pushing one
+    ///    additional element without unbounded recursion).
     ///
-    /// These are invariants assuming that only the public methods are used.
+    /// These are **not** invariants: if these assumptions are not held, it will simply act strange
+    /// (e.g. logic bugs), but not cause memory unsafety.
     pool: Vec<Block>,
-    /// The number of bytes currently allocated.
-    #[cfg(feature = "debug_tools")]
-    allocated: usize,
-    /// The "breaker", i.e. the fresh allocator.
-    ///
-    /// This has as job as acquiring new memory through some external source (e.g. BRK or the
-    /// global allocator).
-    breaker: PhantomData<B>,
+    /// Is this currently reallocating?
+    reallocating: bool,
 }
 
-impl<B: Breaker> Bookkeeper<B> {
-    /// Create a new, empty block pool.
-    ///
-    /// This will make no allocations or BRKs.
-    #[inline]
-    #[cfg(feature = "debug_tools")]
-    pub const fn new() -> Bookkeeper {
+impl Bookkeeper {
+    /// Create a new bookkeeper with some initial vector.
+    pub fn new(vec: Vec<Block>) -> Bookkeeper {
+        // Make sure the assumptions are satisfied.
+        debug_assert!(vec.capacity() >= EXTRA_ELEMENTS, "Not enough initial capacity of the vector.");
+        debug_assert!(vec.is_empty(), "Initial vector isn't empty.");
+
         Bookkeeper {
-            pool: Vec::new(),
-            allocated: 0,
+            pool: vec,
+            // Be careful with this!
+            .. Bookkeeper::default()
         }
+    }
+
+    /// Perform a binary search to find the appropriate place where the block can be inserted or is
+    /// located.
+    ///
+    /// It is guaranteed that no block to the left of the returned index satisfies the above condition.
+    #[inline]
+    fn find(&mut self, block: &Block) -> usize {
+        // TODO optimize this function.
+        // Logging.
+        log!(self.pool, "Searching (exact) for {:?}.", block);
 
+        let ind = match self.pool.binary_search(block) {
+            Ok(x) | Err(x) => x,
+        };
+        let len = self.pool.len();
+
+        // Move left.
+        ind - self.pool.iter_mut()
+            .rev()
+            .skip(len - ind)
+            .take_while(|x| x.is_empty())
+            .count()
     }
 
+    /// Perform a binary search to find the appropriate bound where the block can be inserted or is
+    /// located.
+    ///
+    /// It is guaranteed that no block to the left of the returned bound satisfies the above condition.
     #[inline]
-    #[cfg(not(feature = "debug_tools"))]
-    pub const fn new() -> Bookkeeper {
-        Bookkeeper {
-            pool: Vec::new(),
+    fn find_bound(&mut self, block: &Block) -> Range<usize> {
+        // TODO optimize this function.
+        // Logging.
+        log!(self.pool, "Searching (bounds) for {:?}.", block);
+
+        let mut left_ind = match self.pool.binary_search(block) {
+            Ok(x) | Err(x) => x,
+        };
+
+        let len = self.pool.len();
+
+        // Move left.
+        left_ind -= self.pool.iter_mut()
+            .rev()
+            .skip(len - left_ind)
+            .take_while(|x| x.is_empty())
+            .count();
+
+        let mut right_ind = match self.pool.binary_search(&block.empty_right()) {
+            Ok(x) | Err(x) => x,
+        };
+
+        // Move right.
+        right_ind += self.pool.iter()
+            .skip(right_ind)
+            .take_while(|x| x.is_empty())
+            .count();
+
+        left_ind..right_ind
+    }
+
+    /// Go over every block in the allocator and call some function.
+    ///
+    /// Technically, this could be done through an iterator, but this less idiomatic way is
+    /// slightly faster in some cases.
+    pub fn for_each<F: FnMut(Block)>(mut self, mut f: F) {
+        // Run over all the blocks in the pool.
+        while let Some(i) = self.pool.pop() {
+            f(i);
         }
+
+        // Take the block holding the pool.
+        f(Block::from(self.pool));
     }
 
+    /// Perform consistency checks.
+    ///
+    /// This will check for the following conditions:
+    ///
+    /// 1. The list is sorted.
+    /// 2. No blocks are adjacent.
+    ///
+    /// This is NOOP in release mode.
+    fn check(&self) {
+        if cfg!(debug_assertions) {
+            // Logging.
+            log!(self.pool, "Checking...");
+
+            // Reverse iterator over the blocks.
+            let mut it = self.pool.iter().enumerate().rev();
+
+            if let Some((_, x)) = it.next() {
+                // Make sure there are no leading empty blocks.
+                assert!(!x.is_empty());
+
+                let mut next = x;
+                for (n, i) in it {
+                    // Check if sorted.
+                    assert!(next >= i, "The block pool is not sorted at index, {} ({:?} < {:?}).",
+                            n, next, i);
+                    // Make sure no blocks are adjacent.
+                    assert!(!i.left_to(next) || i.is_empty(), "Adjacent blocks at index, {} ({:?} and \
+                            {:?})", n, i, next);
+                    // Make sure an empty block has the same address as its right neighbor.
+                    assert!(!i.is_empty() || i == next, "Empty block not adjacent to right neighbor \
+                            at index {} ({:?} and {:?})", n, i, next);
+
+                    // Set the variable tracking the previous block.
+                    next = i;
+                }
+
+                // Check for trailing empty blocks.
+                assert!(!self.pool.last().unwrap().is_empty(), "Trailing empty blocks.");
+            }
+
+            // Logging...
+            log!(self.pool, "Check OK!");
+        }
+    }
+
+    /// Check for memory leaks.
+    ///
+    /// This will make sure that all the allocated blocks have been freed.
+    #[cfg(feature = "debug_tools")]
+    fn assert_no_leak(&self) {
+        assert!(self.allocated == self.pool.capacity() * mem::size_of::<Block>(), "Not all blocks \
+                freed. Total allocated space is {} ({} free blocks).", self.allocated,
+                self.pool.len());
+    }
+}
+
+/// An allocator.
+///
+/// This provides the functionality of the memory bookkeeper, requiring only provision of two
+/// methods, defining the "breaker" (fresh allocator). The core functionality is provided by
+/// default methods, which aren't generally made to be overwritten.
+///
+/// The reason why these methods aren't implemented directly on the bookkeeper is the distinction
+/// between different forms of allocators (global, local, and so on). Any newtype of
+/// [`Bookkeeper`](./struct.Bookkeeper.html) can implement it.
+///
+/// # Guarantees vs. assumptions
+///
+/// Please note that whenever a guarantee is mentioned, it relies on all the overwritten methods
+/// upholding the guarantees specified in the documentation.
+pub trait Allocator: ops::DerefMut<Target = Bookkeeper> {
+    /// Allocate _fresh_ space.
+    ///
+    /// "Fresh" means that the space is allocated through some breaker (be it SBRK or the global
+    /// allocator).
+    ///
+    /// The returned pointer is assumed to be aligned to `align`. If this is not held, all future
+    /// guarantees are invalid.
+    fn alloc_fresh(&mut self, size: usize, align: usize) -> Block;
 
     /// Allocate a chunk of memory.
     ///
@@ -109,7 +251,7 @@ impl<B: Breaker> Bookkeeper<B> {
     /// ```
     ///
     /// A block representing the marked area is then returned.
-    pub fn alloc(&mut self, size: usize, align: usize) -> Block {
+    fn alloc(&mut self, size: usize, align: usize) -> Block {
         // TODO: scan more intelligently.
 
         // Logging.
@@ -153,13 +295,10 @@ impl<B: Breaker> Bookkeeper<B> {
             debug_assert!(res.size() == size, "Requested space does not match with the returned \
                           block.");
 
-            self.leave(res)
+            res
         } else {
             // No fitting block found. Allocate a new block.
-            let res = self.alloc_fresh(size, align);
-
-            // "Leave" the allocator.
-            self.leave(res)
+            self.alloc_external(size, align)
         }
     }
 
@@ -206,14 +345,10 @@ impl<B: Breaker> Bookkeeper<B> {
     /// And we're done. If it cannot be done, we insert the block, while keeping the list sorted.
     /// See [`insert`](#method.insert) for details.
     #[inline]
-    pub fn free(&mut self, block: Block) {
+    fn free(&mut self, block: Block) {
         // Just logging for the unlucky people debugging this shit. No problem.
         log!(self.pool, "Freeing {:?}...", block);
 
-        // "Enter" the allocator.
-        let block = self.enter(block);
-        self.reserve_more(1);
-
         // Binary search for the block.
         let bound = self.find_bound(&block);
 
@@ -252,18 +387,16 @@ impl<B: Breaker> Bookkeeper<B> {
     /// space as free. If these conditions are not met, we have to allocate a new list, and then
     /// deallocate the old one, after which we use memmove to copy the data over to the newly
     /// allocated list.
-    pub fn realloc(&mut self, block: Block, new_size: usize, align: usize) -> Block {
+    fn realloc(&mut self, block: Block, new_size: usize, align: usize) -> Block {
         // Find the index bound.
         let ind = self.find_bound(&block);
 
         // Logging.
         log!(self.pool;ind, "Reallocating {:?} to size {} with align {}...", block, new_size, align);
 
-        // "Leave" the allocator.
-        let block = self.enter(block);
         // Try to do an inplace reallocation.
         match self.realloc_inplace_bound(ind, block, new_size) {
-            Ok(block) => self.leave(block),
+            Ok(block) => block,
             Err(block) => {
                 // Reallocation cannot be done inplace.
 
@@ -284,8 +417,7 @@ impl<B: Breaker> Bookkeeper<B> {
                 debug_assert!(res.size() >= new_size, "Requested space does not match with the \
                               returned block.");
 
-                // Leave the allocator.
-                self.leave(res)
+                res
             },
         }
     }
@@ -301,7 +433,7 @@ impl<B: Breaker> Bookkeeper<B> {
     /// search to find the blocks index. When you know the index use
     /// [`realloc_inplace_bound`](#method.realloc_inplace_bound.html).
     #[inline]
-    pub fn realloc_inplace(&mut self, block: Block, new_size: usize) -> Result<Block, Block> {
+    fn realloc_inplace(&mut self, block: Block, new_size: usize) -> Result<Block, Block> {
         // Logging.
         log!(self.pool, "Reallocating {:?} inplace to {}...", block, new_size);
 
@@ -367,7 +499,7 @@ impl<B: Breaker> Bookkeeper<B> {
                 let (res, excessive) = block.split(new_size);
                 // Remove_at may have shortened the vector.
                 if ind.start == self.pool.len() {
-                    self.push_no_reserve(excessive);
+                    self.push(excessive);
                 } else if !excessive.is_empty() {
                     self.pool[ind.start] = excessive;
                 }
@@ -402,7 +534,7 @@ impl<B: Breaker> Bookkeeper<B> {
         block.sec_zero();
 
         if ind.start == self.pool.len() {
-            self.push_no_reserve(block);
+            self.push(block);
             return;
         }
 
@@ -433,17 +565,17 @@ impl<B: Breaker> Bookkeeper<B> {
         self.check();
     }
 
-    /// Allocate _fresh_ space.
+    /// Allocate external ("fresh") space.
     ///
     /// "Fresh" means that the space is allocated through the breaker.
     ///
     /// The returned pointer is guaranteed to be aligned to `align`.
-    fn alloc_fresh(&mut self, size: usize, align: usize) -> Block {
+    fn alloc_external(&mut self, size: usize, align: usize) -> Block {
         // Logging.
         log!(self.pool, "Fresh allocation of size {} with alignment {}.", size, align);
 
         // Break it to me!
-        let res = B::alloc_fresh(size, align);
+        let res = self.alloc_fresh(size, align);
 
         // Check consistency.
         self.check();
@@ -451,31 +583,8 @@ impl<B: Breaker> Bookkeeper<B> {
         res
     }
 
-
-    /// Push two blocks to the block pool.
-    ///
-    /// This will append the blocks to the end of the block pool (and merge if possible). Make sure
-    /// that these blocks has a value higher than any of the elements in the list, to keep it
-    /// sorted.
-    ///
-    /// This guarantees linearity so that the blocks will be adjacent.
-    #[inline]
-    fn double_push(&mut self, block_a: Block, block_b: Block) {
-        // Logging.
-        log!(self.pool;self.pool.len(), "Pushing {:?} and {:?}.", block_a, block_b);
-
-        // Catch stupid bug...
-        debug_assert!(block_a <= block_b, "The first pushed block is not lower or equal to the second.");
-
-        // Reserve extra elements.
-        self.reserve_more(2);
-
-        self.push_no_reserve(block_a);
-        self.push_no_reserve(block_b);
-    }
-
     /// Push an element without reserving.
-    fn push_no_reserve(&mut self, mut block: Block) {
+    fn push(&mut self, mut block: Block) {
         // Logging.
         log!(self.pool;self.pool.len(), "Pushing {:?}.", block);
 
@@ -488,6 +597,10 @@ impl<B: Breaker> Bookkeeper<B> {
                 }
             }
 
+            // Reserve space.
+            unborrow!(self.reserve(self.pool.len() + 1));
+
+
             // Merging failed. Note that trailing empty blocks are not allowed, hence the last block is
             // the only non-empty candidate which may be adjacent to `block`.
 
@@ -498,73 +611,32 @@ impl<B: Breaker> Bookkeeper<B> {
             debug_assert!(res.is_ok(), "Push failed (buffer full).");
         }
 
+        // Check consistency.
         self.check();
     }
 
-    /// Perform a binary search to find the appropriate place where the block can be insert or is
-    /// located.
-    ///
-    /// It is guaranteed that no block left to the returned value, satisfy the above condition.
-    #[inline]
-    fn find(&mut self, block: &Block) -> usize {
-        // TODO optimize this function.
-        // Logging.
-        log!(self.pool, "Searching (exact) for {:?}.", block);
-
-        let ind = match self.pool.binary_search(block) {
-            Ok(x) | Err(x) => x,
-        };
-        let len = self.pool.len();
-
-        // Move left.
-        ind - self.pool.iter_mut()
-            .rev()
-            .skip(len - ind)
-            .take_while(|x| x.is_empty())
-            .count()
-    }
-
-    /// Perform a binary search to find the appropriate bound where the block can be insert or is
-    /// located.
-    ///
-    /// It is guaranteed that no block left to the returned value, satisfy the above condition.
-    #[inline]
-    fn find_bound(&mut self, block: &Block) -> Range<usize> {
-        // TODO optimize this function.
-        // Logging.
-        log!(self.pool, "Searching (bounds) for {:?}.", block);
+    /// Reserve some number of elements.
+    fn reserve(&mut self, min_cap: usize) {
+        if self.pool.capacity() < self.pool.len() + EXTRA_ELEMENTS || self.pool.capacity() < min_cap {
+            // One extra for the old buffer.
+            let new_cap = (min_cap + EXTRA_ELEMENTS) * 2 + 16 + 1;
 
-        let mut left_ind = match self.pool.binary_search(block) {
-            Ok(x) | Err(x) => x,
-        };
-
-        let len = self.pool.len();
-
-        // Move left.
-        left_ind -= self.pool.iter_mut()
-            .rev()
-            .skip(len - left_ind)
-            .take_while(|x| x.is_empty())
-            .count();
+            // Catch 'em all.
+            debug_assert!(new_cap > self.pool.capacity(), "Reserve shrinks?!");
 
-        let mut right_ind = match self.pool.binary_search(&block.empty_right()) {
-            Ok(x) | Err(x) => x,
-        };
-
-        // Move right.
-        right_ind += self.pool.iter()
-            .skip(right_ind)
-            .take_while(|x| x.is_empty())
-            .count();
+            // Break it to me!
+            let new_buf = self.alloc_external(new_cap * mem::size_of::<Block>(), mem::align_of::<Block>());
+            let old_buf = self.pool.refill(new_buf);
 
-        left_ind..right_ind
+            // Free the old buffer.
+            self.free(old_buf);
+        }
     }
 
     /// Insert a block entry at some index.
     ///
     /// If the space is non-empty, the elements will be pushed filling out the empty gaps to the
-    /// right. If all places to the right is occupied, it will reserve additional space to the
-    /// block pool.
+    /// right.
     ///
     /// # Panics
     ///
@@ -635,6 +707,9 @@ impl<B: Breaker> Bookkeeper<B> {
         debug_assert!(self.find(&block) == ind, "Block is not inserted at the appropriate index.");
         debug_assert!(!block.is_empty(), "Inserting an empty block.");
 
+        // Reserve space.
+        unborrow!(self.reserve(self.pool.len() + 1));
+
         // Find the next gap, where a used block were.
         let n = {
             // The element we search for.
@@ -647,9 +722,6 @@ impl<B: Breaker> Bookkeeper<B> {
                 .map(|(n, _)| n);
 
             elem.unwrap_or_else(|| {
-                // Reserve capacity.
-                self.reserve_more(1);
-
                 // We default to the end of the pool.
                 self.pool.len() - ind
             })
@@ -661,7 +733,7 @@ impl<B: Breaker> Bookkeeper<B> {
         unsafe {
             // TODO clean this mess up.
 
-            if ind + n == self.pool.len() {
+            {
                 // We will move a block into reserved memory but outside of the vec's bounds. For
                 // that reason, we push an uninitialized element to extend the length, which will
                 // be assigned in the memcpy.
@@ -671,12 +743,12 @@ impl<B: Breaker> Bookkeeper<B> {
                 debug_assert!(res.is_ok(), "Push failed (buffer full).");
             }
 
-            // Memmove the elements.
+            // Memmove the elements to make a gap to the new block.
             ptr::copy(self.pool.get_unchecked(ind) as *const Block,
                       self.pool.get_unchecked_mut(ind + 1) as *mut Block, n);
 
             // Set the element.
-            *self.pool.get_unchecked_mut(ind) = block;
+            ptr::write(self.pool.get_unchecked_mut(ind), block);
         }
 
         // Check consistency.
@@ -714,263 +786,4 @@ impl<B: Breaker> Bookkeeper<B> {
             res
         }
     }
-
-    /// Reserve space for the block pool.
-    ///
-    /// This will ensure the capacity is at least `needed` greater than the current length,
-    /// potentially reallocating the block pool.
-    fn reserve_more(&mut self, extra: usize) {
-        // Logging.
-        log!(bk.pool;bk.pool.len(), "Reserving {} past {}, currently has capacity {}.", extra,
-             bk.pool.len(), bk.pool.capacity());
-
-        let needed = bk.pool.len() + extra;
-        if needed > bk.pool.capacity() {
-            B::realloc_pool(self, needed);
-
-            // Check consistency.
-            bk.check();
-        }
-    }
-
-    /// Leave the allocator.
-    ///
-    /// A block should be "registered" through this function when it leaves the allocated (e.g., is
-    /// returned), these are used to keep track of the current heap usage, and memory leaks.
-    #[inline]
-    fn leave(&mut self, block: Block) -> Block {
-        // Update the number of bytes allocated.
-        #[cfg(feature = "debug_tools")]
-        {
-            self.allocated += block.size();
-        }
-
-        block
-    }
-
-    /// Enter the allocator.
-    ///
-    /// A block should be "registered" through this function when it enters the allocated (e.g., is
-    /// given as argument), these are used to keep track of the current heap usage, and memory
-    /// leaks.
-    #[inline]
-    fn enter(&mut self, block: Block) -> Block {
-        // Update the number of bytes allocated.
-        #[cfg(feature = "debug_tools")]
-        {
-            self.allocated -= block.size();
-        }
-
-        block
-    }
-
-    /// Perform consistency checks.
-    ///
-    /// This will check for the following conditions:
-    ///
-    /// 1. The list is sorted.
-    /// 2. No blocks are adjacent.
-    ///
-    /// This is NOOP in release mode.
-    fn check(&self) {
-        if cfg!(debug_assertions) {
-            // Logging.
-            log!(self.pool, "Checking...");
-
-            // Reverse iterator over the blocks.
-            let mut it = self.pool.iter().enumerate().rev();
-
-            if let Some((_, x)) = it.next() {
-                // Make sure there are no leading empty blocks.
-                assert!(!x.is_empty());
-
-                let mut next = x;
-                for (n, i) in it {
-                    // Check if sorted.
-                    assert!(next >= i, "The block pool is not sorted at index, {} ({:?} < {:?}).",
-                            n, next, i);
-                    // Make sure no blocks are adjacent.
-                    assert!(!i.left_to(next) || i.is_empty(), "Adjacent blocks at index, {} ({:?} and \
-                            {:?})", n, i, next);
-                    // Make sure an empty block has the same address as its right neighbor.
-                    assert!(!i.is_empty() || i == next, "Empty block not adjacent to right neighbor \
-                            at index {} ({:?} and {:?})", n, i, next);
-
-                    // Set the variable tracking the previous block.
-                    next = i;
-                }
-
-                // Check for trailing empty blocks.
-                assert!(!self.pool.last().unwrap().is_empty(), "Trailing empty blocks.");
-            }
-
-            // Logging...
-            log!(self.pool, "Check OK!");
-        }
-    }
-
-    /// Attach this allocator to the current thread.
-    ///
-    /// This will make sure this allocator's data  is freed to the
-    pub unsafe fn attach(&mut self) {
-        fn dtor(ptr: *mut Bookkeeper) {
-            let alloc = *ptr;
-
-            // Lock the global allocator.
-            let global_alloc = allocator::GLOBAL_ALLOCATOR.lock();
-
-            // TODO, we know this is sorted, so we could abuse that fact to faster insertion in the
-            // global allocator.
-
-            // Free everything in the allocator.
-            while let Some(i) = alloc.pool.pop() {
-                global_alloc.free(i);
-            }
-
-            // Deallocate the vector itself.
-            global_alloc.free(Block::from(alloc.pool));
-
-            // Gotta' make sure no memleaks are here.
-            #[cfg(feature = "debug_tools")]
-            alloc.assert_no_leak();
-        }
-
-        sys::register_thread_destructor(self as *mut Bookkeeper, dtor).unwrap();
-    }
-
-    /// Check for memory leaks.
-    ///
-    /// This will ake sure that all the allocated blocks have been freed.
-    #[cfg(feature = "debug_tools")]
-    fn assert_no_leak(&self) {
-        assert!(self.allocated == self.pool.capacity() * mem::size_of::<Block>(), "Not all blocks \
-                freed. Total allocated space is {} ({} free blocks).", self.allocated,
-                self.pool.len());
-    }
-}
-
-trait Breaker {
-    /// Allocate _fresh_ space.
-    ///
-    /// "Fresh" means that the space is allocated through the breaker.
-    ///
-    /// The returned pointer is guaranteed to be aligned to `align`.
-    fn alloc_fresh(bk: &mut Bookkeeper<Self>, size: usize, align: usize) -> Block;
-    /// Realloate the block pool to some specified capacity.
-    fn realloc_pool(bk: &mut Bookkeeper<Self>, cap: usize);
-}
-
-/// SBRK fresh allocator.
-///
-/// This will extend the data segment whenever new memory is needed. Since this includes leaving
-/// userspace, this shouldn't be used when other allocators are available (i.e. the bookkeeper is
-/// local).
-struct Sbrk;
-
-impl Breaker for Sbrk {
-    #[inline]
-    fn alloc_fresh(bk: &mut Bookkeeper<Sbrk>, size: usize, align: usize) -> Block {
-        // Obtain what you need.
-        let (alignment_block, res, excessive) = brk::get(size, align);
-
-        // Add it to the list. This will not change the order, since the pointer is higher than all
-        // the previous blocks.
-        bk.double_push(alignment_block, excessive);
-
-        res
-    }
-
-    #[inline]
-    fn realloc_pool(bk: &mut Bookkeeper<Sbrk>, extra: usize) {
-        // TODO allow BRK-free non-inplace reservations.
-        // TODO Enable inplace reallocation in this position.
-
-        // Reallocate the block pool.
-
-        // Make a fresh allocation.
-        let size = (needed +
-            cmp::min(bk.pool.capacity(), 200 + bk.pool.capacity() / 2)
-            // We add:
-            + 1 // block for the alignment block.
-            + 1 // block for the freed vector.
-            + 1 // block for the excessive space.
-        ) * mem::size_of::<Block>();
-        let (alignment_block, alloc, excessive) = brk::get(size, mem::align_of::<Block>());
-
-        // Refill the pool.
-        let old = bk.pool.refill(alloc);
-
-        // Double push the alignment block and the excessive space linearly (note that it is in
-        // fact in the end of the pool, due to BRK _extending_ the segment).
-        bk.double_push(alignment_block, excessive);
-
-        // Free the old vector.
-        bk.free(old);
-    }
-}
-
-/// Allocate fresh memory from the global allocator.
-struct GlobalAllocator;
-
-impl Breaker for GlobalAllocator {
-    #[inline]
-    fn alloc_fresh(bk: &mut Bookkeeper<GlobalAllocator>, size: usize, align: usize) -> Block {
-        /// Canonicalize the requested space.
-        ///
-        /// We request excessive space to the upstream allocator to avoid repeated requests and
-        /// lock contentions.
-        #[inline]
-        fn canonicalize_space(min: usize) -> usize {
-            // TODO tweak this.
-
-            // To avoid having mega-allocations allocate way to much space, we
-            // have a maximal extra space limit.
-            if min > 8192 { min } else {
-                // To avoid paying for short-living or little-allocating threads, we have no minimum.
-                // Instead we multiply.
-                min * 4
-                // This won't overflow due to the conditition of this branch.
-            }
-        }
-
-        // Get the block from the global allocator.
-        let (res, excessive) = allocator::GLOBAL_ALLOCATOR.lock()
-            .alloc(canonicalize_space(size), align)
-            .split(size);
-
-        // Free the excessive space to the current allocator. Note that you cannot simply push
-        // (which is the case for SBRK), due the block not necessarily being above all the other
-        // blocks in the pool. For this reason, we let `free` handle the search and so on.
-        bk.free(excessive);
-
-        res
-    }
-
-    #[inline]
-    fn realloc_pool(bk: &mut Bookkeeper<GlobalAllocator>, extra: usize) {
-        // TODO allow BRK-free non-inplace reservations.
-        // TODO Enable inplace reallocation in this position.
-
-        // Reallocate the block pool.
-
-        // Make a fresh allocation.
-        let size = (needed +
-            cmp::min(bk.pool.capacity(), 200 + bk.pool.capacity() / 2)
-            // We add:
-            + 1 // block for the alignment block.
-            + 1 // block for the freed vector.
-            + 1 // block for the excessive space.
-        ) * mem::size_of::<Block>();
-        let (alignment_block, alloc, excessive) = brk::get(size, mem::align_of::<Block>());
-
-        // Refill the pool.
-        let old = bk.pool.refill(alloc);
-
-        // Double push the alignment block and the excessive space linearly (note that it is in
-        // fact in the end of the pool, due to BRK _extending_ the segment).
-        bk.double_push(alignment_block, excessive);
-
-        // Free the old vector.
-        bk.free(old);
-    }
 }

+ 17 - 2
src/brk.rs

@@ -1,5 +1,7 @@
 use prelude::*;
 
+use core::cmp;
+
 /// Canonicalize a BRK request.
 ///
 /// Syscalls can be expensive, which is why we would rather accquire more memory than necessary,
@@ -36,9 +38,9 @@ fn canonicalize_space(min: usize) -> usize {
 /// The first block represents the aligner segment (that is the precursor aligning the middle
 /// block to `align`), the second one is the result and is of exactly size `size`. The last
 /// block is the excessive space.
-fn get(size: usize, align: usize) -> (Block, Block, Block) {
+pub fn get(size: usize, align: usize) -> (Block, Block, Block) {
     // Calculate the canonical size (extra space is allocated to limit the number of system calls).
-    let brk_size = canonicalize_brk(size) + align;
+    let brk_size = canonicalize_space(size) + align;
 
     // Use SBRK to allocate extra data segment. The alignment is used as precursor for our
     // allocated block. This ensures that it is properly memory aligned to the requested value.
@@ -53,3 +55,16 @@ fn get(size: usize, align: usize) -> (Block, Block, Block) {
 
     (alignment_block, res, excessive)
 }
+
+#[cfg(test)]
+mod test {
+    use super::*;
+
+    #[test]
+    fn test_ordered() {
+        let brk = get(20, 1);
+
+        assert!(brk.0 < brk.1);
+        assert!(brk.1 < brk.2);
+    }
+}

+ 20 - 90
src/cell.rs

@@ -1,81 +1,31 @@
-use core::cell::{UnsafeCell, Cell};
-use core::ops;
+use prelude::*;
 
-/// An "uni-cell".
+use core::cell::UnsafeCell;
+use core::mem;
+
+/// A move cell.
 ///
-/// This is a mutually exclusive container, essentially acting as a single-threaded mutex.
-pub struct UniCell<T> {
+/// This allows you to take ownership and replace the internal data with a new value. The
+/// functionality is similar to the one provided by [RFC #1659](https://github.com/rust-lang/rfcs/pull/1659).
+// TODO use that rfc ^
+pub struct MoveCell<T> {
     /// The inner data.
     inner: UnsafeCell<T>,
-    /// Is this data currently used?
-    used: Cell<bool>,
 }
 
-impl<T> UniCell<T> {
-    /// Create a new uni-cell with some inner data.
+impl<T> MoveCell<T> {
+    /// Create a new cell with some inner data.
     #[inline]
-    pub const fn new(data: T) -> UniCell<T> {
-        UniCell {
+    pub const fn new(data: T) -> MoveCell<T> {
+        MoveCell {
             inner: UnsafeCell::new(data),
-            used: Cell::new(false),
         }
     }
 
-    /// Get an reference to the inner data.
-    ///
-    /// This will return `Err(())` if the data is currently in use.
+    /// Replace the inner data and return the old.
     #[inline]
-    pub fn get(&self) -> Result<Ref<T>, ()> {
-        if self.used.get() {
-            None
-        } else {
-            // Mark it as used.
-            self.used.set(true);
-
-            Some(Ref {
-                cell: self,
-            })
-        }
-    }
-
-    /// Get the inner and mark the cell used forever.
-    pub fn into_inner(&self) -> Option<T> {
-        if self.used.get() {
-            None
-        } else {
-            // Mark it as used forever.
-            self.used.set(true);
-
-            Some(ptr::read(self.inner.get()))
-        }
-    }
-}
-
-/// An reference to the inner value of an uni-cell.
-pub struct Ref<T> {
-    cell: UniCell<T>,
-}
-
-impl<T> ops::Deref for Ref<T> {
-    type Target = T;
-
-    #[inline]
-    fn deref(&self) -> &T {
-        &*self.cell.inner.get()
-    }
-}
-
-impl<T> ops::DerefMut for Ref<T> {
-    #[inline]
-    fn deref_mut(&mut self) -> &mut T {
-        &mut *self.cell.inner.get()
-    }
-}
-
-impl<T> Drop for Ref<T> {
-    #[inline]
-    fn drop(&mut self) {
-        self.cell.used.set(false);
+    pub fn replace(&self, new: T) -> T {
+        mem::replace(unsafe { &mut *self.inner.get() }, new)
     }
 }
 
@@ -84,29 +34,9 @@ mod test {
     use super::*;
 
     #[test]
-    fn test_inner() {
-        assert_eq!(UniCell::new(101).get(), Ok(101));
-        assert_eq!(UniCell::new("heh").get(), Ok("heh"));
-    }
-
-    #[test]
-    fn test_double_get() {
-        let cell = UniCell::new(500);
-
-        assert_eq!(*cell.get().unwrap(), 500);
-
-        {
-            let tmp = cell.get();
-            assert!(cell.get().is_err());
-            {
-                let tmp = cell.get();
-                assert!(cell.get().is_err());
-            }
-            *tmp.unwrap() = 201;
-        }
-
-        assert_eq!(*cell.get().unwrap(), 201);
-        *cell.get().unwrap() = 100;
-        assert_eq!(*cell.get().unwrap(), 100);
+    fn test_cell() {
+        let cell = MoveCell::new(200);
+        assert_eq!(cell.replace(300), 200);
+        assert_eq!(cell.replace(4), 300);
     }
 }

+ 45 - 3
src/fail.rs

@@ -1,13 +1,17 @@
 //! General error handling.
 
+use prelude::*;
+
 use core::sync::atomic::{self, AtomicPtr};
 use core::{mem, intrinsics};
 
+use tls;
+
 /// The global OOM handler.
 static OOM_HANDLER: AtomicPtr<()> = AtomicPtr::new(default_oom_handler as *mut ());
 tls! {
     /// The thread-local OOM handler.
-    static THREAD_OOM_HANDLER: Option<fn() -> !> = None;
+    static THREAD_OOM_HANDLER: MoveCell<Option<fn() -> !>> = MoveCell::new(None);
 }
 
 /// The default OOM handler.
@@ -33,7 +37,7 @@ fn default_oom_handler() -> ! {
 /// The rule of thumb is that this should be called, if and only if unwinding (which allocates)
 /// will hit the same error.
 pub fn oom() -> ! {
-    if let Some(handler) = THREAD_OOM_HANDLER.get().unwrap() {
+    if let Some(handler) = THREAD_OOM_HANDLER.get().replace(None) {
         // There is a local allocator available.
         handler();
     } else {
@@ -53,7 +57,45 @@ pub fn set_oom_handler(handler: fn() -> !) {
 }
 
 /// Override the OOM handler for the current thread.
+///
+/// # Panics
+///
+/// This might panic if a thread OOM handler already exists.
 #[inline]
 pub fn set_thread_oom_handler(handler: fn() -> !) {
-    *THREAD_OOM_HANDLER.get().unwrap() = handler;
+    let mut thread_alloc = THREAD_OOM_HANDLER.get();
+    let out = thread_alloc.replace(Some(handler));
+
+    debug_assert!(out.is_none());
+}
+
+#[cfg(test)]
+mod test {
+    use super::*;
+
+    #[test]
+    #[should_panic]
+    fn test_panic_oom() {
+        fn panic() -> ! {
+            panic!("cats are not cute.");
+        }
+
+        set_oom_handler(panic);
+        oom();
+    }
+
+    #[test]
+    #[should_panic]
+    fn test_panic_thread_oom() {
+        fn infinite() -> ! {
+            loop {}
+        }
+        fn panic() -> ! {
+            panic!("cats are not cute.");
+        }
+
+        set_oom_handler(infinite);
+        set_thread_oom_handler(panic);
+        oom();
+    }
 }

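With the reworked `fail.rs`, `oom` first takes the thread-local handler out of its cell (via `replace(None)`) and only falls back to the global handler if none is set. A rough usage sketch of the re-exported `set_thread_oom_handler`; the handler itself is hypothetical:

```rust
extern crate ralloc;

use std::process;

fn thread_oom() -> ! {
    // Hypothetical handler: exit without unwinding, since unwinding allocates.
    process::exit(1)
}

fn main() {
    // Only affects the calling thread; other threads still hit the handler
    // installed with `ralloc::set_oom_handler`.
    ralloc::set_thread_oom_handler(thread_oom);
}
```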
+ 81 - 0
src/lazy_init.rs

@@ -0,0 +1,81 @@
+//! `lazy_static`-like initialization.
+
+use core::{mem, ptr, intrinsics};
+
+/// The initialization state.
+enum State<F, T> {
+    /// The data is uninitialized, initialization is pending.
+    ///
+    /// The inner closure contains the initialization function.
+    Uninitialized(F),
+    /// The data is initialized, and ready for use.
+    Initialized(T),
+}
+
+/// A lazily initialized container.
+pub struct LazyInit<F, T> {
+    /// The internal state.
+    state: State<F, T>,
+}
+
+impl<F: FnMut() -> T, T> LazyInit<F, T> {
+    /// Create a new to-be-initialized container.
+    ///
+    /// The closure will be executed when initialization is required.
+    #[inline]
+    pub const fn new(init: F) -> LazyInit<F, T> {
+        LazyInit {
+            state: State::Uninitialized(init),
+        }
+    }
+
+    /// Get a mutable reference to the inner value.
+    ///
+    /// If it is uninitialized, it will be initialized and then returned.
+    #[inline]
+    pub fn get(&mut self) -> &mut T {
+        let mut inner;
+
+        let res = match self.state {
+            State::Initialized(ref mut x) => {
+                return x;
+            },
+            State::Uninitialized(ref mut f) => {
+                inner = f();
+            },
+        };
+
+        self.state = State::Initialized(inner);
+
+        if let State::Initialized(ref mut x) = self.state {
+            x
+        } else {
+            // TODO find a better way.
+            unreachable!();
+        }
+    }
+}
+
+#[cfg(test)]
+mod test {
+    use super::*;
+
+    use core::cell::Cell;
+
+    #[test]
+    fn test_init() {
+        let mut lazy = LazyInit::new(|| 300);
+
+        assert_eq!(*lazy.get(), 300);
+        *lazy.get() = 400;
+        assert_eq!(*lazy.get(), 400);
+    }
+
+    #[test]
+    fn test_laziness() {
+        let mut is_called = Cell::new(false);
+        let mut lazy = LazyInit::new(|| is_called.set(true));
+        assert!(!is_called.get());
+        lazy.get();
+        assert!(is_called.get());
+    }
+}

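A short sketch of how `LazyInit` is used: the closure runs on the first `get` and the result is cached thereafter. The initializer below is a hypothetical stand-in for acquiring the initial segment:

```rust
fn acquire_segment() -> u64 {
    // Stand-in for the real, expensive initialization work.
    42
}

fn demo() {
    let mut lazy = LazyInit::new(acquire_segment);

    // `acquire_segment` has not run yet; the first `get` triggers it.
    assert_eq!(*lazy.get(), 42);

    // Later calls reuse the stored value without re-running the initializer.
    *lazy.get() += 1;
    assert_eq!(*lazy.get(), 43);
}
```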
+ 13 - 2
src/lib.rs

@@ -2,6 +2,12 @@
 //!
 //! This crates define the user space allocator for Redox, which emphasizes performance and memory
 //! efficiency.
+//!
+//! # Ralloc seems to reimplement everything. Why?
+//!
+//! Memory allocators cannot depend on libraries or primitives that allocate. This is a
+//! relatively strong restriction, which means that you are forced to rewrite primitives and make
+//! sure no allocation ever happens.
 
 #![cfg_attr(feature = "allocator", allocator)]
 #![cfg_attr(feature = "clippy", feature(plugin))]
@@ -10,12 +16,16 @@
 #![no_std]
 
 #![feature(allocator, const_fn, core_intrinsics, stmt_expr_attributes, drop_types_in_const,
-           nonzero, optin_builtin_traits, type_ascription, question_mark, try_from)]
+           nonzero, optin_builtin_traits, type_ascription, question_mark, try_from, thread_local,
+           linkage)]
 #![warn(missing_docs, cast_precision_loss, cast_sign_loss, cast_possible_wrap,
         cast_possible_truncation, filter_map, if_not_else, items_after_statements,
         invalid_upcast_comparisons, mutex_integer, nonminimal_bool, shadow_same, shadow_unrelated,
         single_match_else, string_add, string_add_assign, wrong_pub_self_convention)]
 
+#[macro_use]
+extern crate unborrow;
+
 #[cfg(feature = "libc_write")]
 #[macro_use]
 mod write;
@@ -32,6 +42,7 @@ mod bookkeeper;
 mod brk;
 mod cell;
 mod fail;
+mod lazy_init;
 mod leak;
 mod prelude;
 mod ptr;
@@ -39,6 +50,6 @@ mod sync;
 mod sys;
 mod vec;
 
-pub use allocator::{lock, Allocator};
+pub use allocator::{alloc, free, realloc, realloc_inplace};
 pub use fail::{set_oom_handler, set_thread_oom_handler};
 pub use sys::sbrk;

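With `lock` gone, allocation goes through free functions at the crate root. A minimal sketch using the signatures exercised by the updated tests below; the sizes and alignments are arbitrary:

```rust
extern crate ralloc;

fn main() {
    // `alloc(size, align)` returns a raw pointer to uninitialized memory.
    let ptr = ralloc::alloc(64, 8);

    unsafe {
        *ptr = 1;

        // `realloc(ptr, old_size, new_size, align)` may move the allocation.
        let ptr = ralloc::realloc(ptr, 64, 128, 8);

        // `free(ptr, size)` must be passed the current size of the allocation.
        ralloc::free(ptr, 128);
    }
}
```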
+ 0 - 95
src/micro.rs

@@ -1,95 +0,0 @@
-//! Micro slots for caching small allocations.
-
-// TODO needs tests and documentation.
-
-use prelude::*;
-
-use core::{marker, mem};
-
-const CACHE_LINE_SIZE: usize = 128;
-const CACHE_LINES: usize = 32;
-
-/// A "microcache".
-///
-/// A microcache consists of some number of equal sized slots, whose state is stored as bitflags.
-pub struct MicroCache {
-    free: u32,
-    lines: [CacheLine; CACHE_LINES],
-}
-
-impl MicroCache {
-    pub const fn new() -> MicroCache {
-        MicroCache {
-            free: !0,
-            lines: [CacheLine::new(); CACHE_LINES],
-        }
-    }
-
-    pub fn alloc(&mut self, size: usize, align: usize) -> Result<Block, ()> {
-        if size <= CACHE_LINE_SIZE && self.free != 0 {
-            let ind = self.free.trailing_zeros();
-            let line = &mut self.lines[ind as usize];
-            let res = unsafe { line.take(size) };
-
-            if res.aligned_to(align) {
-                self.free ^= 1u32.wrapping_shl(ind);
-
-                return Ok(res);
-            } else {
-                line.reset();
-            }
-        }
-
-        Err(())
-    }
-
-    pub fn free(&mut self, mut block: Block) -> Result<(), Block> {
-        let res = block.pop();
-        let ptr: Pointer<u8> = block.into();
-        let ind = (*ptr as usize - &self.lines as *const CacheLine as usize) / mem::size_of::<Block>();
-
-        if let Some(line) = self.lines.get_mut(ind) {
-            line.used -= res.size();
-            if line.used == 0 {
-                debug_assert!(self.free & 1u32.wrapping_shl(ind as u32) == 0, "Freeing a block \
-                              already marked as free.");
-                self.free ^= 1u32.wrapping_shl(ind as u32);
-            }
-
-            Ok(())
-        } else {
-            Err(res)
-        }
-    }
-}
-
-#[derive(Clone, Copy)]
-struct CacheLine {
-    /// The cache line's data.
-    ///
-    /// We use `u32` as a hack to be able to derive `Copy`.
-    data: [u32; CACHE_LINE_SIZE / 4],
-    used: usize,
-    _static: marker::PhantomData<&'static mut [u8]>,
-}
-
-impl CacheLine {
-    pub const fn new() -> CacheLine {
-        CacheLine {
-            data: [0; CACHE_LINE_SIZE / 4],
-            used: 0,
-            _static: marker::PhantomData,
-        }
-    }
-
-    fn reset(&mut self) {
-        self.used = 0;
-    }
-
-    unsafe fn take(&mut self, size: usize) -> Block {
-        debug_assert!(self.used == 0, "Block not freed!");
-
-        self.used = size;
-        Block::from_raw_parts(Pointer::new(&mut self.data[0] as *mut u32 as *mut u8), size)
-    }
-}

+ 5 - 1
src/prelude.rs

@@ -1,6 +1,10 @@
 //! Frequently used imports.
 
+// TODO remove all this?
+
 pub use block::Block;
-pub use cell::UniCell;
+pub use cell::MoveCell;
+pub use lazy_init::LazyInit;
 pub use leak::Leak;
 pub use ptr::Pointer;
+pub use vec::Vec;

+ 6 - 0
src/ptr.rs

@@ -70,6 +70,12 @@ impl<T> Pointer<T> {
     }
 }
 
+impl<T> Default for Pointer<T> {
+    fn default() -> Pointer<T> {
+        Pointer::empty()
+    }
+}
+
 unsafe impl<T: Send> Send for Pointer<T> {}
 unsafe impl<T: Sync> Sync for Pointer<T> {}
 

+ 11 - 4
src/symbols.rs

@@ -1,31 +1,37 @@
 //! Rust allocation symbols.
 
+use allocator;
+
 /// Rust allocation symbol.
+#[linkage = "external"]
 #[no_mangle]
 #[inline]
 pub extern fn __rust_allocate(size: usize, align: usize) -> *mut u8 {
-    lock().alloc(size, align)
+    allocator::alloc(size, align)
 }
 
 /// Rust deallocation symbol.
+#[linkage = "external"]
 #[no_mangle]
 #[inline]
 pub unsafe extern fn __rust_deallocate(ptr: *mut u8, size: usize, _align: usize) {
-    lock().free(ptr, size);
+    allocator::free(ptr, size);
 }
 
 /// Rust reallocation symbol.
+#[linkage = "external"]
 #[no_mangle]
 #[inline]
 pub unsafe extern fn __rust_reallocate(ptr: *mut u8, old_size: usize, size: usize, align: usize) -> *mut u8 {
-    lock().realloc(ptr, old_size, size, align)
+    allocator::realloc(ptr, old_size, size, align)
 }
 
 /// Rust reallocation inplace symbol.
+#[linkage = "external"]
 #[no_mangle]
 #[inline]
 pub unsafe extern fn __rust_reallocate_inplace(ptr: *mut u8, old_size: usize, size: usize, _align: usize) -> usize {
-    if lock().realloc_inplace(ptr, old_size, size).is_ok() {
+    if allocator::realloc_inplace(ptr, old_size, size).is_ok() {
         size
     } else {
         old_size
@@ -33,6 +39,7 @@ pub unsafe extern fn __rust_reallocate_inplace(ptr: *mut u8, old_size: usize, si
 }
 
 /// Get the usable size of the some number of bytes of allocated memory.
+#[linkage = "external"]
 #[no_mangle]
 #[inline]
 pub extern fn __rust_usable_size(size: usize, _align: usize) -> usize {

+ 13 - 3
src/sys.rs

@@ -2,6 +2,8 @@
 
 extern crate ralloc_shim as shim;
 
+use core::mem;
+
 #[cfg(not(feature = "unsafe_no_brk_lock"))]
 use sync;
 
@@ -29,7 +31,7 @@ pub unsafe fn sbrk(n: isize) -> Result<*mut u8, ()> {
     if brk as usize == !0 {
         Err(())
     } else {
-        Ok(brk)
+        Ok(brk as *mut u8)
     }
 }
 
@@ -42,11 +44,19 @@ pub fn yield_now() {
 ///
 /// This will add a thread destructor to _the current thread_, which will be executed when the
 /// thread exits.
+///
+/// The argument to the destructor is a pointer to the so-called "load", which is the data
+/// shipped with the destructor.
 // TODO I haven't figured out a safe general solution yet. Libstd relies on devirtualization,
 // which, when missed, can make it quite expensive.
-pub fn register_thread_destructor<T>(primitive: *mut T, dtor: fn(*mut T)) -> Result<(), ()> {
+pub fn register_thread_destructor<T>(load: *mut T, dtor: extern fn(*mut T)) -> Result<(), ()> {
+    // Check if thread dtors are supported.
     if shim::thread_destructor::is_supported() {
-        shim::thread_destructor::register(primitive, dtor);
+        unsafe {
+            // This is safe due to sharing memory layout.
+            shim::thread_destructor::register(load as *mut u8, mem::transmute(dtor));
+        }
+
         Ok(())
     } else {
         Err(())

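The destructor is now an `extern fn` receiving the "load" pointer. A rough sketch of registering one from inside the crate; the flag and destructor are hypothetical:

```rust
static mut FLAG: u8 = 0;

extern fn on_thread_exit(load: *mut u8) {
    // Runs when the current thread exits; `load` is the data shipped along.
    unsafe { *load = 1; }
}

fn demo() {
    // `Err(())` is returned if the shim reports thread destructors as unsupported.
    let _ = sys::register_thread_destructor(unsafe { &mut FLAG as *mut u8 }, on_thread_exit);
}
```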
+ 48 - 33
src/tls.rs

@@ -1,46 +1,63 @@
+use prelude::*;
+
 use core::{ops, marker};
 
-/// Add `Sync` to an arbitrary type.
-///
-/// This primitive is used to get around the `Sync` requirement in `static`s (even thread local
-/// ones! see rust-lang/rust#35035). Due to breaking invariants, creating a value of such type is
-/// unsafe, and care must be taken upon usage.
-///
-/// In general, this should only be used when you know it won't be shared across threads (e.g. the
-/// value is stored in a thread local variable).
-pub struct Syncify<T>(T);
+/// A thread-local container.
+pub struct Cell<T> {
+    /// The inner data.
+    inner: T,
+}
 
-impl<T> Syncify<T> {
-    /// Create a new `Syncify` wrapper.
+impl<T> Cell<T> {
+    /// Create a new `Cell` wrapper.
     ///
     /// # Safety
     ///
-    /// This is invariant-breaking and thus unsafe.
-    const unsafe fn new(inner: T) -> Syncify<T> {
-        Syncify(T)
+    /// This is invariant-breaking (assumes thread-safety) and thus unsafe.
+    pub const unsafe fn new(inner: T) -> Cell<T> {
+        Cell { inner: inner }
+    }
+
+    /// Get a reference to the inner value.
+    ///
+    /// Due to the variable being thread-local, one should never transfer it across thread
+    /// boundaries. The newtype returned ensures that.
+    pub fn get(&'static self) -> Ref<T> {
+        Ref::new(&self.inner)
     }
 }
 
-impl<T> ops::Deref for Syncify<T> {
-    type Target = T;
+unsafe impl<T> marker::Sync for Cell<T> {}
 
-    fn deref(&self) -> Syncify<T> {
-        &self.0
+/// A reference to a thread-local variable.
+///
+/// The purpose of this is to block sending it across thread boundaries.
+pub struct Ref<T: 'static> {
+    inner: &'static T,
+}
+
+impl<T> Ref<T> {
+    /// Create a new thread-bounded reference.
+    ///
+    /// One might wonder why this is safe, and the answer is simple: this type doesn't guarantee
+    /// that the internal pointer is from the current thread; it just guarantees that _future
+    /// access_ through this struct is done in the current thread.
+    pub fn new(x: &'static T) -> Ref<T> {
+        Ref {
+            inner: x,
+        }
     }
 }
 
-impl<T> ops::DerefMut for Syncify<T> {
-    fn deref_mut(&mut self) -> Syncify<T> {
-        &mut self.0
-        // If you read this, you are reading a note from a desperate programmer, who are really
-        // waiting for a upstream fix, cause holy shit. Why the heck would you have a `Sync`
-        // bound on thread-local variables. These are entirely single-threaded, and there is no
-        // reason for assuming anything else. Now that we're at it, have the world been destroyed
-        // yet?
+impl<T> ops::Deref for Ref<T> {
+    type Target = T;
+
+    fn deref(&self) -> &T {
+        self.inner
     }
 }
 
-unsafe impl<T> marker::Sync for Syncify<T> {}
+impl<T> !Send for Ref<T> {}
 
 /// Declare a thread-local static variable.
 ///
@@ -51,12 +68,10 @@ unsafe impl<T> marker::Sync for Syncify<T> {}
 /// For this reason, in contrast to other `static`s in Rust, this need not thread-safety, which is
 /// what this macro "fixes".
 macro_rules! tls {
-    (static $name:ident: $type:ty = $val:expr) => { tls!(#[] static $name: $type = $val) };
-    (#[$($attr:meta),*], static $name:ident: $type:ty = $val:expr) => {{
-        use tls::Syncify;
-
+    (static $name:ident: $ty:ty = $val:expr;) => { tls! { #[] static $name: $ty = $val; } };
+    (#[$($attr:meta),*] static $name:ident: $ty:ty = $val:expr;) => {
         $(#[$attr])*
         #[thread_local]
-        static $name: $type = unsafe { Syncify::new($val) };
-    }}
+        static $name: tls::Cell<$ty> = unsafe { tls::Cell::new($val) };
+    }
 }

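The reworked `tls!` now takes a trailing semicolon and wraps the static in `tls::Cell`, so all access goes through a thread-bound `Ref`. A sketch modeled on the `THREAD_OOM_HANDLER` declaration in `fail.rs`; the counter is hypothetical:

```rust
tls! {
    /// A hypothetical per-thread counter.
    static COUNTER: MoveCell<u32> = MoveCell::new(0);
}

fn exchange() -> u32 {
    // `get` returns a `tls::Ref`, which is `!Send` and therefore cannot be
    // moved to another thread.
    let counter = COUNTER.get();

    // Swap in a new value and recover the old one.
    counter.replace(1)
}
```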
+ 20 - 21
src/vec.rs

@@ -8,6 +8,7 @@ use core::{slice, ops, mem, ptr};
 ///
 /// This does not perform allocation nor reallaction, thus these have to be done manually.
 /// Moreover, no destructors are called, making it possible to leak memory.
+// NOTE  ^^^^^^^  This derivation should be carefully reviewed when this struct is changed.
 pub struct Vec<T: Leak> {
     /// A pointer to the start of the buffer.
     ptr: Pointer<T>,
@@ -22,18 +23,6 @@ pub struct Vec<T: Leak> {
 }
 
 impl<T: Leak> Vec<T> {
-    /// Create a new empty vector.
-    ///
-    /// This won't allocate a buffer, thus it will have a capacity of zero.
-    #[inline]
-    pub const fn new() -> Vec<T> {
-        Vec {
-            ptr: Pointer::empty(),
-            len: 0,
-            cap: 0,
-        }
-    }
-
     /// Create a vector from a block.
     ///
     /// # Safety
@@ -41,7 +30,6 @@ impl<T: Leak> Vec<T> {
     /// This is unsafe, since it won't initialize the buffer in any way, possibly breaking type
     /// safety, memory safety, and so on. Thus, care must be taken upon usage.
     #[inline]
-    #[cfg(test)]
     pub unsafe fn from_raw_parts(block: Block, len: usize) -> Vec<T> {
         Vec {
             cap: block.size() / mem::size_of::<T>(),
@@ -65,9 +53,11 @@ impl<T: Leak> Vec<T> {
 
         // Make some assertions.
         assert!(self.len <= new_cap, "Block not large enough to cover the vector.");
+        assert!(block.aligned_to(mem::align_of::<T>()), "Block not aligned.");
+
         self.check(&block);
 
-        let old = mem::replace(self, Vec::new());
+        let old = mem::replace(self, Vec::default());
 
         // Update the fields of `self`.
         self.cap = new_cap;
@@ -156,6 +146,17 @@ impl<T: Leak> Vec<T> {
     }
 }
 
+// TODO remove this in favour of `derive` when rust-lang/rust#35263 is fixed.
+impl<T: Leak> Default for Vec<T> {
+    fn default() -> Vec<T> {
+        Vec {
+            ptr: Pointer::empty(),
+            cap: 0,
+            len: 0,
+        }
+    }
+}
+
 /// Cast this vector to the respective block.
 impl<T: Leak> From<Vec<T>> for Block {
     fn from(from: Vec<T>) -> Block {
@@ -185,8 +186,6 @@ impl<T: Leak> ops::DerefMut for Vec<T> {
 
 #[cfg(test)]
 mod test {
-    use super::*;
-
     use prelude::*;
 
     #[test]
@@ -228,11 +227,11 @@ mod test {
         assert_eq!(&*vec, b".aaaaaaaaaaaaaaabc_____________@");
         assert_eq!(vec.capacity(), 32);
 
-        for _ in 32 { vec.pop().unwrap(); }
+        for _ in 0..32 { vec.pop().unwrap(); }
 
-        vec.pop().unwrap_err();
-        vec.pop().unwrap_err();
-        vec.pop().unwrap_err();
-        vec.pop().unwrap_err();
+        assert!(vec.pop().is_none());
+        assert!(vec.pop().is_none());
+        assert!(vec.pop().is_none());
+        assert!(vec.pop().is_none());
     }
 }

+ 5 - 7
tests/manual.rs

@@ -7,10 +7,8 @@ use std::ptr;
 #[test]
 fn manual() {
     util::multiply(|| {
-        let mut alloc = ralloc::Allocator::new();
-
-        let ptr1 = alloc.alloc(30, 3);
-        let ptr2 = alloc.alloc(500, 20);
+        let ptr1 = ralloc::alloc(30, 3);
+        let ptr2 = ralloc::alloc(500, 20);
 
         assert_eq!(0, ptr1 as usize % 3);
         assert_eq!(0, ptr2 as usize % 20);
@@ -31,7 +29,7 @@ fn manual() {
             assert_eq!(*ptr2, 0);
             assert_eq!(*ptr2.offset(15), 15);
 
-            let ptr1 = alloc.realloc(ptr1, 30, 300, 3);
+            let ptr1 = ralloc::realloc(ptr1, 30, 300, 3);
             for i in 0..300 {
                 util::acid(|| {
                     *ptr1.offset(i) = i as u8;
@@ -41,8 +39,8 @@ fn manual() {
             assert_eq!(*ptr1.offset(200), 200);
 
             util::acid(|| {
-                alloc.free(ptr1, 30);
-                alloc.free(ptr2, 500);
+                ralloc::free(ptr1, 30);
+                ralloc::free(ptr2, 500);
             });
         }
     });

+ 6 - 10
tests/partial_free.rs

@@ -7,9 +7,7 @@ use std::ptr;
 #[test]
 fn partial_free() {
     util::multiply(|| {
-        let mut alloc = ralloc::Allocator::new();
-
-        let buf = alloc.alloc(63, 3);
+        let buf = ralloc::alloc(63, 3);
 
         unsafe {
             util::acid(|| {
@@ -18,12 +16,12 @@ fn partial_free() {
             });
 
             util::acid(|| {
-                alloc.free(buf.offset(8), 75);
+                ralloc::free(buf.offset(8), 75);
                 *buf = 5;
             });
 
             util::acid(|| {
-                alloc.free(buf, 4);
+                ralloc::free(buf, 4);
                 *buf.offset(4) = 3;
             });
 
@@ -35,9 +33,7 @@ fn partial_free() {
 #[test]
 fn partial_free_double() {
     util::multiply(|| {
-        let mut alloc = ralloc::Allocator::new();
-
-        let buf = alloc.alloc(64, 4);
+        let buf = ralloc::alloc(64, 4);
 
         unsafe {
             util::acid(|| {
@@ -45,7 +41,7 @@ fn partial_free_double() {
             });
 
             util::acid(|| {
-                alloc.free(buf.offset(32), 32);
+                ralloc::free(buf.offset(32), 32);
                 *buf = 5;
             });
 
@@ -53,7 +49,7 @@ fn partial_free_double() {
 
             util::acid(|| {
                 *buf = 0xAA;
-                alloc.free(buf, 32);
+                ralloc::free(buf, 32);
             });
         }
     });

+ 4 - 5
tests/partial_realloc.rs

@@ -7,8 +7,7 @@ use std::ptr;
 #[test]
 fn partial_realloc() {
     util::multiply(|| {
-        let mut alloc = ralloc::Allocator::new();
-        let buf = alloc.alloc(63, 3);
+        let buf = ralloc::alloc(63, 3);
 
         unsafe {
             util::acid(|| {
@@ -16,12 +15,12 @@ fn partial_realloc() {
                 *buf = 4;
             });
 
-            alloc.realloc(buf.offset(8), 75, 0, 23);
+            ralloc::realloc(buf.offset(8), 75, 0, 23);
             *buf = 5;
 
-            *alloc.realloc(buf, 4, 10, 2) = 10;
+            *ralloc::realloc(buf, 4, 10, 2) = 10;
 
-            alloc.free(buf, 4);
+            ralloc::free(buf, 4);
         }
     });
 }

+ 1 - 1
tests/util/mod.rs

@@ -43,7 +43,7 @@ fn spawn_double<F: Fn() + Sync + Send>(func: F) {
 pub fn multiply<F: Fn() + Sync + Send + 'static>(func: F) {
     spawn_double(|| spawn_double(|| acid(|| func())));
 
-    ralloc::lock().debug_assert_no_leak();
+    // TODO assert no leaks.
 }
 
 /// Wrap a block in acid tests.