This patch cleans up the initialization of the dma contiguous framework.
The all-in-one dma_declare_contiguous() function is now split into
dma_contiguous_reserve_area(), which only steals the memory from the
memblock allocator, and dma_contiguous_add_device(), which assigns a
given device to the specified reserved memory area. This improves the
flexibility in defining contiguous memory areas and assigning devices
to them, because it is now possible to assign more than one device to
a given contiguous memory area. Such a split in the initialization
procedure is also required for the upcoming device tree support.
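A minimal sketch of the split initialization, assuming signatures
inferred from the description above (the struct cma handle, the size
and limit arguments, and dma_contiguous_add_device() taking a device
and a cma pointer are illustrative assumptions, not verified against
the patch):

    #include <linux/device.h>
    #include <linux/dma-contiguous.h>
    #include <linux/sizes.h>

    static struct cma *shared_cma;    /* illustrative handle */

    /* Early boot: steal a 16 MiB area from the memblock allocator. */
    void __init board_reserve_cma(void)
    {
            dma_contiguous_reserve_area(SZ_16M, 0, 0, &shared_cma);
    }

    /* Later: more than one device may now share the same area. */
    void board_assign_cma(struct device *dev0, struct device *dev1)
    {
            dma_contiguous_add_device(dev0, shared_cma);
            dma_contiguous_add_device(dev1, shared_cma);
    }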
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Kyungmin Park <kyungmin.park@samsung.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Tomasz Figa <t.figa@samsung.com>
This commit changes the CMA early initialization code to use phys_addr_t
for representing physical addresses instead of unsigned long.
Without this change, among other things, dma_declare_contiguous() simply
discards any memory region whose address is not representable as an
unsigned long.
This is a problem on 32-bit PAE machines, where unsigned long is 32 bits
wide but the physical address space is larger.
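A minimal sketch of the truncation issue, using an illustrative region
base above the 4 GiB boundary:

    #include <linux/types.h>

    /* On a 32-bit PAE/LPAE kernel, unsigned long is 32 bits wide while
     * phys_addr_t is 64 bits wide (CONFIG_PHYS_ADDR_T_64BIT). */
    phys_addr_t base = 0x880000000ULL;          /* region above 4 GiB */
    unsigned long lost = (unsigned long)base;   /* truncates to 0x80000000 */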
Signed-off-by: Vitaly Andrianov <vitalya@ti.com>
Signed-off-by: Cyril Chemparathy <cyril@ti.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Use the definition from linux/sizes.h instead.
Signed-off-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
The dma_alloc_from_contiguous() function returns either a valid pointer
to a page structure or NULL; the error code set when pageno >= cma->count
is not used at all and can be safely removed.
This commit also changes the function to avoid goto and to have only one
exit path and one place where the mutex is unlocked.
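A minimal sketch of the resulting single-exit-path shape; the loop body
is simplified, and find_free_range()/try_claim_range() are hypothetical
helpers standing in for the real bitmap and page-isolation logic:

    /* Sketch only: one exit path, one unlock point, no goto. */
    static struct page *cma_alloc_sketch(struct cma *cma)
    {
            struct page *page = NULL;

            mutex_lock(&cma_mutex);
            for (;;) {
                    unsigned long pageno = find_free_range(cma);

                    if (pageno >= cma->count)       /* nothing left, */
                            break;                  /* returns NULL  */
                    if (try_claim_range(cma, pageno) == 0) {
                            page = pfn_to_page(cma->base_pfn + pageno);
                            break;                  /* success */
                    }
                    /* claiming raced with another user, retry */
            }
            mutex_unlock(&cma_mutex);               /* single unlock point */
            return page;
    }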
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
[fixed compilation break caused by missing semicolon]
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
The Contiguous Memory Allocator requires each of its regions to be
aligned in such a way that it is possible to change the migration type
for all pageblocks holding it and then isolate a page of the largest
order available from the buddy allocator (which is MAX_ORDER-1). This
patch relaxes the alignment requirement by one order, because MAX_ORDER
alignment is not really needed.
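A minimal sketch of the relaxed alignment computation, assuming the
region is rounded to the larger of MAX_ORDER-1 and pageblock_order (the
base/size/limit variables stand for the caller-supplied values):

    /* Illustrative: the alignment only needs to cover the largest order
     * the buddy allocator can actually serve (MAX_ORDER-1) and a full
     * pageblock, not MAX_ORDER. */
    phys_addr_t alignment = PAGE_SIZE << max(MAX_ORDER - 1, pageblock_order);

    base  = ALIGN(base, alignment);
    size  = ALIGN(size, alignment);
    limit &= ~(alignment - 1);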
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Michal Nazarewicz <mina86@mina86.com>
The Contiguous Memory Allocator is a set of helper functions for the
DMA mapping framework that improves allocation of contiguous memory
chunks. CMA grabs memory on system boot, marks it with the MIGRATE_CMA
migrate type and gives it back to the system. The kernel is allowed to
allocate only movable pages within CMA's managed memory, so that it can
be used, for example, for page cache when the DMA mapping framework does
not use it. On a dma_alloc_from_contiguous() request such pages are
migrated out of the CMA area to free the required contiguous block and
fulfill the request. This makes it possible to allocate large contiguous
chunks of memory at any time, provided there is enough free memory
available in the system.
This code is heavily based on earlier works by Michal Nazarewicz.
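A minimal sketch of a driver-side allocation through the new helpers;
the device pointer, page count and alignment order are illustrative:

    #include <linux/dma-contiguous.h>
    #include <linux/errno.h>
    #include <linux/mm.h>

    /* Illustrative: grab 256 physically contiguous pages (1 MiB with
     * 4 KiB pages) for "dev", aligned to order 8, then release them. */
    static int example_use_cma(struct device *dev)
    {
            int count = 256;
            struct page *pages;

            pages = dma_alloc_from_contiguous(dev, count, 8);
            if (!pages)
                    return -ENOMEM;

            /* ... use the range starting at page_to_phys(pages) ... */

            dma_release_from_contiguous(dev, pages, count);
            return 0;
    }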
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
Signed-off-by: Michal Nazarewicz <mina86@mina86.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Tested-by: Rob Clark <rob.clark@linaro.org>
Tested-by: Ohad Ben-Cohen <ohad@wizery.com>
Tested-by: Benjamin Gaignard <benjamin.gaignard@linaro.org>
Tested-by: Robert Nelson <robertcnelson@gmail.com>
Tested-by: Barry Song <Baohua.Song@csr.com>