It's less error-prone to have function symbols exported immediately
after the function definition rather than in metag_ksyms.c. Move each
EXPORT_SYMBOL in metag_ksyms.c for symbols defined in mm/init.c into
mm/init.c.
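As an illustration of the pattern (the function below is hypothetical;
the real symbols are simply the ones metag_ksyms.c currently exports on
behalf of mm/init.c), each export ends up next to its definition:

/* mm/init.c */
#include <linux/export.h>

void example_mm_helper(void)            /* hypothetical function */
{
}
EXPORT_SYMBOL(example_mm_helper);

The corresponding EXPORT_SYMBOL() line is then removed from
metag_ksyms.c.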
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Convert hugetlb_get_unmapped_area_new_pmd() to use vm_unmapped_area()
rather than searching the virtual address space itself. This fixes the
following errors in linux-next, caused by the members named below being
removed now that the other architectures have been converted:
arch/metag/mm/hugetlbpage.c: In function 'hugetlb_get_unmapped_area_new_pmd':
arch/metag/mm/hugetlbpage.c:199: error: 'struct mm_struct' has no member named 'cached_hole_size'
arch/metag/mm/hugetlbpage.c:200: error: 'struct mm_struct' has no member named 'free_area_cache'
arch/metag/mm/hugetlbpage.c:215: error: 'struct mm_struct' has no member named 'cached_hole_size'
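For reference, the new-style search looks roughly like the sketch
below. The limits and alignment shown are the generic hugepage ones
(TASK_UNMAPPED_BASE/TASK_SIZE, huge_page_mask()); the actual metag
conversion uses its own values to suit its pmd-based huge page layout:

#include <linux/hugetlb.h>
#include <linux/mm.h>
#include <linux/sched.h>

static unsigned long hugetlb_bottomup_search(struct file *file,
                                             unsigned long len)
{
        struct hstate *h = hstate_file(file);
        struct vm_unmapped_area_info info;

        info.flags = 0;                 /* bottom-up search */
        info.length = len;
        info.low_limit = TASK_UNMAPPED_BASE;
        info.high_limit = TASK_SIZE;
        info.align_mask = PAGE_MASK & ~huge_page_mask(h);
        info.align_offset = 0;
        return vm_unmapped_area(&info);
}

This replaces the open-coded walk over mm->mmap and the
free_area_cache/cached_hole_size bookkeeping that no longer exists.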
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Michel Lespinasse <walken@google.com>
Various file systems indirectly use metag_code_cache_flush_all(), so
when they're built as modules we get build errors like the following:
ERROR: "metag_code_cache_flush_all" [fs/xfs/xfs.ko] undefined!
Therefore export this function to modules to fix the errors. This was
hit by a randconfig build.
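The fix itself is a one-line export placed immediately after the
function's definition (together with <linux/export.h> if that file
doesn't already include it):

EXPORT_SYMBOL(metag_code_cache_flush_all);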
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Add a boot-time check for whether LNKGET/LNKSET go through or around
the cache. Depending on the configuration, an info message (no harm), a
warning (technically wrong but no harm), or a big WARN (expect failure
in either the kernel or userland) may be emitted if the behaviour is
not as expected. A rough sketch of the reporting logic is shown first,
followed by a table summarising the cases.
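In the sketch, lnkget_goes_through_cache() is a hypothetical stand-in
for the real LNKGET/LNKSET detection code, and
CONFIG_METAG_LNKGET_AROUND_CACHE / CONFIG_METAG_ATOMICITY_LNKGET are
assumed to be the full Kconfig names behind the abbreviations used in
the table:

#include <linux/bug.h>
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/printk.h>

bool lnkget_goes_through_cache(void);   /* hypothetical probe */

static int __init metag_lnkget_check(void)
{
        bool through = lnkget_goes_through_cache();

        if (IS_ENABLED(CONFIG_METAG_LNKGET_AROUND_CACHE)) {
                if (through)
                        pr_info("LNKGET/LNKSET go through the cache\n");
        } else if (IS_ENABLED(CONFIG_METAG_ATOMICITY_LNKGET)) {
                /* Kernel atomics use LNKGET/LNKSET directly, so going
                 * around the cache means they can't be trusted. */
                WARN(!through, "LNKGET/LNKSET go around the cache\n");
        } else if (!through) {
                if (IS_ENABLED(CONFIG_SMP))
                        /* SMP userland atomics still rely on them. */
                        WARN(1, "LNKGET/LNKSET go around the cache\n");
                else
                        pr_warn("LNKGET/LNKSET go around the cache\n");
        }
        return 0;
}
late_initcall(metag_lnkget_check);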
Configuration                               Hardware  Response
------------------------------------------  --------  -------------
AROUND_CACHE                                through   pr_info
!AROUND_CACHE && ATOMICITY_LNKGET           around    WARN (kernel)
!AROUND_CACHE && !ATOMICITY_LNKGET && SMP   around    WARN (user)
!AROUND_CACHE && !ATOMICITY_LNKGET && !SMP  around    pr_warn
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Meta has instructions for accessing:
- bytes - GETB (1 byte)
- words - GETW (2 bytes)
- doublewords - GETD (4 bytes)
- longwords - GETL (8 bytes)
All accesses must be aligned. Unaligned accesses can be detected and
made to fault on Meta2; however, it isn't possible to fix up unaligned
writes, so we don't bother fixing up reads either.
This patch adds metag memory handling code including:
- I/O memory (io.h, ioremap.c): In fact any virtual memory can be
accessed with these helpers. A part of the non-MMUable address space
is used for memory mapped I/O. The ioremap() function is implemented
as a one-to-one mapping for non-MMUable addresses.
- User memory (uaccess.h, usercopy.c): User memory is directly
accessible from privileged code.
- Kernel memory (maccess.c): probe_kernel_write() needs to be
overridden to use the I/O functions when doing a simple aligned
write to non-writecombined memory, otherwise the write may be split
by the generic version.
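The shape of that override is roughly as follows; dst_is_io_memory()
is a hypothetical stand-in for the architecture-specific "simple
aligned write to non-writecombined memory" test, and the fallback is
the generic __probe_kernel_write():

#include <linux/io.h>
#include <linux/types.h>
#include <linux/uaccess.h>

bool dst_is_io_memory(const void *dst); /* hypothetical predicate */

long probe_kernel_write(void *dst, const void *src, size_t size)
{
        unsigned long ldst = (unsigned long)dst;

        /* A naturally aligned 1/2/4 byte write to I/O memory is issued
         * as a single access so that it cannot be split. */
        if (dst_is_io_memory(dst) && size <= 4 && !(ldst & (size - 1))) {
                switch (size) {
                case 1:
                        writeb(*(const u8 *)src, (void __iomem *)dst);
                        return 0;
                case 2:
                        writew(*(const u16 *)src, (void __iomem *)dst);
                        return 0;
                case 4:
                        writel(*(const u32 *)src, (void __iomem *)dst);
                        return 0;
                }
        }
        /* Everything else goes through the generic implementation. */
        return __probe_kernel_write(dst, src, size);
}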
Note that because a portion of the virtual address space is
non-MMUable, and therefore always maps directly to the physical address
space, metag-specific I/O functions are made available (metag_in32,
metag_out32, etc). These cast the address argument to a pointer so that
they can be used with raw physical addresses. These accessors are only
to be used for accessing fixed core Meta architecture registers in the
non-MMU region, and not for any SoC/peripheral registers.
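A usage sketch follows; EXAMPLE_CORE_REG is a made-up raw address used
purely for illustration, and the metag_out32() argument order is
assumed to mirror writel():

#include <linux/types.h>
#include <asm/io.h>

#define EXAMPLE_CORE_REG 0x04800000UL   /* hypothetical register address */

static void example_core_reg_access(void)
{
        u32 val = metag_in32(EXAMPLE_CORE_REG);

        metag_out32(val, EXAMPLE_CORE_REG);
}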
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Add memory management files for metag.
Meta's 32-bit virtual address space is split into two halves:
- local (0x08000000-0x7fffffff): traditionally local to a hardware
thread and incoherent between hardware threads. Each hardware thread
has its own local MMU table. On Meta2 the local space can be
globally coherent (GCOn) if the cache partitions coincide.
- global (0x88000000-0xffff0000): coherent and traditionally global
between hardware threads. On Meta2, each hardware thread has its own
global MMU table.
The low 128MiB of each half is non-MMUable and maps directly to the
physical address space:
- 0x00010000-0x07ffffff: contains Meta core registers and maps SoC bus
- 0x80000000-0x87ffffff: contains low latency global core memories
Linux usually further splits the local virtual address space like this:
- 0x08000000-0x3fffffff: user mappings
- 0x40000000-0x7fffffff: kernel mappings
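For reference, the layout can be written out as constants like this;
the macro names below are made up for the sketch (the real definitions
live in the arch/metag headers), only the addresses are taken from the
description above:

/* Lower half (local): low 128MiB is non-MMUable, 1:1 with physical */
#define META_NONMMU_LOCAL_START  0x00010000UL  /* core regs / SoC bus */
#define META_LOCAL_START         0x08000000UL  /* MMU-mapped local space */
#define META_USER_START          0x08000000UL  /* user mappings */
#define META_KERNEL_START        0x40000000UL  /* kernel mappings */
#define META_LOCAL_END           0x7fffffffUL

/* Upper half (global): low 128MiB is non-MMUable core memories */
#define META_NONMMU_GLOBAL_START 0x80000000UL
#define META_GLOBAL_START        0x88000000UL  /* MMU-mapped global space */
#define META_GLOBAL_END          0xffff0000UL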
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Add cache and TLB handling code for metag, including the required
callbacks used by MM switches and DMA operations. Caches can be
partitioned between the hardware threads and the global space; however,
this is usually configured by the bootloader, so Linux doesn't change
that configuration. TLBs aren't configurable, so the only handling they
need is flushing.
On Meta1 the L1 cache was VIVT, which required a full flush on every MM
switch. Meta2 has a VIPT L1 cache, so it doesn't require that full
flush on MM switch. Meta2 can also have a writeback L2 with hardware
prefetch, which requires some special handling. L2 support is optional,
and the L2 can be detected and initialised by Linux.
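The consequence for MM switches can be sketched as follows;
CONFIG_METAG_META12 is assumed to be the Kconfig symbol selecting
Meta1-generation cores, and flush_cache_all() stands in for whichever
full-flush helper the cache code provides:

#include <asm/cacheflush.h>

static inline void flush_cache_for_mm_switch(void)
{
#ifdef CONFIG_METAG_META12
        /* Meta1: VIVT L1 cache, so flush everything on an MM switch. */
        flush_cache_all();
#else
        /* Meta2: VIPT L1 cache, nothing to flush on an MM switch. */
#endif
}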
Signed-off-by: James Hogan <james.hogan@imgtec.com>