Pull drm fixes from Dave Airlie:
"This contains a bunch of amdgpu fixes, and some i915 regression fixes.
It also contains some fixes for an older regression with some EDID
changes and some 6bpc panels.
Then there are the lockdep, cirrus and rcar-du regression fixes from
this window"
* tag 'drm-fixes-for-4.8-rc2' of git://people.freedesktop.org/~airlied/linux:
drm/cirrus: Fix NULL pointer dereference when registering the fbdev
drm/edid: Set 8 bpc color depth for displays with "DFP 1.x compliant TMDS".
drm/i915/dp: Revert "drm/i915/dp: fall back to 18 bpp when sink capability is unknown"
drm/edid: Add 6 bpc quirk for display AEO model 0.
drm: Paper over locking inversion after registration rework
drm: rcar-du: Link HDMI encoder with bridge
drm/ttm: Wait for a BO to become idle before unbinding it from GTT
drm/i915/fbdev: Check for the framebuffer before use
drm/amdgpu: update golden setting of polaris10
drm/amdgpu: update golden setting of stoney
drm/amdgpu: update golden setting of polaris11
drm/amdgpu: update golden setting of carrizo
drm/amdgpu: update golden setting of iceland
drm/amd/amdgpu: change pptable output format from ASCII to binary
drm/amdgpu/ci: add mullins to default case for smc ucode
drm/amdgpu/gmc7: add missing mullins case
drm/i915: Never fully mask the the EI up rps interrupt on SNB/IVB
drm/i915: Wait up to 3ms for the pcu to ack the cdclk change request on SKL
Commit b195d5e2bf ("ipr: Wait to do async scan until scsi host is
initialized") fixed async scan for ipr, but broke sync scan for ipr.
This fixes sync scan so it works again.
Signed-off-by: Brian King <brking@linux.vnet.ibm.com>
Reported-and-tested-by: Michael Ellerman <mpe@ellerman.id.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
To distinguish non-slab pages charged to kmemcg we mark them PageKmemcg,
which sets page->_mapcount to -512. Currently, we set/clear PageKmemcg
in __alloc_pages_nodemask()/free_pages_prepare() for any page allocated
with __GFP_ACCOUNT, including those that aren't actually charged to any
cgroup, i.e. allocated from the root cgroup context. To avoid overhead
in case cgroups are not used, we only do that if memcg_kmem_enabled() is
true. The latter is set iff there are kmem-enabled memory cgroups
(online or offline). The root cgroup is not considered kmem-enabled.
As a result, if a page is allocated with __GFP_ACCOUNT for the root
cgroup when there are kmem-enabled memory cgroups and is freed after all
kmem-enabled memory cgroups were removed, e.g.
# no memory cgroup has been created yet, create one
mkdir /sys/fs/cgroup/memory/test
# run something allocating pages with __GFP_ACCOUNT, e.g.
# a program using pipe
dmesg | tail
# remove the memory cgroup
rmdir /sys/fs/cgroup/memory/test
we'll get bad page state bug complaining about page->_mapcount != -1:
BUG: Bad page state in process swapper/0 pfn:1fd945c
page:ffffea007f651700 count:0 mapcount:-511 mapping: (null) index:0x0
flags: 0x1000000000000000()
To avoid that, let's mark with PageKmemcg only those pages that are
actually charged to and hence pin a non-root memory cgroup.
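A minimal, hypothetical sketch of the intended invariant (charge_kmem_page() is a made-up helper for illustration; __SetPageKmemcg() and mem_cgroup_is_root() are the real kernel helpers):

/* Hypothetical sketch: set PageKmemcg only for pages that were really
 * charged to, and therefore pin, a non-root memory cgroup. */
if (!mem_cgroup_is_root(memcg) && charge_kmem_page(memcg, page, order) == 0)
    __SetPageKmemcg(page);    /* cleared again at uncharge time */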
Fixes: 4949148ad4 ("mm: charge/uncharge kmemcg from generic page allocator paths")
Reported-and-tested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Vladimir Davydov <vdavydov@virtuozzo.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Patch 19a469a587 ("drivers/perf: arm-pmu: Handle per-interrupt
affinity mask") added support for partitionned PPI setups, but
inadvertently broke setups using SPIs without the "interrupt-affinity"
property (which is the case for UP platforms).
This patch restores the broken functionality by testing whether the
interrupt is percpu or not, instead of relying on the using_spi flag
that really means "SPI *and* interrupt-affinity property".
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>
Fixes: 19a469a587 ("drivers/perf: arm-pmu: Handle per-interrupt affinity mask")
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
arm_pmu_mutex is never held for long, and we don't want to sleep while the
lock is held since it is taken in the context of hotplug notifiers.
So it can be converted to a simple spinlock instead.
Without this patch we get the following warning:
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:620
in_atomic(): 1, irqs_disabled(): 128, pid: 0, name: swapper/2
no locks held by swapper/2/0.
irq event stamp: 381314
hardirqs last enabled at (381313): _raw_spin_unlock_irqrestore+0x7c/0x88
hardirqs last disabled at (381314): cpu_die+0x28/0x48
softirqs last enabled at (381294): _local_bh_enable+0x28/0x50
softirqs last disabled at (381293): irq_enter+0x58/0x78
CPU: 2 PID: 0 Comm: swapper/2 Not tainted 4.7.0 #12
Call trace:
dump_backtrace+0x0/0x220
show_stack+0x24/0x30
dump_stack+0xb4/0xf0
___might_sleep+0x1d8/0x1f0
__might_sleep+0x5c/0x98
mutex_lock_nested+0x54/0x400
arm_perf_starting_cpu+0x34/0xb0
cpuhp_invoke_callback+0x88/0x3d8
notify_cpu_starting+0x78/0x98
secondary_start_kernel+0x108/0x1a8
This patch converts the mutex to a spinlock to eliminate the above
warnings. This constrains pmu->reset to be a non-blocking call, which is
the case with all the ARM PMU backends.
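The shape of the conversion, sketched below (the lock and list names are assumed for illustration; the hotplug callback signature matches the state machine conversion this fixes):

#include <linux/spinlock.h>

/* Sketch: a non-sleeping spinlock replaces arm_pmu_mutex, since the
 * starting-CPU callback runs in atomic context with IRQs disabled. */
static DEFINE_SPINLOCK(arm_pmu_lock);       /* was DEFINE_MUTEX(arm_pmu_mutex) */

static int arm_perf_starting_cpu(unsigned int cpu)
{
    struct arm_pmu *pmu;

    spin_lock(&arm_pmu_lock);
    list_for_each_entry(pmu, &arm_pmu_list, entry) {
        if (!cpumask_test_cpu(cpu, &pmu->supported_cpus))
            continue;
        if (pmu->reset)
            pmu->reset(pmu);    /* constrained to be non-blocking */
    }
    spin_unlock(&arm_pmu_lock);
    return 0;
}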
Cc: Stephen Boyd <sboyd@codeaurora.org>
Fixes: 37b502f121 ("arm/perf: Fix hotplug state machine conversion")
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
pathbase is the base inode; set it to 0 if we've got no path.
Coverity-id: 146348
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
ceph_file_layout::pool_id is now s64. rbd_add_get_pool_id() and
ceph_pg_poolid_by_name() both return an int, so it's bogus anyway.
Signed-off-by: Ilya Dryomov <idryomov@gmail.com>
Reviewed-by: Alex Elder <elder@linaro.org>
Powerpc functions have a Global Entry Point (GEP) and a Local Entry Point
(LEP). A probe on the LEP catches calls made via both the GEP and the LEP.
The ELF symbol table contains the GEP and an offset from which we can
calculate the LEP, but debuginfo does not have LEP info.
Currently, perf prioritizes the symbol table over DWARF to probe on the LEP
for ppc64le. But when the user tries to probe with a function parameter, we
fall back to using DWARF (i.e. the GEP), and when the function is called via
the LEP, the probe will never hit.
For example:
$ objdump -d vmlinux
...
do_sys_open():
c0000000002eb4a0: e8 00 4c 3c addis r2,r12,232
c0000000002eb4a4: 60 00 42 38 addi r2,r2,96
c0000000002eb4a8: a6 02 08 7c mflr r0
c0000000002eb4ac: d0 ff 41 fb std r26,-48(r1)
$ sudo ./perf probe do_sys_open
$ sudo cat /sys/kernel/debug/tracing/kprobe_events
p:probe/do_sys_open _text+3060904
$ sudo ./perf probe 'do_sys_open filename:string'
$ sudo cat /sys/kernel/debug/tracing/kprobe_events
p:probe/do_sys_open _text+3060896 filename_string=+0(%gpr4):string
In the second case, perf probed on the GEP. So when the function is called
via the LEP, the probe won't hit.
$ sudo ./perf record -a -e probe:do_sys_open ls
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.195 MB perf.data ]
To resolve this issue, let's not prioritize the symbol table; let perf
decide what it wants to use. Perf already converts the GEP to the LEP when
it uses the symbol table. When perf uses debuginfo, let it find the LEP
offset from the symbol table. This way we fall back to probing on the LEP
in all cases.
After patch:
$ sudo ./perf probe 'do_sys_open filename:string'
$ sudo cat /sys/kernel/debug/tracing/kprobe_events
p:probe/do_sys_open _text+3060904 filename_string=+0(%gpr4):string
$ sudo ./perf record -a -e probe:do_sys_open ls
[ perf record: Woken up 1 times to write data ]
[ perf record: Captured and wrote 0.197 MB perf.data (11 samples) ]
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1470723805-5081-2-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Instead of inline code, introduce a function to post-process kernel
probe trace events.
Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
Cc: Balbir Singh <bsingharora@gmail.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1470723805-5081-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Due to:
1e61f78baf ("x86/cpufeature: Make sure DISABLED/REQUIRED macros are updated")
No changes to the tools using those headers (tools/arch/x86/lib/mem{set,cpy}_64.S)
seem necessary.
Detected by the tools build header drift checker:
$ make -C tools/perf O=/tmp/build/perf
make: Entering directory '/home/acme/git/linux/tools/perf'
BUILD: Doing 'make -j4' parallel build
GEN /tmp/build/perf/common-cmds.h
Warning: tools/arch/x86/include/asm/disabled-features.h differs from kernel
Warning: tools/arch/x86/include/asm/required-features.h differs from kernel
Warning: tools/arch/x86/include/asm/cpufeatures.h differs from kernel
CC /tmp/build/perf/util/probe-finder.o
CC /tmp/build/perf/builtin-help.o
<SNIP>
^C$
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-ja75m7zk8j0jkzmrv16i5ehw@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The way we're using kernel headers in tools/ now, with a copy that is
made to the same path prefixed by "tools/" plus checking if that copy
got stale, i.e. if the kernel counterpart changed, helps in keeping
track with new features that may be useful for tools to exploit.
For instance, looking at all the changes to bpf.h since it was last
copied to tools/include brings this to toolers' attention:
Need to investigate this one to check how to run a program via perf, setting up
a BPF event, that will take advantage of the way perf already calls clang/LLVM,
sets up the event and runs the workload in a single command line, helping in
debugging such semi cooperative programs:
96ae522795 ("bpf: Add bpf_probe_write_user BPF helper to be called in tracers")
This one needs further investigation about using the feature it improves
in 'perf trace' to do some tcpdumpin' mixed with syscalls, tracepoints,
probe points, callgraphs, etc:
555c8a8623 ("bpf: avoid stack copy and use skb ctx for event output")
Add tracing just packets that are related to some container to that mix:
4a482f34af ("cgroup: bpf: Add bpf_skb_in_cgroup_proto")
4ed8ec521e ("cgroup: bpf: Add BPF_MAP_TYPE_CGROUP_ARRAY")
Definitely needs to have example programs accessing task_struct from a bpf proggie
started from 'perf trace':
606274c5ab ("bpf: introduce bpf_get_current_task() helper")
Core networking related, XDP:
6ce96ca348 ("bpf: add XDP_TX xdp_action for direct forwarding")
6a773a15a1 ("bpf: add XDP prog type for early driver filter")
13c5c240f7 ("bpf: add bpf_get_hash_recalc helper")
d2485c4242 ("bpf: add bpf_skb_change_type helper")
6578171a7f ("bpf: add bpf_skb_change_proto helper")
Changes detected by the tools build system:
$ make -C tools/perf O=/tmp/build/perf install-bin
make: Entering directory '/home/acme/git/linux/tools/perf'
BUILD: Doing 'make -j4' parallel build
Warning: tools/include/uapi/linux/bpf.h differs from kernel
INSTALL GTK UI
CC /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
<SNIP>
$
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Alexei Starovoitov <ast@fb.com>
Cc: Brenden Blanco <bblanco@plumgrid.com>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: David Ahern <dsahern@gmail.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Martin KaFai Lau <kafai@fb.com>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Sargun Dhillon <sargun@sargun.me>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/n/tip-difq4ts1xvww6eyfs9e7zlft@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
There were changes related to the deprecation of the "pcommit"
instruction:
fd1d961dd6 ("x86/insn: remove pcommit")
dfa169bbee ("Revert "KVM: x86: add pcommit support"")
No need to update anything in the tools, as "pcommit" wasn't being
listed on the VMX_EXIT_REASONS in the tools/perf/arch/x86/util/kvm-stat.c
file.
Just grab fresh copies of these files to silence the file cache
coherency detector:
$ make -C tools/perf O=/tmp/build/perf install-bin
make: Entering directory '/home/acme/git/linux/tools/perf'
BUILD: Doing 'make -j4' parallel build
Warning: tools/arch/x86/include/asm/cpufeatures.h differs from kernel
Warning: tools/arch/x86/include/uapi/asm/vmx.h differs from kernel
INSTALL GTK UI
<SNIP>
#
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
Cc: Wang Nan <wangnan0@huawei.com>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-07pmcc1ysydhyyxbmp1vt0l4@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The 'perf probe' tool detects a variable's type and uses the detected
type to add a new probe. Then, kprobes prints the variable in
hexadecimal format if it is unsigned, and in decimal if it is signed.
We sometimes want to see an unsigned variable in decimal format (e.g.
sector_t or size_t). In that case, we need to investigate the variable's
size manually just to specify the signedness.
This patch adds signedness casting support. By specifying "s" or "u" as a
type, perf-probe will investigate the variable size as usual and use the
specified signedness.
E.g. without this:
$ perf probe -a 'submit_bio bio->bi_iter.bi_sector'
Added new event:
probe:submit_bio (on submit_bio with bi_sector=bio->bi_iter.bi_sector)
You can now use it in all perf tools, such as:
perf record -e probe:submit_bio -aR sleep 1
$ cat trace_pipe|head
dbench-9692 [003] d..1 971.096633: submit_bio: (submit_bio+0x0/0x140) bi_sector=0x3a3d00
dbench-9692 [003] d..1 971.096685: submit_bio: (submit_bio+0x0/0x140) bi_sector=0x1a3d80
dbench-9692 [003] d..1 971.096687: submit_bio: (submit_bio+0x0/0x140) bi_sector=0x3a3d80
...
// need to investigate the variable size
$ perf probe -a 'submit_bio bio->bi_iter.bi_sector:s64'
Added new event:
probe:submit_bio (on submit_bio with bi_sector=bio->bi_iter.bi_sector:s64)
You can now use it in all perf tools, such as:
perf record -e probe:submit_bio -aR sleep 1
With this:
// just use "s" to cast its signedness
$ perf probe -v -a 'submit_bio bio->bi_iter.bi_sector:s'
Added new event:
probe:submit_bio (on submit_bio with bi_sector=bio->bi_iter.bi_sector:s)
You can now use it in all perf tools, such as:
perf record -e probe:submit_bio -aR sleep 1
$ cat trace_pipe|head
dbench-9689 [001] d..1 1212.391237: submit_bio: (submit_bio+0x0/0x140) bi_sector=128
dbench-9689 [001] d..1 1212.391252: submit_bio: (submit_bio+0x0/0x140) bi_sector=131072
dbench-9697 [006] d..1 1212.398611: submit_bio: (submit_bio+0x0/0x140) bi_sector=30208
This commit also updates perf-probe.txt to describe "types". Most parts
are based on existing documentation: Documentation/trace/kprobetrace.txt
Committer note:
Testing using 'perf trace':
# perf probe -a 'submit_bio bio->bi_iter.bi_sector'
Added new event:
probe:submit_bio (on submit_bio with bi_sector=bio->bi_iter.bi_sector)
You can now use it in all perf tools, such as:
perf record -e probe:submit_bio -aR sleep 1
# trace --no-syscalls --ev probe:submit_bio
0.000 probe:submit_bio:(ffffffffac3aee00) bi_sector=0xc133c0)
3181.861 probe:submit_bio:(ffffffffac3aee00) bi_sector=0x6cffb8)
3181.881 probe:submit_bio:(ffffffffac3aee00) bi_sector=0x6cffc0)
3184.488 probe:submit_bio:(ffffffffac3aee00) bi_sector=0x6cffc8)
<SNIP>
4717.927 probe:submit_bio:(ffffffffac3aee00) bi_sector=0x4dc7a88)
4717.970 probe:submit_bio:(ffffffffac3aee00) bi_sector=0x4dc7880)
^C[root@jouet ~]#
Now, using this new feature:
[root@jouet ~]# perf probe -a 'submit_bio bio->bi_iter.bi_sector:s'
Added new event:
probe:submit_bio (on submit_bio with bi_sector=bio->bi_iter.bi_sector:s)
You can now use it in all perf tools, such as:
perf record -e probe:submit_bio -aR sleep 1
[root@jouet ~]# trace --no-syscalls --ev probe:submit_bio
0.000 probe:submit_bio:(ffffffffac3aee00) bi_sector=7145704)
0.017 probe:submit_bio:(ffffffffac3aee00) bi_sector=7145712)
0.019 probe:submit_bio:(ffffffffac3aee00) bi_sector=7145720)
2.567 probe:submit_bio:(ffffffffac3aee00) bi_sector=7145728)
5631.919 probe:submit_bio:(ffffffffac3aee00) bi_sector=0)
5631.941 probe:submit_bio:(ffffffffac3aee00) bi_sector=8)
5631.945 probe:submit_bio:(ffffffffac3aee00) bi_sector=16)
5631.948 probe:submit_bio:(ffffffffac3aee00) bi_sector=24)
^C#
With callchains:
# trace --no-syscalls --ev probe:submit_bio/max-stack=10/
0.000 probe:submit_bio:(ffffffffac3aee00) bi_sector=50662544)
submit_bio+0xa8200001 ([kernel.kallsyms])
submit_bh+0xa8200013 ([kernel.kallsyms])
jbd2_journal_commit_transaction+0xa8200691 ([kernel.kallsyms])
kjournald2+0xa82000ca ([kernel.kallsyms])
kthread+0xa82000d8 ([kernel.kallsyms])
ret_from_fork+0xa820001f ([kernel.kallsyms])
0.023 probe:submit_bio:(ffffffffac3aee00) bi_sector=50662552)
submit_bio+0xa8200001 ([kernel.kallsyms])
submit_bh+0xa8200013 ([kernel.kallsyms])
jbd2_journal_commit_transaction+0xa8200691 ([kernel.kallsyms])
kjournald2+0xa82000ca ([kernel.kallsyms])
kthread+0xa82000d8 ([kernel.kallsyms])
ret_from_fork+0xa820001f ([kernel.kallsyms])
0.027 probe:submit_bio:(ffffffffac3aee00) bi_sector=50662560)
submit_bio+0xa8200001 ([kernel.kallsyms])
submit_bh+0xa8200013 ([kernel.kallsyms])
jbd2_journal_commit_transaction+0xa8200691 ([kernel.kallsyms])
kjournald2+0xa82000ca ([kernel.kallsyms])
kthread+0xa82000d8 ([kernel.kallsyms])
ret_from_fork+0xa820001f ([kernel.kallsyms])
2.593 probe:submit_bio:(ffffffffac3aee00) bi_sector=50662568)
submit_bio+0xa8200001 ([kernel.kallsyms])
submit_bh+0xa8200013 ([kernel.kallsyms])
journal_submit_commit_record+0xa82001ac ([kernel.kallsyms])
jbd2_journal_commit_transaction+0xa82012e8 ([kernel.kallsyms])
kjournald2+0xa82000ca ([kernel.kallsyms])
kthread+0xa82000d8 ([kernel.kallsyms])
ret_from_fork+0xa820001f ([kernel.kallsyms])
^C#
Signed-off-by: Naohiro Aota <naohiro.aota@hgst.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Hemant Kumar <hemant@linux.vnet.ibm.com>
Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1470710408-23515-1-git-send-email-naohiro.aota@hgst.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The symbols used in the tick_stop tracepoint were not being converted
properly into integers in the tracepoint's format file. Instead we had this:
print fmt: "success=%d dependency=%s", REC->success,
__print_symbolic(REC->dependency, { 0, "NONE" },
{ (1 << TICK_DEP_BIT_POSIX_TIMER), "POSIX_TIMER" },
{ (1 << TICK_DEP_BIT_PERF_EVENTS), "PERF_EVENTS" },
{ (1 << TICK_DEP_BIT_SCHED), "SCHED" },
{ (1 << TICK_DEP_BIT_CLOCK_UNSTABLE), "CLOCK_UNSTABLE" })
User space tools have no idea how to parse "TICK_DEP_BIT_SCHED" or the other
symbols used to do the bit shifting. The reason is that the conversion was
done using the TICK_DEP_MASK_* symbols, which are just macros that expand
to the bit shift itself (with the exception of NONE, which was converted
properly because it doesn't use bits, and is defined as zero).
The TICK_DEP_BIT_* symbols need to be declared with TRACE_DEFINE_ENUM() in
order to be properly converted for user space tools to parse this event.
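Roughly, the fix amounts to declaring the enum values next to the tracepoint definition (sketch, using the existing TICK_DEP_BIT_* names):

/* Sketch: export the enum values so user space can resolve the names
 * used by __print_symbolic() into numbers in the format file. */
TRACE_DEFINE_ENUM(TICK_DEP_BIT_POSIX_TIMER);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_PERF_EVENTS);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_SCHED);
TRACE_DEFINE_ENUM(TICK_DEP_BIT_CLOCK_UNSTABLE);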
Cc: stable@vger.kernel.org
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Fixes: e6e6cc22e0 ("nohz: Use enum code for tick stop failure tracing message")
Reported-by: Luiz Capitulino <lcapitulino@redhat.com>
Tested-by: Luiz Capitulino <lcapitulino@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
When we don't have a tracee (i.e. we're attaching to a task or CPU),
counters can still be running after our workload finishes, and can still
be running as we read their values. As we read events one-by-one, there
can be arbitrary skew between values of events, even within a group.
This means that ratios within an event group are not reliable.
This skew can be seen if measuring a group of identical events, e.g:
# perf stat -a -C0 -e '{cycles,cycles}' sleep 1
To avoid this, we must stop groups from counting before we read the
values of any constituent events. This patch adds and makes use of a new
disable_counters() helper, which disables group leaders (and thus each
group as a whole). This mirrors the use of enable_counters() for
starting event groups in the absence of a tracee.
Closing a group leader splits the group, and without a disabled group
leader the newly split events will begin counting. Thus to ensure counts
are reliable we must defer closing group leaders until all counts have
been read. To do so this patch removes the event closing logic from the
read_counters() helper, explicitly closes the events using
perf_evlist__close(), which also aids legibility.
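A sketch of what such a helper can look like in builtin-stat.c, mirroring enable_counters() (the exact guards in the real patch may differ):

static void disable_counters(void)
{
    /*
     * Without a tracee, counters can keep running after the workload.
     * Disable the group leaders (and thus each group as a whole) before
     * reading any member, so ratios within a group stay consistent.
     */
    if (!target__none(&target))
        perf_evlist__disable(evsel_list);
}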
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1470747869-3567-1-git-send-email-mark.rutland@arm.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
If the module is "module" then dso->short_name is "[module]". Substring
comparison isn't enough: "raid10" matches "[raid1]". This patch also
checks the terminating zero in the module name.
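The comparison being described, sketched as a standalone helper (illustrative only, not the exact perf function):

#include <stdbool.h>
#include <string.h>

/* Illustrative helper: match a module name against a dso short name of
 * the form "[module]"; "raid1" must not match "[raid10]". */
static bool short_name_matches_module(const char *short_name, const char *module)
{
    size_t len = strlen(module);

    return short_name[0] == '[' &&
           strncmp(short_name + 1, module, len) == 0 &&
           short_name[len + 1] == ']';    /* check the terminating character */
}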
Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Acked-by: Masami Hiramatsu <mhiramat@kernel.org>
Link: http://lkml.kernel.org/r/147039975648.715620.12985971832789032159.stgit@buzz
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Adjust map->reloc offset for the unmapped address when finding
alternative symbol address from map, because KASLR can relocate the
kernel symbol address.
The same adjustment has been done when finding appropriate kernel symbol
address from map which was introduced by commit f90acac757 ("perf
probe: Find given address from offline dwarf")
Reported-by: Arnaldo Carvalho de Melo <acme@kernel.org>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu@linaro.org>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/20160806192948.e366f3fbc4b194de600f8326@kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
When we use libtraceevent to format trace event fields into printable
strings for use in hist entries, it is important to trim the buffer from
the default 4 KiB it starts with down to what is really used, to reduce
the memory footprint. So use realloc(seq.buffer, seq.len + 1) when
returning the seq.buffer formatted with the field contents.
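The pattern, as a hedged sketch (field names follow the trace_seq usage described above):

/* Sketch: shrink the 4 KiB trace_seq buffer down to the bytes actually
 * written, keeping the old buffer if realloc() fails. */
char *trimmed = realloc(seq.buffer, seq.len + 1);

if (trimmed)
    seq.buffer = trimmed;
return seq.buffer;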
Reported-and-Tested-by: Wang Nan <wangnan0@huawei.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: David Ahern <dsahern@gmail.com>
Cc: Jiri Olsa <jolsa@kernel.org>
Cc: Namhyung Kim <namhyung@kernel.org>
Link: http://lkml.kernel.org/n/tip-t3hl7uxmilrkigzmc90rlhk2@git.kernel.org
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
This adds the 'bpf-output' field to the perf script usage message, and docs.
Signed-off-by: Brendan Gregg <bgregg@netflix.com>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Wang Nan <wangnan0@huawei.com>
Link: http://lkml.kernel.org/r/1470192469-11910-4-git-send-email-bgregg@netflix.com
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The recent commit 599d0c954f ("mm, vmscan: move LRU lists to node"),
changed memory management code so that show_mem() is no longer safe to
call prior to setup_per_cpu_pageset(), as pgdat->per_cpu_nodestats will
still be NULL. This causes an oops on metag due to the call to
show_mem() from mem_init():
node_page_state_snapshot(...) + 0x48
pgdat_reclaimable(struct pglist_data * pgdat = 0x402517a0)
show_free_areas(unsigned int filter = 0) + 0x2cc
show_mem(unsigned int filter = 0) + 0x18
mem_init()
mm_init()
start_kernel() + 0x204
This wasn't a problem before with zone_reclaimable() as zone_pcp_init()
was already setting zone->pageset to &boot_pageset, via setup_arch() and
paging_init(), which happens before mm_init():
zone_pcp_init(...)
free_area_init_core(...) + 0x138
free_area_init_node(int nid = 0, ...) + 0x1a0
free_area_init_nodes(...) + 0x440
paging_init(unsigned long mem_end = 0x4fe00000) + 0x378
setup_arch(char ** cmdline_p = 0x4024e038) + 0x2b8
start_kernel() + 0x54
No other arches appear to call show_mem() during boot, and it doesn't
really add much value to the log, so let's just drop it from mem_init().
Signed-off-by: James Hogan <james.hogan@imgtec.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
Cc: linux-metag@vger.kernel.org
There have only ever been two host implementations of the old
s390-virtio (pre-ccw) transport: the experimental kuli userspace,
and qemu. As qemu switched its default to ccw with 2.4 (with most
users having used ccw well before that) and removed the old transport
entirely in 2.6, s390-virtio probably hasn't been in active use for
quite some time and is therefore likely to bitrot.
Let's start the slow march towards removing the code by deprecating
it.
Note that this also deprecates the early virtio console code, which
has been causing trouble in the guest without being wired up in any
relevant hypervisor code.
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Reviewed-by: Dong Jia Shi <bjsdjshi@linux.vnet.ibm.com>
Reviewed-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
In case the registration of the hvc tty never happens AND the kernel
thinks that hvc0 is the preferred console, we should keep the early
printk function to avoid a kernel panic due to code being removed.
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Jing Liu <liujbjl@linux.vnet.ibm.com>
Signed-off-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
We do a lot of memory allocation in init_vq(), and don't handle
allocation failure properly: the function can return 0 even though
initialization failed for lack of memory. When that happens, the kernel
will panic in the guest machine if virtio is used to drive the disk.
To fix this bug, take care of allocation failures and return the correct
value so the caller knows what happened.
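A sketch of the intended error handling in virtio_blk's init_vq() (structure approximated; the real function allocates several arrays and frees them at an out: label):

/* Sketch: propagate -ENOMEM instead of silently returning 0. */
vblk->vqs = kmalloc_array(num_vqs, sizeof(*vblk->vqs), GFP_KERNEL);
names = kmalloc_array(num_vqs, sizeof(*names), GFP_KERNEL);
callbacks = kmalloc_array(num_vqs, sizeof(*callbacks), GFP_KERNEL);
vqs = kmalloc_array(num_vqs, sizeof(*vqs), GFP_KERNEL);
if (!vblk->vqs || !names || !callbacks || !vqs) {
    err = -ENOMEM;    /* tell the caller allocation failed */
    goto out;         /* out: frees the temporary arrays, returns err */
}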
Tested-by: Chao Fan <fanc.fnst@cn.fujitsu.com>
Signed-off-by: Minfei Huang <mnghuan@gmail.com>
Signed-off-by: Minfei Huang <minfei.hmf@alibaba-inc.com>
Reviewed-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Stash the packet length in a local variable before handing over
ownership of the packet to virtio_transport_recv_pkt() or
virtio_transport_free_pkt().
This patch fixes the use-after-free, since pkt is no longer guaranteed
to be alive after ownership has been handed over.
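The pattern, roughly (a hedged sketch of the receive path; variable names are approximate, and note that sizeof(pkt->hdr) does not dereference pkt):

/* Sketch: remember the length before ownership of pkt is handed over,
 * since the callee may free it. */
len = pkt->len;
virtio_transport_recv_pkt(pkt);    /* pkt may be freed in here */

vhost_add_used(vq, head, sizeof(pkt->hdr) + len);    /* use the saved len */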
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
On error, virtqueue_add calls START_USE but not
END_USE. Thankfully that's normally empty anyway,
but might not be when debugging. Fix it up.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
When using the indirect buffers feature, 'desc' is allocated in
virtqueue_add() but isn't freed before leaving on a ring full error,
causing a memory leak.
For example, it seems rather clear that this can trigger
with virtio net if mergeable buffers are not used.
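Where the leak sits, sketched against virtqueue_add()'s ring-full exit (context trimmed; names follow the existing function):

/* Sketch: the "ring full" exit must free the indirect descriptor table
 * allocated earlier in virtqueue_add(). */
if (vq->vq.num_free < descs_used) {
    pr_debug("Can't add buf len %i - avail = %i\n",
             descs_used, vq->vq.num_free);
    if (out_sgs)
        vq->notify(&vq->vq);
    if (indirect)
        kfree(desc);    /* the missing free */
    END_USE(vq);
    return -ENOSPC;
}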
Cc: stable@vger.kernel.org
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Enable the hard limit of the CPU count by setting the boot option
nr_cpus=x on arm64, and make a minor change to the message printed when
the total number of CPUs exceeds the limit.
Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Reported-by: Shiyuan Hu <hushiyuan@huawei.com>
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
The generic allocation code may sometimes decide to assign a prefetchable
64-bit BAR to the M32 window. In fact it may also decide to allocate
a 64-bit non-prefetchable BAR to the M64 one! So using the resource
flags as a test to decide which window was used for PE allocation is
just wrong and leads to insane PE numbers.
Instead, compare the addresses to figure it out.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
[mpe: Rename the function as agreed by Ben & Gavin]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
When a machine check occurs with MSR(RI=0), it means the MC interrupt is
unrecoverable and the kernel goes down the panic path. But the console
message still shows it as recovered. This patch fixes the MCE console
messages.
Fixes: 36df96f8ac ("powerpc/book3s: Decode and save machine check event.")
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The tick_nohz_stop_sched_tick() routine is not properly
canceling the sched timer when nothing is pending, because
get_next_timer_interrupt() is no longer returning KTIME_MAX in
that case. This causes periodic interrupts when none are needed.
When determining the next interrupt time, we first use
__next_timer_interrupt() to get the first expiring timer in the
timer wheel. If no timer is found, we return the base clock value
plus NEXT_TIMER_MAX_DELTA to indicate there is no timer in the
timer wheel.
Back in get_next_timer_interrupt(), we set the "expires" value
by converting the timer wheel expiry (in ticks) to a nsec value.
But we don't want to do this if the timer wheel expiry value
indicates no timer; we want to return KTIME_MAX.
Prior to commit 500462a9de ("timers: Switch to a non-cascading
wheel") we checked base->active_timers to see if any timers
were active, and if not, we didn't touch the expiry value and so
properly returned KTIME_MAX. Now we don't have active_timers.
To fix this, we now just check the timer wheel expiry value to
see if it is "now + NEXT_TIMER_MAX_DELTA", and if it is, we don't
try to compute a new value based on it, but instead simply let the
KTIME_MAX value in expires remain.
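Roughly the check being described, as a sketch of get_next_timer_interrupt() (simplified; the real code also manages base->is_idle):

/* Sketch: convert the timer-wheel expiry to nsec only when a real timer
 * was found; otherwise leave expires at KTIME_MAX. */
u64 expires = KTIME_MAX;
unsigned long nextevt = __next_timer_interrupt(base);

if (time_before_eq(nextevt, basej))
    expires = basem;                /* already due, fire now */
else if (nextevt != basej + NEXT_TIMER_MAX_DELTA)
    expires = basem + (nextevt - basej) * TICK_NSEC;

return cmp_next_hrtimer_event(basem, expires);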
Fixes: 500462a9de ("timers: Switch to a non-cascading wheel")
Signed-off-by: Chris Metcalf <cmetcalf@mellanox.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: John Stultz <john.stultz@linaro.org>
Link: http://lkml.kernel.org/r/1470688147-22287-1-git-send-email-cmetcalf@mellanox.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Bharat Kumar Gogada reported issues with the generic MSI code, where the
end-point ended up with garbage in its MSI configuration (both for the vector
and the message).
It turns out that the two MSI paths in the kernel are doing slightly different
things:
generic MSI: disable MSI -> allocate MSI -> enable MSI -> setup EP
PCI MSI: disable MSI -> allocate MSI -> setup EP -> enable MSI
And it turns out that end-points are allowed to latch the content of the MSI
configuration registers as soon as MSIs are enabled. In Bharat's case, the
end-point ends up using whatever was there already, which is not what you
want.
In order to make things converge, we introduce a new MSI domain flag
(MSI_FLAG_ACTIVATE_EARLY) that is unconditionally set for PCI/MSI. When set,
this flag forces the programming of the end-point as soon as the MSIs are
allocated.
A consequence of this is that we have an extra activate in irq_startup, but
that should be without much consequence.
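The gist of the change, sketched (flag and helper names as described above; the activation site sits in msi_domain_alloc_irqs()):

/* Sketch, part 1: the PCI/MSI core unconditionally requests early
 * activation when creating its irq domain. */
info->flags |= MSI_FLAG_ACTIVATE_EARLY;

/* Sketch, part 2: the generic MSI allocation path then programs the
 * end-point right after allocation, before MSI is enabled, so stale
 * register contents can't be latched. */
if (info->flags & MSI_FLAG_ACTIVATE_EARLY)
    irq_domain_activate_irq(irq_domain_get_irq_data(domain, desc->irq));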
tglx:
- Several people reported a VMWare regression with PCI/MSI-X passthrough. It
turns out that the patch also cures that issue.
- We need to have a look at the MSI disable interrupt path, where we write
the msg to all zeros without disabling MSI in the PCI device. Is that
correct?
Fixes: 52f518a3a7 "x86/MSI: Use hierarchical irqdomains to manage MSI interrupts"
Reported-and-tested-by: Bharat Kumar Gogada <bharat.kumar.gogada@xilinx.com>
Reported-and-tested-by: Foster Snowhill <forst@forstwoof.ru>
Reported-by: Matthias Prager <linux@matthiasprager.de>
Reported-by: Jason Taylor <jason.taylor@simplivity.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Acked-by: Bjorn Helgaas <bhelgaas@google.com>
Cc: linux-pci@vger.kernel.org
Cc: stable@vger.kernel.org
Link: http://lkml.kernel.org/r/1468426713-31431-1-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
The recent commit 63a72284b1 ("powerpc/pci: Assign fixed PHB number
based on device-tree properties"), added code to read a 64-bit property
from the device tree, and if not found read a 32-bit property (reg).
There was a bug in the 32-bit case, on big endian machines, due to the
use of the 64-bit value to read the 32-bit property. The cast of &prop
means we end up writing to the high 32 bits of prop, leaving the low
32 bits containing whatever junk was on the stack.
If that junk value was non-zero, and < MAX_PHBS, we would end up using
it as the PHB id. This results in users seeing what appear to be random
PHB ids.
Fix it by reading the property into a u32 and then assigning that to the
u64 value, letting the CPU do the correct conversions for us.
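The fix, roughly (a sketch of the property reads; the device-tree node pointer and property names follow the original commit):

/* Sketch: read the 32-bit "reg" property into a u32 and widen it,
 * rather than casting &prop (a u64) and writing only half of it. */
u64 prop;
u32 prop_32;
int ret;

ret = of_property_read_u64(np, "ibm,opal-phbid", &prop);
if (ret) {
    ret = of_property_read_u32_index(np, "reg", 1, &prop_32);
    prop = prop_32;    /* the CPU does the correct conversion */
}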
Fixes: 63a72284b1 ("powerpc/pci: Assign fixed PHB number based on device-tree properties")
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This is a very minor/trivial fix for the output of the PCI address in EEH
logs. The PCI address in the "OF node" field currently uses ":" as the
separator for the function, but the usual separator is ".". This patch
changes the separator to a dot, so the PCI address is printed as usual.
Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com>
Reviewed-by: Gavin Shan <gwshan@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Make native_irq_wait() static and use NULL rather than 0 to initialise
phb->cfg_data in cxl_pci_vphb_add() to remove sparse warnings.
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit f67a6722d6 ("cxl: Workaround PE=0 hardware limitation in
Mellanox CX4") added a "min_pe" field to struct cxl_service_layer_ops,
to allow us to work around a Mellanox CX-4 hardware limitation.
When allocating the PE number in cxl_context_init(), we read from
ctx->afu->adapter->native->sl_ops->min_pe to get the minimum PE number.
Unsurprisingly, in a PowerVM guest ctx->afu->adapter->native is NULL,
and guests don't have a cxl_service_layer_ops struct anywhere.
Move min_pe from struct cxl_service_layer_ops to struct cxl so it's
accessible in both native and PowerVM environments. For the Mellanox
CX-4, set the min_pe value in set_sl_ops().
Fixes: f67a6722d6 ("cxl: Workaround PE=0 hardware limitation in Mellanox CX4")
Reported-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com>
Acked-by: Ian Munsie <imunsie@au1.ibm.com>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch fixes a regression introduced by commit b810253bd9 ("cxl:
Add mechanism for delivering AFU driver specific events").
It changes the type u8 to __u8 in the uapi header cxl.h, because the
former is a kernel internal type, and may not be defined in userland
build environments, in particular when cross-compiling libcxl on x86_64
linux machines (RHEL6.7 and Ubuntu 16.04).
This patch also changes the size of the field data_size, and makes it
constant, to support 32-bit userland applications running on big-endian
ppc64 kernels transparently.
mpe: This is an ABI change, however the ABI was only added during the
4.8 merge window so has never been part of a released kernel - therefore
we give ourselves permission to change it.
Fixes: b810253bd9 ("cxl: Add mechanism for delivering AFU driver specific events")
Signed-off-by: Philippe Bergheaud <felix@linux.vnet.ibm.com>
Reviewed-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Some powerpc builds fail with the following build error.
In file included from ./arch/powerpc/include/asm/mmu_context.h:11:0,
from arch/powerpc/kernel/vdso.c:28:
arch/powerpc/include/asm/cputhreads.h: In function 'get_tensr':
arch/powerpc/include/asm/cputhreads.h:101:2: error:
implicit declaration of function 'cpu_has_feature'
Fixes: b92a226e52 ("powerpc: Move cpu_has_feature() to a separate file")
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch fixes the following warning:
arch/powerpc/platforms/pseries/hotplug-memory.c:323:29: error: 'lmb_to_memblock' defined but not used [-Werror=unused-function]
static struct memory_block *lmb_to_memblock(struct of_drconf_cell *lmb)
^~~~~~~~~~~~~~~
The only consumer of this function is 'dlpar_remove_lmb', which is
enabled with CONFIG_MEMORY_HOTREMOVE, so move it into the same
ifdef block.
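In other words (sketch):

#ifdef CONFIG_MEMORY_HOTREMOVE
static struct memory_block *lmb_to_memblock(struct of_drconf_cell *lmb)
{
    /* unchanged body */
}

/* dlpar_remove_lmb() and the rest of the hot-remove code live here */
#endif /* CONFIG_MEMORY_HOTREMOVE */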
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The current implementation of MCE early handling modifies the CR0/1 registers
without saving their old values. Fix this by moving the early check for
power-saving mode to machine_check_handle_early().
The power architecture 2.06 or later allows the possibility of getting
machine check while in nap/sleep/winkle. The last bit of HSPRG0 is set
to 1, if thread is woken up from winkle. Hence, clear the last bit of
HSPRG0 (r13) before MCE handler starts using it as paca pointer.
Also, the current code always puts the thread into nap state, irrespective
of which idle state it woke up from. Fix that by looking at
paca->thread_idle_state and putting the thread back into the same state it
came from.
Fixes: 1c51089f77 ("powerpc/book3s: Return from interrupt if coming from evil context.")
Reported-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Reviewed-by: Shreyas B. Prabhu <shreyas@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The ELP HD USB Camera (05a3:9420) needs this quirk for suppressing
the unsupported sample rate inquiry.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=98481
Cc: <stable@vger.kernel.org>
Signed-off-by: Vittorio Gambaletta <linuxbugs@vittgam.net>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
VF0610 does not support reading the sample rate, which leads to many
lines of "cannot get freq at ep 0x82". This patch adds the USB ID
(0x041E:4080) to the snd_usb_get_sample_rate_quirk() list.
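Both this quirk and the ELP camera one above boil down to adding the device IDs to the switch in snd_usb_get_sample_rate_quirk() (sketch; surrounding cases omitted):

/* Sketch: devices whose sample rate must not be queried. */
bool snd_usb_get_sample_rate_quirk(struct snd_usb_audio *chip)
{
    switch (chip->usb_id) {
    case USB_ID(0x041e, 0x4080):    /* Creative Live Cam VF0610 */
    case USB_ID(0x05a3, 0x9420):    /* ELP HD USB Camera */
        return true;
    }
    return false;
}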
Signed-off-by: Piotr Karasinski <peter.karasinski@gmail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Move IDLE_STATE_ENTER_SEQ macro to cpuidle.h so that MCE handler changes
in subsequent patch can use it.
No functionality change.
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
The function pnv_restore_hyp_resource() loads the TOC into r2 from
the invalid PACA pointer before fixing the r13 value. This does not affect
POWER ISA 3.0, but it does have an impact on POWER ISA 2.07 or less,
leading the CPU to get stuck forever.
login: [ 471.830433] Processor 120 is stuck.
This can be easily reproducible using following steps:
- Turn off SMT
$ ppc64_cpu --smt=off
- offline/online any online cpu (Thread 0 of any core which is online)
$ echo 0 > /sys/devices/system/cpu/cpu<num>/online
$ echo 1 > /sys/devices/system/cpu/cpu<num>/online
For POWER ISA 2.07 or less, the last bit of HSPRG0 is set, indicating
that the thread is waking up from winkle. Hence, the last bit of
HSPRG0 (r13) needs to be cleared before accessing it as the PACA, to avoid
loading invalid values from an invalid PACA pointer.
Fix this by loading the TOC after the r13 register is corrected.
Fixes: bcef83a00d ("powerpc/powernv: Add platform support for stop instruction")
Signed-off-by: Mahesh Salgaonkar <mahesh@linux.vnet.ibm.com>
Acked-by: Vaidyanathan Srinivasan <svaidy@linux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
Commit fd141d1a99 ("powerpc/powernv/pci: Rework accessing the TCE
invalidate register") broke TCE invalidation on IODA2/PHB3 for real
mode.
This makes invalidation work again.
Fixes: fd141d1a99 ("powerpc/powernv/pci: Rework accessing the TCE invalidate register")
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
We should return -ENOMEM if alloc_spu_gang() fails.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This sets the type of the interrupt appropriately. We set it as follow:
- If not mapped from the device-tree, we use edge. This is the case
of the virtual interrupts and PCI MSIs for example.
- If mapped from the device-tree and #interrupt-cells is 2 (PAPR
compliant), we use the second cell to set the appropriate type
- If mapped from the device-tree and #interrupt-cells is 1 (current
OPAL on P8 does that), we assume level sensitive since those are
typically going to be the PSI LSIs which are level sensitive.
Additionally, we mark all the interrupts requested via the opal_interrupts
property as level sensitive. This is a bit fishy, but it's the best we can do until we
fix OPAL to properly expose them with a complete descriptor. It is also
correct for the current HW anyway as OPAL interrupts are currently PCI
error and PSI interrupts which are level.
Finally now that edge interrupts are properly identified, we can enable
CONFIG_HARDIRQS_SW_RESEND which will make the core re-send them if
they occur while masked, which some drivers rely upon.
This fixes issues with lost interrupts on some Mellanox adapters.
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
This patch utilises the GENERIC_CPU_AUTOPROBE infrastructure
to automatically load the crc32c-vpmsum module if the CPU supports
it.
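Concretely, this usually amounts to a one-line registration in the module (sketch; the feature macro name is assumed to be the powerpc vector-crypto one):

#include <linux/cpufeature.h>

/* Sketch: with GENERIC_CPU_AUTOPROBE, this registers the init function
 * and emits a cpu MODULE_DEVICE_TABLE alias, so udev autoloads
 * crc32c-vpmsum on CPUs that advertise the feature. */
module_cpu_feature_match(PPC_MODULE_FEATURE_VEC_CRYPTO, crc32c_vpmsum_mod_init);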
Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
cirrus_modeset_init() is initializing/registering the emulated fbdev
and, since commit c61b93fe51 ("drm/atomic: Fix remaining places where
!funcs->best_encoder is valid"), DRM internals can access/test some of
the fields in mode_config->funcs as part of the fbdev registration
process.
Make sure dev->mode_config.funcs is properly set to avoid dereferencing
a NULL pointer.
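The shape of the fix (sketch; the funcs struct follows the driver's existing cirrus_mode_funcs):

/* Sketch: assign mode_config.funcs before the emulated fbdev is set up,
 * so DRM helpers touched during registration never see a NULL funcs. */
drm_mode_config_init(cdev->dev);
cdev->dev->mode_config.funcs = (void *)&cirrus_mode_funcs;

/* ... min/max width/height and preferred depth setup ... */

ret = cirrus_fbdev_init(cdev);    /* registration may now call into funcs */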
Reported-by: Mike Marshall <hubcap@omnibond.com>
Reported-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Boris Brezillon <boris.brezillon@free-electrons.com>
Fixes: c61b93fe51 ("drm/atomic: Fix remaining places where !funcs->best_encoder is valid")
Signed-off-by: Dave Airlie <airlied@redhat.com>