In commit "drm/i915: Wait for PSR exit before checking for vblank
evasion", the idea was to limit the PSR IDLE checks when PSR is
actually supported. While CAN_PSR does do that check, it doesn't
applies on a per-crtc basis. crtc_state->has_psr is a more granular
check that only applies to pipe(s) that have PSR enabled.
Without the has_psr check, we end up waiting on the eDP transcoder's
PSR_STATUS register irrespective of whether the pipe being updated is
driving it or not.
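As a rough sketch (following the v3/v4 notes below; treat this as an
assumption rather than the verbatim diff), the per-crtc gate looks like:

  /* Sketch only: bail out early for pipes that do not drive PSR. */
  void intel_psr_wait_for_idle(const struct intel_crtc_state *new_crtc_state)
  {
          struct drm_i915_private *dev_priv =
                  to_i915(new_crtc_state->base.crtc->dev);

          if (!new_crtc_state->has_psr)
                  return;

          /* ... poll EDP_PSR_STATUS until the PSR state machine is IDLE ... */
  }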
v2: Remove unnecessary parentheses, make checkpatch happy.
v3: Move the has_psr check to intel_psr_wait_for_idle and commit
message changes (DK).
v4: Derive dev_priv from intel_crtc_state (DK)
v5: Commit message changes to reflect the HW behavior (DK)
Fixes: a608987970 ("drm/i915: Wait for PSR exit before checking for vblank evasion")
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Tarun Vyas <tarun.vyas@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712053323.26266-1-tarun.vyas@intel.com
Support for burst read in HW is added to meet an HDCP2.2 compliance
requirement.
This patch enables burst read for all GMBUS reads of more than
511 bytes on capable platforms.
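A rough sketch of the decision point (the names here are assumptions
based on the changelog below, not the exact macros):

  /* Hypothetical names; only the shape of the check is illustrated. */
  bool burst_read = len > 511 && HAS_GMBUS_BURST_READ(dev_priv);

  if (burst_read)
          gmbus0_reg |= GMBUS_BYTE_CNT_OVERRIDE;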
v2:
Extra line is removed.
v3:
Macro added for detecting BURST_READ support [Jani]
Runtime detection of the need for burst read [Jani]
Calculation enhancement.
v4:
GMBUS0 reg val is passed from the caller [Ville]
Removed an extra var [Ville]
Extra brackets are removed [Ville]
Implemented the handling of 512-byte burst reads.
v5:
Burst read max length is fixed at 767 bytes [Ville]
v6:
Collected the received Reviewed-by.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1530192889-5789-3-git-send-email-ramalingam.c@intel.com
GMBUS HW supports a maximum of 511 bytes per single RD/WR op. Instead of
enabling 511 bytes per RD/WR cycle on legacy platforms, where there is
no clear benefit, this change raises the max bytes per op to 511 bytes
from Gen9 onwards.
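The inline noted in v3 below presumably reduces to something like this
(macro names are assumptions):

  static inline
  unsigned int gmbus_max_xfer_size(struct drm_i915_private *dev_priv)
  {
          return INTEL_GEN(dev_priv) >= 9 ? GEN9_GMBUS_BYTE_COUNT_MAX :
                                            GMBUS_BYTE_COUNT_MAX;
  }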
v2:
No Change.
v3:
Inline function for max_xfer_size and renaming of the macro. [Jani]
v4:
Extra brackets removed [Ville]
Commit message is modified.
v5:
Collected the received Reviewed-by.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1530192889-5789-2-git-send-email-ramalingam.c@intel.com
Since:
0d4b78b3d2 ("drm/i915/guc: Assert we have the doorbell before setting it up")
we have asserts in GuC doorbell-related functions, which is a good thing.
Unfortunately, we were using those to check whether GuC FW refuses
to allocate an invalid doorbell - which makes the test fail.
Well, it would make the test WARN, except we fumbled the cleanup ordering
and hit the BUG_ON instead.
Let's keep the asserts and use the internal implementation in the test.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107186
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michel Thierry <michel.thierry@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712112013.3253-1-chris@chris-wilson.co.uk
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Let's reorder things so that we can do onion teardown rather than a
double goto.
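For reference, the general shape of onion teardown (a generic sketch with
made-up setup_*/teardown_* steps, not the actual i915_gem_init code):

  int init(void)
  {
          int err;

          err = setup_a();
          if (err)
                  return err;

          err = setup_b();
          if (err)
                  goto err_a;

          err = setup_c();
          if (err)
                  goto err_b;

          return 0;

  err_b:
          teardown_b();
  err_a:
          teardown_a();
          return err;
  }

Each error label unwinds exactly the steps that succeeded before it, so
nothing is torn down twice and no double goto is needed.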
References: b96f6ebfd0 ("drm/i915: Correctly handle error path in i915_gem_init_hw")
Signed-off-by: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712124810.25241-1-michal.winiarski@intel.com
If we fail the module load, we may try to clean up before we even
allocate the GuC clients. KISS in order to try to re-enable
drv_module_reload for BAT.
Testcase: igt/drv_module_reload/basic-reload-inject
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michał Winiarski <michal.winiarski@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180712105830.20390-1-chris@chris-wilson.co.uk
The kernel recently gained an augmented rbtree with the purpose of
caching the leftmost element of the rbtree, a frequent optimisation to
avoid calls to rb_first() which is also employed by the
execlists->queue. Switch from our open-coded cache to the library.
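The library API in question, roughly as used (a generic <linux/rbtree.h>
sketch, not the execlists code itself):

  struct rb_root_cached queue = RB_ROOT_CACHED;

  /* insertion tells the cache whether the new node becomes the leftmost */
  rb_link_node(&node->rb, parent, link);
  rb_insert_color_cached(&node->rb, &queue, leftmost);

  /* O(1) lookup instead of walking the left spine with rb_first() */
  struct rb_node *first = rb_first_cached(&queue);

  rb_erase_cached(&node->rb, &queue);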
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180629075348.27358-9-chris@chris-wilson.co.uk
Since live_workarounds pokes around the w/a registers and checks to see
if they survive across a reset, we are prone to fouling the machine and
leaving it in a non-recoverable state. Wrap the probe inside a timeout
to abort the test if the reset fails.
v2: Include GEM_TRACE on declaring wedged.
v3: Add a few includes to make the header look standalone.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107188
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180711122952.18448-1-chris@chris-wilson.co.uk
Add a mutex into struct i915_address_space to be used while operating on
the vma and their lists for a particular vm. As this may be called from
the shrinker, we taint the mutex with fs_reclaim so that from the start
lockdep warns us if we are caught holding the mutex across an
allocation. (With such small steps we will eventually rid ourselves of
struct_mutex recursion!)
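The taint itself is just a priming sequence at init time, along these
lines (sketch; the i915 wrapper around it is not shown):

  mutex_init(&vm->mutex);

  /*
   * Teach lockdep up front that this mutex may be taken from the
   * shrinker (i.e. under fs_reclaim), so any later allocation made
   * while holding it is flagged immediately.
   */
  fs_reclaim_acquire(GFP_KERNEL);
  mutex_lock(&vm->mutex);
  mutex_unlock(&vm->mutex);
  fs_reclaim_release(GFP_KERNEL);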
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180711073608.20286-2-chris@chris-wilson.co.uk
Now that our stolen memory is already reserved by the x86 subsystem
(since commit "x86/gpu: reserve ICL's graphics stolen memory"), make
use of it.
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: x86@kernel.org
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180504203252.28048-2-paulo.r.zanoni@intel.com
On unwinding following a critical failure inside GEM init, we also need
to be sure to flush the workers before unloading the module.
Testcase: igt/drv_module_reload/basic-reload-inject
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180710094421.16223-1-chris@chris-wilson.co.uk
In the next patch, we will make a fairly minor change to flush
outstanding resets before suspend. In order to keep churn to a minimum
in that functional patch, we fix up the comments and coding style now.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709130208.11730-7-chris@chris-wilson.co.uk
Across a reset, the seqno (and thus hangcheck) should restart and the
hangcheck naturally progress; when it does not, we want to declare an
emergency. Currently, we only detect if reset and reinit fail, but we
do not detect if the call to reinit succeeds yet the HW is fried - as we
are resetting hangcheck when initialising the engine. Remove that and
rely on the natural progress to reset the hangcheck timer.
References: e21b141376 ("drm/i915: Mark the hangcheck as idle when unparking the engines")
References: 1fd00c0fae ("drm/i915: Declare the driver wedged if hangcheck makes no progress")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709130208.11730-2-chris@chris-wilson.co.uk
In our swizzling selftests, we cannot predict the physical address of
the target page (at least not simply!) and so skip bit17 swizzles.
However, there are two bit17 swizzle modes and we only skipped one, with
the second being observed on the lab gdg, causing the test to fail
as soon as we hit a page with bit17 set in its address.
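The fix is presumably just to treat both modes as unpredictable (sketch;
the exact field access is an assumption):

  switch (tiling->swizzle) {
  case I915_BIT_6_SWIZZLE_9_17:
  case I915_BIT_6_SWIZZLE_9_10_17:
          /* cannot predict bit17 of the physical address: skip */
          return 0;
  }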
Testcase: igt/drv_selftest/live_objects #gdg
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709194915.5789-1-chris@chris-wilson.co.uk
Be pessimistic and presume that we actually allocate every page we
exercise via the mock_gtt (e.g. for gvt), in which case we have to keep
our working set under the available physical memory to prevent oom.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180710080424.7821-1-chris@chris-wilson.co.uk
Error messages are intended to be addressed to the user; be clear,
succinct, instructive and unambiguous. Adding the function name to
that message does not add any information the user requires and in
the process makes the message less clear.
E.g.
[ 245.539711] i915 0000:00:02.0: [drm:i915_gem_init [i915]] Failed to initialize GPU, declaring it wedged!
becomes
[ 245.539711] i915 0000:00:02.0: Failed to initialize GPU, declaring it wedged!
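The difference boils down to which logging helper is used, roughly (an
illustrative sketch, not the exact call sites touched here):

  /* the DRM_DEV_* helpers stamp the caller into the message via "[drm:%s]" */
  DRM_DEV_ERROR(i915->drm.dev,
                "Failed to initialize GPU, declaring it wedged!\n");

  /* dev_err() keeps the device prefix but drops the function name */
  dev_err(i915->drm.dev,
          "Failed to initialize GPU, declaring it wedged!\n");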
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709134858.12446-1-chris@chris-wilson.co.uk
This helps initramfs builders and other tools know the full dependencies
of i915 and have the gvt module loaded with i915.
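The pre-dependency mentioned in v2 below is presumably expressed with
MODULE_SOFTDEP, e.g. (sketch; the guarding config symbol is an
assumption):

  #if IS_ENABLED(CONFIG_DRM_I915_GVT_KVMGT)
  MODULE_SOFTDEP("pre: kvmgt");
  #endif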
v2: add condition and change to pre-dependency (Chris)
v3: move declaration to gvt.c. (Chris)
v4: remove xengt (Zhenyu)
Signed-off-by: Hang Yuan <hang.yuan@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In igt_flush_test() we install a background timer in order to ensure
that the wait completes within a certain time. We can now tell the wait
that it has to complete within a timeout, and so no longer need the
background timer.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709122044.7028-3-chris@chris-wilson.co.uk
With a broken GPU we expect it to fail during the initial
GPU setup, where we do a couple of context switches to record the defaults.
This is a task that takes a few milliseconds even on the slowest of
devices, but we may have to wait 60s for hangcheck to give in and
declare the machine inoperable. This is a case where any GPU hang is
unacceptable, both from a timeliness and a practical standpoint.
We can therefore set a timeout on our wait-for-idle that is shorter than
the hangcheck (which may be up to 60s before declaring the driver wedged)
and so detect the broken GPU much more quickly during driver load (and
so prevent stalling userspace for ages).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709122044.7028-2-chris@chris-wilson.co.uk
Usually we have no idea about the upper bound we need to wait to catch
up with userspace when idling the device, but in a few situations we
know the system was idle beforehand and can provide a short timeout in
order to very quickly catch a failure, long before hangcheck kicks in.
In the following patches, we will use the timeout to curtail two overly
long waits, where we know we can expect the GPU to complete within a
reasonable time or declare it broken.
In particular, with a broken GPU we expect it to fail during the initial
GPU setup, where we do a couple of context switches to record the defaults.
This is a task that takes a few milliseconds even on the slowest of
devices, but we may have to wait 60s for hangcheck to give in and
declare the machine inoperable. This is a case where any GPU hang is
unacceptable, both from a timeliness and a practical standpoint.
The other improvement is that in selftests, we do not need to arm an
independent timer to inject a wedge, as we can just limit the timeout on
the wait directly.
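With the timeout parameter in place, a caller that knows the system
should already be idle can do something along these lines (sketch; the
flag combination and error handling are assumptions):

  err = i915_gem_wait_for_idle(i915,
                               I915_WAIT_INTERRUPTIBLE | I915_WAIT_LOCKED,
                               HZ); /* ~1s, far below the 60s hangcheck */
  if (err)
          i915_gem_set_wedged(i915); /* or unwind, depending on the caller */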
v2: Include the timeout parameter in the trace.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180709122044.7028-1-chris@chris-wilson.co.uk
BXT supports eDP. However, GVT-g only simulates a DP monitor to the
guest and handles EDP_PSR_IMR and EDP_PSR_IIR as default MMIO
r/w. If the guest reads/writes these IMR/IIR registers, GVT-g won't
simulate the real HW behavior and the warning below is printed:
--------
Interrupt register 0x64838 is not zero: 0xffffffff
WARNING: CPU: 1 PID: 1 at drivers/gpu/drm/i915/i915_irq.c:161
gen3_assert_iir_is_zero+0x34/0xa0
Call Trace:
gen8_de_irq_postinstall+0xad/0x330
gen8_irq_postinstall+0x23/0x80
drm_irq_install+0xb5/0x130
i915_driver_load+0xafd/0xf70
--------
Since GVT-g won't simulate eDP to the guest, always set EDP_PSR_IMR
and EDP_PSR_IIR to 0.
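In handler form this is roughly (sketch; the handler name is made up,
only the signature mirrors the usual GVT-g MMIO handlers):

  static int edp_psr_imr_iir_write(struct intel_vgpu *vgpu,
                                   unsigned int offset, void *p_data,
                                   unsigned int bytes)
  {
          /* the guest never sees a real eDP/PSR interrupt, keep it masked */
          vgpu_vreg(vgpu, offset) = 0;
          return 0;
  }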
Signed-off-by: Colin Xu <colin.xu@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Now that GVT-g supports shadowing both 2M and 64K huge GTT pages, let's
turn on the cap info bit VGT_CAPS_HUGE_GTT.
v2: Split changes in i915 side into a separated patch.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Don't forget to free the allocated spt if shadowing fails.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
If the guest updates a 64K GTT entry before changing the IPS bit of the
PDE, we need to re-shadow the whole page table, because we have ignored
all updates to unused entries.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This adds 2M huge GTT support for GVT-g. Unlike a 64K GTT entry, we can
shadow a 2M guest entry with a real huge GTT entry. But before that, we
have to check that the memory is physically contiguous and properly
aligned, and that 2M pages are supported on the host. We can get all
supported page sizes from intel_device_info.page_sizes.
Finally, we must split the 2M page into smaller pages if we cannot
satisfy the guest huge page.
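The host-capability part of that check is roughly (sketch; the helper
name is hypothetical):

  static bool host_supports_2m_gtt(struct drm_i915_private *dev_priv)
  {
          /* the guest memory must additionally be physically contiguous
           * and 2M aligned before it can be shadowed directly */
          return INTEL_INFO(dev_priv)->page_sizes & I915_GTT_PAGE_SIZE_2M;
  }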
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
To support huge gtt, we need to support huge pages in kvmgt first.
This patch adds a 'size' param to the intel_gvt_mpt::dma_map_guest_page
API and implements it in kvmgt.
v2: rebase.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Finally, this adds the first huge GTT support for GVT-g: 64K pages. Since
64K pages and 4K pages cannot be mixed in the same page table, we always
split a 64K entry into small 4K pages. And when unshadowing a guest 64K
entry, we need to ensure all the shadowed entries in the shadow page
table also get cleared.
For a page table that holds 64K GTT entries, only PTE#0, PTE#16, PTE#32, ...
PTE#496 are used. Updates to the unused PTEs should be ignored.
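So walking a shadow page table for 64K mappings steps in strides of 16
(illustrative sketch only; the helper is hypothetical):

  /* only every 16th PTE carries a 64K mapping; the 15 in between are unused */
  for (index = 0; index < 512; index += 16)
          shadow_64k_entry(spt, index);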
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The 64K PTE is special: only PTE#0, PTE#16, PTE#32, ... PTE#496 are used
in the page table.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We need an interface to allocate a pure shadow page which doesn't have
a guest page associated with it. Such a shadow page is used to shadow a
2M huge GTT entry.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a clear_pse operation in case we need to split a huge GTT entry into small pages.
v2: correct description.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This adds a software PTE flag on the Ignored bit of the PTE. It will be
used to identify split 64K shadow entries.
v2: fix mask definition.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
This change helps us detect the real entry type per the PSE and IPS
settings. For 64K entries, we also need to check the register
GEN8_GAMW_ECO_DEV_RW_IA.
v2: Extend IPS mmio control to Gen10. (Matthew Auld)
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
The register RENDER_HWS_PGA_GEN7 is renamed to GEN8_GAMW_ECO_DEV_RW_IA
from GEN8 onwards, and it can control IPS enabling.
v3: MMIO control for IPS is not removed from gen9 but gen10 (Matthew Auld)
v2: IPS of all engines must be enabled together for gen9.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add three IPS operation functions to test/set/clear IPS in PDE.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a new entry type GTT_TYPE_PPGTT_PTE_64K_ENTRY. A 64K entry is very
different from a 2M/1G entry. A 64K entry is controlled by the IPS bit in
the upper PDE. To leverage the current logic, I take the IPS bit as 'PSE'
for the PTE level, which means 64K entries can also be processed by
get_pse_type().
v2: Make it bisectable.
Signed-off-by: Changbin Du <changbin.du@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
In the next patch, we will want a third distinct class of timeline that
may overlap with the current pair of client and engine timeline classes.
Rather than use the ad hoc markup of SINGLE_DEPTH_NESTING, initialise
the different timeline classes with an explicit subclass.
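The explicit subclass amounts to tagging each timeline's lock at init
time, along these lines (sketch; the enum names are assumptions):

  enum {
          TIMELINE_CLIENT,  /* context timelines */
          TIMELINE_ENGINE,  /* per-engine (kernel) timelines */
  };

  lockdep_set_subclass(&timeline->lock, TIMELINE_ENGINE);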
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706210710.16251-1-chris@chris-wilson.co.uk
Inside the mock GEM device, we try to grab the runtime pm for the fake
device to prevent it from ever suspending. However, if CONFIG_PM is not
set, trying to obtain the wakref returns an error which we WARN about.
Suppress the expected warning.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706205947.11209-1-chris@chris-wilson.co.uk
clflush is an unserialised instruction and the IA manual strongly advises
you to serialise it with a mb. To be cautious, apply one before and one
after, so that it is serialised with both writes and reads without
worrying too much about the required direction.
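In other words, the flush path becomes (sketch using the generic DRM
helper; the exact call sites are not shown):

  mb();                                  /* order against preceding writes */
  drm_clflush_virt_range(vaddr, length); /* the unserialised clflush loop  */
  mb();                                  /* and against subsequent reads   */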
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706174926.4712-1-chris@chris-wilson.co.uk
Using a VMA on more than one timeline concurrently is the exception
rather than the rule (using it concurrently on multiple engines). As we
expect to only use one active tracker, store the most recently used
tracker inside the i915_vma itself and only fallback to the rbtree if
we need a second or more concurrent active trackers.
v2: Comments on how we overwrite any existing last_active cache.
v3: __list_del_entry() before list_replace_init() is confusing and, much
more important, entirely redundant.
v4: Note that both last_active and the rbtree may be simultaneously
tracking this timeline, albeit with different requests, and so the vma
may be retired twice for the same timeline.
v5: No, that list_del is required!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706123157.9645-1-chris@chris-wilson.co.uk
In the next patch, we will want to be able to use more flexible request
timelines that can hop between engines. From the vma pov, we can then
not rely on the binding of this request to an engine and so can not
ensure that different requests are ordered through a per-engine
timeline, and so we must track activity of all timelines. (We track
activity on the vma itself to prevent unbinding from HW before the HW
has finished accessing it.)
v2: Switch to a rbtree for 32b safety (since using u64 as a radixtree
index is fraught with aliasing of unsigned longs).
v3: s/lookup_active/active_instance/ because we can never agree on names
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706103947.15919-5-chris@chris-wilson.co.uk
Handling such a late error in request construction is tricky, but to
accommodate future patches which may allocate here, we potentially could
err. To handle the error after already adjusting global state to track
the new request, we must finish and submit the request. But we don't
want to use the request as not everything is being tracked by it, so we
opt to cancel the commands inside the request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180706103947.15919-3-chris@chris-wilson.co.uk