Commit Graph

Lei Zhang 911b9960ba [TableGen] Fix discrepancy between parameter meaning and code logic
The parameter to emitStandaloneParamBuilder() was renamed from hasResultType to
isAllSameType, which has the opposite boolean meaning. The logic is updated
accordingly to keep the two consistent.

Also re-ordered some methods in Operator, and made a few other tiny improvements.

PiperOrigin-RevId: 234478316
2019-03-29 16:30:41 -07:00
Uday Bondhugula f97c1c5b06 Misc. updates/fixes to analysis utils used for DMA generation; update DMA
generation pass to make it drop certain assumptions and complete TODOs.

- multiple fixes for getMemoryFootprintBytes
  - pass loopDepth correctly from getMemoryFootprintBytes()
  - use union while computing memory footprints

- bug fixes for addAffineForOpDomain
  - take into account loop step
  - add domains of other loop IVs in turn that might have been used in the bounds

- dma-generate: drop the assumption of "non-unit stride loops being tile space loops
  and skipping those and recursing to inner depths"; DMA generation is now purely
  based on available fast memory capacity and the computed memory footprints

- handle memory region compute failures/bailouts correctly from dma-generate

- loop tiling cleanup/NFC

- update some debug and error messages to use emitNote/emitError in
  pipeline-data-transfer pass - NFC

PiperOrigin-RevId: 234245969
2019-03-29 16:30:26 -07:00
MLIR Team 58aa383e60 Support fusing producer loop nests which write to a memref that is live out, provided that the write region of the consumer loop nest to the same memref is a superset of the producer's write region.
PiperOrigin-RevId: 234240958
2019-03-29 16:30:11 -07:00
Alex Zinenko ecd403c0e8 EDSC: properly construct FunctionTypes
The existing implementation of makeFunctionType in EDSC contains a bug: the
array of input types is overwritten using output types passed as arguments and
the array of output types is never filled in.  This leads to all sorts of
incorrect memory behavior.  Fill in the array of output types using the proper
argument.
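
A minimal C++ sketch of the fix described above, with hypothetical names rather
than the actual EDSC code: each type array is filled from its own source,
whereas the buggy version wrote the result types over the input-type array and
never filled the output-type array.

```
#include <cstddef>
#include <utility>
#include <vector>

struct Type { int id = 0; };  // stand-in for an MLIR type

// Hypothetical stand-in for makeFunctionType's type-array construction.
std::pair<std::vector<Type>, std::vector<Type>>
buildFunctionTypeArrays(const std::vector<Type> &argTypes,
                        const std::vector<Type> &resultTypes) {
  std::vector<Type> inputs(argTypes.size());
  std::vector<Type> outputs(resultTypes.size());
  for (std::size_t i = 0; i < argTypes.size(); ++i)
    inputs[i] = argTypes[i];      // inputs come from the arguments
  for (std::size_t i = 0; i < resultTypes.size(); ++i)
    outputs[i] = resultTypes[i];  // outputs come from the results; the bug
                                  // wrote these into `inputs` instead
  return {inputs, outputs};
}
```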

PiperOrigin-RevId: 234177221
2019-03-29 16:29:56 -07:00
Alex Zinenko 4bb31f7377 ExecutionEngine: provide utils for running CLI-configured LLVM passes
A recent change introduced a possibility to run LLVM IR transformation during
JIT-compilation in the ExecutionEngine.  Provide helper functions that
construct IR transformers given either clang-style optimization levels or a
list of passes to run.  The latter wraps the LLVM command line option parser to
parse strings rather than actual command line arguments.  As a result, we can
run either of

    mlir-cpu-runner -O3 input.mlir
    mlir-cpu-runner -some-mlir-pass -llvm-opts="-llvm-pass -other-llvm-pass"

to combine different transformations.  The transformer builder functions are
provided as a separate library that depends on LLVM pass libraries unlike the
main execution engine library.  The library can be used for integrating MLIR
execution engine into external frameworks.
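
A hedged C++ sketch of one way such a string-based wrapper around the LLVM
command line parser could look; the helper name and the whitespace tokenization
are assumptions, only llvm::cl::ParseCommandLineOptions is a real LLVM API.

```
#include "llvm/Support/CommandLine.h"

#include <string>
#include <vector>

// Hypothetical helper: split a pass string such as "-llvm-pass -other-llvm-pass"
// into tokens and feed them to LLVM's option parser with a synthetic argv[0].
bool parseLLVMOptionString(const std::string &options) {
  std::vector<std::string> tokens{"llvm-opts"};  // synthetic program name
  std::string current;
  for (char c : options) {
    if (c == ' ') {
      if (!current.empty())
        tokens.push_back(current);
      current.clear();
    } else {
      current.push_back(c);
    }
  }
  if (!current.empty())
    tokens.push_back(current);

  // Keep the std::strings alive; their c_str() pointers back the argv array.
  std::vector<const char *> argv;
  for (const std::string &token : tokens)
    argv.push_back(token.c_str());

  return llvm::cl::ParseCommandLineOptions(static_cast<int>(argv.size()),
                                           argv.data(),
                                           "in-process LLVM pass options");
}
```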

PiperOrigin-RevId: 234173493
2019-03-29 16:29:41 -07:00
MLIR Team 8f5f2c765d LoopFusion: perform a series of loop interchanges to increase the loop depth at which slices of producer loop nests can be fused into consumer loop nests.
*) Adds utility to LoopUtils to perform loop interchange of two AffineForOps.
*) Adds utility to LoopUtils to sink a loop to a specified depth within a loop nest, using a series of loop interchanges.
*) Computes dependences between all loads and stores in the loop nest, and classifies each loop as parallel or sequential.
*) Computes loop interchange permutation required to sink sequential loops (and raise parallel loop nests) while preserving relative order among them; see the sketch after this list.
*) Checks each dependence against the permutation to make sure that dependences would not be violated by the loop interchange transformation.
*) Calls loop interchange in LoopFusion pass on consumer loop nests before fusing in producers, sinking loops with loop carried dependences deeper into the consumer loop nest.
*) Adds and updates related unit tests.
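
A C++ sketch of the permutation step referenced in the list above; it
illustrates the stated goal (parallel loops outward, sequential loops inward,
relative order preserved within each group), not the LoopUtils implementation.

```
#include <vector>

// Given one flag per loop in the nest (outermost first) marking loops that
// carry dependences, return a permutation that places the parallel loops
// first (outermost) and the sequential loops last (innermost), keeping the
// original relative order within each group.
std::vector<unsigned>
computeSinkPermutation(const std::vector<bool> &isSequential) {
  std::vector<unsigned> permutation;
  permutation.reserve(isSequential.size());
  for (unsigned i = 0, e = static_cast<unsigned>(isSequential.size()); i < e; ++i)
    if (!isSequential[i])
      permutation.push_back(i);  // parallel loops stay outermost
  for (unsigned i = 0, e = static_cast<unsigned>(isSequential.size()); i < e; ++i)
    if (isSequential[i])
      permutation.push_back(i);  // sequential loops are sunk inward
  return permutation;
}
```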

PiperOrigin-RevId: 234158370
2019-03-29 16:29:26 -07:00
Lei Zhang 081299333b [TableGen] Rename Operand to Value to prepare sharing between operand and result
We specify op operands and results in TableGen op definition using the same syntax.
They should be modelled similarly in TableGen driver wrapper classes.

PiperOrigin-RevId: 234153332
2019-03-29 16:29:11 -07:00
Alex Zinenko ffc9043604 LLVM dialect conversion and target: support indirect calls
Add support for converting MLIR `call_indirect` instructions to the LLVM IR
dialect.  In LLVM IR, the same instruction is used for direct and indirect
calls.  In the dialect, we have `llvm.call` and `llvm.call0` to work around the
absence of the void type in MLIR.  For direct calls, the callee is stored as an
instruction attribute.  Use the same pair of instructions for indirect calls by
omitting the callee attribute.  In the MLIR to LLVM IR translator, check for the
presence of the attribute to decide whether to construct a direct or an indirect
call using different LLVM IR Builder functions.

Add support for converting constants of function type to the LLVM IR dialect
and for translating them to the LLVM IR proper.  The `llvm.constant` operation
works similarly to other types: its attribute has MLIR function type but the
value it produces has LLVM IR function type wrapped in the dialect type.  While
lowering, look up the pointer to the converted function in the corresponding
mapping.

PiperOrigin-RevId: 234132351
2019-03-29 16:28:56 -07:00
Alex Zinenko d7aa700ccb Dialect conversion: decouple function signature conversion from type conversion
Function types are built-in in MLIR and affect the validity of the IR itself.
However, advanced target dialects such as the LLVM IR dialect may include
custom function types.  Until now, dialect conversion expected function types
not to be converted to the custom type: although the signature was allowed to
change, the outer type had to remain an mlir::FunctionType.  This
effectively prevented dialect conversion from creating instructions that
operate on values of the custom function type.

Dissociate function signature conversion from general type conversion.
Function signature conversion must still produce an mlir::FunctionType and is
used in places where built-in types are required to make IR valid.  General
type conversion is used for SSA values, including function and block arguments
and function results.

Exercise this behavior in the LLVM IR dialect conversion by converting function
types to LLVM IR function pointer types.  The pointer to a function is chosen
to provide consistent lowering of higher-order functions: while it is possible
to have a value of function type, it is not possible to create a function type
accepting or returning another function type.
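
An illustrative C++ sketch of the decoupling described above, using
hypothetical names rather than the actual dialect conversion API: one hook must
keep producing a built-in function type for signatures, while the general hook
may map any type, including function types, to a custom dialect type.

```
// Placeholder types standing in for mlir::Type and mlir::FunctionType.
struct Type { int kind = 0; };
struct FunctionType : Type {};

class TypeConverterSketch {
public:
  virtual ~TypeConverterSketch() = default;

  // General conversion, used for SSA values such as function and block
  // arguments and function results; may return a custom dialect type,
  // e.g. an LLVM function pointer type.
  virtual Type convertType(Type type) = 0;

  // Signature conversion, used where the IR requires a built-in function
  // type to remain valid; must still return a FunctionType.
  virtual FunctionType convertFunctionSignatureType(FunctionType type) = 0;
};
```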

PiperOrigin-RevId: 234124494
2019-03-29 16:28:41 -07:00
MLIR Team affb2193cc Update direction vector computation to use FlatAffineConstraints::getLower/UpperBounds.
Update FlatAffineConstraints::getLower/UpperBounds to project to the identifier for which bounds are being computed. This change enables computing bounds on an identifier which were previously dependent on the bounds of another identifier.

PiperOrigin-RevId: 234017514
2019-03-29 16:28:25 -07:00
Uday Bondhugula 6b7a49dd6a Add -tile-sizes command line option for loop tiling; clean up cl options
for dma-generate, loop-unroll.

- add -tile-sizes command line option for loop tiling to specify different tile
  sizes for loops in a band

- clean up command line options for loop-unroll, dma-generate (remove
  cl::hidden)
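
A hedged sketch of how such an option can be declared with LLVM's command-line
library; the variable name and description text are assumptions, not
necessarily what the loop tiling pass uses.

```
#include "llvm/Support/CommandLine.h"

// Hypothetical declaration: a comma-separated list of tile sizes, one per
// loop in the band, outermost loop first.
static llvm::cl::list<unsigned>
    clTileSizes("tile-sizes",
                llvm::cl::desc("Tile sizes for the loops in a band"),
                llvm::cl::ZeroOrMore, llvm::cl::CommaSeparated);
```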

PiperOrigin-RevId: 234006232
2019-03-29 16:28:10 -07:00
Lei Zhang 93d8f14c0f [TFLite] Fuse AddOp into preceding convolution ops
If we see an add op adding a constant value to a convolution op with constant
bias, we can fuse the add into the convolution op by constant folding the
bias and the add op's constant operand.

This CL also removes a dangling RewriterGen check that prevented us from using
nested DAG nodes in result patterns, which is already supported.
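
A small C++ sketch of the constant folding described above, with shapes and
names simplified: the add's constant operand is folded into the convolution's
constant bias element-wise, so the separate add op can be dropped.

```
#include <cstddef>
#include <vector>

// Assumes the add constant has already been broadcast to the bias shape.
std::vector<float> foldAddIntoConvBias(const std::vector<float> &bias,
                                       const std::vector<float> &addConstant) {
  std::vector<float> fusedBias(bias.size());
  for (std::size_t i = 0; i < bias.size(); ++i)
    fusedBias[i] = bias[i] + addConstant[i];
  return fusedBias;
}
```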

PiperOrigin-RevId: 233989654
2019-03-29 16:27:55 -07:00
Lei Zhang eb3f8dcb93 [TableGen] Use deduced result types for build() of suitable ops
For ops with the SameOperandsAndResultType trait, we know that all result types
should be the same as the first operand's type. So we can generate a build()
method without requiring result types as parameters and also invoke this method
when constructing such ops during expanding rewrite patterns.

Similarly, for ops that have broadcast behavior, we can define a build() method
that uses the deduced type as the result type. So we can also call into this
build() method when constructing ops in RewriterGen.

PiperOrigin-RevId: 233988307
2019-03-29 16:27:40 -07:00
Alex Zinenko f2c93f0995 EDSC: fix unused-variable warning when compiling without assertions
In LowerEDSCTestPass, there are two range-for loops that only do assertions on
the loop variable.  With assertions disabled, the variable becomes unused and
triggers a warning promoted to an error.  Cast it to void in the loop to suppress
the warning.

PiperOrigin-RevId: 233936171
2019-03-29 16:27:25 -07:00
Alex Zinenko 50700b8122 Reimplement LLVM IR translation to use the MLIR LLVM IR dialect
The original implementation of the translation from MLIR to LLVM IR operated on
the Standard+BuiltIn dialect, with a later addition of the SuperVector dialect.
This required the translation to be aware of a potentially large number of other
dialects as the infrastructure was extended.  With the recent introduction of the
LLVM IR dialect into MLIR, the translation can be switched to only translate
the LLVM IR dialect, and the translation of the operations becomes largely
mechanical.

The reimplementation of the translator follows the lines of the original
translator in function and basic block conversion.  In particular, block
arguments are converted to LLVM IR PHI nodes, which are connected to their
sources after all blocks of a function have been converted.  Thanks to LLVM IR
types being wrapped in the MLIR LLVM dialect type, type conversion is
simplified to only convert function types; all other types are simply
unwrapped.  Individual instructions are constructed using the LLVM IRBuilder,
which has a great potential for being table-generated from the LLVM IR dialect
operation definitions.

The input of test/Target/llvmir.mlir is updated to use the MLIR LLVM IR
dialect.  While it is now redundant with the dialect conversion test, the point
of the exercise is to guarantee exactly the same LLVM IR is emitted.  (Only the
name of the allocation function is changed from `__mlir_alloc` to `alloc` in
the CHECK lines.)  It will be simplified in a follow-up commit.

PiperOrigin-RevId: 233842306
2019-03-29 16:27:10 -07:00
Jacques Pienaar 388fb3751e Add pattern constraints.
Enable matching a pattern only if its constraints are met. Start with type constraints and more general C++ constraints.

PiperOrigin-RevId: 233830768
2019-03-29 16:26:53 -07:00
Alex Zinenko bc184cff3f EDSC: unify Expr storage
EDSC expressions evolved to have different types of underlying storage.
Separate classes are used for unary, binary, ternary and variadic expressions.
The latter covers all the needs of the three special cases.  Remove these
special cases and use a single ExprStorage class everywhere while maintaining
the same APIs at the Expr level (ExprStorage is an internal implementation
class).

This is step 1/n to converging EDSC expressions and Ops and making EDSCs
support custom operations.

PiperOrigin-RevId: 233704912
2019-03-29 16:26:37 -07:00
Alex Zinenko 465746f262 LLVM IR Dialect: port DimOp lowering from the translator
DimOp is converted to a constant LLVM IR dialect operation for static
dimensions and to an access to the dynamic size info stored in the memref
descriptor for the dynamic dimensions.  This behavior is consistent with the
existing mlir-translator.

This completes the porting of MLIR -> LLVM lowering to the dialect conversion
infrastructure.

PiperOrigin-RevId: 233665634
2019-03-29 16:26:23 -07:00
River Riddle 2f11f86846 Add langref descriptions for the attribute values supported in MLIR.
PiperOrigin-RevId: 233661338
2019-03-29 16:26:08 -07:00
Uday Bondhugula 00860662a2 Generate dealloc's for alloc's of pipeline-data-transfer
- for the DMA transfers being pipelined through double buffering, generate
  deallocs for the double buffers being alloc'ed

This change is along the lines of cl/233502632. We initially wanted to experiment with
scoped allocation - so the deallocations were usually not necessary; however, they are
needed even with scoped allocations in some situations - e.g., when the enclosing loop
gets unrolled. The dealloc serves as an end-of-lifetime marker.

PiperOrigin-RevId: 233653463
2019-03-29 16:25:53 -07:00
River Riddle 4755774d16 Make IndexType a standard type instead of a builtin. This also cleans up some unnecessary factory methods on the Type class.
PiperOrigin-RevId: 233640730
2019-03-29 16:25:38 -07:00
Alex Zinenko 8de7f6c471 LLVM IR Dialect: add select op and lower standard select to it
This is a similar one-to-one mapping.

PiperOrigin-RevId: 233621006
2019-03-29 16:25:23 -07:00
Alex Zinenko 0e59e5c49b EDSC: move Expr and Stmt construction operators to a namespace
In the current state, edsc::Expr and edsc::Stmt overload operators to construct
other Exprs and Stmts.  This includes some unconventional overloads of the
`operator==` to create a comparison expression and of the `operator!` to create
a negation expression.  This situation could lead to unpleasant surprises where
the code does not behave like expected.  Make all Expr and Stmt construction
operators free functions and move them to the `edsc::op` namespace.  Callers
willing to use these operators must explicitly include them with the `using`
declaration.  This can be done in some local scope.

Additionally, we currently emit signed comparisons for order-comparison
operators.  With namespaces, we can later introduce two sets of operators in
different namespaces, e.g. `edsc::op::sign` and `edsc::op::unsign`, to clearly
state which kind of comparison is implied.

PiperOrigin-RevId: 233578674
2019-03-29 16:25:08 -07:00
Alex Zinenko ed81ddc865 EDSC: support 'for' loops with dynamic bounds
The existing implementation in EDSC of 'for' loops in MLIREmitter is
unnecessarily restricted to constant bounds.  The underlying AffineForOp can be
constructed from (a list of) Values and AffineMaps instead of constants.  Its
verifier will check that the "affine provenance" conditions are respected, i.e.
that the values used in the loop conditions are defined in such a way that they
can be analyzed by affine passes.  One can use non-constant values in
affine loop bounds in conjunction with a single-dimensional identity affine
map.  Implement this in MLIREmitter while maintaining the special case for
constant bounds that leads to significantly simpler generated IR when
applicable.

Test this change using the EDSC lowering test pass to inject code emitted from
EDSC into functions with predefined names.

PiperOrigin-RevId: 233578220
2019-03-29 16:24:53 -07:00
Tatiana Shpeisman 2e6cd60d3b Add dialect-specific decoding for opaque constants.
Associates opaque constants with a particular dialect. Adds general mechanism to register dialect-specific hooks defined in external components. Adds hooks to decode opaque tensor constant and extract an element of an opaque tensor constant.

This CL does not change the existing mechanism for registering constant folding hooks yet. One thing at a time.

PiperOrigin-RevId: 233544757
2019-03-29 16:24:38 -07:00
Jacques Pienaar 4b88e7a245 Fix incorrect type in iterator.
PiperOrigin-RevId: 233542711
2019-03-29 16:24:23 -07:00
Uday Bondhugula 8b3f841daf Generate dealloc's for the alloc's of dma-generate.
- for the DMA buffers being allocated (and their tags), generate corresponding deallocs
- minor related update to replaceAllMemRefUsesWith and PipelineDataTransfer pass

Code generation for DMA transfers was being done with the initial simplifying
assumption that the alloc's would map to scoped allocations, and so no
deallocations would be necessary. Drop this assumption to generalize. Note that
even with scoped allocations, unrolling loops that have scoped allocations
could create a series of allocations and exhaustion of fast memory. Having an
end-of-lifetime marker like a dealloc in fact allows creating new scopes if
necessary when lowering to a backend while still utilizing scoped allocation.
DMA buffers created by -dma-generate are guaranteed to have either
non-overlapping lifetimes or nested lifetimes.

PiperOrigin-RevId: 233502632
2019-03-29 16:24:08 -07:00
Uday Bondhugula f5eed89df0 Fix + cleanup for getMemRefRegion()
- determine symbols for the memref region correctly

- this wasn't exposed earlier since we didn't have any test cases where the
  portion of the nest being DMAed for was non-hyperrectangular (i.e., bounds of
  one IV  depending on other IVs within that part)

PiperOrigin-RevId: 233493872
2019-03-29 16:23:53 -07:00
Jacques Pienaar 7897257265 Add binary broadcastable builder.
* Add common broadcastable binary adder in TF ops and use for a few ops;
  - Adding Sub, Mul here
* Change the prepare lowering to use TF variants;
* Add some more legalization patterns;

PiperOrigin-RevId: 233310952
2019-03-29 16:23:38 -07:00
Lei Zhang de0fffdb5f [TFLite] Add rewrite pattern to fuse conv ops with Relu6 op
* Fixed tfl.conv_2d and tfl.depthwise_conv_2d to have fused activation
  function attribute
* Fixed RewriterGen crash: trying to get attribute match template when
  the matcher is unspecified (UnsetInit)

PiperOrigin-RevId: 233241755
2019-03-29 16:23:23 -07:00
Lei Zhang a9cee4fc8c [TableGen] Support nested DAG nodes in result op arguments
This CL allows developers to write result ops having nested DAG nodes as their
arguments. Now we can write

```
def : Pat<(...), (AOp (BOp, ...), AOperand)>
```
PiperOrigin-RevId: 233207225
2019-03-29 16:23:08 -07:00
Lei Zhang a57b398906 [TableGen] Assign created ops to variables and rewrite with PatternRewriter::replaceOp()
Previously we were using PatternRewriter::replaceOpWithNewOp() to both create the new op
inline and rewrite the matched op. That does not work well if we want to generate multiple
ops in a sequence. To support that, this CL changed to assign each newly created op to a
separate variable.

This CL also refactors how PatternEmitter performs the directive dispatch logic.

PiperOrigin-RevId: 233206819
2019-03-29 16:22:53 -07:00
Alex Zinenko d7e6b33e93 Convert MemRefCastOp to the LLVM IR dialect
Add support for converting `memref_cast` operations into the LLVM IR dialect.
This goes beyond what is currently implemented in the MLIR standard ops to LLVM
IR translation, but follows the general principles of the memref descriptors.
A memref cast creates a new descriptor containing the same buffer pointer but a
potentially different number of dynamic sizes (as many as dynamic dimensions in
the target memref type).  The lowering copies the buffer pointer to the new
descriptor and inserts the dynamic sizes into it.  If the size is static in the
source type, a constant value is inserted as the dynamic size, otherwise a
dynamic value is copied from the source descriptor, taking into account the
difference in dynamic size positions in the descriptor.

PiperOrigin-RevId: 233082035
2019-03-29 16:22:38 -07:00
River Riddle 366ebcf6aa Remove the restriction that only registered terminator operations may terminate a block and have block operands. This allows for any operation to hold block operands. It also introduces the notion that unregistered operations may terminate a block. As such, the 'isTerminator' API on Instruction has been split into 'isKnownTerminator' and 'isKnownNonTerminator'.
PiperOrigin-RevId: 233076831
2019-03-29 16:22:23 -07:00
Alex Zinenko f5b99275d2 Cleanups in ExecutionEngine.
Make sure the module is always passed to the optimization layer.
Drop unused default argument for the IR transformation and remove the function
that was only used in this default argument.  The transformation wrapper
constructor already checks for the null function, so the caller can just pass
`{}` if they don't want any transformation (no callers currently need this).

PiperOrigin-RevId: 233068817
2019-03-29 16:22:08 -07:00
Alex Zinenko 4c35bbbb51 Port load/store op translation to LLVM IR dialect lowering
Implement the lowering of memref load and store standard operations into the
LLVM IR dialect.  This largely follows the existing mechanism in
MLIR-to-LLVM-IR translation for the sake of compatibility.  A memref value is
transformed into a memref descriptor value which holds the pointer to the
underlying data buffer and the dynamic memref sizes.  The data buffer is
contiguous.  Accesses to multidimensional memrefs are linearized in row-major
form.  In linear address computation, statically known sizes are used as
constants while dynamic sizes are extracted from the memref descriptor.
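
A C++ sketch of the row-major linearization described above; in the actual
lowering, statically known sizes become constants and dynamic sizes are loaded
from the memref descriptor, but the arithmetic is the same.

```
#include <cstddef>
#include <cstdint>
#include <vector>

// Computes the linear offset of a multidimensional access in row-major
// order: linear = (((i0 * d1 + i1) * d2 + i2) * d3 + i3) ...
// The outermost size d0 does not affect the result.
int64_t linearizeRowMajor(const std::vector<int64_t> &indices,
                          const std::vector<int64_t> &sizes) {
  int64_t linear = 0;
  for (std::size_t d = 0; d < indices.size(); ++d)
    linear = linear * sizes[d] + indices[d];
  return linear;
}
```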

PiperOrigin-RevId: 233043846
2019-03-29 16:21:53 -07:00
Uday Bondhugula c419accea3 Automated rollback of changelist 232728977.
PiperOrigin-RevId: 232944889
2019-03-29 16:21:38 -07:00
Smit Hinsu c201e6ef05 Handle dynamic shapes in Broadcastable op trait
That allows TensorFlow Add and Div ops to use the Broadcastable op trait instead
of the more restrictive SameValueType op trait.

That in turn allows TensorFlow ops to be registered by defining GET_OP_LIST and
including the generated ops file. Currently, tf-raise-control-flow pass tests
use dynamic shapes in the tf.Add op, and AddOp can't be registered without
supporting dynamic shapes.

TESTED with unit tests

PiperOrigin-RevId: 232927998
2019-03-29 16:21:23 -07:00
River Riddle 13a45c7194 Add verification for AffineApply/AffineFor/AffineIf dimension and symbol operands. This also allows a DimOp to be a valid dimension identifier if its operand is a valid dimension identifier.
PiperOrigin-RevId: 232923468
2019-03-29 16:21:08 -07:00
Jacques Pienaar 351eed0dd1 Add tf.LeakyRelu.
* Add tf.LeakyRelu op definition + folders (well, one is really a canonicalizer)
* Change the generated error message to use the attribute description instead;
* Change the return type of F32Attr to be APFloat - internally it is already
  stored as APFloat, so let the caller decide if they want to convert it or
  not. I could see varying opinions here though :) (did not change i32attr
  similarly)

PiperOrigin-RevId: 232923358
2019-03-29 16:20:53 -07:00
Alex Zinenko 36c0516c78 Disallow zero dimensions in vectors and memrefs
Aggregate types where at least one dimension is zero do not fully make sense as
they cannot contain any values (their total size is zero).  However, TensorFlow
and XLA support tensors with zero sizes, so we must support those too.  This is
relatively safe since, unlike vectors and memrefs, we don't have first-class
element accessors for MLIR tensors.

To support sparse element attributes of vector types that have no non-zero
elements, make sure that index and value element attributes have tensor type so
that we never need to create a zero vector type internally.  Note that this is
already consistent with the inline documentation of the sparse elements
attribute.  Users of the sparse elements attribute should not rely on the
storage schema anyway.

PiperOrigin-RevId: 232896707
2019-03-29 16:20:38 -07:00
Alex Zinenko 99b19c1d20 Disallow hexadecimal literals in type declarations
Existing IR syntax is ambiguous in type declarations in the presence of zero sizes.
In particular, `0x1` in the type size can be interpreted as either a
hexadecimal literal corresponding to 1, or as two distinct decimal literals
separated by an `x` for sizes.  Furthermore, the shape `<0xi32>` fails lexing
because it is expected to be an integer literal.

Fix the lexer to treat `0xi32` as an integer literal `0` followed by a bare
identifier `xi32` (look one character ahead and early return instead of
erroring out).

Disallow hexadecimal literals in type declarations and forcibly split the token
into multiple parts while parsing the type.  Note that the splitting trick has
been already present to separate the element type from the preceding `x`
character.
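
A simplified C++ sketch of the one-character lookahead described above (not the
MLIR lexer itself): after seeing '0' followed by 'x', treat the token as a
hexadecimal literal only if a hex digit follows; otherwise emit the integer `0`
and let `x...` be lexed as a bare identifier.

```
#include <cctype>
#include <cstddef>

// Returns the length of the integer token beginning at `p`, where *p == '0'
// and `p` points into a NUL-terminated buffer.
// For "0x1A" this is 4; for "0xi32" it is 1, so "xi32" becomes an identifier.
std::size_t integerTokenLength(const char *p) {
  if (p[1] == 'x' && std::isxdigit(static_cast<unsigned char>(p[2]))) {
    std::size_t length = 2;  // consume "0x" and then the hex digits
    while (std::isxdigit(static_cast<unsigned char>(p[length])))
      ++length;
    return length;
  }
  return 1;  // just the '0'; lexing resumes at the character after it
}
```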

PiperOrigin-RevId: 232880373
2019-03-29 16:20:22 -07:00
River Riddle a886625813 Modify the canonicalizations of select and muli to use the fold hook.
This also extends the greedy pattern rewrite driver to add the operands of folded operations back to the worklist.

PiperOrigin-RevId: 232878959
2019-03-29 16:20:06 -07:00
Alex Zinenko 8093f17a66 ExecutionEngine: provide a hook for LLVM IR passes
The current ExecutionEngine flow generates the LLVM IR from MLIR and
JIT-compiles it as is without any transformation.  It thus misses the
opportunity to perform optimizations supported by LLVM or collect statistics
about the module.  Modify the Orc JITter to perform transformations on the LLVM
IR.  Accept an optional LLVM module transformation function when constructing
the ExecutionEngine and use it while JIT-compiling.  This prevents MLIR
ExecutionEngine from depending on LLVM passes; its clients should depend on the
passes they require.
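
A hedged C++ sketch of the shape of such a hook, with hypothetical names: the
engine accepts an optional callable and applies it to the LLVM module before
handing the module to the JIT; clients supply whatever passes they need inside
the callable.

```
#include <functional>

namespace llvm {
class Module;
} // namespace llvm

// Hypothetical transformer type: a no-op when left empty.
using ModuleTransformer = std::function<void(llvm::Module *)>;

// Applied to the module produced from MLIR, before JIT-compiling it.
inline void applyTransformer(llvm::Module *module,
                             const ModuleTransformer &transformer) {
  if (transformer)
    transformer(module);
}
```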

PiperOrigin-RevId: 232877060
2019-03-29 16:19:49 -07:00
Uday Bondhugula 4ba8c9147d Automated rollback of changelist 232717775.
PiperOrigin-RevId: 232807986
2019-03-29 16:19:33 -07:00
River Riddle 99fee0b181 When canonicalizing only erase the operation after calling the 'fold' hook if replacement results were supplied. This fixes a bug where the operation would always get erased, even if it was modified in place.
PiperOrigin-RevId: 232757964
2019-03-29 16:19:17 -07:00
River Riddle fd2d7c857b Rename the 'if' operation in the AffineOps dialect to 'affine.if' and namespace
the AffineOps dialect with 'affine'.

PiperOrigin-RevId: 232728977
2019-03-29 16:18:59 -07:00
Lei Zhang 888b9fa8a6 Add constant build() method not requiring result type
Instead, we deduce the result type from the given attribute.

This is in preparation for generating constant ops with TableGen.

PiperOrigin-RevId: 232723467
2019-03-29 16:18:44 -07:00
Stella Laurenzo c78d708487 Implement Quantization dialect and minimal UniformQuantizedType.
PiperOrigin-RevId: 232723240
2019-03-29 16:18:29 -07:00
Alex Zinenko e9493cf14d Port alloc/dealloc LLVM IR conversion into the LLVM IR dialect lowering
Implement the lowering of memref allocation and deallocation standard
operations into the LLVM IR dialect.  This largely follows the existing
mechanism in MLIR-to-LLVM-IR translation for the sake of compatibility.
A memref value is transformed into a memref descriptor value which holds the
pointer to the underlying data buffer and the dynamic memref sizes.  The buffer
is allocated using `malloc` and freed using `free`.  The lowering inserts
declarations of these functions if necessary.  Memref descriptors are values of
the LLVM IR structure type wrapped into an MLIR LLVM dialect type.  The pointer
to the buffer and the individual sizes are accessed using `extractvalue` and
`insertvalue` LLVM IR instructions.
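
Illustrative only: a plain C++ analogue of the memref descriptor described
above, for a 2-D memref with two dynamic sizes. The real lowering builds an
LLVM IR struct type wrapped in the dialect type and accesses its fields with
`extractvalue`/`insertvalue` rather than member access.

```
#include <cstdint>

// Hypothetical layout: a pointer to the malloc'ed buffer followed by one
// field per dynamic dimension of the memref type.
struct MemRefDescriptor2D {
  float *data;     // underlying data buffer, allocated with malloc
  int64_t size0;   // dynamic size of dimension 0
  int64_t size1;   // dynamic size of dimension 1
};
```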

PiperOrigin-RevId: 232719419
2019-03-29 16:18:14 -07:00