This commit defines a new SPIR-V dialect attribute for specifying
a SPIR-V target environment. It is a dictionary attribute containing
the SPIR-V version, supported extension list, and allowed capability
list. A SPIRVConversionTarget subclass is created to take in the
target environment and set the proper dynamically legal ops by
querying the op availability interface of SPIR-V ops to make sure
they are available in the specified target environment. All existing
conversions targeting SPIR-V are changed to use this
SPIRVConversionTarget. It probes the input IR for a `spv.target_env`
attribute; if absent, it uses the default target environment:
SPIR-V 1.0 with Shader capability and no extra extensions.
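For illustration, a module carrying such a target environment might look
roughly like the following sketch (the field encodings and exact syntax
here are illustrative, not the precise attribute format):

  module attributes {
    spv.target_env = {
      version = 1 : i32,                                      // illustrative encoding
      extensions = ["SPV_KHR_storage_buffer_storage_class"],
      capabilities = ["Shader"]                               // symbolic for readability
    }
  } {}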
Differential Revision: https://reviews.llvm.org/D72256
This commit fixes shader ABI attributes to use `spv.` as the prefix
so that they match the dialect's namespace. This enables us to add
verification hooks in the SPIR-V dialect to verify them.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72062
Lots of SPIR-V ops take enum attributes, and certain enum cases
need extra capabilities or extensions to be available. This commit
extends the availability mechanism to allow specifying availability
specs on enum cases. Extra utility functions are generated for the
corresponding enum classes to return the availability requirements.
The availability interface implementation for a SPIR-V op now goes
over all enum attributes to collect the availability requirements.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D71947
This allows us to include the definitions of these attributes in
other files without pulling in all dependencies for lowering.
Reviewed By: mravishankar
Differential Revision: https://reviews.llvm.org/D72054
SPIR-V has a few mechanisms to control op availability: version,
extension, and capabilities. These mechanisms are considered
different availability classes.
This commit introduces basic definitions for modelling SPIR-V
availability classes. Specifically, an `Availability` class is
added to SPIRVBase.td, along with two subclasses: MinVersion
and MaxVersion for versioning. SPV_Op is extended to take a
list of `Availability`. Each `Availability` instance carries
information for generating op interfaces for the corresponding
availability class and also the concrete availability
requirements.
With the availability spec on ops, we can now auto-generate the
op interfaces of all SPIR-V availability classes and also
synthesize the op's implementations of these interfaces. The
interface generation is done via new TableGen backends
-gen-avail-interface-{decls|defs}. The op's implementation is
done via -gen-spirv-avail-impls.
Differential Revision: https://reviews.llvm.org/D71930
This means that in-place, or root, updates need to use explicit calls to `startRootUpdate`, `finalizeRootUpdate`, and `cancelRootUpdate`. The major benefit of this change is that it enables in-place updates in DialectConversion, which simplifies the FuncOp pattern for example. The major downside to this is that the cases that *may* modify an operation in-place will need an explicit cancel on the failure branches (assuming that they started an update before attempting the transformation).
PiperOrigin-RevId: 286933674
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286930047
This will enable future commits to reimplement the internal implementation of OpResult without needing to change all of the existing users. This is part of a chain of commits optimizing the size of operation results.
PiperOrigin-RevId: 286919966
This is an initial step to refactoring the representation of OpResult as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ
This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.
PiperOrigin-RevId: 286844725
* Fixes use of anonymous namespace for static methods.
* Uses explicit qualifiers (mlir::) instead of wrapping the definition with the namespace.
PiperOrigin-RevId: 286222654
The inline interface uses two methods to check the legality of inlining:
1) Can a region be inlined into another.
2) Can an operation be inlined into another.
Setting the former to true, allows the inliner to use the second for
legality checks. Add this method to the SPIR-V dialect inlining
interface.
PiperOrigin-RevId: 286041734
This updates the lowering pipelines from the GPU dialect to lower-level
dialects (NVVM, SPIRV) to use the recently introduced gpu.func operation
instead of a standard function annotated with an attribute. In particular, the
kernel outlining is updated to produce gpu.func instead of std.func and the
individual conversions are updated to consume gpu.funcs and disallow standard
funcs after legalization, if necessary. The attribute "gpu.kernel" is preserved
in the generic syntax, but can also be used with the custom syntax on
gpu.funcs. The special kind of function for GPU allows one to use additional
features such as memory attribution.
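For example, an outlined kernel in this form might look roughly like the
following (a sketch only; @my_kernel and its signature are made up, and
the exact custom syntax may differ):

  gpu.func @my_kernel(%arg0: memref<8xf32>) kernel {
    gpu.return
  }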
PiperOrigin-RevId: 285822272
This cleans up the implementation of the various operation print methods. This is done via a combination of code cleanup, adding new streaming methods to the printer(e.g. operand ranges), etc.
PiperOrigin-RevId: 285285181
Add some convenience build methods to SPIR-V ops and update the
lowering to use these methods where possible.
For spirv::CompositeExtractOp, move the method that deduces the
element type from the base type and indices into a convenience
function. Some additional functionality is needed to handle the
differences between the parsing and verification methods.
PiperOrigin-RevId: 284794404
The existing GPU to SPIR-V lowering created a spv.module for every
function with gpu.kernel attribute. A better approach is to lower the
module that the function lives in (which has the attribute
gpu.kernel_module) to a spv.module operation. This better captures the
host-device separation modeled by GPU dialect and simplifies the
lowering as well.
PiperOrigin-RevId: 284574688
This allows for users to provide operand_range and result_range in builder.create<> calls, instead of requiring an explicit copy into a separate data structure like SmallVector/std::vector.
PiperOrigin-RevId: 284360710
During lowering, a spv.module might be nested within other modules
(for example, a gpu kernel module). Walk the module op to find the
spv.module to serialize.
PiperOrigin-RevId: 284262550
The iterator should be erased before adding a new entry
into blockMergeInfo to avoid iterator invalidation.
Closes tensorflow/mlir#299
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/299 from denis0x0D:sandbox/reoder_erase 983be565809aa0aadfc7e92962e4d4b282f63c66
PiperOrigin-RevId: 284173235
A CompositeInsertOp operation makes a copy of a composite object
while modifying one part of it.
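As an illustration (operand names and the exact assembly below are a
sketch, not taken from the commit):

  %0 = spv.CompositeInsert %object, %composite[1 : i32] : f32 into !spv.array<4 x f32>

This yields a copy of %composite with element 1 replaced by %object.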
Closes tensorflow/mlir#292
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/292 from denis0x0D:sandbox/composite_insert 2200962b9057bda53cd2f2866b461e2797196380
PiperOrigin-RevId: 284036551
For serialization, when we have nested ops, the inner loop will create multiple
SPIR-V blocks. If the outer loop has block arguments (which correspond to
OpPhi instructions), we defer the handling of the OpPhi's parent block
until we have serialized all blocks and then fix it up with the result <id>. These
two cases happening together were generating an invalid SPIR-V blob because we
previously assumed the parent block to be the block containing the terminator.
That is not true anymore when the block contains structured control flow ops.
If that happens, it should be fixed to use the structured control flow op's
merge block.
For deserialization, we record a map from header blocks to their corresponding
merge and continue blocks during the initial deserialization and then use the
info to construct spv.selection/spv.loop. The existing implementation would also
fall apart when we have nested loops: we clone all blocks for the outer
loop, including the ones for the inner loop, into the spv.loop's region. So the map
for the header blocks' merge info needs to be updated; otherwise we are operating
on already deleted blocks.
PiperOrigin-RevId: 283949230
This CL also did the following cleanup:
- Moved the test for spv.SubgroupBallotKHR to its own file
- Wrapped generated canonicalization patterns in anonymous namespace
- Updated header comments in SPVOps.td
PiperOrigin-RevId: 283650091
The SPIR-V lowering used nested !spv.arrays to represent
multi-dimensional arrays, with the hope that, in conjunction with the
layout annotations, the shape and layout of a memref could be
represented directly. It is unclear, though, how portable this
representation will end up being. It will rely on driver compilers
implementing complex index computations faithfully. A more portable
approach is to use linearized arrays to represent memrefs and
explicitly instantiate all the index computation in SPIR-V. This
gives the added benefit that we can further optimize the generated
code in MLIR before generating the SPIR-V binary.
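To illustrate the direction (a sketch only; the actual generated type and
index math come from the conversion code):

  // A memref<2x3xf32> would map to a linearized 6-element array, e.g.
  !spv.ptr<!spv.struct<!spv.array<6 x f32>>, StorageBuffer>
  // with an access at (%i, %j) computed explicitly as %i * 3 + %j.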
PiperOrigin-RevId: 283571167
Updated comments and used static instead of an anonymous namespace
to hide functions, to be consistent with the existing codebase.
PiperOrigin-RevId: 282847784
Adding zero and multiplying by one can be common when generating
code for index calculations.
This CL also sorted canonicalize.mlir into alphabetical order.
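The kind of fold this enables, sketched in SPIR-V dialect terms (value
names and syntax are illustrative):

  %zero = spv.constant 0 : i32
  %0 = spv.IAdd %x, %zero : i32   // folds to %x
  %one = spv.constant 1 : i32
  %1 = spv.IMul %x, %one : i32    // folds to %x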
PiperOrigin-RevId: 282828055
These changes were made to the SPIR-V lowering while adding support
for lowering SubViewOp, but are not directly related to it.
- Change the lowering of MemRefType to
!spv.ptr<!spv.struct<!spv.array<...>[offset]>, ..>
This is consistent with the Vulkan spec; see the illustrative type
after this list.
- To enable testing, a simple pattern for lowering functions is added
to ConvertStandardToSPIRVPass. This is just used to convert the types
of the function arguments. The added function lowering itself is not
meant to be the way functions are eventually lowered into the SPIR-V
dialect.
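An illustrative instance of the MemRefType change above (the concrete
size and offset are made up for the example):

  memref<4xf32>  ->  !spv.ptr<!spv.struct<!spv.array<4 x f32> [0]>, StorageBuffer>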
PiperOrigin-RevId: 282589644
To simplify the lowering into SPIR-V, while still respecting the ABI
requirements of SPIR-V/Vulkan, split the process into two steps:
1) While lowering a function to SPIR-V (when the function is an entry
point function), allow specifying attributes on arguments and
function itself that describe the ABI of the function.
2) Add a pass that materializes the ABI described in the function.
Two attributes are needed:
1) An attribute on arguments of the entry point function that describes
the descriptor_set, binding, storage class, etc., of the
spv.globalVariable that this argument will be replaced by.
2) An attribute on the function that specifies the workgroup size, etc.
(for now only the workgroup size).
Add the pass -spirv-lower-abi-attrs to materialize the ABI described
by the attributes.
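A rough sketch of what such an annotated entry point might look like
(the attribute names and fields below are illustrative placeholders,
not necessarily the exact names introduced here):

  func @my_kernel(
      %arg0: memref<8xf32>
        {spv.interface_var_abi = {descriptor_set = 0 : i32, binding = 0 : i32}})
      attributes {spv.entry_point_abi = {local_size = dense<[32, 1, 1]> : vector<3xi32>}} {
    return
  }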
This change makes the SPIRVBasicTypeConverter class unnecessary, so it
is removed, further simplifying the SPIR-V lowering path.
PiperOrigin-RevId: 282387587
This CL added necessary files and settings for using DRR to
write SPIR-V canonicalization patterns and also converted the
patterns for spv.Bitcast and spv.LogicalNot.
PiperOrigin-RevId: 282132786
Add a canonicalizer for `spirv::LogicalNotOp`.
Converts:
* spv.LogicalNot(spv.IEqual(...)) -> spv.INotEqual(...)
* spv.LogicalNot(spv.INotEqual(...)) -> spv.IEqual(...)
* spv.LogicalNot(spv.LogicalEqual(...)) -> spv.LogicalNotEqual(...)
* spv.LogicalNot(spv.LogicalNotEqual(...)) -> spv.LogicalEqual(...)
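Concretely, the first pattern rewrites (value names and types are
illustrative):

  %0 = spv.IEqual %a, %b : i32
  %1 = spv.LogicalNot %0 : i1
  // into
  %1 = spv.INotEqual %a, %b : i32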
Also moved the test for spv.IMul to the arithmetic tests.
Closes tensorflow/mlir#256
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/256 from denis0x0D:sandbox/canon_logical_not 76ab5787b2c777f948c8978db061d99e76453d44
PiperOrigin-RevId: 282012356
Iterate over each element to build the array. This includes a little
refactoring to combine the bool/int/float cases into one function,
since they are similar; the only difference is which function is
called at the end.
PiperOrigin-RevId: 281210288
Convert chained `spirv::BitcastOp` operations into
one `spirv::BitcastOp` operation.
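For example (illustrative types):

  %0 = spv.Bitcast %arg0 : vector<2xf16> to i32
  %1 = spv.Bitcast %0 : i32 to f32
  // canonicalizes to
  %1 = spv.Bitcast %arg0 : vector<2xf16> to f32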
Closes tensorflow/mlir#238
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/238 from denis0x0D:sandbox/canon_bitcast 4352ed4f81b959ec92f849c599e733b62a99c010
PiperOrigin-RevId: 281129234
This CL added op definitions for a few bit operations:
* OpBitFieldInsert
* OpBitFieldSExtract
* OpBitFieldUExtract
Closes tensorflow/mlir#233
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/233 from denis0x0D:sandbox/bit_field_ops e7fd85b00d72d483d7992dc42b9cc4d673903455
PiperOrigin-RevId: 280691816
Refactoring the conversion from StandardOps/GPU dialect to SPIR-V
dialect:
1) Move the SPIRVTypeConversion and SPIRVOpLowering class into SPIR-V
dialect.
2) Add header files that expose functions to add patterns for the
dialects to SPIR-V lowering, as well as a pass that does the
dialect to SPIR-V lowering.
3) Make SPIRVOpLowering derive from OpLowering class.
PiperOrigin-RevId: 280486871
Since VariableOp is serialized during processBlock, we add two more fields,
`functionHeader` and `functionBody`, to collect instructions for a function.
After all the blocks have been processed, we append them to `functions`.
Also, fix a bug in processGlobalVariableOp. The global variables should be
encoded into `typesGlobalValues`.
PiperOrigin-RevId: 280105366
During deserialization, the loop header block will be moved into the
spv.loop's region. If the loop header block has block arguments,
we need to make sure they are correctly carried over to the block where
the new spv.loop resides.
During serialization, we need to make sure block arguments from the
spv.loop's entry block are not silently dropped.
PiperOrigin-RevId: 280021777
This change allows for adding additional nested references to a SymbolRefAttr to allow for further resolving a symbol if that symbol also defines a SymbolTable. If a referenced symbol also defines a symbol table, a nested reference can be used to refer to a symbol within that table. Nested references are printed after the main reference in the following form:
symbol-ref-attribute ::= symbol-ref-id (`::` symbol-ref-id)*
Example:
module @reference {
func @nested_reference()
}
my_reference_op @reference::@nested_reference
Given that SymbolRefAttr is now more general, the existing functionality centered around a single reference is moved to a derived class FlatSymbolRefAttr. Followup commits will add support to lookups, rauw, etc. for scoped references.
PiperOrigin-RevId: 279860501
This CL added op definitions for a few bit operations:
* OpShiftLeftLogical
* OpShiftRightArithmetic
* OpShiftRightLogical
* OpBitCount
* OpBitReverse
* OpNot
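For instance (illustrative; the exact assembly format follows the op
definitions):

  %0 = spv.ShiftLeftLogical %value, %amount : i32, i16
  %1 = spv.Not %value : i32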
Also moved the definition of spv.BitwiseAnd to follow
lexicographical order.
Closes tensorflow/mlir#215
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/215 from denis0x0D:sandbox/bit_ops d9b0852b689ac6c4879a9740b1740a2357f44d24
PiperOrigin-RevId: 279350470
MLIR translation tools can emit diagnostics, and we want to be able to check
in tests that this is indeed the case. Reuse the source manager error handlers
provided for mlir-opt to support the verification in mlir-translate. This
requires us to change the signature of the functions that are registered to
translate sources to MLIR: it now takes a source manager instead of a memory
buffer.
PiperOrigin-RevId: 279132972
Many operations with regions add an additional 'attributes' prefix when printing the attribute dictionary to differentiate it from the region body. This leads to duplicated logic for detecting when to actually print the attribute dictionary.
PiperOrigin-RevId: 278747681
This simplifies the implementation quite a bit, and removes the need for explicit string munging. One change is made to some of the enum elements of SPV_DimAttr to ensure that they are proper identifiers; the string form is now prefixed with 'Dim'.
PiperOrigin-RevId: 278027132
Now that a proper parser is passed to these methods, there isn't a need to explicitly pass a source location. The source location can be recovered from the parser as necessary. This removes the need to explicitly decode an SMLoc in the case where we don't need to, which can be expensive.
This requires adding some basic nesting support to the parser for supporting nested parsers to allow for remapping source locations of the nested parsers to the top level parser for accurate diagnostics. This is due to the fact that the attribute and type parsers use different source buffers than the top level parser, as they may be represented in string form.
PiperOrigin-RevId: 278014858
These classes are functionally similar to the OpAsmParser/Printer classes and provide hooks for parsing attributes/tokens/types/etc. This change merely sets up the base infrastructure and updates the parser hooks, followups will add hooks as needed to simplify existing handrolled dialect parsers.
This has various different benefits:
*) Attribute/Type parsing is much simpler to define.
*) Dialect attributes/types that contain other attributes/types can now use aliases.
*) It provides a 'spec' that we may use in the future to auto-generate parsers/printers.
*) Error messages emitted by attribute/type parsers can provide character-exact locations rather than "beginning of the string".
PiperOrigin-RevId: 278005322
BitEnumAttr is a mechanism for modelling attributes whose value is
a bitfield. It should not be scoped to the SPIR-V dialect and can
be used by other dialects too.
This CL is mostly shuffling code around and adding tests and docs.
Functionality changes are:
* Fixed to use `getZExtValue()` instead of `getSExtValue()` when
getting the value from the underlying IntegerAttr for a case.
* Changed to auto-detect whether there is a case whose value has
all bits unset (i.e., zero). If so, handle it specially in all
helper methods.
PiperOrigin-RevId: 277964926
This CL added op definitions for a few cast operations:
* OpConvertFToU
* OpConvertFToS
* OpConvertSToF
* OpConvertUToF
* OpUConvert
* OpSConvert
* OpFConvert
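For instance (illustrative):

  %0 = spv.ConvertFToU %x : f32 to i32
  %1 = spv.FConvert %y : f32 to f64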
Also moved the definition of spv.Bitcast to the new file.
Closes tensorflow/mlir#208 and tensorflow/mlir#174
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/208 from denis0x0D:sandbox/cast_ops 79bc9b37398aafddee6cf6beb301807988fe67f9
PiperOrigin-RevId: 277587891
This CL fixed gen_spirv_dialect.py to support nested delimiters when
chunking existing ODS entries in .td files and to allow ops without
correspondence in the spec. This is needed to pull in the definition
of OpUnreachable.
PiperOrigin-RevId: 277486465
This removes a bunch of specially tailored DFS code in favor of the
common LLVM utility. Besides, we avoid recursing on the system stack,
given that llvm::depth_first_ext is iterator-based and maintains its
own stack.
PiperOrigin-RevId: 277272961
The SelectOp always has the same result type as its true/false
value. Add a builder method that uses the operand type to get the
result type.
PiperOrigin-RevId: 277217978
This CL adds another control flow instruction in SPIR-V: OpPhi.
It is modelled as block arguments to be idiomatic with MLIR.
See the rationale.md doc for "Block Arguments vs PHI nodes".
Serialization and deserialization are updated to convert between
block arguments and SPIR-V OpPhi instructions.
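Roughly, an OpPhi in a loop header corresponds to a block argument,
e.g. (a sketch; names are illustrative):

  ^header(%iv: i32):             // %iv plays the role of the OpPhi result
    spv.Branch ^header(%next : i32)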
PiperOrigin-RevId: 277161545
Combine chained `spirv::AccessChainOp` operations into one
`spirv::AccessChainOp` operation.
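For example (types and indices are illustrative):

  %0 = spv.AccessChain %base[%i] : !spv.ptr<!spv.array<4 x !spv.array<4 x f32>>, Function>
  %1 = spv.AccessChain %0[%j] : !spv.ptr<!spv.array<4 x f32>, Function>
  // combined into
  %1 = spv.AccessChain %base[%i, %j] : !spv.ptr<!spv.array<4 x !spv.array<4 x f32>>, Function>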
Closes tensorflow/mlir#198
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/198 from denis0x0D:sandbox/canon_access_chain 0cb87955a85511071143d62637ff939d0dabc2bd
PiperOrigin-RevId: 276609345
We will use block arguments as the way to model SPIR-V OpPhi in
the SPIR-V dialect.
This CL also adds a few useful helper methods to both ops to
get the block arguments.
Also added tests for branch weight (de)serialization.
PiperOrigin-RevId: 275960797
Add a canonicalization pattern for the spv.selection operation that
converts a simple spv.selection operation into a spv.Select operation.
Closes tensorflow/mlir#183
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/183 from denis0x0D:sandbox/canon_select 43d04d923272dd60b9da39f70bdbc51a5168db62
PiperOrigin-RevId: 275312748
We just need to implement a few interface hooks to DialectInlinerInterface
and CallOpInterface to gain the benefits of an inliner. :)
Right now it only supports some trivial cases:
* Inlining a single block with spv.Return/spv.ReturnValue
* Inlining multiple blocks with spv.Return
* Inlining spv.selection/spv.loop without return ops
More advanced cases will require block argument and Phi support.
PiperOrigin-RevId: 275151132
Makes the spv.module generated by the GPU to SPIR-V conversion SPIR-V
spec compliant (validated using spirv-val from Vulkan tools).
1) Separate out the VulkanLayoutUtils from
DecorateSPIRVCompositeTypeLayoutPass to make it reusable within the
Type converter in SPIR-V lowering infrastructure. This is used to
compute the layout of the !spv.struct used in global variable type
description.
2) Set the capabilities of the spv.module to Shader (needed for use of
the Logical Memory Model) and the extensions to
SPV_KHR_storage_buffer_storage_class (needed for use of the
StorageBuffer storage class).
PiperOrigin-RevId: 275081486
The SpecId decoration is the handle for providing external specialization.
Similar to descriptor set and binding on global variables, we directly
bake it into assembly parsing and printing.
PiperOrigin-RevId: 274893879
The restriction that symbols can only have identifier names is arbitrary, and artificially limits the names that a symbol may have. This change adds support for parsing and printing symbols that don't fit in the 'bare-identifier' grammar by printing the reference in quotes, e.g. @"0_my_reference" can now be used as a symbol name.
PiperOrigin-RevId: 273644768
Add a pass to decorate the composite types used by
composite objects in the StorageBuffer, PhysicalStorageBuffer,
Uniform, and PushConstant storage classes with layout information.
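As an illustration of the intent (the member offsets are made up and the
exact layout syntax may differ):

  !spv.struct<f32, i32>            // before
  !spv.struct<f32 [0], i32 [4]>    // after decoration with member offsets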
Closes tensorflow/mlir#156
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/156 from denis0x0D:sandbox/layout_info_decoration 7c50840fd38ca169a2da7ce9886b52b50c868b84
PiperOrigin-RevId: 273634140
Since MLIR integer types don't make a distinction between signed vs
unsigned integers, during deserialization of SPIR-V binaries, the
OpBitcast might result in a cast from/to the same type. Do not add a
spv.Bitcast operation to the spv.module in these cases.
PiperOrigin-RevId: 273381887
The SPIR-V spec recommends that all OpUndef instructions be generated at
module level. For the SPIR-V dialect it is better for UndefOp to produce
an SSA value for use with other instructions. If UndefOp is to be used
at module level, it cannot produce an SSA value (use of this SSA value
within a FuncOp would need implicit capture). To satisfy the needs of the
SPIR-V spec while making it simpler to represent UndefOp in the SPIR-V
dialect, the serialization is updated to create the OpUndef instruction
at module scope.
PiperOrigin-RevId: 273355526
The structured selection/loop's entry block does not have arguments.
If the function's header block is also part of the structured control
flow, we cannot just simply erase it because it may contain arguments
matching the function signature and used by the cloned blocks. Instead,
turn it into a block only containing a spv.Branch op.
Also, we can directly emit instructions for the spv.selection header
block to the block containing the spv.selection op. This eliminates
unnecessary branches in the SPIR-V blob.
Added a test for nested spv.loop.
PiperOrigin-RevId: 273351424
Use `getParentOfType<FunctionOp>()` instead of `cast<FuncOp>(getParentOp())`
to avoid a crash when return ops are used inside spv.selection/spv.loop.
PiperOrigin-RevId: 273006041
Adding support for OpUndef instruction. Updating the dialect
generation script to fix a few bugs in the instruction spec
generation.
PiperOrigin-RevId: 272975685
Add builder functions for spv._address_of, spv.EntryPoint,
spv.ExecutionMode and spv.Load to make it easier to create these
operations.
Fix a minor bug in the printing of spv.EntryPoint.
Add a utility function to get the attribute name associated with a
decoration.
PiperOrigin-RevId: 272952846
Similar to spv.loop, spv.selection is another op for modelling
SPIR-V structured control flow. It covers both OpBranchConditional
and OpSwitch with OpSelectionMerge.
Instead of having a `spv.SelectionMerge` op to directly model
selection merge instruction for indicating the merge target,
we use regions to delimit the boundary of the selection: the
merge target is the next op following the `spv.selection` op.
This way it's easier to discover all blocks belonging to
the selection and it plays nicer with the MLIR system.
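A sketch of the resulting structure (block and value names are
illustrative):

  spv.selection {
    spv.BranchConditional %cond, ^then, ^merge
  ^then:
    spv.Branch ^merge
  ^merge:
    spv._merge
  }
  // the merge target is the next op following the spv.selection op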
PiperOrigin-RevId: 272475006
According to the SPIR-V spec:
"Length is the number of elements in the array. It must be at least 1."
Closes tensorflow/mlir#160
COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/160 from denis0x0D:sandbox/array_len 0840dc0986ad0088a3aa7d5d8d3e97d489377ed9
PiperOrigin-RevId: 272094669
Add operations corresponding to OpLogicalAnd, OpLogicalNot,
OpLogicalEqual, OpLogicalNotEqual, and OpLogicalOr instructions in the
SPIR-V dialect. This needs changes to the class hierarchy in the SPIR-V
TableGen files to split SPIRVLogicalOp into SPIRVLogicalUnaryOp and
SPIRVLogicalBinaryOp. All derived classes of SPIRVLogicalOp are
updated accordingly.
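For instance (illustrative):

  %0 = spv.LogicalAnd %a, %b : i1
  %1 = spv.LogicalNot %0 : i1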
Update the spirv dialect generation script to
1) Allow specifying the base class to use for instruction spec
generation, as well as, separately, the file name in which to generate
the specification.
2) Use the existing descriptions for operations.
3) Update define_inst.sh to also invoke define_opcode.sh so that the
corresponding SPIR-V instruction opcode enum is defined.
PiperOrigin-RevId: 272014876
Support the OpBitcast instruction of SPIR-V using the spv.Bitcast
operation. The semantics implemented in the dialect differ from the
SPIR-V spec in that the dialect does not allow conversion to/from
pointer types from/to non-pointer types.
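For example (illustrative):

  %0 = spv.Bitcast %arg0 : f32 to i32   // allowed
  // casts between pointer and non-pointer types, e.g. f32 <-> !spv.ptr<i32, Function>,
  // are rejected by the dialect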
PiperOrigin-RevId: 271255957
1) Process and ignore the following debug instructions: OpSource,
OpSourceContinued, OpSourceExtension, OpString, OpModuleProcessed.
2) While processing the OpTypeInt instruction, ignore the signedness
specification. Currently MLIR doesn't make a distinction between signed
and unsigned integer types.
3) Process and ignore the BufferBlock decoration (similar to the Buffer
decoration). StructType needs to be enhanced to track this attribute
since it's needed for proper validation checks.
4) Report better error for unhandled instruction during
deserialization.
PiperOrigin-RevId: 271057060
According to the SPIR-V spec, spirv::CompositeType includes
spirv::RuntimeArrayType. This allows using objects of
spirv::RuntimeArrayType with spirv::AccessChainOp.
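For example (illustrative):

  %0 = spv.AccessChain %buffer[%index] : !spv.ptr<!spv.rtarray<f32>, StorageBuffer>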
PiperOrigin-RevId: 270809492
Add support in the deserializer for the OpMemberName instruction. For now
the name is just processed and not associated with the
spirv::StructType being built. That needs an enhancement to
spirv::StructType itself.
Add tests to check for errors reported during deserialization, with
some refactoring to common out some utility functions.
PiperOrigin-RevId: 270794524