Unknown drm_i915_query_item id=5
Fossilize INFO: Enumerated GPU #0:
Fossilize INFO: name: Intel(R) Arc(tm) A770M Graphics (DG2)
Fossilize INFO: apiVersion: 1.3.251
Fossilize INFO: Chose GPU:
Fossilize INFO: name: Intel(R) Arc(tm) A770M Graphics (DG2)
Fossilize INFO: apiVersion: 1.3.251
Fossilize INFO: vendorID: 0x8086
Fossilize INFO: deviceID: 0x5690
Fossilize INFO: Enabling device extension: VK_KHR_8bit_storage
Fossilize INFO: Enabling device extension: VK_KHR_16bit_storage
Fossilize INFO: Enabling device extension: VK_KHR_bind_memory2
Fossilize INFO: Enabling device extension: VK_KHR_buffer_device_address
Fossilize INFO: Enabling device extension: VK_KHR_copy_commands2
Fossilize INFO: Enabling device extension: VK_KHR_create_renderpass2
Fossilize INFO: Enabling device extension: VK_KHR_dedicated_allocation
Fossilize INFO: Enabling device extension: VK_KHR_deferred_host_operations
Fossilize INFO: Enabling device extension: VK_KHR_depth_stencil_resolve
Fossilize INFO: Enabling device extension: VK_KHR_descriptor_update_template
Fossilize INFO: Enabling device extension: VK_KHR_device_group
Fossilize INFO: Enabling device extension: VK_KHR_draw_indirect_count
Fossilize INFO: Enabling device extension: VK_KHR_driver_properties
Fossilize INFO: Enabling device extension: VK_KHR_dynamic_rendering
Fossilize INFO: Enabling device extension: VK_KHR_external_fence
Fossilize INFO: Enabling device extension: VK_KHR_external_fence_fd
Fossilize INFO: Enabling device extension: VK_KHR_external_memory
Fossilize INFO: Enabling device extension: VK_KHR_external_memory_fd
Fossilize INFO: Enabling device extension: VK_KHR_external_semaphore
Fossilize INFO: Enabling device extension: VK_KHR_external_semaphore_fd
Fossilize INFO: Enabling device extension: VK_KHR_format_feature_flags2
Fossilize INFO: Enabling device extension: VK_KHR_fragment_shading_rate
Fossilize INFO: Enabling device extension: VK_KHR_get_memory_requirements2
Fossilize INFO: Enabling device extension: VK_KHR_image_format_list
Fossilize INFO: Enabling device extension: VK_KHR_imageless_framebuffer
Fossilize INFO: Enabling device extension: VK_KHR_incremental_present
Fossilize INFO: Enabling device extension: VK_KHR_maintenance1
Fossilize INFO: Enabling device extension: VK_KHR_maintenance2
Fossilize INFO: Enabling device extension: VK_KHR_maintenance3
Fossilize INFO: Enabling device extension: VK_KHR_maintenance4
Fossilize INFO: Enabling device extension: VK_KHR_map_memory2
Fossilize INFO: Enabling device extension: VK_KHR_multiview
Fossilize INFO: Enabling device extension: VK_KHR_pipeline_executable_properties
Fossilize INFO: Enabling device extension: VK_KHR_pipeline_library
Fossilize INFO: Enabling device extension: VK_KHR_push_descriptor
Fossilize INFO: Enabling device extension: VK_KHR_relaxed_block_layout
Fossilize INFO: Enabling device extension: VK_KHR_sampler_mirror_clamp_to_edge
Fossilize INFO: Enabling device extension: VK_KHR_sampler_ycbcr_conversion
Fossilize INFO: Enabling device extension: VK_KHR_separate_depth_stencil_layouts
Fossilize INFO: Enabling device extension: VK_KHR_shader_atomic_int64
Fossilize INFO: Enabling device extension: VK_KHR_shader_clock
Fossilize INFO: Enabling device extension: VK_KHR_shader_draw_parameters
Fossilize INFO: Enabling device extension: VK_KHR_shader_float16_int8
Fossilize INFO: Enabling device extension: VK_KHR_shader_float_controls
Fossilize INFO: Enabling device extension: VK_KHR_shader_integer_dot_product
Fossilize INFO: Enabling device extension: VK_KHR_shader_non_semantic_info
Fossilize INFO: Enabling device extension: VK_KHR_shader_subgroup_extended_types
Fossilize INFO: Enabling device extension: VK_KHR_shader_subgroup_uniform_control_flow
Fossilize INFO: Enabling device extension: VK_KHR_shader_terminate_invocation
Fossilize INFO: Enabling device extension: VK_KHR_spirv_1_4
Fossilize INFO: Enabling device extension: VK_KHR_storage_buffer_storage_class
Fossilize INFO: Enabling device extension: VK_KHR_swapchain
Fossilize INFO: Enabling device extension: VK_KHR_swapchain_mutable_format
Fossilize INFO: Enabling device extension: VK_KHR_synchronization2
Fossilize INFO: Enabling device extension: VK_KHR_timeline_semaphore
Fossilize INFO: Enabling device extension: VK_KHR_uniform_buffer_standard_layout
Fossilize INFO: Enabling device extension: VK_KHR_variable_pointers
Fossilize INFO: Enabling device extension: VK_KHR_vulkan_memory_model
Fossilize INFO: Enabling device extension: VK_KHR_workgroup_memory_explicit_layout
Fossilize INFO: Enabling device extension: VK_KHR_zero_initialize_workgroup_memory
Fossilize INFO: Enabling device extension: VK_EXT_4444_formats
Fossilize INFO: Enabling device extension: VK_EXT_border_color_swizzle
Fossilize INFO: Enabling device extension: VK_EXT_calibrated_timestamps
Fossilize INFO: Enabling device extension: VK_EXT_color_write_enable
Fossilize INFO: Enabling device extension: VK_EXT_conditional_rendering
Fossilize INFO: Enabling device extension: VK_EXT_conservative_rasterization
Fossilize INFO: Enabling device extension: VK_EXT_custom_border_color
Fossilize INFO: Enabling device extension: VK_EXT_depth_clamp_zero_one
Fossilize INFO: Enabling device extension: VK_EXT_depth_clip_control
Fossilize INFO: Enabling device extension: VK_EXT_depth_clip_enable
Fossilize INFO: Enabling device extension: VK_EXT_descriptor_indexing
Fossilize INFO: Enabling device extension: VK_EXT_display_control
Fossilize INFO: Enabling device extension: VK_EXT_dynamic_rendering_unused_attachments
Fossilize INFO: Enabling device extension: VK_EXT_extended_dynamic_state
Fossilize INFO: Enabling device extension: VK_EXT_extended_dynamic_state2
Fossilize INFO: Enabling device extension: VK_EXT_extended_dynamic_state3
Fossilize INFO: Enabling device extension: VK_EXT_external_memory_dma_buf
Fossilize INFO: Enabling device extension: VK_EXT_external_memory_host
Fossilize INFO: Enabling device extension: VK_EXT_fragment_shader_interlock
Fossilize INFO: Enabling device extension: VK_EXT_global_priority
Fossilize INFO: Enabling device extension: VK_EXT_global_priority_query
Fossilize INFO: Enabling device extension: VK_EXT_graphics_pipeline_library
Fossilize INFO: Enabling device extension: VK_EXT_host_query_reset
Fossilize INFO: Enabling device extension: VK_EXT_image_2d_view_of_3d
Fossilize INFO: Enabling device extension: VK_EXT_image_drm_format_modifier
Fossilize INFO: Enabling device extension: VK_VALVE_mutable_descriptor_type
Fossilize INFO: Enabling device extension: VK_EXT_image_sliced_view_of_3d
Fossilize INFO: Enabling device extension: VK_EXT_image_view_min_lod
Fossilize INFO: Enabling device extension: VK_EXT_index_type_uint8
Fossilize INFO: Enabling device extension: VK_EXT_inline_uniform_block
Fossilize INFO: Enabling device extension: VK_EXT_line_rasterization
Fossilize INFO: Enabling device extension: VK_EXT_load_store_op_none
Fossilize INFO: Enabling device extension: VK_EXT_multi_draw
Fossilize INFO: Enabling device extension: VK_EXT_mutable_descriptor_type
Fossilize INFO: Enabling device extension: VK_EXT_non_seamless_cube_map
Fossilize INFO: Enabling device extension: VK_EXT_pci_bus_info
Fossilize INFO: Enabling device extension: VK_EXT_physical_device_drm
Fossilize INFO: Enabling device extension: VK_EXT_pipeline_creation_cache_control
Fossilize INFO: Enabling device extension: VK_EXT_pipeline_creation_feedback
Fossilize INFO: Enabling device extension: VK_EXT_post_depth_coverage
Fossilize INFO: Enabling device extension: VK_EXT_primitive_topology_list_restart
Fossilize INFO: Enabling device extension: VK_EXT_primitives_generated_query
Fossilize INFO: Enabling device extension: VK_EXT_private_data
Fossilize INFO: Enabling device extension: VK_EXT_provoking_vertex
Fossilize INFO: Enabling device extension: VK_EXT_queue_family_foreign
Fossilize INFO: Enabling device extension: VK_EXT_robustness2
Fossilize INFO: Enabling device extension: VK_EXT_sample_locations
Fossilize INFO: Enabling device extension: VK_EXT_sampler_filter_minmax
Fossilize INFO: Enabling device extension: VK_EXT_scalar_block_layout
Fossilize INFO: Enabling device extension: VK_EXT_separate_stencil_usage
Fossilize INFO: Enabling device extension: VK_EXT_shader_atomic_float
Fossilize INFO: Enabling device extension: VK_EXT_shader_atomic_float2
Fossilize INFO: Enabling device extension: VK_EXT_shader_demote_to_helper_invocation
Fossilize INFO: Enabling device extension: VK_EXT_shader_module_identifier
Fossilize INFO: Enabling device extension: VK_EXT_shader_stencil_export
Fossilize INFO: Enabling device extension: VK_EXT_shader_subgroup_ballot
Fossilize INFO: Enabling device extension: VK_EXT_shader_subgroup_vote
Fossilize INFO: Enabling device extension: VK_EXT_shader_viewport_index_layer
Fossilize INFO: Enabling device extension: VK_EXT_subgroup_size_control
Fossilize INFO: Enabling device extension: VK_EXT_texel_buffer_alignment
Fossilize INFO: Enabling device extension: VK_EXT_tooling_info
Fossilize INFO: Enabling device extension: VK_EXT_transform_feedback
Fossilize INFO: Enabling device extension: VK_EXT_vertex_attribute_divisor
Fossilize INFO: Enabling device extension: VK_EXT_vertex_input_dynamic_state
Fossilize INFO: Enabling device extension: VK_EXT_ycbcr_image_arrays
Fossilize INFO: Enabling device extension: VK_GOOGLE_decorate_string
Fossilize INFO: Enabling device extension: VK_GOOGLE_hlsl_functionality1
Fossilize INFO: Enabling device extension: VK_GOOGLE_user_type
Fossilize INFO: Enabling device extension: VK_INTEL_shader_integer_functions2
Fossilize INFO: Enabling device extension: VK_NV_compute_shader_derivatives

NIR (SSA form) for fragment shader:
shader: MESA_SHADER_FRAGMENT
source_sha1: {0x1da57d93, 0x547dbabd, 0x9f4fbbfa, 0x748d12cc, 0xac6d7518}
stage: 4
next_stage: 0
num_ubos: 1
num_ssbos: 2
system_values_read: 0x00000000'00000000'00080000
subgroup_size: 0
divergence_analysis_run: true
bit_sizes_float: 0x20
bit_sizes_int: 0x61
writes_memory: true
origin_upper_left: true
inputs: 0
outputs: 0
uniforms: 0
decl_var ssbo INTERP_MODE_NONE restrict Storage1 (~0, 0, 1)
decl_var ubo INTERP_MODE_NONE block @0 (~0, 0, 3)
decl_var ssbo INTERP_MODE_NONE restrict Storage0 @1 (~0, 0, 0)
decl_function main (0 params)
impl main {
block block_0:
/* preds: */
vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000)
vec1 32 con ssa_1 = load_const (0x00000002 = 0.000000)
vec1 32 con ssa_2 = load_const (0x00000001 = 0.000000)
vec1 32 con ssa_3 = load_const (0x00000010 = 0.000000)
vec4 32 con ssa_4 = intrinsic load_ubo (ssa_1, ssa_3) (access=0, align_mul=1073741824, align_offset=16, range_base=0, range=-1)
vec1 32 con ssa_5 = iand ssa_4.y, ssa_2
vec1 32 con ssa_6 =
extract_u8 ssa_4.y, ssa_1 vec4 32 div ssa_7 = intrinsic load_frag_coord () () vec1 32 div ssa_8 = f2u32 ssa_7.y vec1 32 con ssa_9 = load_const (0x0000000d = 0.000000) vec1 32 div ssa_10 = ishl ssa_8, ssa_9 vec1 32 div ssa_11 = f2u32 ssa_7.x vec1 32 div ssa_12 = iadd ssa_10, ssa_11 vec1 32 div ssa_13 = imul ssa_12, ssa_4.x vec1 32 div ssa_14 = imul_32x16 ssa_12, ssa_6 vec1 32 div ssa_15 = iadd ssa_4.z, ssa_12 vec1 32 con ssa_16 = load_const (0x00000038 = 0.000000) vec1 64 con ssa_17 = intrinsic load_ubo (ssa_1, ssa_16) (access=0, align_mul=1073741824, align_offset=56, range_base=0, range=-1) vec1 32 div ssa_18 = ishl ssa_14, ssa_1 vec1 32 con ssa_19 = ine32 ssa_5, ssa_0 vec1 32 div ssa_20 = ult32 ssa_15, ssa_4.w /* succs: block_1 block_5 */ if ssa_20 { block block_1: /* preds: block_0 */ vec1 32 con ssa_21 = load_const (0x00000030 = 0.000000) vec1 64 con ssa_22 = intrinsic load_ubo (ssa_1, ssa_21) (access=0, align_mul=1073741824, align_offset=48, range_base=0, range=-1) vec1 32 con ssa_23 = load_const (0xfffffffc = -nan) vec1 32 div ssa_24 = iand ssa_13, ssa_23 vec1 32 con ssa_25 = load_const (0x00000024 = 0.000000) vec1 32 con ssa_26 = intrinsic load_ubo (ssa_1, ssa_25) (access=0, align_mul=1073741824, align_offset=36, range_base=0, range=-1) vec1 32 con ssa_27 = load_const (0x00000008 = 0.000000) vec1 32 con ssa_28 = load_const (0x00000007 = 0.000000) vec1 32 con ssa_29 = iand ssa_4.y, ssa_1 vec1 32 con ssa_30 = ishl ssa_29, ssa_28 vec1 32 con ssa_31 = load_const (0x7b000808 = 664776890994587263929995856502063104.000000) vec1 32 con ssa_32 = ior ssa_31, ssa_30 vec1 32 con ssa_33 = ishl ssa_5, ssa_27 vec1 32 con ssa_34 = load_const (0x00000020 = 0.000000) vec1 32 con ssa_35 = unpack_64_2x32_split_x ssa_22 vec1 32 con ssa_36 = unpack_64_2x32_split_y ssa_22 vec1 32 div ssa_37 = iadd ssa_35, ssa_24 vec1 32 div ssa_38 = ult32 ssa_37, ssa_35 vec1 32 div ssa_39 = b2i32 ssa_38 vec1 32 div ssa_40 = iadd ssa_39, ssa_36 vec1 64 div ssa_41 = pack_64_2x32_split ssa_37, ssa_40 vec1 32 con ssa_42 = unpack_64_2x32_split_x ssa_17 vec1 32 con ssa_43 = unpack_64_2x32_split_y ssa_17 vec1 32 div ssa_44 = iadd ssa_42, ssa_18 vec1 32 div ssa_45 = ult32 ssa_44, ssa_42 vec1 32 div ssa_46 = b2i32 ssa_45 vec1 32 div ssa_47 = iadd ssa_46, ssa_43 vec1 64 div ssa_48 = pack_64_2x32_split ssa_44, ssa_47 vec1 32 div ssa_49 = iadd3 ssa_3, ssa_18, ssa_42 vec1 32 div ssa_50 = ult32 ssa_49, ssa_42 vec1 32 div ssa_51 = b2i32 ssa_50 vec1 32 div ssa_52 = iadd ssa_51, ssa_43 vec1 64 div ssa_53 = pack_64_2x32_split ssa_49, ssa_52 vec1 32 div ssa_54 = iadd3 ssa_34, ssa_18, ssa_42 vec1 32 div ssa_55 = ult32 ssa_54, ssa_42 vec1 32 div ssa_56 = b2i32 ssa_55 vec1 32 div ssa_57 = iadd ssa_56, ssa_43 vec1 64 div ssa_58 = pack_64_2x32_split ssa_54, ssa_57 /* succs: block_2 block_3 */ if ssa_19 { block block_2: /* preds: block_1 */ vec4 32 div ssa_59 = intrinsic load_global (ssa_41) (access=0, align_mul=4, align_offset=0) vec1 32 div ssa_60 = imul ssa_59.y, ssa_26 vec1 32 div ssa_61 = iadd3 ssa_3, ssa_24, ssa_35 vec1 32 div ssa_62 = ult32 ssa_61, ssa_35 vec1 32 div ssa_63 = b2i32 ssa_62 vec1 32 div ssa_64 = iadd ssa_63, ssa_36 vec1 64 div ssa_65 = pack_64_2x32_split ssa_61, ssa_64 vec1 32 div ssa_66 = intrinsic load_global (ssa_65) (access=0, align_mul=4, align_offset=0) vec4 32 div ssa_67 = vec4 ssa_32, ssa_33, ssa_59.x, ssa_59.z intrinsic store_global (ssa_67, ssa_48) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec4 32 div ssa_68 = vec4 ssa_60, ssa_66, ssa_59.w, ssa_59.w intrinsic store_global (ssa_68, ssa_53) 
(wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec2 32 div ssa_69 = vec2 ssa_66, ssa_15 intrinsic store_global (ssa_69, ssa_58) (wrmask=xy /*3*/, access=0, align_mul=4, align_offset=0) /* succs: block_4 */ } else { block block_3: /* preds: block_1 */ vec4 32 div ssa_70 = intrinsic load_global (ssa_41) (access=0, align_mul=4, align_offset=0) vec1 32 div ssa_71 = imul ssa_70.y, ssa_26 vec4 32 div ssa_72 = vec4 ssa_32, ssa_33, ssa_70.x, ssa_70.z intrinsic store_global (ssa_72, ssa_48) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec4 32 div ssa_73 = vec4 ssa_71, ssa_70.w, ssa_0, ssa_70.z intrinsic store_global (ssa_73, ssa_53) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec2 32 div ssa_74 = vec2 ssa_70.w, ssa_15 intrinsic store_global (ssa_74, ssa_58) (wrmask=xy /*3*/, access=0, align_mul=4, align_offset=0) /* succs: block_4 */ } block block_4: /* preds: block_2 block_3 */ /* succs: block_12 */ } else { block block_5: /* preds: block_0 */ vec1 32 div ssa_75 = ieq32 ssa_15, ssa_4.w /* succs: block_6 block_7 */ if ssa_75 { block block_6: /* preds: block_5 */ vec1 32 con ssa_76 = load_const (0x00000020 = 0.000000) vec1 32 con ssa_77 = intrinsic load_ubo (ssa_1, ssa_76) (access=0, align_mul=1073741824, align_offset=32, range_base=0, range=-1) vec1 32 div ssa_78 = ult32 ssa_15, ssa_77 /* succs: block_8 */ } else { block block_7: /* preds: block_5 */ vec1 32 con ssa_79 = load_const (0x00000000 = 0.000000) /* succs: block_8 */ } block block_8: /* preds: block_6 block_7 */ vec1 32 div ssa_80 = phi block_6: ssa_78, block_7: ssa_79 /* succs: block_9 block_10 */ if ssa_80 { block block_9: /* preds: block_8 */ vec1 32 con ssa_81 = load_const (0x00000028 = 0.000000) vec1 64 con ssa_82 = intrinsic load_ubo (ssa_1, ssa_81) (access=0, align_mul=1073741824, align_offset=40, range_base=0, range=-1) vec1 32 con ssa_83 = load_const (0x18800101 = 0.000000) vec1 32 con ssa_84 = unpack_64_2x32_split_x ssa_82 vec1 32 con ssa_85 = unpack_64_2x32_split_y ssa_82 vec3 32 con ssa_86 = vec3 ssa_83, ssa_84, ssa_85 vec1 32 con ssa_87 = unpack_64_2x32_split_x ssa_17 vec1 32 con ssa_88 = unpack_64_2x32_split_y ssa_17 vec1 32 div ssa_89 = iadd ssa_87, ssa_18 vec1 32 div ssa_90 = ult32 ssa_89, ssa_87 vec1 32 div ssa_91 = b2i32 ssa_90 vec1 32 div ssa_92 = iadd ssa_91, ssa_88 vec1 64 div ssa_93 = pack_64_2x32_split ssa_89, ssa_92 intrinsic store_global (ssa_86, ssa_93) (wrmask=xyz /*7*/, access=0, align_mul=4, align_offset=0) /* succs: block_11 */ } else { block block_10: /* preds: block_8 */ /* succs: block_11 */ } block block_11: /* preds: block_9 block_10 */ /* succs: block_12 */ } block block_12: /* preds: block_4 block_11 */ /* succs: block_13 */ block block_13: } NIR (final form) for fragment shader: shader: MESA_SHADER_FRAGMENT source_sha1: {0x1da57d93, 0x547dbabd, 0x9f4fbbfa, 0x748d12cc, 0xac6d7518} stage: 4 next_stage: 0 num_ubos: 1 num_ssbos: 2 system_values_read: 0x00000000'00000000'00080000 subgroup_size: 0 divergence_analysis_run: true bit_sizes_float: 0x20 bit_sizes_int: 0x61 writes_memory: true origin_upper_left: true inputs: 0 outputs: 0 uniforms: 0 decl_var ssbo INTERP_MODE_NONE restrict Storage1 (~0, 0, 1) decl_var ubo INTERP_MODE_NONE block @0 (~0, 0, 3) decl_var ssbo INTERP_MODE_NONE restrict Storage0 @1 (~0, 0, 0) decl_function main (0 params) impl main { decl_reg vec1 32 div r0 block block_0: /* preds: */ vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000) vec1 32 con ssa_1 = load_const (0x00000002 = 0.000000) vec1 32 con ssa_2 = load_const (0x00000001 = 
0.000000) vec1 32 con ssa_3 = load_const (0x00000010 = 0.000000) vec4 32 con ssa_4 = intrinsic load_ubo (ssa_1, ssa_3) (access=0, align_mul=1073741824, align_offset=16, range_base=0, range=-1) vec1 32 con ssa_5 = iand ssa_4.y, ssa_2 vec1 32 con ssa_6 = extract_u8 ssa_4.y, ssa_1 vec4 32 div ssa_7 = intrinsic load_frag_coord () () vec1 32 div ssa_8 = f2u32 ssa_7.y vec1 32 con ssa_9 = load_const (0x0000000d = 0.000000) vec1 32 div ssa_10 = ishl ssa_8, ssa_9 vec1 32 div ssa_11 = f2u32 ssa_7.x vec1 32 div ssa_12 = iadd ssa_10, ssa_11 vec1 32 div ssa_13 = imul ssa_12, ssa_4.x vec1 32 div ssa_14 = imul_32x16 ssa_12, ssa_6 vec1 32 div ssa_15 = iadd ssa_4.z, ssa_12 vec1 32 con ssa_16 = load_const (0x00000038 = 0.000000) vec1 64 con ssa_17 = intrinsic load_ubo (ssa_1, ssa_16) (access=0, align_mul=1073741824, align_offset=56, range_base=0, range=-1) vec1 32 div ssa_18 = ishl ssa_14, ssa_1 vec1 32 div ssa_20 = ult32 ssa_15, ssa_4.w /* succs: block_1 block_5 */ if ssa_20 { block block_1: /* preds: block_0 */ vec1 32 con ssa_21 = load_const (0x00000030 = 0.000000) vec1 64 con ssa_22 = intrinsic load_ubo (ssa_1, ssa_21) (access=0, align_mul=1073741824, align_offset=48, range_base=0, range=-1) vec1 32 con ssa_23 = load_const (0xfffffffc = -nan) vec1 32 div ssa_24 = iand ssa_13, ssa_23 vec1 32 con ssa_25 = load_const (0x00000024 = 0.000000) vec1 32 con ssa_26 = intrinsic load_ubo (ssa_1, ssa_25) (access=0, align_mul=1073741824, align_offset=36, range_base=0, range=-1) vec1 32 con ssa_27 = load_const (0x00000008 = 0.000000) vec1 32 con ssa_28 = load_const (0x00000007 = 0.000000) vec1 32 con ssa_29 = iand ssa_4.y, ssa_1 vec1 32 con ssa_30 = ishl ssa_29, ssa_28 vec1 32 con ssa_31 = load_const (0x7b000808 = 664776890994587263929995856502063104.000000) vec1 32 con ssa_32 = ior ssa_31, ssa_30 vec1 32 con ssa_33 = ishl ssa_5, ssa_27 vec1 32 con ssa_34 = load_const (0x00000020 = 0.000000) vec1 32 con ssa_35 = unpack_64_2x32_split_x ssa_22 vec1 32 con ssa_36 = unpack_64_2x32_split_y ssa_22 vec1 32 div ssa_37 = iadd ssa_35, ssa_24 vec1 32 div ssa_38 = ult32 ssa_37, ssa_35 vec1 32 div ssa_39 = b2i32 ssa_38 vec1 32 div ssa_40 = iadd ssa_39, ssa_36 vec1 64 div ssa_41 = pack_64_2x32_split ssa_37, ssa_40 vec1 32 con ssa_42 = unpack_64_2x32_split_x ssa_17 vec1 32 con ssa_43 = unpack_64_2x32_split_y ssa_17 vec1 32 div ssa_44 = iadd ssa_42, ssa_18 vec1 32 div ssa_45 = ult32 ssa_44, ssa_42 vec1 32 div ssa_46 = b2i32 ssa_45 vec1 32 div ssa_47 = iadd ssa_46, ssa_43 vec1 64 div ssa_48 = pack_64_2x32_split ssa_44, ssa_47 vec1 32 div ssa_49 = iadd3 ssa_3, ssa_18, ssa_42 vec1 32 div ssa_50 = ult32 ssa_49, ssa_42 vec1 32 div ssa_51 = b2i32 ssa_50 vec1 32 div ssa_52 = iadd ssa_51, ssa_43 vec1 64 div ssa_53 = pack_64_2x32_split ssa_49, ssa_52 vec1 32 div ssa_54 = iadd3 ssa_34, ssa_18, ssa_42 vec1 32 div ssa_55 = ult32 ssa_54, ssa_42 vec1 32 div ssa_56 = b2i32 ssa_55 vec1 32 div ssa_57 = iadd ssa_56, ssa_43 vec1 64 div ssa_58 = pack_64_2x32_split ssa_54, ssa_57 vec1 32 div ssa_97 = ine32 ssa_5, ssa_0 /* succs: block_2 block_3 */ if ssa_97 { block block_2: /* preds: block_1 */ vec4 32 div ssa_59 = intrinsic load_global (ssa_41) (access=0, align_mul=4, align_offset=0) vec1 32 div ssa_60 = imul ssa_59.y, ssa_26 vec1 32 div ssa_61 = iadd3 ssa_3, ssa_24, ssa_35 vec1 32 div ssa_62 = ult32 ssa_61, ssa_35 vec1 32 div ssa_63 = b2i32 ssa_62 vec1 32 div ssa_64 = iadd ssa_63, ssa_36 vec1 64 div ssa_65 = pack_64_2x32_split ssa_61, ssa_64 vec1 32 div ssa_66 = intrinsic load_global (ssa_65) (access=0, align_mul=4, align_offset=0) vec4 32 div ssa_67 
= vec4 ssa_32, ssa_33, ssa_59.x, ssa_59.z intrinsic store_global (ssa_67, ssa_48) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec4 32 div ssa_68 = vec4 ssa_60, ssa_66, ssa_59.w, ssa_59.w intrinsic store_global (ssa_68, ssa_53) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec2 32 div ssa_69 = vec2 ssa_66, ssa_15 intrinsic store_global (ssa_69, ssa_58) (wrmask=xy /*3*/, access=0, align_mul=4, align_offset=0) /* succs: block_4 */ } else { block block_3: /* preds: block_1 */ vec4 32 div ssa_70 = intrinsic load_global (ssa_41) (access=0, align_mul=4, align_offset=0) vec1 32 div ssa_71 = imul ssa_70.y, ssa_26 vec4 32 div ssa_72 = vec4 ssa_32, ssa_33, ssa_70.x, ssa_70.z intrinsic store_global (ssa_72, ssa_48) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec4 32 div ssa_73 = vec4 ssa_71, ssa_70.w, ssa_0, ssa_70.z intrinsic store_global (ssa_73, ssa_53) (wrmask=xyzw /*15*/, access=0, align_mul=4, align_offset=0) vec2 32 div ssa_74 = vec2 ssa_70.w, ssa_15 intrinsic store_global (ssa_74, ssa_58) (wrmask=xy /*3*/, access=0, align_mul=4, align_offset=0) /* succs: block_4 */ } block block_4: /* preds: block_2 block_3 */ /* succs: block_12 */ } else { block block_5: /* preds: block_0 */ vec1 32 div ssa_75 = ieq32 ssa_15, ssa_4.w /* succs: block_6 block_7 */ if ssa_75 { block block_6: /* preds: block_5 */ vec1 32 con ssa_76 = load_const (0x00000020 = 0.000000) vec1 32 con ssa_77 = intrinsic load_ubo (ssa_1, ssa_76) (access=0, align_mul=1073741824, align_offset=32, range_base=0, range=-1) div r0 = ult32 ssa_15, ssa_77 /* succs: block_8 */ } else { block block_7: /* preds: block_5 */ vec1 32 con ssa_79 = load_const (0x00000000 = 0.000000) div r0 = mov ssa_79 /* succs: block_8 */ } block block_8: /* preds: block_6 block_7 */ /* succs: block_9 block_10 */ if r0 { block block_9: /* preds: block_8 */ vec1 32 con ssa_81 = load_const (0x00000028 = 0.000000) vec1 64 con ssa_82 = intrinsic load_ubo (ssa_1, ssa_81) (access=0, align_mul=1073741824, align_offset=40, range_base=0, range=-1) vec1 32 con ssa_83 = load_const (0x18800101 = 0.000000) vec1 32 con ssa_84 = unpack_64_2x32_split_x ssa_82 vec1 32 con ssa_85 = unpack_64_2x32_split_y ssa_82 vec3 32 con ssa_86 = vec3 ssa_83, ssa_84, ssa_85 vec1 32 con ssa_87 = unpack_64_2x32_split_x ssa_17 vec1 32 con ssa_88 = unpack_64_2x32_split_y ssa_17 vec1 32 div ssa_89 = iadd ssa_87, ssa_18 vec1 32 div ssa_90 = ult32 ssa_89, ssa_87 vec1 32 div ssa_91 = b2i32 ssa_90 vec1 32 div ssa_92 = iadd ssa_91, ssa_88 vec1 64 div ssa_93 = pack_64_2x32_split ssa_89, ssa_92 intrinsic store_global (ssa_86, ssa_93) (wrmask=xyz /*7*/, access=0, align_mul=4, align_offset=0) /* succs: block_11 */ } else { block block_10: /* preds: block_8 */ /* succs: block_11 */ } block block_11: /* preds: block_9 block_10 */ /* succs: block_12 */ } block block_12: /* preds: block_4 block_11 */ /* succs: block_13 */ block block_13: } Native code for unnamed fragment shader (null) (sha1 41288017ccb8236d96bfeca98d5dc3515d0c8ec6) SIMD8 shader: 132 instructions. 0 loops. 818 cycles. 0:0 spills:fills, 11 sends, scheduled with mode top-down. Promoted 0 constants. 
Compacted 2112 to 2016 bytes (5%) START B0 (106 cycles) add(16) g9<1>UW g1.4<2,8,0>UW 0x01000100V { align1 WE_all 1H }; add(16) g10<1>UW g1.5<2,8,0>UW 0x01010000V { align1 WE_all 1H }; mov(8) g55<1>F g4.5<0,1,0>F { align1 1Q compacted }; and(8) g12<1>UD g4.5<0,1,0>UD 0x00000001UD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; mov(8) g7<1>F g9<16,8,2>UW { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; mov(8) g8<1>F g10<16,8,2>UW { align1 1Q }; mov(8) g13<1>UD g55.2<32,8,4>UB { align1 1Q F@3 }; mov(8) g20.1<2>F g5.7<0,1,0>F { align1 1Q }; mov(8) g16<1>UD g7<8,8,1>F { align1 1Q F@3 }; mov(8) g14<1>UD g8<8,8,1>F { align1 1Q F@2 }; mov(8) g20<2>F g5.6<0,1,0>F { align1 1Q F@1 compacted }; shl(8) g15<1>D g14<8,8,1>D 0x0000000dUD { align1 1Q I@1 }; add(8) g17<1>D g15<1,1,0>D g16<1,1,0>D { align1 1Q I@1 compacted }; mul(8) g18<1>D g17<8,8,1>D g4.8<0,1,0>UW { align1 1Q I@1 }; mul(8) g84<1>D g17<8,8,1>D g4.9<0,1,0>UW { align1 1Q }; mul(8) g19<1>D g17<8,8,1>D g13<16,8,2>W { align1 1Q I@7 }; add(8) g63<1>D g4.6<0,1,0>D g17<1,1,0>D { align1 1Q compacted }; add(8) g18.1<2>UW g18.1<16,8,2>UW g84<16,8,2>UW { align1 1Q I@3 }; shl(8) g22<1>D g19<8,8,1>D 0x00000002UD { align1 1Q I@3 }; cmp.l.f0.0(8) null<1>UD g63<8,8,1>UD g4.7<0,1,0>UD { align1 1Q I@3 }; (+f0.0) if(8) JIP: LABEL1 UIP: LABEL0 { align1 1Q }; END B0 ->B1 ->B5 and(8) g25<1>UD g18<8,8,1>UD 0xfffffffcUD { align1 1Q I@4 }; and(8) g26<1>UD g4.5<0,1,0>UD 0x00000002UD { align1 1Q compacted }; shl(8) g7<1>D g12<8,8,1>D 0x00000008UD { align1 1Q }; mov(8) g32<1>UD g20<8,4,2>UD { align1 1Q F@1 }; add(8) g33<1>D g20<8,4,2>D g22<1,1,0>D { align1 1Q I@7 compacted }; mov(8) g23.1<2>F g5.5<0,1,0>F { align1 1Q }; shl(8) g27<1>D g26<8,8,1>D 0x00000007UD { align1 1Q I@4 }; add3(8) g36<1>D g32<8,8,1>D g22<8,8,1>D 16W { align1 1Q I@3 }; add3(8) g39<1>D g32<8,8,1>D g22<8,8,1>D 32W { align1 1Q }; mov(8) g66<2>UD g33<4,4,1>UD { align1 1Q I@4 }; mov(8) g23<2>F g5.4<0,1,0>F { align1 1Q F@1 compacted }; or(8) g6<1>UD g27<8,8,1>UD 0x7b000808UD { align1 1Q I@4 }; mov(8) g72<2>UD g36<4,4,1>UD { align1 1Q I@4 }; mov(8) g78<2>UD g39<4,4,1>UD { align1 1Q I@4 }; mov(8) g28<1>UD g23<8,4,2>UD { align1 1Q F@1 }; add(8) g29<1>D g23<8,4,2>D g25<1,1,0>D { align1 1Q compacted }; cmp.l.f0.0(8) g30<1>UD g29<8,8,1>UD g23<8,4,2>UD { align1 1Q I@1 }; mov(8) g64<2>UD g29<4,4,1>UD { align1 1Q }; cmp.l.f0.0(8) g34<1>UD g33<8,8,1>UD g20<8,4,2>UD { align1 1Q }; cmp.l.f0.0(8) g37<1>UD g36<8,8,1>UD g20<8,4,2>UD { align1 1Q }; cmp.l.f0.0(8) g40<1>UD g39<8,8,1>UD g20<8,4,2>UD { align1 1Q }; cmp.nz.f0.0(8) null<1>D g12<8,8,1>D 0D { align1 1Q }; add(8) g31<1>D -g30<8,8,1>D g23.1<8,4,2>D { align1 1Q I@6 }; add(8) g35<1>D -g34<8,8,1>D g20.1<8,4,2>D { align1 1Q I@5 }; add(8) g38<1>D -g37<8,8,1>D g20.1<8,4,2>D { align1 1Q I@5 }; add(8) g41<1>D -g40<8,8,1>D g20.1<8,4,2>D { align1 1Q I@5 }; mov(8) g64.1<2>UD g31<4,4,1>UD { align1 1Q I@4 }; mov(8) g66.1<2>UD g35<4,4,1>UD { align1 1Q I@4 }; mov(8) g72.1<2>UD g38<4,4,1>UD { align1 1Q I@4 }; mov(8) g78.1<2>UD g41<4,4,1>UD { align1 1Q I@4 }; (+f0.0) if(8) JIP: LABEL3 UIP: LABEL2 { align1 1Q }; END B1 ->B2 ->B3 START B2 <-B1 (200 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; send(8) g42UD g64UD nullUD 0x0440f582 0x00000000 ugm MsgDesc: ( load_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 4, src0_len = 2, src1_len = 0 flat ) base_offset 0 { align1 1Q $0 }; add3(8) g46<1>D g28<8,8,1>D g25<8,8,1>D 16W { align1 1Q }; cmp.l.f0.0(8) g47<1>UD g46<8,8,1>UD g23<8,4,2>UD { align1 1Q I@1 }; mov(8) g56<2>UD 
g46<4,4,1>UD { align1 1Q }; add(8) g48<1>D -g47<8,8,1>D g23.1<8,4,2>D { align1 1Q I@2 }; mov(8) g56.1<2>UD g48<4,4,1>UD { align1 1Q I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; send(8) g59UD g56UD nullUD 0x04101582 0x00000000 ugm MsgDesc: ( load_cmask, a64, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 2, src1_len = 0 flat ) base_offset 0 { align1 1Q $1 }; mul(8) g58<1>D g43<8,8,1>D g5.2<0,1,0>UW { align1 1Q $0.dst }; mul(8) g85<1>D g43<8,8,1>D g5.3<0,1,0>UW { align1 1Q }; mov(8) g8<1>D g42<8,8,1>D { align1 1Q $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.dst }; mov(8) g9<1>D g44<8,8,1>D { align1 1Q F@6 }; add(8) g58.1<2>UW g58.1<16,8,2>UW g85<16,8,2>UW { align1 1Q I@3 }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; (+f1.0) send(8) nullUD g66UD g6UD 0x0400f586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 4 flat ) base_offset 0 { align1 1Q $2 }; mov(8) g60<1>D g45<8,8,1>D { align1 1Q $0.dst }; mov(8) g61<1>D g45<8,8,1>D { align1 1Q }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(8) nullUD g72UD g58UD 0x0400f586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 4 flat ) base_offset 0 { align1 1Q $3 }; mov(8) g62<1>D g59<8,8,1>D { align1 1Q $3.src }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(8) nullUD g78UD g62UD 0x04003586 0x00000080 ugm MsgDesc: ( store_cmask, a64, d32, xy, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 2 flat ) base_offset 0 { align1 1Q $4 }; else(8) JIP: LABEL2 UIP: LABEL2 { align1 1Q }; END B2 ->B3 ->B4 START B3 <-B1 <-B2 (192 cycles) LABEL3: sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; send(8) g49UD g64UD nullUD 0x0440f582 0x00000000 ugm MsgDesc: ( load_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 4, src0_len = 2, src1_len = 0 flat ) base_offset 0 { align1 1Q $0 }; mov(8) g68<1>D g6<8,8,1>D { align1 1Q $2.src }; mov(8) g69<1>D g7<8,8,1>D { align1 1Q $2.src }; mul(8) g74<1>D g50<8,8,1>D g5.2<0,1,0>UW { align1 1Q $0.dst }; mul(8) g86<1>D g50<8,8,1>D g5.3<0,1,0>UW { align1 1Q }; mov(8) g70<1>D g49<8,8,1>D { align1 1Q $0.dst }; mov(8) g71<1>D g51<8,8,1>D { align1 1Q $0.dst }; add(8) g74.1<2>UW g74.1<16,8,2>UW g86<16,8,2>UW { align1 1Q I@3 }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; (+f1.0) send(8) nullUD g66UD g68UD 0x0400f586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 4 flat ) base_offset 0 { align1 1Q $2 }; mov(8) g75<1>D g52<8,8,1>D { align1 1Q $0.dst }; mov(8) g76<1>D 0D { align1 1Q }; mov(8) g77<1>D g51<8,8,1>D { align1 1Q }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(8) nullUD g72UD g74UD 0x0400f586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 4 flat ) base_offset 0 { align1 1Q $3 }; mov(8) g80<1>D g52<8,8,1>D { align1 1Q }; mov(8) g81<1>D g63<8,8,1>D { align1 1Q $4.src }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(8) nullUD g78UD g80UD 0x04003586 0x00000080 ugm MsgDesc: ( store_cmask, a64, d32, xy, L1STATE_L3MOCS 
dst_len = 0, src0_len = 2, src1_len = 2 flat ) base_offset 0 { align1 1Q $4 };
END B3 ->B4
START B4 <-B3 <-B2 (16 cycles)
LABEL2:
endif(8) JIP: LABEL4 { align1 1Q };
LABEL4:
else(8) JIP: LABEL0 UIP: LABEL0 { align1 1Q };
END B4 ->B5 ->B11
START B5 <-B0 <-B4 (24 cycles)
LABEL1:
cmp.z.f0.0(8) null<1>D g63<8,8,1>D g4.7<0,1,0>D { align1 1Q $4.src };
(+f0.0) if(8) JIP: LABEL6 UIP: LABEL5 { align1 1Q };
END B5 ->B6 ->B7
START B6 <-B5 (10 cycles)
cmp.l.f0.0(8) g11<1>UD g63<1,1,0>UD g5<0,1,0>UD { align1 1Q compacted };
else(8) JIP: LABEL5 UIP: LABEL5 { align1 1Q };
END B6 ->B7 ->B8
START B7 <-B5 <-B6 (4 cycles)
LABEL6:
mov(8) g11<1>UD 0x00000000UD { align1 1Q I@2 };
END B7 ->B8
START B8 <-B7 <-B6 (34 cycles)
LABEL5:
endif(8) JIP: LABEL0 { align1 1Q };
mov.nz.f0.0(8) null<1>D g11<8,8,1>D { align1 1Q I@2 };
(+f0.0) if(8) JIP: LABEL7 UIP: LABEL7 { align1 1Q };
END B8 ->B9 ->B10
add(8) g52<1>D g20<8,4,2>D g22<1,1,0>D { align1 1Q A@1 compacted };
mov(8) g83<1>D 411042049D { align1 1Q };
mov(8) g50.1<2>F g5.3<0,1,0>F { align1 1Q };
cmp.l.f0.0(8) g53<1>UD g52<8,8,1>UD g20<8,4,2>UD { align1 1Q I@2 };
mov(8) g81<2>UD g52<4,4,1>UD { align1 1Q $4.src };
mov(8) g50<2>F g5.2<0,1,0>F { align1 1Q F@1 compacted };
add(8) g54<1>D -g53<8,8,1>D g20.1<8,4,2>D { align1 1Q I@2 };
mov(8) g84<1>D g50<8,4,2>D { align1 1Q F@1 };
mov(8) g85<1>D g50.1<8,4,2>D { align1 1Q };
mov(8) g81.1<2>UD g54<4,4,1>UD { align1 1Q I@3 };
mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 };
(+f1.0) send(8) nullUD g81UD g83UD 0x04007586 0x000000c0 ugm MsgDesc: ( store_cmask, a64, d32, xyz, L1STATE_L3MOCS dst_len = 0, src0_len = 2, src1_len = 3 flat ) base_offset 0 { align1 1Q $4 };
END B9 ->B10
START B10 <-B9 <-B8 (8 cycles)
LABEL7:
endif(8) JIP: LABEL0 { align1 1Q };
END B10 ->B11
START B11 <-B10 <-B4 (10 cycles)
LABEL0:
endif(8) JIP: LABEL8 { align1 1Q };
LABEL8:
sendc(8) nullUD g123UD nullUD 0x08031400 0x00100000 render MsgDesc: RT write SIMD8 LastRT Surface = 0 mlen 4 ex_mlen 0 rlen 0 { align1 1Q A@1 EOT };
END B11

Native code for unnamed fragment shader (null) (sha1 6bd6dbc501690932eedfd60545be7c906aa7f975)
SIMD16 shader: 169 instructions. 0 loops. 1316 cycles. 0:0 spills:fills, 11 sends, scheduled with mode top-down. Promoted 0 constants.
Compacted 2704 to 2560 bytes (5%) START B0 (140 cycles) add(32) g75<1>UW g1.4<2,8,0>UW 0x01000100V { align1 WE_all }; add(32) g77<1>UW g1.5<2,8,0>UW 0x01010000V { align1 WE_all }; mov(16) g4<1>F g6.5<0,1,0>F { align1 1H compacted }; and(16) g79<1>UD g6.5<0,1,0>UD 0x00000001UD { align1 1H compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; mov(16) g72<1>F g75<16,8,2>UW { align1 1H }; mov(16) g81<1>UD g4.2<32,8,4>UB { align1 1H F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; mov(16) g74<1>F g77<16,8,2>UW { align1 1H }; mov(8) g69.1<2>F g7.7<0,1,0>F { align1 1Q }; mov(8) g2.1<2>F g7.7<0,1,0>F { align1 2Q }; mov(16) g87<1>UD g72<8,8,1>F { align1 1H F@4 }; mov(16) g83<1>UD g74<8,8,1>F { align1 1H F@3 }; mov(8) g69<2>F g7.6<0,1,0>F { align1 1Q F@2 compacted }; mov(8) g2<2>F g7.6<0,1,0>F { align1 2Q F@2 compacted }; shl(16) g85<1>D g83<8,8,1>D 0x0000000dUD { align1 1H I@1 }; add(16) g89<1>D g85<1,1,0>D g87<1,1,0>D { align1 1H I@1 compacted }; mul(16) g91<1>D g89<8,8,1>D g6.8<0,1,0>UW { align1 1H I@1 }; mul(16) g83<1>D g89<8,8,1>D g6.9<0,1,0>UW { align1 1H }; mul(16) g93<1>D g89<8,8,1>D g81<16,8,2>W { align1 1H I@7 }; add(16) g35<1>D g6.6<0,1,0>D g89<1,1,0>D { align1 1H compacted }; add(16) g91.1<2>UW g91.1<16,8,2>UW g83<16,8,2>UW { align1 1H I@3 }; shl(16) g95<1>D g93<8,8,1>D 0x00000002UD { align1 1H I@3 }; cmp.l.f0.0(16) null<1>UD g35<8,8,1>UD g6.7<0,1,0>UD { align1 1H I@3 }; (+f0.0) if(16) JIP: LABEL1 UIP: LABEL0 { align1 1H }; END B0 ->B1 ->B5 and(16) g99<1>UD g91<8,8,1>UD 0xfffffffcUD { align1 1H I@4 }; and(16) g101<1>UD g6.5<0,1,0>UD 0x00000002UD { align1 1H compacted }; shl(16) g19<1>D g79<8,8,1>D 0x00000008UD { align1 1H }; mov(8) g109<1>UD g69<8,4,2>UD { align1 1Q F@2 }; mov(8) g110<1>UD g2<8,4,2>UD { align1 2Q F@1 }; add(8) g59<1>D g69<8,4,2>D g95<1,1,0>D { align1 1Q I@7 compacted }; add(8) g111<1>D g2<8,4,2>D g96<1,1,0>D { align1 2Q I@7 compacted }; mov(8) g71.1<2>F g7.5<0,1,0>F { align1 1Q }; mov(8) g97.1<2>F g7.5<0,1,0>F { align1 2Q }; shl(16) g103<1>D g101<8,8,1>D 0x00000007UD { align1 1H I@6 }; add3(16) g114<1>D g109<8,8,1>D g95<8,8,1>D 16W { align1 1H I@4 }; add3(16) g117<1>D g109<8,8,1>D g95<8,8,1>D 32W { align1 1H }; mov(8) g41<2>UD g59<4,4,1>UD { align1 1Q I@5 }; mov(8) g43<2>UD g111<4,4,1>UD { align1 2Q I@5 }; mov(8) g71<2>F g7.4<0,1,0>F { align1 1Q F@2 compacted }; mov(8) g97<2>F g7.4<0,1,0>F { align1 2Q F@2 compacted }; or(16) g17<1>UD g103<8,8,1>UD 0x7b000808UD { align1 1H I@5 }; mov(8) g53<2>UD g114<4,4,1>UD { align1 1Q I@5 }; mov(8) g55<2>UD g115<4,4,1>UD { align1 2Q I@6 }; mov(8) g65<2>UD g117<4,4,1>UD { align1 1Q I@6 }; mov(8) g67<2>UD g118<4,4,1>UD { align1 2Q I@7 }; mov(8) g104<1>UD g71<8,4,2>UD { align1 1Q F@2 }; add(8) g52<1>D g71<8,4,2>D g99<1,1,0>D { align1 1Q compacted }; mov(8) g105<1>UD g97<8,4,2>UD { align1 2Q F@1 }; add(8) g106<1>D g97<8,4,2>D g100<1,1,0>D { align1 2Q compacted }; cmp.l.f0.0(8) g57<1>UD g52<8,8,1>UD g71<8,4,2>UD { align1 1Q I@3 }; mov(8) g37<2>UD g52<4,4,1>UD { align1 1Q }; cmp.l.f0.0(8) g60<1>UD g59<8,8,1>UD g69<8,4,2>UD { align1 1Q }; cmp.l.f0.0(8) g107<1>UD g106<8,8,1>UD g97<8,4,2>UD { align1 2Q I@4 }; mov(8) g39<2>UD g106<4,4,1>UD { align1 2Q }; cmp.l.f0.0(8) g62<1>UD g114<8,8,1>UD g69<8,4,2>UD { align1 1Q }; cmp.l.f0.0(8) g112<1>UD g111<8,8,1>UD g2<8,4,2>UD { align1 2Q }; cmp.l.f0.0(8) g64<1>UD g117<8,8,1>UD g69<8,4,2>UD { align1 1Q }; cmp.l.f0.0(8) g115<1>UD g115<8,8,1>UD g2<8,4,2>UD { align1 2Q }; add(8) g58<1>D -g57<8,8,1>D g71.1<8,4,2>D { align1 1Q I@7 }; cmp.l.f0.0(8) g118<1>UD g118<8,8,1>UD g2<8,4,2>UD { 
align1 2Q }; add(8) g61<1>D -g60<8,8,1>D g69.1<8,4,2>D { align1 1Q I@7 }; cmp.nz.f0.0(16) null<1>D g79<8,8,1>D 0D { align1 1H }; add(8) g108<1>D -g107<8,8,1>D g97.1<8,4,2>D { align1 2Q I@7 }; add(8) g63<1>D -g62<8,8,1>D g69.1<8,4,2>D { align1 1Q I@7 }; add(8) g113<1>D -g112<8,8,1>D g2.1<8,4,2>D { align1 2Q I@7 }; add(8) g73<1>D -g64<8,8,1>D g69.1<8,4,2>D { align1 1Q I@7 }; add(8) g116<1>D -g115<8,8,1>D g2.1<8,4,2>D { align1 2Q I@7 }; mov(8) g37.1<2>UD g58<4,4,1>UD { align1 1Q I@7 }; add(8) g119<1>D -g118<8,8,1>D g2.1<8,4,2>D { align1 2Q I@7 }; mov(8) g41.1<2>UD g61<4,4,1>UD { align1 1Q I@7 }; mov(8) g39.1<2>UD g108<4,4,1>UD { align1 2Q I@7 }; mov(8) g53.1<2>UD g63<4,4,1>UD { align1 1Q I@7 }; mov(8) g43.1<2>UD g113<4,4,1>UD { align1 2Q I@7 }; mov(8) g65.1<2>UD g73<4,4,1>UD { align1 1Q I@7 }; mov(8) g55.1<2>UD g116<4,4,1>UD { align1 2Q I@7 }; mov(8) g67.1<2>UD g119<4,4,1>UD { align1 2Q I@7 }; (+f0.0) if(16) JIP: LABEL3 UIP: LABEL2 { align1 1H }; END B1 ->B2 ->B3 START B2 <-B1 (376 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; send(16) g8UD g37UD nullUD 0x0880f582 0x00000000 ugm MsgDesc: ( load_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 8, src0_len = 4, src1_len = 0 flat ) base_offset 0 { align1 1H $0 }; add3(16) g120<1>D g104<8,8,1>D g99<8,8,1>D 16W { align1 1H }; cmp.l.f0.0(8) g74<1>UD g120<8,8,1>UD g71<8,4,2>UD { align1 1Q I@1 }; cmp.l.f0.0(8) g122<1>UD g121<8,8,1>UD g97<8,4,2>UD { align1 2Q I@2 }; mov(8) g45<2>UD g120<4,4,1>UD { align1 1Q }; mov(8) g47<2>UD g121<4,4,1>UD { align1 2Q }; add(8) g75<1>D -g74<8,8,1>D g71.1<8,4,2>D { align1 1Q I@4 }; add(8) g123<1>D -g122<8,8,1>D g97.1<8,4,2>D { align1 2Q I@4 }; mov(8) g45.1<2>UD g75<4,4,1>UD { align1 1Q I@2 }; mov(8) g47.1<2>UD g123<4,4,1>UD { align1 2Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; send(16) g27UD g45UD nullUD 0x08201582 0x00000000 ugm MsgDesc: ( load_cmask, a64, d32, x, L1STATE_L3MOCS dst_len = 2, src0_len = 4, src1_len = 0 flat ) base_offset 0 { align1 1H $1 }; mul(16) g25<1>D g10<8,8,1>D g7.2<0,1,0>UW { align1 1H $0.dst }; mul(16) g84<1>D g10<8,8,1>D g7.3<0,1,0>UW { align1 1H }; mov(16) g21<1>D g8<8,8,1>D { align1 1H $0.dst }; mov(16) g23<1>D g12<8,8,1>D { align1 1H $0.dst }; add(16) g25.1<2>UW g25.1<16,8,2>UW g84<16,8,2>UW { align1 1H I@3 }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; (+f1.0) send(16) nullUD g41UD g17UD 0x0800f586 0x00000200 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 8 flat ) base_offset 0 { align1 1H $1 }; mov(16) g29<1>D g14<8,8,1>D { align1 1H $0.dst }; mov(16) g31<1>D g14<8,8,1>D { align1 1H }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(16) nullUD g53UD g25UD 0x0800f586 0x00000200 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 8 flat ) base_offset 0 { align1 1H $1 }; mov(16) g33<1>D g27<8,8,1>D { align1 1H $1.src }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(16) nullUD g65UD g33UD 0x08003586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xy, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 4 flat ) base_offset 0 { align1 1H $1 }; else(16) JIP: LABEL2 UIP: LABEL2 { align1 1H }; END B2 ->B3 ->B4 START B3 <-B1 <-B2 (376 cycles) LABEL3: sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; send(16) g9UD g37UD nullUD 0x0880f582 0x00000000 ugm MsgDesc: ( load_cmask, 
a64, d32, xyzw, L1STATE_L3MOCS dst_len = 8, src0_len = 4, src1_len = 0 flat ) base_offset 0 { align1 1H $0 }; mov(16) g45<1>D g17<8,8,1>D { align1 1H $1.src }; mov(16) g47<1>D g19<8,8,1>D { align1 1H $1.src }; mul(16) g57<1>D g11<8,8,1>D g7.2<0,1,0>UW { align1 1H $0.dst }; mul(16) g85<1>D g11<8,8,1>D g7.3<0,1,0>UW { align1 1H }; mov(16) g49<1>D g9<8,8,1>D { align1 1H $0.dst }; mov(16) g51<1>D g13<8,8,1>D { align1 1H $0.dst }; add(16) g57.1<2>UW g57.1<16,8,2>UW g85<16,8,2>UW { align1 1H I@3 }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; (+f1.0) send(16) nullUD g41UD g45UD 0x0800f586 0x00000200 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 8 flat ) base_offset 0 { align1 1H $1 }; mov(16) g59<1>D g15<8,8,1>D { align1 1H $0.dst }; mov(16) g61<1>D 0D { align1 1H }; mov(16) g63<1>D g13<8,8,1>D { align1 1H }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(16) nullUD g53UD g57UD 0x0800f586 0x00000200 ugm MsgDesc: ( store_cmask, a64, d32, xyzw, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 8 flat ) base_offset 0 { align1 1H $1 }; mov(16) g46<1>D g15<8,8,1>D { align1 1H $1.src }; mov(16) g48<1>D g35<8,8,1>D { align1 1H $1.src }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; (+f1.0) send(16) nullUD g65UD g46UD 0x08003586 0x00000100 ugm MsgDesc: ( store_cmask, a64, d32, xy, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 4 flat ) base_offset 0 { align1 1H $1 }; END B3 ->B4 START B4 <-B3 <-B2 (16 cycles) LABEL2: endif(16) JIP: LABEL4 { align1 1H }; LABEL4: else(16) JIP: LABEL0 UIP: LABEL0 { align1 1H }; END B4 ->B5 ->B11 START B5 <-B0 <-B4 (26 cycles) LABEL1: cmp.z.f0.0(16) null<1>D g35<8,8,1>D g6.7<0,1,0>D { align1 1H $1.src }; (+f0.0) if(16) JIP: LABEL6 UIP: LABEL5 { align1 1H }; END B5 ->B6 ->B7 START B6 <-B5 (12 cycles) cmp.l.f0.0(16) g78<1>UD g35<1,1,0>UD g7<0,1,0>UD { align1 1H F@5 compacted }; else(16) JIP: LABEL5 UIP: LABEL5 { align1 1H }; END B6 ->B7 ->B8 START B7 <-B5 <-B6 (6 cycles) LABEL6: mov(16) g78<1>UD 0x00000000UD { align1 1H A@2 }; END B7 ->B8 START B8 <-B7 <-B6 (36 cycles) LABEL5: endif(16) JIP: LABEL0 { align1 1H }; mov.nz.f0.0(16) null<1>D g78<8,8,1>D { align1 1H I@2 }; (+f0.0) if(16) JIP: LABEL7 UIP: LABEL7 { align1 1H }; END B8 ->B9 ->B10 add(8) g78<1>D g69<8,4,2>D g95<1,1,0>D { align1 1Q A@2 compacted }; add(8) g126<1>D g2<8,4,2>D g96<1,1,0>D { align1 2Q A@1 compacted }; mov(16) g51<1>D 411042049D { align1 1H $1.src }; mov(8) g76.1<2>F g7.3<0,1,0>F { align1 1Q }; mov(8) g124.1<2>F g7.3<0,1,0>F { align1 2Q }; cmp.l.f0.0(8) g79<1>UD g78<8,8,1>UD g69<8,4,2>UD { align1 1Q I@3 }; cmp.l.f0.0(8) g127<1>UD g126<8,8,1>UD g2<8,4,2>UD { align1 2Q I@3 }; mov(8) g47<2>UD g78<4,4,1>UD { align1 1Q $1.src }; mov(8) g49<2>UD g126<4,4,1>UD { align1 2Q $1.src }; mov(8) g76<2>F g7.2<0,1,0>F { align1 1Q F@2 compacted }; mov(8) g124<2>F g7.2<0,1,0>F { align1 2Q F@2 compacted }; add(8) g80<1>D -g79<8,8,1>D g69.1<8,4,2>D { align1 1Q I@4 }; add(8) g2<1>D -g127<8,8,1>D g2.1<8,4,2>D { align1 2Q I@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mov(8) g53<1>UD g76<8,4,2>UD { align1 1Q F@2 }; mov(8) g55<1>UD g76.1<8,4,2>UD { align1 1Q $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 3N $1.src }; mov(8) g54<1>UD g124<8,4,2>UD { align1 2Q F@1 }; mov(8) g56<1>UD g124.1<8,4,2>UD { align1 2Q $1.src }; mov(8) g47.1<2>UD g80<4,4,1>UD { align1 1Q I@6 }; 
mov(8) g49.1<2>UD g2<4,4,1>UD { align1 2Q I@6 };
mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 };
(+f1.0) send(16) nullUD g47UD g51UD 0x08007586 0x00000180 ugm MsgDesc: ( store_cmask, a64, d32, xyz, L1STATE_L3MOCS dst_len = 0, src0_len = 4, src1_len = 6 flat ) base_offset 0 { align1 1H $1 };
END B9 ->B10
START B10 <-B9 <-B8 (8 cycles)
LABEL7:
endif(16) JIP: LABEL0 { align1 1H };
END B10 ->B11
START B11 <-B10 <-B4 (10 cycles)
LABEL0:
endif(16) JIP: LABEL8 { align1 1H };
LABEL8:
sendc(16) nullUD g119UD nullUD 0x10031000 0x00100000 render MsgDesc: RT write SIMD16 LastRT Surface = 0 mlen 8 ex_mlen 0 rlen 0 { align1 1H A@1 EOT };
END B11

Fossilize INFO: Creating Vulkan device took: 56 ms
Fossilize INFO: Replaying for application:
Fossilize INFO: apiVersion: 1.1.0
Fossilize INFO: engineVersion: 8421376
Fossilize INFO: applicationVersion: 0
Fossilize INFO: engineName: vkd3d
Fossilize INFO: applicationName: Cyberpunk2077.exe
Fossilize INFO: Total binary size for AppInfo: 567 (314 compressed)
Fossilize INFO: Total time decoding AppInfo in main thread: 0.056 s
Fossilize INFO: Total binary size for Descriptor Set Layout: 2358 (1758 compressed)
Fossilize INFO: Total time decoding Descriptor Set Layout in main thread: 0.000 s
Fossilize INFO: Total binary size for Pipeline Layout: 3054 (3000 compressed)
Fossilize INFO: Total time decoding Pipeline Layout in main thread: 0.001 s
Fossilize INFO: Total binary size for Render Pass: 1097 (708 compressed)
Fossilize INFO: Total time decoding Render Pass in main thread: 0.000 s

NIR (from SPIR-V) for MESA_SHADER_VERTEX shader:
shader: MESA_SHADER_VERTEX
source_sha1: {0x44ea7faa, 0x2435d792, 0xdd0f3938, 0x25cede04, 0x94926cc7}
stage: 0
next_stage: 0
num_ssbos: 2
subgroup_size: 2
clip_distance_array_size: 1
inputs: 0
outputs: 0
uniforms: 108
decl_var push_const INTERP_MODE_NONE RootConstants registers
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @0 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict BindlessCBV[] @1 (~0, 0, 2)
decl_var shader_in INTERP_MODE_NONE vec3 POSITION (VERT_ATTRIB_GENERIC0.xyz, 0, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES (VERT_ATTRIB_GENERIC1.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT (VERT_ATTRIB_GENERIC2.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES_1 (VERT_ATTRIB_GENERIC3.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT_1 (VERT_ATTRIB_GENERIC4.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec2 TEXCOORD (VERT_ATTRIB_GENERIC5.xy, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec3 NORMAL (VERT_ATTRIB_GENERIC6.xyz, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec4 TANGENT (VERT_ATTRIB_GENERIC7.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE vec4[3] INSTANCE_TRANSFORM (VERT_ATTRIB_GENERIC10.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 INSTANCE_SKINNING_DATA (VERT_ATTRIB_GENERIC13.xyzw, 0, 0)
decl_var shader_in INTERP_MODE_NONE float LIGHT_BLOCKER_INTENSITY (VERT_ATTRIB_GENERIC15.x, 0, 0)
decl_var invariant shader_out INTERP_MODE_NONE vec4 SV_Position (VARYING_SLOT_POS.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD@2 (VARYING_SLOT_VAR1.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_1 (VARYING_SLOT_VAR2.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_2 (VARYING_SLOT_VAR3.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_3 (VARYING_SLOT_VAR4.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_4 (VARYING_SLOT_VAR5.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_5 (VARYING_SLOT_VAR6.xyzw, 0, 0) decl_var shader_out INTERP_MODE_NONE vec3 TEXCOORD_6 (VARYING_SLOT_VAR7.xyz, 0, 0) decl_var shader_out INTERP_MODE_NONE float[1] @3 (VARYING_SLOT_CLIP_DIST0.x, 0, 0) compact decl_function main (0 params) impl main { decl_var INTERP_MODE_NONE bool loop_break decl_var INTERP_MODE_NONE bool loop_continue decl_var INTERP_MODE_NONE uint phi decl_var INTERP_MODE_NONE float phi@4 decl_var INTERP_MODE_NONE float phi@5 decl_var INTERP_MODE_NONE float phi@6 decl_var INTERP_MODE_NONE float phi@7 decl_var INTERP_MODE_NONE float phi@8 decl_var INTERP_MODE_NONE float phi@9 decl_var INTERP_MODE_NONE float phi@10 decl_var INTERP_MODE_NONE float phi@11 decl_var INTERP_MODE_NONE float phi@12 decl_var INTERP_MODE_NONE bool loop_break@13 decl_var INTERP_MODE_NONE bool loop_continue@14 decl_var INTERP_MODE_NONE uint phi@15 decl_var INTERP_MODE_NONE float phi@16 decl_var INTERP_MODE_NONE float phi@17 decl_var INTERP_MODE_NONE float phi@18 decl_var INTERP_MODE_NONE float phi@19 decl_var INTERP_MODE_NONE float phi@20 decl_var INTERP_MODE_NONE float phi@21 decl_var INTERP_MODE_NONE float phi@22 decl_var INTERP_MODE_NONE float phi@23 decl_var INTERP_MODE_NONE float phi@24 block block_0: /* preds: */ vec1 32 ssa_3624 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3600 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3439 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3354 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3352 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3350 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3348 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3346 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3344 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3342 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3340 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3338 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3336 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3334 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_3332 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3330 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3328 = load_const (0x40000000 = 2.000000) vec1 32 ssa_3231 = load_const (0x37000000 = 0.000008) vec1 32 ssa_3229 = load_const (0x37000000 = 0.000008) vec1 32 ssa_3227 = load_const (0x37000000 = 0.000008) vec1 32 ssa_3153 = load_const (0x00000001 = 0.000000) vec1 32 ssa_3113 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3107 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3093 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3092 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3090 = load_const (0x4479ffff = 999.999939) vec1 32 ssa_3085 = load_const (0x80000000 = -0.000000) vec1 32 ssa_3082 = load_const (0x80000000 = -0.000000) vec1 32 ssa_3079 = load_const (0x80000000 = -0.000000) vec1 32 ssa_3074 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3068 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3065 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3062 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3053 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_3023 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3010 = load_const (0x00000003 = 0.000000) vec1 32 ssa_3000 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2989 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2985 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2979 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2971 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2965 = load_const (0x00000010 = 
0.000000) vec1 32 ssa_2949 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2941 = load_const (0x00000000 = 0.000000) vec1 32 ssa_2939 = load_const (0x00000008 = 0.000000) vec1 32 ssa_2861 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2854 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2847 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2840 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2838 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2822 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2815 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2808 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2801 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2799 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2783 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2776 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2769 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2762 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2745 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2738 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2731 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2724 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2722 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2706 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2699 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2692 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2685 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2683 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2667 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2660 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2653 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2646 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2581 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2574 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2567 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2560 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2558 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2542 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2535 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2528 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2521 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2519 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2503 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2496 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2489 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2482 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2465 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2458 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2451 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2444 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2442 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2426 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2419 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2412 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2405 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2403 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2387 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2380 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2373 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2366 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2301 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2294 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2287 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2280 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2278 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2262 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2255 = load_const 
(0x00000002 = 0.000000) vec1 32 ssa_2248 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2241 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2239 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2223 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2216 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2209 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2202 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2185 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2178 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2171 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2164 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2162 = load_const (0x00000020 = 0.000000) vec1 32 ssa_2146 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2139 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2132 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2125 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2123 = load_const (0x00000010 = 0.000000) vec1 32 ssa_2107 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2100 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2093 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2086 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2033 = load_const (0x00000003 = 0.000000) vec1 32 ssa_2026 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2019 = load_const (0x00000001 = 0.000000) vec1 32 ssa_2012 = load_const (0x00000002 = 0.000000) vec1 32 ssa_2010 = load_const (0x00000020 = 0.000000) vec1 32 ssa_1994 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1987 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1980 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1973 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1971 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1955 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1948 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1941 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1934 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1917 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1910 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1903 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1896 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1894 = load_const (0x00000020 = 0.000000) vec1 32 ssa_1878 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1871 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1864 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1857 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1855 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1839 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1832 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1825 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1818 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1702 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1662 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_1656 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1642 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_1641 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1639 = load_const (0x4479ffff = 999.999939) vec1 32 ssa_1634 = load_const (0x80000000 = -0.000000) vec1 32 ssa_1631 = load_const (0x80000000 = -0.000000) vec1 32 ssa_1628 = load_const (0x80000000 = -0.000000) vec1 32 ssa_1623 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_1617 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1614 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1611 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_1602 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_1572 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1559 = load_const (0x00000003 = 0.000000) vec1 32 
ssa_1549 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1538 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1534 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1528 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1520 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1514 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1498 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1490 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1488 = load_const (0x00000008 = 0.000000) vec1 32 ssa_1482 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_1479 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_1403 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1396 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1389 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1382 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1380 = load_const (0x00000020 = 0.000000) vec1 32 ssa_1364 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1357 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1350 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1343 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1341 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1325 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1318 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1311 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1304 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1286 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1279 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1272 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1265 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1263 = load_const (0x00000020 = 0.000000) vec1 32 ssa_1247 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1240 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1233 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1226 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1224 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1208 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1201 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1194 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1187 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1121 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1114 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1107 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1100 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1098 = load_const (0x00000020 = 0.000000) vec1 32 ssa_1082 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1075 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1068 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1061 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1059 = load_const (0x00000010 = 0.000000) vec1 32 ssa_1043 = load_const (0x00000003 = 0.000000) vec1 32 ssa_1036 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1029 = load_const (0x00000001 = 0.000000) vec1 32 ssa_1022 = load_const (0x00000002 = 0.000000) vec1 32 ssa_1004 = load_const (0x00000003 = 0.000000) vec1 32 ssa_997 = load_const (0x00000002 = 0.000000) vec1 32 ssa_990 = load_const (0x00000001 = 0.000000) vec1 32 ssa_983 = load_const (0x00000002 = 0.000000) vec1 32 ssa_981 = load_const (0x00000020 = 0.000000) vec1 32 ssa_965 = load_const (0x00000003 = 0.000000) vec1 32 ssa_958 = load_const (0x00000002 = 0.000000) vec1 32 ssa_951 = load_const (0x00000001 = 0.000000) vec1 32 ssa_944 = load_const (0x00000002 = 0.000000) vec1 32 ssa_942 = load_const (0x00000010 = 0.000000) vec1 32 ssa_926 = load_const (0x00000003 = 0.000000) vec1 32 ssa_919 = load_const (0x00000002 = 0.000000) vec1 32 ssa_912 = load_const (0x00000001 = 0.000000) vec1 32 
ssa_905 = load_const (0x00000002 = 0.000000) vec1 32 ssa_839 = load_const (0x00000003 = 0.000000) vec1 32 ssa_832 = load_const (0x00000002 = 0.000000) vec1 32 ssa_825 = load_const (0x00000001 = 0.000000) vec1 32 ssa_818 = load_const (0x00000002 = 0.000000) vec1 32 ssa_816 = load_const (0x00000020 = 0.000000) vec1 32 ssa_800 = load_const (0x00000003 = 0.000000) vec1 32 ssa_793 = load_const (0x00000002 = 0.000000) vec1 32 ssa_786 = load_const (0x00000001 = 0.000000) vec1 32 ssa_779 = load_const (0x00000002 = 0.000000) vec1 32 ssa_777 = load_const (0x00000010 = 0.000000) vec1 32 ssa_761 = load_const (0x00000003 = 0.000000) vec1 32 ssa_754 = load_const (0x00000002 = 0.000000) vec1 32 ssa_747 = load_const (0x00000001 = 0.000000) vec1 32 ssa_740 = load_const (0x00000002 = 0.000000) vec1 32 ssa_722 = load_const (0x00000003 = 0.000000) vec1 32 ssa_715 = load_const (0x00000002 = 0.000000) vec1 32 ssa_708 = load_const (0x00000001 = 0.000000) vec1 32 ssa_701 = load_const (0x00000002 = 0.000000) vec1 32 ssa_699 = load_const (0x00000020 = 0.000000) vec1 32 ssa_683 = load_const (0x00000003 = 0.000000) vec1 32 ssa_676 = load_const (0x00000002 = 0.000000) vec1 32 ssa_669 = load_const (0x00000001 = 0.000000) vec1 32 ssa_662 = load_const (0x00000002 = 0.000000) vec1 32 ssa_660 = load_const (0x00000010 = 0.000000) vec1 32 ssa_644 = load_const (0x00000003 = 0.000000) vec1 32 ssa_637 = load_const (0x00000002 = 0.000000) vec1 32 ssa_630 = load_const (0x00000001 = 0.000000) vec1 32 ssa_623 = load_const (0x00000002 = 0.000000) vec1 32 ssa_569 = load_const (0x00000003 = 0.000000) vec1 32 ssa_562 = load_const (0x00000002 = 0.000000) vec1 32 ssa_555 = load_const (0x00000001 = 0.000000) vec1 32 ssa_548 = load_const (0x00000002 = 0.000000) vec1 32 ssa_546 = load_const (0x00000020 = 0.000000) vec1 32 ssa_530 = load_const (0x00000003 = 0.000000) vec1 32 ssa_523 = load_const (0x00000002 = 0.000000) vec1 32 ssa_516 = load_const (0x00000001 = 0.000000) vec1 32 ssa_509 = load_const (0x00000002 = 0.000000) vec1 32 ssa_507 = load_const (0x00000010 = 0.000000) vec1 32 ssa_491 = load_const (0x00000003 = 0.000000) vec1 32 ssa_484 = load_const (0x00000002 = 0.000000) vec1 32 ssa_477 = load_const (0x00000001 = 0.000000) vec1 32 ssa_470 = load_const (0x00000002 = 0.000000) vec1 32 ssa_452 = load_const (0x00000003 = 0.000000) vec1 32 ssa_445 = load_const (0x00000002 = 0.000000) vec1 32 ssa_438 = load_const (0x00000001 = 0.000000) vec1 32 ssa_431 = load_const (0x00000002 = 0.000000) vec1 32 ssa_429 = load_const (0x00000020 = 0.000000) vec1 32 ssa_413 = load_const (0x00000003 = 0.000000) vec1 32 ssa_406 = load_const (0x00000002 = 0.000000) vec1 32 ssa_399 = load_const (0x00000001 = 0.000000) vec1 32 ssa_392 = load_const (0x00000002 = 0.000000) vec1 32 ssa_390 = load_const (0x00000010 = 0.000000) vec1 32 ssa_374 = load_const (0x00000003 = 0.000000) vec1 32 ssa_367 = load_const (0x00000002 = 0.000000) vec1 32 ssa_360 = load_const (0x00000001 = 0.000000) vec1 32 ssa_353 = load_const (0x00000002 = 0.000000) vec1 32 ssa_341 = load_const (0x3727c5ac = 0.000010) vec1 32 ssa_337 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_336 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_335 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_334 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_330 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_329 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_328 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_327 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_292 = load_const (0x37000000 = 0.000008) vec1 32 
ssa_290 = load_const (0x37000000 = 0.000008) vec1 32 ssa_288 = load_const (0x37000000 = 0.000008) vec1 32 ssa_17 = load_const (0x00000001 = 0.000000) vec1 32 ssa_3 = load_const (0x00000002 = 0.000000) vec1 32 ssa_0 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1 = deref_struct &ssa_0->field6 (push_const uint) /* &registers.field6 */ vec1 32 ssa_2 = intrinsic load_deref (ssa_1) (access=0) vec1 32 ssa_4 = iadd ssa_2, ssa_3 vec4 32 ssa_5 = intrinsic vulkan_resource_index (ssa_4) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_6 = deref_var &registers (push_const RootConstants) vec1 32 ssa_7 = deref_struct &ssa_6->field4 (push_const uint) /* &registers.field4 */ vec1 32 ssa_8 = intrinsic load_deref (ssa_7) (access=0) vec4 32 ssa_9 = intrinsic vulkan_resource_index (ssa_8) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_10 = deref_var &registers (push_const RootConstants) vec1 32 ssa_11 = deref_struct &ssa_10->field2 (push_const uint) /* &registers.field2 */ vec1 32 ssa_12 = intrinsic load_deref (ssa_11) (access=0) vec4 32 ssa_13 = intrinsic vulkan_resource_index (ssa_12) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_14 = deref_var &registers (push_const RootConstants) vec1 32 ssa_15 = deref_struct &ssa_14->field0 (push_const uint) /* &registers.field0 */ vec1 32 ssa_16 = intrinsic load_deref (ssa_15) (access=0) vec1 32 ssa_18 = iadd ssa_16, ssa_17 vec4 32 ssa_19 = intrinsic vulkan_resource_index (ssa_18) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_20 = deref_var &registers (push_const RootConstants) vec1 32 ssa_21 = deref_struct &ssa_20->field3 (push_const uint) /* &registers.field3 */ vec1 32 ssa_22 = intrinsic load_deref (ssa_21) (access=0) vec4 32 ssa_23 = intrinsic vulkan_resource_index (ssa_22) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_24 = deref_var &LIGHT_BLOCKER_INTENSITY (shader_in float) vec1 32 ssa_25 = intrinsic load_deref (ssa_24) (access=0) vec1 32 ssa_26 = deref_var &INSTANCE_SKINNING_DATA (shader_in uvec4) vec4 32 ssa_29 = intrinsic load_deref (ssa_26) (access=0) vec1 32 ssa_31 = deref_var &INSTANCE_SKINNING_DATA (shader_in uvec4) vec4 32 ssa_34 = intrinsic load_deref (ssa_31) (access=0) vec1 32 ssa_36 = deref_var &INSTANCE_SKINNING_DATA (shader_in uvec4) vec4 32 ssa_39 = intrinsic load_deref (ssa_36) (access=0) vec1 32 ssa_41 = deref_var &TANGENT (shader_in vec4) vec4 32 ssa_44 = intrinsic load_deref (ssa_41) (access=0) vec1 32 ssa_46 = deref_var &TANGENT (shader_in vec4) vec4 32 ssa_49 = intrinsic load_deref (ssa_46) (access=0) vec1 32 ssa_51 = deref_var &TANGENT (shader_in vec4) vec4 32 ssa_54 = intrinsic load_deref (ssa_51) (access=0) vec1 32 ssa_56 = deref_var &TANGENT (shader_in vec4) vec4 32 ssa_59 = intrinsic load_deref (ssa_56) (access=0) vec1 32 ssa_61 = deref_var &NORMAL (shader_in vec3) vec3 32 ssa_64 = intrinsic load_deref (ssa_61) (access=0) vec1 32 ssa_66 = deref_var &NORMAL (shader_in vec3) vec3 32 ssa_69 = intrinsic load_deref (ssa_66) (access=0) vec1 32 ssa_71 = deref_var &NORMAL (shader_in vec3) vec3 32 ssa_74 = intrinsic load_deref (ssa_71) (access=0) vec1 32 ssa_76 = deref_var &TEXCOORD (shader_in vec2) vec2 32 ssa_79 = intrinsic load_deref (ssa_76) (access=0) vec1 32 ssa_81 = deref_var &TEXCOORD (shader_in vec2) vec2 32 ssa_84 = intrinsic load_deref (ssa_81) (access=0) vec1 32 ssa_86 = deref_var &BLENDWEIGHT_1 (shader_in vec4) vec4 32 ssa_89 = intrinsic load_deref (ssa_86) (access=0) vec1 32 ssa_91 = deref_var &BLENDWEIGHT_1 (shader_in vec4) vec4 32 ssa_94 = intrinsic load_deref (ssa_91) (access=0) vec1 32
ssa_96 = deref_var &BLENDWEIGHT_1 (shader_in vec4) vec4 32 ssa_99 = intrinsic load_deref (ssa_96) (access=0) vec1 32 ssa_101 = deref_var &BLENDWEIGHT_1 (shader_in vec4) vec4 32 ssa_104 = intrinsic load_deref (ssa_101) (access=0) vec1 32 ssa_106 = deref_var &BLENDINDICES_1 (shader_in uvec4) vec4 32 ssa_109 = intrinsic load_deref (ssa_106) (access=0) vec1 32 ssa_111 = deref_var &BLENDINDICES_1 (shader_in uvec4) vec4 32 ssa_114 = intrinsic load_deref (ssa_111) (access=0) vec1 32 ssa_116 = deref_var &BLENDINDICES_1 (shader_in uvec4) vec4 32 ssa_119 = intrinsic load_deref (ssa_116) (access=0) vec1 32 ssa_121 = deref_var &BLENDINDICES_1 (shader_in uvec4) vec4 32 ssa_124 = intrinsic load_deref (ssa_121) (access=0) vec1 32 ssa_126 = deref_var &BLENDWEIGHT (shader_in vec4) vec4 32 ssa_129 = intrinsic load_deref (ssa_126) (access=0) vec1 32 ssa_131 = deref_var &BLENDWEIGHT (shader_in vec4) vec4 32 ssa_134 = intrinsic load_deref (ssa_131) (access=0) vec1 32 ssa_136 = deref_var &BLENDWEIGHT (shader_in vec4) vec4 32 ssa_139 = intrinsic load_deref (ssa_136) (access=0) vec1 32 ssa_141 = deref_var &BLENDWEIGHT (shader_in vec4) vec4 32 ssa_144 = intrinsic load_deref (ssa_141) (access=0) vec1 32 ssa_146 = deref_var &BLENDINDICES (shader_in uvec4) vec4 32 ssa_149 = intrinsic load_deref (ssa_146) (access=0) vec1 32 ssa_151 = deref_var &BLENDINDICES (shader_in uvec4) vec4 32 ssa_154 = intrinsic load_deref (ssa_151) (access=0) vec1 32 ssa_156 = deref_var &BLENDINDICES (shader_in uvec4) vec4 32 ssa_159 = intrinsic load_deref (ssa_156) (access=0) vec1 32 ssa_161 = deref_var &BLENDINDICES (shader_in uvec4) vec4 32 ssa_164 = intrinsic load_deref (ssa_161) (access=0) vec1 32 ssa_166 = deref_var &POSITION (shader_in vec3) vec3 32 ssa_169 = intrinsic load_deref (ssa_166) (access=0) vec1 32 ssa_171 = deref_var &POSITION (shader_in vec3) vec3 32 ssa_174 = intrinsic load_deref (ssa_171) (access=0) vec1 32 ssa_176 = deref_var &POSITION (shader_in vec3) vec3 32 ssa_179 = intrinsic load_deref (ssa_176) (access=0) vec1 32 ssa_181 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_182 = load_const (0x00000000 = 0.000000) vec1 32 ssa_183 = deref_array &(*ssa_181)[0] (shader_in vec4) /* &INSTANCE_TRANSFORM[0] */ vec4 32 ssa_186 = intrinsic load_deref (ssa_183) (access=0) vec1 32 ssa_188 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_189 = load_const (0x00000000 = 0.000000) vec1 32 ssa_190 = deref_array &(*ssa_188)[0] (shader_in vec4) /* &INSTANCE_TRANSFORM[0] */ vec4 32 ssa_193 = intrinsic load_deref (ssa_190) (access=0) vec1 32 ssa_195 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_196 = load_const (0x00000000 = 0.000000) vec1 32 ssa_197 = deref_array &(*ssa_195)[0] (shader_in vec4) /* &INSTANCE_TRANSFORM[0] */ vec4 32 ssa_200 = intrinsic load_deref (ssa_197) (access=0) vec1 32 ssa_202 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_203 = load_const (0x00000000 = 0.000000) vec1 32 ssa_204 = deref_array &(*ssa_202)[0] (shader_in vec4) /* &INSTANCE_TRANSFORM[0] */ vec4 32 ssa_207 = intrinsic load_deref (ssa_204) (access=0) vec1 32 ssa_209 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_210 = load_const (0x00000001 = 0.000000) vec1 32 ssa_211 = deref_array &(*ssa_209)[1] (shader_in vec4) /* &INSTANCE_TRANSFORM[1] */ vec4 32 ssa_214 = intrinsic load_deref (ssa_211) (access=0) vec1 32 ssa_216 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_217 = load_const (0x00000001 = 0.000000) vec1 32 ssa_218 = deref_array &(*ssa_216)[1] (shader_in 
vec4) /* &INSTANCE_TRANSFORM[1] */ vec4 32 ssa_221 = intrinsic load_deref (ssa_218) (access=0) vec1 32 ssa_223 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_224 = load_const (0x00000001 = 0.000000) vec1 32 ssa_225 = deref_array &(*ssa_223)[1] (shader_in vec4) /* &INSTANCE_TRANSFORM[1] */ vec4 32 ssa_228 = intrinsic load_deref (ssa_225) (access=0) vec1 32 ssa_230 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_231 = load_const (0x00000001 = 0.000000) vec1 32 ssa_232 = deref_array &(*ssa_230)[1] (shader_in vec4) /* &INSTANCE_TRANSFORM[1] */ vec4 32 ssa_235 = intrinsic load_deref (ssa_232) (access=0) vec1 32 ssa_237 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_238 = load_const (0x00000002 = 0.000000) vec1 32 ssa_239 = deref_array &(*ssa_237)[2] (shader_in vec4) /* &INSTANCE_TRANSFORM[2] */ vec4 32 ssa_242 = intrinsic load_deref (ssa_239) (access=0) vec1 32 ssa_244 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_245 = load_const (0x00000002 = 0.000000) vec1 32 ssa_246 = deref_array &(*ssa_244)[2] (shader_in vec4) /* &INSTANCE_TRANSFORM[2] */ vec4 32 ssa_249 = intrinsic load_deref (ssa_246) (access=0) vec1 32 ssa_251 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_252 = load_const (0x00000002 = 0.000000) vec1 32 ssa_253 = deref_array &(*ssa_251)[2] (shader_in vec4) /* &INSTANCE_TRANSFORM[2] */ vec4 32 ssa_256 = intrinsic load_deref (ssa_253) (access=0) vec1 32 ssa_258 = deref_var &INSTANCE_TRANSFORM (shader_in vec4[3]) vec1 32 ssa_259 = load_const (0x00000002 = 0.000000) vec1 32 ssa_260 = deref_array &(*ssa_258)[2] (shader_in vec4) /* &INSTANCE_TRANSFORM[2] */ vec4 32 ssa_263 = intrinsic load_deref (ssa_260) (access=0) vec4 32 ssa_265 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_266 = deref_cast (BindlessCBV *)ssa_265 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_267 = deref_struct &ssa_266->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_265)->field0 */ vec1 32 ssa_268 = load_const (0x00000026 = 0.000000) vec4 32 ssa_269 = deref_array &(*ssa_267)[38] (ssbo vec4) /* &((BindlessCBV *)ssa_265)->field0[38] */ vec4 32 ssa_270 = intrinsic load_deref (ssa_269) (access=16) vec1 32 ssa_282 = isub! ssa_207.w, ssa_270.x vec1 32 ssa_283 = isub! ssa_235.w, ssa_270.y vec1 32 ssa_284 = isub! ssa_263.w, ssa_270.z vec1 32 ssa_285 = i2f32! ssa_282 vec1 32 ssa_286 = i2f32! ssa_283 vec1 32 ssa_287 = i2f32! ssa_284 vec1 32 ssa_289 = fmul! ssa_285, ssa_288 vec1 32 ssa_291 = fmul! ssa_286, ssa_290 vec1 32 ssa_293 = fmul! 
ssa_287, ssa_292 vec4 32 ssa_294 = intrinsic load_vulkan_descriptor (ssa_13) (desc_type=SSBO /*7*/) vec4 32 ssa_295 = deref_cast (BindlessCBV *)ssa_294 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_296 = deref_struct &ssa_295->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_294)->field0 */ vec1 32 ssa_297 = load_const (0x00000005 = 0.000000) vec4 32 ssa_298 = deref_array &(*ssa_296)[5] (ssbo vec4) /* &((BindlessCBV *)ssa_294)->field0[5] */ vec4 32 ssa_299 = intrinsic load_deref (ssa_298) (access=16) vec4 32 ssa_303 = intrinsic load_vulkan_descriptor (ssa_13) (desc_type=SSBO /*7*/) vec4 32 ssa_304 = deref_cast (BindlessCBV *)ssa_303 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_305 = deref_struct &ssa_304->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_303)->field0 */ vec1 32 ssa_306 = load_const (0x00000004 = 0.000000) vec4 32 ssa_307 = deref_array &(*ssa_305)[4] (ssbo vec4) /* &((BindlessCBV *)ssa_303)->field0[4] */ vec4 32 ssa_308 = intrinsic load_deref (ssa_307) (access=16) vec1 32 ssa_312 = fmul! ssa_308.x, ssa_169.x vec1 32 ssa_313 = fmul! ssa_308.y, ssa_174.y vec1 32 ssa_314 = fmul! ssa_308.z, ssa_179.z vec1 32 ssa_315 = fadd! ssa_312, ssa_299.x vec1 32 ssa_316 = fadd! ssa_313, ssa_299.y vec1 32 ssa_317 = fadd! ssa_314, ssa_299.z vec4 32 ssa_318 = intrinsic load_vulkan_descriptor (ssa_23) (desc_type=SSBO /*7*/) vec4 32 ssa_319 = deref_cast (BindlessCBV *)ssa_318 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_320 = deref_struct &ssa_319->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_318)->field0 */ vec1 32 ssa_321 = load_const (0x00000001 = 0.000000) vec4 32 ssa_322 = deref_array &(*ssa_320)[1] (ssbo vec4) /* &((BindlessCBV *)ssa_318)->field0[1] */ vec4 32 ssa_323 = intrinsic load_deref (ssa_322) (access=16) vec4 32 ssa_326 = vec4! ssa_129.x, ssa_134.y, ssa_139.z, ssa_144.w vec4 32 ssa_331 = vec4! ssa_327, ssa_328, ssa_329, ssa_330 vec1 32 ssa_332 = fdot4! ssa_326, ssa_331 vec4 32 ssa_333 = vec4! ssa_89.x, ssa_94.y, ssa_99.z, ssa_104.w vec4 32 ssa_338 = vec4! ssa_334, ssa_335, ssa_336, ssa_337 vec1 32 ssa_339 = fdot4! ssa_333, ssa_338 vec1 32 ssa_340 = fadd! ssa_339, ssa_332 vec1 32 ssa_342 = fmax! ssa_341, ssa_340 vec1 32 ssa_343 = fdiv! ssa_129.x, ssa_342 vec1 32 ssa_344 = fdiv! ssa_134.y, ssa_342 vec1 32 ssa_345 = fdiv! ssa_139.z, ssa_342 vec1 32 ssa_346 = fdiv! ssa_144.w, ssa_342 vec1 32 ssa_347 = fdiv! ssa_89.x, ssa_342 vec1 32 ssa_348 = fdiv! ssa_94.y, ssa_342 vec1 32 ssa_349 = fdiv! ssa_99.z, ssa_342 vec1 32 ssa_350 = fdiv! 
ssa_104.w, ssa_342 vec1 32 ssa_351 = imul ssa_149.x, ssa_34.y vec1 32 ssa_352 = iadd ssa_351, ssa_29.x vec1 32 ssa_354 = ushr ssa_352, ssa_353 vec4 32 ssa_355 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_356 = deref_cast (SSBO *)ssa_355 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_357 = deref_struct &ssa_356->field0 (ssbo uint[]) /* &((SSBO *)ssa_355)->field0 */ vec4 32 ssa_358 = deref_array &(*ssa_357)[ssa_354] (ssbo uint) /* &((SSBO *)ssa_355)->field0[ssa_354] */ vec1 32 ssa_359 = intrinsic load_deref (ssa_358) (access=16) vec1 32 ssa_361 = iadd ssa_354, ssa_360 vec4 32 ssa_362 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_363 = deref_cast (SSBO *)ssa_362 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_364 = deref_struct &ssa_363->field0 (ssbo uint[]) /* &((SSBO *)ssa_362)->field0 */ vec4 32 ssa_365 = deref_array &(*ssa_364)[ssa_361] (ssbo uint) /* &((SSBO *)ssa_362)->field0[ssa_361] */ vec1 32 ssa_366 = intrinsic load_deref (ssa_365) (access=16) vec1 32 ssa_368 = iadd ssa_354, ssa_367 vec4 32 ssa_369 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_370 = deref_cast (SSBO *)ssa_369 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_371 = deref_struct &ssa_370->field0 (ssbo uint[]) /* &((SSBO *)ssa_369)->field0 */ vec4 32 ssa_372 = deref_array &(*ssa_371)[ssa_368] (ssbo uint) /* &((SSBO *)ssa_369)->field0[ssa_368] */ vec1 32 ssa_373 = intrinsic load_deref (ssa_372) (access=16) vec1 32 ssa_375 = iadd ssa_354, ssa_374 vec4 32 ssa_376 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_377 = deref_cast (SSBO *)ssa_376 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_378 = deref_struct &ssa_377->field0 (ssbo uint[]) /* &((SSBO *)ssa_376)->field0 */ vec4 32 ssa_379 = deref_array &(*ssa_378)[ssa_375] (ssbo uint) /* &((SSBO *)ssa_376)->field0[ssa_375] */ vec1 32 ssa_380 = intrinsic load_deref (ssa_379) (access=16) vec1 32 ssa_391 = iadd ssa_352, ssa_390 vec1 32 ssa_393 = ushr ssa_391, ssa_392 vec4 32 ssa_394 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_395 = deref_cast (SSBO *)ssa_394 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_396 = deref_struct &ssa_395->field0 (ssbo uint[]) /* &((SSBO *)ssa_394)->field0 */ vec4 32 ssa_397 = deref_array &(*ssa_396)[ssa_393] (ssbo uint) /* &((SSBO *)ssa_394)->field0[ssa_393] */ vec1 32 ssa_398 = intrinsic load_deref (ssa_397) (access=16) vec1 32 ssa_400 = iadd ssa_393, ssa_399 vec4 32 ssa_401 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_402 = deref_cast (SSBO *)ssa_401 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_403 = deref_struct &ssa_402->field0 (ssbo uint[]) /* &((SSBO *)ssa_401)->field0 */ vec4 32 ssa_404 = deref_array &(*ssa_403)[ssa_400] (ssbo uint) /* &((SSBO *)ssa_401)->field0[ssa_400] */ vec1 32 ssa_405 = intrinsic load_deref (ssa_404) (access=16) vec1 32 ssa_407 = iadd ssa_393, ssa_406 vec4 32 ssa_408 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_409 = deref_cast (SSBO *)ssa_408 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_410 = deref_struct &ssa_409->field0 (ssbo uint[]) /* &((SSBO *)ssa_408)->field0 */ vec4 32 ssa_411 = deref_array &(*ssa_410)[ssa_407] (ssbo uint) /* &((SSBO *)ssa_408)->field0[ssa_407] */ vec1 32 ssa_412 = intrinsic load_deref (ssa_411) 
(access=16) vec1 32 ssa_414 = iadd ssa_393, ssa_413 vec4 32 ssa_415 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_416 = deref_cast (SSBO *)ssa_415 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_417 = deref_struct &ssa_416->field0 (ssbo uint[]) /* &((SSBO *)ssa_415)->field0 */ vec4 32 ssa_418 = deref_array &(*ssa_417)[ssa_414] (ssbo uint) /* &((SSBO *)ssa_415)->field0[ssa_414] */ vec1 32 ssa_419 = intrinsic load_deref (ssa_418) (access=16) vec1 32 ssa_430 = iadd ssa_352, ssa_429 vec1 32 ssa_432 = ushr ssa_430, ssa_431 vec4 32 ssa_433 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_434 = deref_cast (SSBO *)ssa_433 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_435 = deref_struct &ssa_434->field0 (ssbo uint[]) /* &((SSBO *)ssa_433)->field0 */ vec4 32 ssa_436 = deref_array &(*ssa_435)[ssa_432] (ssbo uint) /* &((SSBO *)ssa_433)->field0[ssa_432] */ vec1 32 ssa_437 = intrinsic load_deref (ssa_436) (access=16) vec1 32 ssa_439 = iadd ssa_432, ssa_438 vec4 32 ssa_440 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_441 = deref_cast (SSBO *)ssa_440 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_442 = deref_struct &ssa_441->field0 (ssbo uint[]) /* &((SSBO *)ssa_440)->field0 */ vec4 32 ssa_443 = deref_array &(*ssa_442)[ssa_439] (ssbo uint) /* &((SSBO *)ssa_440)->field0[ssa_439] */ vec1 32 ssa_444 = intrinsic load_deref (ssa_443) (access=16) vec1 32 ssa_446 = iadd ssa_432, ssa_445 vec4 32 ssa_447 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_448 = deref_cast (SSBO *)ssa_447 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_449 = deref_struct &ssa_448->field0 (ssbo uint[]) /* &((SSBO *)ssa_447)->field0 */ vec4 32 ssa_450 = deref_array &(*ssa_449)[ssa_446] (ssbo uint) /* &((SSBO *)ssa_447)->field0[ssa_446] */ vec1 32 ssa_451 = intrinsic load_deref (ssa_450) (access=16) vec1 32 ssa_453 = iadd ssa_432, ssa_452 vec4 32 ssa_454 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_455 = deref_cast (SSBO *)ssa_454 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_456 = deref_struct &ssa_455->field0 (ssbo uint[]) /* &((SSBO *)ssa_454)->field0 */ vec4 32 ssa_457 = deref_array &(*ssa_456)[ssa_453] (ssbo uint) /* &((SSBO *)ssa_454)->field0[ssa_453] */ vec1 32 ssa_458 = intrinsic load_deref (ssa_457) (access=16) vec1 32 ssa_468 = imul ssa_109.x, ssa_34.y vec1 32 ssa_469 = iadd ssa_468, ssa_29.x vec1 32 ssa_471 = ushr ssa_469, ssa_470 vec4 32 ssa_472 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_473 = deref_cast (SSBO *)ssa_472 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_474 = deref_struct &ssa_473->field0 (ssbo uint[]) /* &((SSBO *)ssa_472)->field0 */ vec4 32 ssa_475 = deref_array &(*ssa_474)[ssa_471] (ssbo uint) /* &((SSBO *)ssa_472)->field0[ssa_471] */ vec1 32 ssa_476 = intrinsic load_deref (ssa_475) (access=16) vec1 32 ssa_478 = iadd ssa_471, ssa_477 vec4 32 ssa_479 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_480 = deref_cast (SSBO *)ssa_479 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_481 = deref_struct &ssa_480->field0 (ssbo uint[]) /* &((SSBO *)ssa_479)->field0 */ vec4 32 ssa_482 = deref_array &(*ssa_481)[ssa_478] (ssbo uint) /* &((SSBO *)ssa_479)->field0[ssa_478] */ vec1 32 ssa_483 = intrinsic load_deref (ssa_482) (access=16) 
vec1 32 ssa_485 = iadd ssa_471, ssa_484 vec4 32 ssa_486 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_487 = deref_cast (SSBO *)ssa_486 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_488 = deref_struct &ssa_487->field0 (ssbo uint[]) /* &((SSBO *)ssa_486)->field0 */ vec4 32 ssa_489 = deref_array &(*ssa_488)[ssa_485] (ssbo uint) /* &((SSBO *)ssa_486)->field0[ssa_485] */ vec1 32 ssa_490 = intrinsic load_deref (ssa_489) (access=16) vec1 32 ssa_492 = iadd ssa_471, ssa_491 vec4 32 ssa_493 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_494 = deref_cast (SSBO *)ssa_493 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_495 = deref_struct &ssa_494->field0 (ssbo uint[]) /* &((SSBO *)ssa_493)->field0 */ vec4 32 ssa_496 = deref_array &(*ssa_495)[ssa_492] (ssbo uint) /* &((SSBO *)ssa_493)->field0[ssa_492] */ vec1 32 ssa_497 = intrinsic load_deref (ssa_496) (access=16) vec1 32 ssa_508 = iadd ssa_469, ssa_507 vec1 32 ssa_510 = ushr ssa_508, ssa_509 vec4 32 ssa_511 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_512 = deref_cast (SSBO *)ssa_511 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_513 = deref_struct &ssa_512->field0 (ssbo uint[]) /* &((SSBO *)ssa_511)->field0 */ vec4 32 ssa_514 = deref_array &(*ssa_513)[ssa_510] (ssbo uint) /* &((SSBO *)ssa_511)->field0[ssa_510] */ vec1 32 ssa_515 = intrinsic load_deref (ssa_514) (access=16) vec1 32 ssa_517 = iadd ssa_510, ssa_516 vec4 32 ssa_518 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_519 = deref_cast (SSBO *)ssa_518 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_520 = deref_struct &ssa_519->field0 (ssbo uint[]) /* &((SSBO *)ssa_518)->field0 */ vec4 32 ssa_521 = deref_array &(*ssa_520)[ssa_517] (ssbo uint) /* &((SSBO *)ssa_518)->field0[ssa_517] */ vec1 32 ssa_522 = intrinsic load_deref (ssa_521) (access=16) vec1 32 ssa_524 = iadd ssa_510, ssa_523 vec4 32 ssa_525 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_526 = deref_cast (SSBO *)ssa_525 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_527 = deref_struct &ssa_526->field0 (ssbo uint[]) /* &((SSBO *)ssa_525)->field0 */ vec4 32 ssa_528 = deref_array &(*ssa_527)[ssa_524] (ssbo uint) /* &((SSBO *)ssa_525)->field0[ssa_524] */ vec1 32 ssa_529 = intrinsic load_deref (ssa_528) (access=16) vec1 32 ssa_531 = iadd ssa_510, ssa_530 vec4 32 ssa_532 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_533 = deref_cast (SSBO *)ssa_532 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_534 = deref_struct &ssa_533->field0 (ssbo uint[]) /* &((SSBO *)ssa_532)->field0 */ vec4 32 ssa_535 = deref_array &(*ssa_534)[ssa_531] (ssbo uint) /* &((SSBO *)ssa_532)->field0[ssa_531] */ vec1 32 ssa_536 = intrinsic load_deref (ssa_535) (access=16) vec1 32 ssa_547 = iadd ssa_469, ssa_546 vec1 32 ssa_549 = ushr ssa_547, ssa_548 vec4 32 ssa_550 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_551 = deref_cast (SSBO *)ssa_550 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_552 = deref_struct &ssa_551->field0 (ssbo uint[]) /* &((SSBO *)ssa_550)->field0 */ vec4 32 ssa_553 = deref_array &(*ssa_552)[ssa_549] (ssbo uint) /* &((SSBO *)ssa_550)->field0[ssa_549] */ vec1 32 ssa_554 = intrinsic load_deref (ssa_553) (access=16) vec1 32 ssa_556 = iadd ssa_549, ssa_555 vec4 32 ssa_557 = 
intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_558 = deref_cast (SSBO *)ssa_557 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_559 = deref_struct &ssa_558->field0 (ssbo uint[]) /* &((SSBO *)ssa_557)->field0 */ vec4 32 ssa_560 = deref_array &(*ssa_559)[ssa_556] (ssbo uint) /* &((SSBO *)ssa_557)->field0[ssa_556] */ vec1 32 ssa_561 = intrinsic load_deref (ssa_560) (access=16) vec1 32 ssa_563 = iadd ssa_549, ssa_562 vec4 32 ssa_564 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_565 = deref_cast (SSBO *)ssa_564 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_566 = deref_struct &ssa_565->field0 (ssbo uint[]) /* &((SSBO *)ssa_564)->field0 */ vec4 32 ssa_567 = deref_array &(*ssa_566)[ssa_563] (ssbo uint) /* &((SSBO *)ssa_564)->field0[ssa_563] */ vec1 32 ssa_568 = intrinsic load_deref (ssa_567) (access=16) vec1 32 ssa_570 = iadd ssa_549, ssa_569 vec4 32 ssa_571 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_572 = deref_cast (SSBO *)ssa_571 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_573 = deref_struct &ssa_572->field0 (ssbo uint[]) /* &((SSBO *)ssa_571)->field0 */ vec4 32 ssa_574 = deref_array &(*ssa_573)[ssa_570] (ssbo uint) /* &((SSBO *)ssa_571)->field0[ssa_570] */ vec1 32 ssa_575 = intrinsic load_deref (ssa_574) (access=16) vec1 32 ssa_585 = fmul! ssa_359, ssa_343 vec1 32 ssa_586 = fmul! ssa_398, ssa_343 vec1 32 ssa_587 = fmul! ssa_437, ssa_343 vec1 32 ssa_588 = fmul! ssa_366, ssa_343 vec1 32 ssa_589 = fmul! ssa_405, ssa_343 vec1 32 ssa_590 = fmul! ssa_444, ssa_343 vec1 32 ssa_591 = fmul! ssa_373, ssa_343 vec1 32 ssa_592 = fmul! ssa_412, ssa_343 vec1 32 ssa_593 = fmul! ssa_451, ssa_343 vec1 32 ssa_594 = fmul! ssa_380, ssa_343 vec1 32 ssa_595 = fmul! ssa_419, ssa_343 vec1 32 ssa_596 = fmul! ssa_458, ssa_343 vec1 32 ssa_597 = fmul! ssa_476, ssa_347 vec1 32 ssa_598 = fmul! ssa_515, ssa_347 vec1 32 ssa_599 = fmul! ssa_554, ssa_347 vec1 32 ssa_600 = fmul! ssa_483, ssa_347 vec1 32 ssa_601 = fmul! ssa_522, ssa_347 vec1 32 ssa_602 = fmul! ssa_561, ssa_347 vec1 32 ssa_603 = fmul! ssa_490, ssa_347 vec1 32 ssa_604 = fmul! ssa_529, ssa_347 vec1 32 ssa_605 = fmul! ssa_568, ssa_347 vec1 32 ssa_606 = fmul! ssa_497, ssa_347 vec1 32 ssa_607 = fmul! ssa_536, ssa_347 vec1 32 ssa_608 = fmul! ssa_575, ssa_347 vec1 32 ssa_609 = fadd! ssa_597, ssa_585 vec1 32 ssa_610 = fadd! ssa_598, ssa_586 vec1 32 ssa_611 = fadd! ssa_599, ssa_587 vec1 32 ssa_612 = fadd! ssa_600, ssa_588 vec1 32 ssa_613 = fadd! ssa_601, ssa_589 vec1 32 ssa_614 = fadd! ssa_602, ssa_590 vec1 32 ssa_615 = fadd! ssa_603, ssa_591 vec1 32 ssa_616 = fadd! ssa_604, ssa_592 vec1 32 ssa_617 = fadd! ssa_605, ssa_593 vec1 32 ssa_618 = fadd! ssa_606, ssa_594 vec1 32 ssa_619 = fadd! ssa_607, ssa_595 vec1 32 ssa_620 = fadd! 
ssa_608, ssa_596 vec1 32 ssa_621 = imul ssa_154.y, ssa_34.y vec1 32 ssa_622 = iadd ssa_621, ssa_29.x vec1 32 ssa_624 = ushr ssa_622, ssa_623 vec4 32 ssa_625 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_626 = deref_cast (SSBO *)ssa_625 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_627 = deref_struct &ssa_626->field0 (ssbo uint[]) /* &((SSBO *)ssa_625)->field0 */ vec4 32 ssa_628 = deref_array &(*ssa_627)[ssa_624] (ssbo uint) /* &((SSBO *)ssa_625)->field0[ssa_624] */ vec1 32 ssa_629 = intrinsic load_deref (ssa_628) (access=16) vec1 32 ssa_631 = iadd ssa_624, ssa_630 vec4 32 ssa_632 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_633 = deref_cast (SSBO *)ssa_632 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_634 = deref_struct &ssa_633->field0 (ssbo uint[]) /* &((SSBO *)ssa_632)->field0 */ vec4 32 ssa_635 = deref_array &(*ssa_634)[ssa_631] (ssbo uint) /* &((SSBO *)ssa_632)->field0[ssa_631] */ vec1 32 ssa_636 = intrinsic load_deref (ssa_635) (access=16) vec1 32 ssa_638 = iadd ssa_624, ssa_637 vec4 32 ssa_639 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_640 = deref_cast (SSBO *)ssa_639 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_641 = deref_struct &ssa_640->field0 (ssbo uint[]) /* &((SSBO *)ssa_639)->field0 */ vec4 32 ssa_642 = deref_array &(*ssa_641)[ssa_638] (ssbo uint) /* &((SSBO *)ssa_639)->field0[ssa_638] */ vec1 32 ssa_643 = intrinsic load_deref (ssa_642) (access=16) vec1 32 ssa_645 = iadd ssa_624, ssa_644 vec4 32 ssa_646 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_647 = deref_cast (SSBO *)ssa_646 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_648 = deref_struct &ssa_647->field0 (ssbo uint[]) /* &((SSBO *)ssa_646)->field0 */ vec4 32 ssa_649 = deref_array &(*ssa_648)[ssa_645] (ssbo uint) /* &((SSBO *)ssa_646)->field0[ssa_645] */ vec1 32 ssa_650 = intrinsic load_deref (ssa_649) (access=16) vec1 32 ssa_661 = iadd ssa_622, ssa_660 vec1 32 ssa_663 = ushr ssa_661, ssa_662 vec4 32 ssa_664 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_665 = deref_cast (SSBO *)ssa_664 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_666 = deref_struct &ssa_665->field0 (ssbo uint[]) /* &((SSBO *)ssa_664)->field0 */ vec4 32 ssa_667 = deref_array &(*ssa_666)[ssa_663] (ssbo uint) /* &((SSBO *)ssa_664)->field0[ssa_663] */ vec1 32 ssa_668 = intrinsic load_deref (ssa_667) (access=16) vec1 32 ssa_670 = iadd ssa_663, ssa_669 vec4 32 ssa_671 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_672 = deref_cast (SSBO *)ssa_671 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_673 = deref_struct &ssa_672->field0 (ssbo uint[]) /* &((SSBO *)ssa_671)->field0 */ vec4 32 ssa_674 = deref_array &(*ssa_673)[ssa_670] (ssbo uint) /* &((SSBO *)ssa_671)->field0[ssa_670] */ vec1 32 ssa_675 = intrinsic load_deref (ssa_674) (access=16) vec1 32 ssa_677 = iadd ssa_663, ssa_676 vec4 32 ssa_678 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_679 = deref_cast (SSBO *)ssa_678 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_680 = deref_struct &ssa_679->field0 (ssbo uint[]) /* &((SSBO *)ssa_678)->field0 */ vec4 32 ssa_681 = deref_array &(*ssa_680)[ssa_677] (ssbo uint) /* &((SSBO *)ssa_678)->field0[ssa_677] */ vec1 32 ssa_682 = intrinsic load_deref (ssa_681) 
(access=16) vec1 32 ssa_684 = iadd ssa_663, ssa_683 vec4 32 ssa_685 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_686 = deref_cast (SSBO *)ssa_685 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_687 = deref_struct &ssa_686->field0 (ssbo uint[]) /* &((SSBO *)ssa_685)->field0 */ vec4 32 ssa_688 = deref_array &(*ssa_687)[ssa_684] (ssbo uint) /* &((SSBO *)ssa_685)->field0[ssa_684] */ vec1 32 ssa_689 = intrinsic load_deref (ssa_688) (access=16) vec1 32 ssa_700 = iadd ssa_622, ssa_699 vec1 32 ssa_702 = ushr ssa_700, ssa_701 vec4 32 ssa_703 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_704 = deref_cast (SSBO *)ssa_703 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_705 = deref_struct &ssa_704->field0 (ssbo uint[]) /* &((SSBO *)ssa_703)->field0 */ vec4 32 ssa_706 = deref_array &(*ssa_705)[ssa_702] (ssbo uint) /* &((SSBO *)ssa_703)->field0[ssa_702] */ vec1 32 ssa_707 = intrinsic load_deref (ssa_706) (access=16) vec1 32 ssa_709 = iadd ssa_702, ssa_708 vec4 32 ssa_710 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_711 = deref_cast (SSBO *)ssa_710 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_712 = deref_struct &ssa_711->field0 (ssbo uint[]) /* &((SSBO *)ssa_710)->field0 */ vec4 32 ssa_713 = deref_array &(*ssa_712)[ssa_709] (ssbo uint) /* &((SSBO *)ssa_710)->field0[ssa_709] */ vec1 32 ssa_714 = intrinsic load_deref (ssa_713) (access=16) vec1 32 ssa_716 = iadd ssa_702, ssa_715 vec4 32 ssa_717 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_718 = deref_cast (SSBO *)ssa_717 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_719 = deref_struct &ssa_718->field0 (ssbo uint[]) /* &((SSBO *)ssa_717)->field0 */ vec4 32 ssa_720 = deref_array &(*ssa_719)[ssa_716] (ssbo uint) /* &((SSBO *)ssa_717)->field0[ssa_716] */ vec1 32 ssa_721 = intrinsic load_deref (ssa_720) (access=16) vec1 32 ssa_723 = iadd ssa_702, ssa_722 vec4 32 ssa_724 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_725 = deref_cast (SSBO *)ssa_724 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_726 = deref_struct &ssa_725->field0 (ssbo uint[]) /* &((SSBO *)ssa_724)->field0 */ vec4 32 ssa_727 = deref_array &(*ssa_726)[ssa_723] (ssbo uint) /* &((SSBO *)ssa_724)->field0[ssa_723] */ vec1 32 ssa_728 = intrinsic load_deref (ssa_727) (access=16) vec1 32 ssa_738 = imul ssa_114.y, ssa_34.y vec1 32 ssa_739 = iadd ssa_738, ssa_29.x vec1 32 ssa_741 = ushr ssa_739, ssa_740 vec4 32 ssa_742 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_743 = deref_cast (SSBO *)ssa_742 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_744 = deref_struct &ssa_743->field0 (ssbo uint[]) /* &((SSBO *)ssa_742)->field0 */ vec4 32 ssa_745 = deref_array &(*ssa_744)[ssa_741] (ssbo uint) /* &((SSBO *)ssa_742)->field0[ssa_741] */ vec1 32 ssa_746 = intrinsic load_deref (ssa_745) (access=16) vec1 32 ssa_748 = iadd ssa_741, ssa_747 vec4 32 ssa_749 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_750 = deref_cast (SSBO *)ssa_749 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_751 = deref_struct &ssa_750->field0 (ssbo uint[]) /* &((SSBO *)ssa_749)->field0 */ vec4 32 ssa_752 = deref_array &(*ssa_751)[ssa_748] (ssbo uint) /* &((SSBO *)ssa_749)->field0[ssa_748] */ vec1 32 ssa_753 = intrinsic load_deref (ssa_752) (access=16) 
vec1 32 ssa_755 = iadd ssa_741, ssa_754 vec4 32 ssa_756 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_757 = deref_cast (SSBO *)ssa_756 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_758 = deref_struct &ssa_757->field0 (ssbo uint[]) /* &((SSBO *)ssa_756)->field0 */ vec4 32 ssa_759 = deref_array &(*ssa_758)[ssa_755] (ssbo uint) /* &((SSBO *)ssa_756)->field0[ssa_755] */ vec1 32 ssa_760 = intrinsic load_deref (ssa_759) (access=16) vec1 32 ssa_762 = iadd ssa_741, ssa_761 vec4 32 ssa_763 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_764 = deref_cast (SSBO *)ssa_763 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_765 = deref_struct &ssa_764->field0 (ssbo uint[]) /* &((SSBO *)ssa_763)->field0 */ vec4 32 ssa_766 = deref_array &(*ssa_765)[ssa_762] (ssbo uint) /* &((SSBO *)ssa_763)->field0[ssa_762] */ vec1 32 ssa_767 = intrinsic load_deref (ssa_766) (access=16) vec1 32 ssa_778 = iadd ssa_739, ssa_777 vec1 32 ssa_780 = ushr ssa_778, ssa_779 vec4 32 ssa_781 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_782 = deref_cast (SSBO *)ssa_781 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_783 = deref_struct &ssa_782->field0 (ssbo uint[]) /* &((SSBO *)ssa_781)->field0 */ vec4 32 ssa_784 = deref_array &(*ssa_783)[ssa_780] (ssbo uint) /* &((SSBO *)ssa_781)->field0[ssa_780] */ vec1 32 ssa_785 = intrinsic load_deref (ssa_784) (access=16) vec1 32 ssa_787 = iadd ssa_780, ssa_786 vec4 32 ssa_788 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_789 = deref_cast (SSBO *)ssa_788 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_790 = deref_struct &ssa_789->field0 (ssbo uint[]) /* &((SSBO *)ssa_788)->field0 */ vec4 32 ssa_791 = deref_array &(*ssa_790)[ssa_787] (ssbo uint) /* &((SSBO *)ssa_788)->field0[ssa_787] */ vec1 32 ssa_792 = intrinsic load_deref (ssa_791) (access=16) vec1 32 ssa_794 = iadd ssa_780, ssa_793 vec4 32 ssa_795 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_796 = deref_cast (SSBO *)ssa_795 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_797 = deref_struct &ssa_796->field0 (ssbo uint[]) /* &((SSBO *)ssa_795)->field0 */ vec4 32 ssa_798 = deref_array &(*ssa_797)[ssa_794] (ssbo uint) /* &((SSBO *)ssa_795)->field0[ssa_794] */ vec1 32 ssa_799 = intrinsic load_deref (ssa_798) (access=16) vec1 32 ssa_801 = iadd ssa_780, ssa_800 vec4 32 ssa_802 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_803 = deref_cast (SSBO *)ssa_802 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_804 = deref_struct &ssa_803->field0 (ssbo uint[]) /* &((SSBO *)ssa_802)->field0 */ vec4 32 ssa_805 = deref_array &(*ssa_804)[ssa_801] (ssbo uint) /* &((SSBO *)ssa_802)->field0[ssa_801] */ vec1 32 ssa_806 = intrinsic load_deref (ssa_805) (access=16) vec1 32 ssa_817 = iadd ssa_739, ssa_816 vec1 32 ssa_819 = ushr ssa_817, ssa_818 vec4 32 ssa_820 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_821 = deref_cast (SSBO *)ssa_820 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_822 = deref_struct &ssa_821->field0 (ssbo uint[]) /* &((SSBO *)ssa_820)->field0 */ vec4 32 ssa_823 = deref_array &(*ssa_822)[ssa_819] (ssbo uint) /* &((SSBO *)ssa_820)->field0[ssa_819] */ vec1 32 ssa_824 = intrinsic load_deref (ssa_823) (access=16) vec1 32 ssa_826 = iadd ssa_819, ssa_825 vec4 32 ssa_827 = 
intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_828 = deref_cast (SSBO *)ssa_827 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_829 = deref_struct &ssa_828->field0 (ssbo uint[]) /* &((SSBO *)ssa_827)->field0 */ vec4 32 ssa_830 = deref_array &(*ssa_829)[ssa_826] (ssbo uint) /* &((SSBO *)ssa_827)->field0[ssa_826] */ vec1 32 ssa_831 = intrinsic load_deref (ssa_830) (access=16) vec1 32 ssa_833 = iadd ssa_819, ssa_832 vec4 32 ssa_834 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_835 = deref_cast (SSBO *)ssa_834 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_836 = deref_struct &ssa_835->field0 (ssbo uint[]) /* &((SSBO *)ssa_834)->field0 */ vec4 32 ssa_837 = deref_array &(*ssa_836)[ssa_833] (ssbo uint) /* &((SSBO *)ssa_834)->field0[ssa_833] */ vec1 32 ssa_838 = intrinsic load_deref (ssa_837) (access=16) vec1 32 ssa_840 = iadd ssa_819, ssa_839 vec4 32 ssa_841 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_842 = deref_cast (SSBO *)ssa_841 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_843 = deref_struct &ssa_842->field0 (ssbo uint[]) /* &((SSBO *)ssa_841)->field0 */ vec4 32 ssa_844 = deref_array &(*ssa_843)[ssa_840] (ssbo uint) /* &((SSBO *)ssa_841)->field0[ssa_840] */ vec1 32 ssa_845 = intrinsic load_deref (ssa_844) (access=16) vec1 32 ssa_855 = fmul! ssa_629, ssa_344 vec1 32 ssa_856 = fmul! ssa_668, ssa_344 vec1 32 ssa_857 = fmul! ssa_707, ssa_344 vec1 32 ssa_858 = fmul! ssa_636, ssa_344 vec1 32 ssa_859 = fmul! ssa_675, ssa_344 vec1 32 ssa_860 = fmul! ssa_714, ssa_344 vec1 32 ssa_861 = fmul! ssa_643, ssa_344 vec1 32 ssa_862 = fmul! ssa_682, ssa_344 vec1 32 ssa_863 = fmul! ssa_721, ssa_344 vec1 32 ssa_864 = fmul! ssa_650, ssa_344 vec1 32 ssa_865 = fmul! ssa_689, ssa_344 vec1 32 ssa_866 = fmul! ssa_728, ssa_344 vec1 32 ssa_867 = fadd! ssa_609, ssa_855 vec1 32 ssa_868 = fadd! ssa_610, ssa_856 vec1 32 ssa_869 = fadd! ssa_611, ssa_857 vec1 32 ssa_870 = fadd! ssa_612, ssa_858 vec1 32 ssa_871 = fadd! ssa_613, ssa_859 vec1 32 ssa_872 = fadd! ssa_614, ssa_860 vec1 32 ssa_873 = fadd! ssa_615, ssa_861 vec1 32 ssa_874 = fadd! ssa_616, ssa_862 vec1 32 ssa_875 = fadd! ssa_617, ssa_863 vec1 32 ssa_876 = fadd! ssa_618, ssa_864 vec1 32 ssa_877 = fadd! ssa_619, ssa_865 vec1 32 ssa_878 = fadd! ssa_620, ssa_866 vec1 32 ssa_879 = fmul! ssa_746, ssa_348 vec1 32 ssa_880 = fmul! ssa_785, ssa_348 vec1 32 ssa_881 = fmul! ssa_824, ssa_348 vec1 32 ssa_882 = fmul! ssa_753, ssa_348 vec1 32 ssa_883 = fmul! ssa_792, ssa_348 vec1 32 ssa_884 = fmul! ssa_831, ssa_348 vec1 32 ssa_885 = fmul! ssa_760, ssa_348 vec1 32 ssa_886 = fmul! ssa_799, ssa_348 vec1 32 ssa_887 = fmul! ssa_838, ssa_348 vec1 32 ssa_888 = fmul! ssa_767, ssa_348 vec1 32 ssa_889 = fmul! ssa_806, ssa_348 vec1 32 ssa_890 = fmul! ssa_845, ssa_348 vec1 32 ssa_891 = fadd! ssa_867, ssa_879 vec1 32 ssa_892 = fadd! ssa_868, ssa_880 vec1 32 ssa_893 = fadd! ssa_869, ssa_881 vec1 32 ssa_894 = fadd! ssa_870, ssa_882 vec1 32 ssa_895 = fadd! ssa_871, ssa_883 vec1 32 ssa_896 = fadd! ssa_872, ssa_884 vec1 32 ssa_897 = fadd! ssa_873, ssa_885 vec1 32 ssa_898 = fadd! ssa_874, ssa_886 vec1 32 ssa_899 = fadd! ssa_875, ssa_887 vec1 32 ssa_900 = fadd! ssa_876, ssa_888 vec1 32 ssa_901 = fadd! ssa_877, ssa_889 vec1 32 ssa_902 = fadd! 
ssa_878, ssa_890 vec1 32 ssa_903 = imul ssa_159.z, ssa_34.y vec1 32 ssa_904 = iadd ssa_903, ssa_29.x vec1 32 ssa_906 = ushr ssa_904, ssa_905 vec4 32 ssa_907 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_908 = deref_cast (SSBO *)ssa_907 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_909 = deref_struct &ssa_908->field0 (ssbo uint[]) /* &((SSBO *)ssa_907)->field0 */ vec4 32 ssa_910 = deref_array &(*ssa_909)[ssa_906] (ssbo uint) /* &((SSBO *)ssa_907)->field0[ssa_906] */ vec1 32 ssa_911 = intrinsic load_deref (ssa_910) (access=16) vec1 32 ssa_913 = iadd ssa_906, ssa_912 vec4 32 ssa_914 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_915 = deref_cast (SSBO *)ssa_914 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_916 = deref_struct &ssa_915->field0 (ssbo uint[]) /* &((SSBO *)ssa_914)->field0 */ vec4 32 ssa_917 = deref_array &(*ssa_916)[ssa_913] (ssbo uint) /* &((SSBO *)ssa_914)->field0[ssa_913] */ vec1 32 ssa_918 = intrinsic load_deref (ssa_917) (access=16) vec1 32 ssa_920 = iadd ssa_906, ssa_919 vec4 32 ssa_921 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_922 = deref_cast (SSBO *)ssa_921 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_923 = deref_struct &ssa_922->field0 (ssbo uint[]) /* &((SSBO *)ssa_921)->field0 */ vec4 32 ssa_924 = deref_array &(*ssa_923)[ssa_920] (ssbo uint) /* &((SSBO *)ssa_921)->field0[ssa_920] */ vec1 32 ssa_925 = intrinsic load_deref (ssa_924) (access=16) vec1 32 ssa_927 = iadd ssa_906, ssa_926 vec4 32 ssa_928 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_929 = deref_cast (SSBO *)ssa_928 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_930 = deref_struct &ssa_929->field0 (ssbo uint[]) /* &((SSBO *)ssa_928)->field0 */ vec4 32 ssa_931 = deref_array &(*ssa_930)[ssa_927] (ssbo uint) /* &((SSBO *)ssa_928)->field0[ssa_927] */ vec1 32 ssa_932 = intrinsic load_deref (ssa_931) (access=16) vec1 32 ssa_943 = iadd ssa_904, ssa_942 vec1 32 ssa_945 = ushr ssa_943, ssa_944 vec4 32 ssa_946 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_947 = deref_cast (SSBO *)ssa_946 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_948 = deref_struct &ssa_947->field0 (ssbo uint[]) /* &((SSBO *)ssa_946)->field0 */ vec4 32 ssa_949 = deref_array &(*ssa_948)[ssa_945] (ssbo uint) /* &((SSBO *)ssa_946)->field0[ssa_945] */ vec1 32 ssa_950 = intrinsic load_deref (ssa_949) (access=16) vec1 32 ssa_952 = iadd ssa_945, ssa_951 vec4 32 ssa_953 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_954 = deref_cast (SSBO *)ssa_953 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_955 = deref_struct &ssa_954->field0 (ssbo uint[]) /* &((SSBO *)ssa_953)->field0 */ vec4 32 ssa_956 = deref_array &(*ssa_955)[ssa_952] (ssbo uint) /* &((SSBO *)ssa_953)->field0[ssa_952] */ vec1 32 ssa_957 = intrinsic load_deref (ssa_956) (access=16) vec1 32 ssa_959 = iadd ssa_945, ssa_958 vec4 32 ssa_960 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_961 = deref_cast (SSBO *)ssa_960 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_962 = deref_struct &ssa_961->field0 (ssbo uint[]) /* &((SSBO *)ssa_960)->field0 */ vec4 32 ssa_963 = deref_array &(*ssa_962)[ssa_959] (ssbo uint) /* &((SSBO *)ssa_960)->field0[ssa_959] */ vec1 32 ssa_964 = intrinsic load_deref (ssa_963) 
(access=16) vec1 32 ssa_966 = iadd ssa_945, ssa_965 vec4 32 ssa_967 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_968 = deref_cast (SSBO *)ssa_967 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_969 = deref_struct &ssa_968->field0 (ssbo uint[]) /* &((SSBO *)ssa_967)->field0 */ vec4 32 ssa_970 = deref_array &(*ssa_969)[ssa_966] (ssbo uint) /* &((SSBO *)ssa_967)->field0[ssa_966] */ vec1 32 ssa_971 = intrinsic load_deref (ssa_970) (access=16) vec1 32 ssa_982 = iadd ssa_904, ssa_981 vec1 32 ssa_984 = ushr ssa_982, ssa_983 vec4 32 ssa_985 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_986 = deref_cast (SSBO *)ssa_985 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_987 = deref_struct &ssa_986->field0 (ssbo uint[]) /* &((SSBO *)ssa_985)->field0 */ vec4 32 ssa_988 = deref_array &(*ssa_987)[ssa_984] (ssbo uint) /* &((SSBO *)ssa_985)->field0[ssa_984] */ vec1 32 ssa_989 = intrinsic load_deref (ssa_988) (access=16) vec1 32 ssa_991 = iadd ssa_984, ssa_990 vec4 32 ssa_992 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_993 = deref_cast (SSBO *)ssa_992 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_994 = deref_struct &ssa_993->field0 (ssbo uint[]) /* &((SSBO *)ssa_992)->field0 */ vec4 32 ssa_995 = deref_array &(*ssa_994)[ssa_991] (ssbo uint) /* &((SSBO *)ssa_992)->field0[ssa_991] */ vec1 32 ssa_996 = intrinsic load_deref (ssa_995) (access=16) vec1 32 ssa_998 = iadd ssa_984, ssa_997 vec4 32 ssa_999 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1000 = deref_cast (SSBO *)ssa_999 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1001 = deref_struct &ssa_1000->field0 (ssbo uint[]) /* &((SSBO *)ssa_999)->field0 */ vec4 32 ssa_1002 = deref_array &(*ssa_1001)[ssa_998] (ssbo uint) /* &((SSBO *)ssa_999)->field0[ssa_998] */ vec1 32 ssa_1003 = intrinsic load_deref (ssa_1002) (access=16) vec1 32 ssa_1005 = iadd ssa_984, ssa_1004 vec4 32 ssa_1006 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1007 = deref_cast (SSBO *)ssa_1006 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1008 = deref_struct &ssa_1007->field0 (ssbo uint[]) /* &((SSBO *)ssa_1006)->field0 */ vec4 32 ssa_1009 = deref_array &(*ssa_1008)[ssa_1005] (ssbo uint) /* &((SSBO *)ssa_1006)->field0[ssa_1005] */ vec1 32 ssa_1010 = intrinsic load_deref (ssa_1009) (access=16) vec1 32 ssa_1020 = imul ssa_119.z, ssa_34.y vec1 32 ssa_1021 = iadd ssa_1020, ssa_29.x vec1 32 ssa_1023 = ushr ssa_1021, ssa_1022 vec4 32 ssa_1024 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1025 = deref_cast (SSBO *)ssa_1024 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1026 = deref_struct &ssa_1025->field0 (ssbo uint[]) /* &((SSBO *)ssa_1024)->field0 */ vec4 32 ssa_1027 = deref_array &(*ssa_1026)[ssa_1023] (ssbo uint) /* &((SSBO *)ssa_1024)->field0[ssa_1023] */ vec1 32 ssa_1028 = intrinsic load_deref (ssa_1027) (access=16) vec1 32 ssa_1030 = iadd ssa_1023, ssa_1029 vec4 32 ssa_1031 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1032 = deref_cast (SSBO *)ssa_1031 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1033 = deref_struct &ssa_1032->field0 (ssbo uint[]) /* &((SSBO *)ssa_1031)->field0 */ vec4 32 ssa_1034 = deref_array &(*ssa_1033)[ssa_1030] (ssbo uint) /* &((SSBO *)ssa_1031)->field0[ssa_1030] */ vec1 32 
ssa_1035 = intrinsic load_deref (ssa_1034) (access=16) vec1 32 ssa_1037 = iadd ssa_1023, ssa_1036 vec4 32 ssa_1038 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1039 = deref_cast (SSBO *)ssa_1038 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1040 = deref_struct &ssa_1039->field0 (ssbo uint[]) /* &((SSBO *)ssa_1038)->field0 */ vec4 32 ssa_1041 = deref_array &(*ssa_1040)[ssa_1037] (ssbo uint) /* &((SSBO *)ssa_1038)->field0[ssa_1037] */ vec1 32 ssa_1042 = intrinsic load_deref (ssa_1041) (access=16) vec1 32 ssa_1044 = iadd ssa_1023, ssa_1043 vec4 32 ssa_1045 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1046 = deref_cast (SSBO *)ssa_1045 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1047 = deref_struct &ssa_1046->field0 (ssbo uint[]) /* &((SSBO *)ssa_1045)->field0 */ vec4 32 ssa_1048 = deref_array &(*ssa_1047)[ssa_1044] (ssbo uint) /* &((SSBO *)ssa_1045)->field0[ssa_1044] */ vec1 32 ssa_1049 = intrinsic load_deref (ssa_1048) (access=16) vec1 32 ssa_1060 = iadd ssa_1021, ssa_1059 vec1 32 ssa_1062 = ushr ssa_1060, ssa_1061 vec4 32 ssa_1063 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1064 = deref_cast (SSBO *)ssa_1063 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1065 = deref_struct &ssa_1064->field0 (ssbo uint[]) /* &((SSBO *)ssa_1063)->field0 */ vec4 32 ssa_1066 = deref_array &(*ssa_1065)[ssa_1062] (ssbo uint) /* &((SSBO *)ssa_1063)->field0[ssa_1062] */ vec1 32 ssa_1067 = intrinsic load_deref (ssa_1066) (access=16) vec1 32 ssa_1069 = iadd ssa_1062, ssa_1068 vec4 32 ssa_1070 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1071 = deref_cast (SSBO *)ssa_1070 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1072 = deref_struct &ssa_1071->field0 (ssbo uint[]) /* &((SSBO *)ssa_1070)->field0 */ vec4 32 ssa_1073 = deref_array &(*ssa_1072)[ssa_1069] (ssbo uint) /* &((SSBO *)ssa_1070)->field0[ssa_1069] */ vec1 32 ssa_1074 = intrinsic load_deref (ssa_1073) (access=16) vec1 32 ssa_1076 = iadd ssa_1062, ssa_1075 vec4 32 ssa_1077 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1078 = deref_cast (SSBO *)ssa_1077 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1079 = deref_struct &ssa_1078->field0 (ssbo uint[]) /* &((SSBO *)ssa_1077)->field0 */ vec4 32 ssa_1080 = deref_array &(*ssa_1079)[ssa_1076] (ssbo uint) /* &((SSBO *)ssa_1077)->field0[ssa_1076] */ vec1 32 ssa_1081 = intrinsic load_deref (ssa_1080) (access=16) vec1 32 ssa_1083 = iadd ssa_1062, ssa_1082 vec4 32 ssa_1084 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1085 = deref_cast (SSBO *)ssa_1084 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1086 = deref_struct &ssa_1085->field0 (ssbo uint[]) /* &((SSBO *)ssa_1084)->field0 */ vec4 32 ssa_1087 = deref_array &(*ssa_1086)[ssa_1083] (ssbo uint) /* &((SSBO *)ssa_1084)->field0[ssa_1083] */ vec1 32 ssa_1088 = intrinsic load_deref (ssa_1087) (access=16) vec1 32 ssa_1099 = iadd ssa_1021, ssa_1098 vec1 32 ssa_1101 = ushr ssa_1099, ssa_1100 vec4 32 ssa_1102 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1103 = deref_cast (SSBO *)ssa_1102 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1104 = deref_struct &ssa_1103->field0 (ssbo uint[]) /* &((SSBO *)ssa_1102)->field0 */ vec4 32 ssa_1105 = deref_array &(*ssa_1104)[ssa_1101] 
(ssbo uint) /* &((SSBO *)ssa_1102)->field0[ssa_1101] */ vec1 32 ssa_1106 = intrinsic load_deref (ssa_1105) (access=16) vec1 32 ssa_1108 = iadd ssa_1101, ssa_1107 vec4 32 ssa_1109 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1110 = deref_cast (SSBO *)ssa_1109 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1111 = deref_struct &ssa_1110->field0 (ssbo uint[]) /* &((SSBO *)ssa_1109)->field0 */ vec4 32 ssa_1112 = deref_array &(*ssa_1111)[ssa_1108] (ssbo uint) /* &((SSBO *)ssa_1109)->field0[ssa_1108] */ vec1 32 ssa_1113 = intrinsic load_deref (ssa_1112) (access=16) vec1 32 ssa_1115 = iadd ssa_1101, ssa_1114 vec4 32 ssa_1116 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1117 = deref_cast (SSBO *)ssa_1116 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1118 = deref_struct &ssa_1117->field0 (ssbo uint[]) /* &((SSBO *)ssa_1116)->field0 */ vec4 32 ssa_1119 = deref_array &(*ssa_1118)[ssa_1115] (ssbo uint) /* &((SSBO *)ssa_1116)->field0[ssa_1115] */ vec1 32 ssa_1120 = intrinsic load_deref (ssa_1119) (access=16) vec1 32 ssa_1122 = iadd ssa_1101, ssa_1121 vec4 32 ssa_1123 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1124 = deref_cast (SSBO *)ssa_1123 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1125 = deref_struct &ssa_1124->field0 (ssbo uint[]) /* &((SSBO *)ssa_1123)->field0 */ vec4 32 ssa_1126 = deref_array &(*ssa_1125)[ssa_1122] (ssbo uint) /* &((SSBO *)ssa_1123)->field0[ssa_1122] */ vec1 32 ssa_1127 = intrinsic load_deref (ssa_1126) (access=16) vec1 32 ssa_1137 = fmul! ssa_911, ssa_345 vec1 32 ssa_1138 = fmul! ssa_950, ssa_345 vec1 32 ssa_1139 = fmul! ssa_989, ssa_345 vec1 32 ssa_1140 = fmul! ssa_918, ssa_345 vec1 32 ssa_1141 = fmul! ssa_957, ssa_345 vec1 32 ssa_1142 = fmul! ssa_996, ssa_345 vec1 32 ssa_1143 = fmul! ssa_925, ssa_345 vec1 32 ssa_1144 = fmul! ssa_964, ssa_345 vec1 32 ssa_1145 = fmul! ssa_1003, ssa_345 vec1 32 ssa_1146 = fmul! ssa_932, ssa_345 vec1 32 ssa_1147 = fmul! ssa_971, ssa_345 vec1 32 ssa_1148 = fmul! ssa_1010, ssa_345 vec1 32 ssa_1149 = fadd! ssa_891, ssa_1137 vec1 32 ssa_1150 = fadd! ssa_892, ssa_1138 vec1 32 ssa_1151 = fadd! ssa_893, ssa_1139 vec1 32 ssa_1152 = fadd! ssa_894, ssa_1140 vec1 32 ssa_1153 = fadd! ssa_895, ssa_1141 vec1 32 ssa_1154 = fadd! ssa_896, ssa_1142 vec1 32 ssa_1155 = fadd! ssa_897, ssa_1143 vec1 32 ssa_1156 = fadd! ssa_898, ssa_1144 vec1 32 ssa_1157 = fadd! ssa_899, ssa_1145 vec1 32 ssa_1158 = fadd! ssa_900, ssa_1146 vec1 32 ssa_1159 = fadd! ssa_901, ssa_1147 vec1 32 ssa_1160 = fadd! ssa_902, ssa_1148 vec1 32 ssa_1161 = fmul! ssa_1028, ssa_349 vec1 32 ssa_1162 = fmul! ssa_1067, ssa_349 vec1 32 ssa_1163 = fmul! ssa_1106, ssa_349 vec1 32 ssa_1164 = fmul! ssa_1035, ssa_349 vec1 32 ssa_1165 = fmul! ssa_1074, ssa_349 vec1 32 ssa_1166 = fmul! ssa_1113, ssa_349 vec1 32 ssa_1167 = fmul! ssa_1042, ssa_349 vec1 32 ssa_1168 = fmul! ssa_1081, ssa_349 vec1 32 ssa_1169 = fmul! ssa_1120, ssa_349 vec1 32 ssa_1170 = fmul! ssa_1049, ssa_349 vec1 32 ssa_1171 = fmul! ssa_1088, ssa_349 vec1 32 ssa_1172 = fmul! ssa_1127, ssa_349 vec1 32 ssa_1173 = fadd! ssa_1149, ssa_1161 vec1 32 ssa_1174 = fadd! ssa_1150, ssa_1162 vec1 32 ssa_1175 = fadd! ssa_1151, ssa_1163 vec1 32 ssa_1176 = fadd! ssa_1152, ssa_1164 vec1 32 ssa_1177 = fadd! ssa_1153, ssa_1165 vec1 32 ssa_1178 = fadd! ssa_1154, ssa_1166 vec1 32 ssa_1179 = fadd! ssa_1155, ssa_1167 vec1 32 ssa_1180 = fadd! ssa_1156, ssa_1168 vec1 32 ssa_1181 = fadd! 
ssa_1157, ssa_1169 vec1 32 ssa_1182 = fadd! ssa_1158, ssa_1170 vec1 32 ssa_1183 = fadd! ssa_1159, ssa_1171 vec1 32 ssa_1184 = fadd! ssa_1160, ssa_1172 vec1 32 ssa_1185 = imul ssa_164.w, ssa_34.y vec1 32 ssa_1186 = iadd ssa_1185, ssa_29.x vec1 32 ssa_1188 = ushr ssa_1186, ssa_1187 vec4 32 ssa_1189 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1190 = deref_cast (SSBO *)ssa_1189 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1191 = deref_struct &ssa_1190->field0 (ssbo uint[]) /* &((SSBO *)ssa_1189)->field0 */ vec4 32 ssa_1192 = deref_array &(*ssa_1191)[ssa_1188] (ssbo uint) /* &((SSBO *)ssa_1189)->field0[ssa_1188] */ vec1 32 ssa_1193 = intrinsic load_deref (ssa_1192) (access=16) vec1 32 ssa_1195 = iadd ssa_1188, ssa_1194 vec4 32 ssa_1196 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1197 = deref_cast (SSBO *)ssa_1196 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1198 = deref_struct &ssa_1197->field0 (ssbo uint[]) /* &((SSBO *)ssa_1196)->field0 */ vec4 32 ssa_1199 = deref_array &(*ssa_1198)[ssa_1195] (ssbo uint) /* &((SSBO *)ssa_1196)->field0[ssa_1195] */ vec1 32 ssa_1200 = intrinsic load_deref (ssa_1199) (access=16) vec1 32 ssa_1202 = iadd ssa_1188, ssa_1201 vec4 32 ssa_1203 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1204 = deref_cast (SSBO *)ssa_1203 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1205 = deref_struct &ssa_1204->field0 (ssbo uint[]) /* &((SSBO *)ssa_1203)->field0 */ vec4 32 ssa_1206 = deref_array &(*ssa_1205)[ssa_1202] (ssbo uint) /* &((SSBO *)ssa_1203)->field0[ssa_1202] */ vec1 32 ssa_1207 = intrinsic load_deref (ssa_1206) (access=16) vec1 32 ssa_1209 = iadd ssa_1188, ssa_1208 vec4 32 ssa_1210 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1211 = deref_cast (SSBO *)ssa_1210 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1212 = deref_struct &ssa_1211->field0 (ssbo uint[]) /* &((SSBO *)ssa_1210)->field0 */ vec4 32 ssa_1213 = deref_array &(*ssa_1212)[ssa_1209] (ssbo uint) /* &((SSBO *)ssa_1210)->field0[ssa_1209] */ vec1 32 ssa_1214 = intrinsic load_deref (ssa_1213) (access=16) vec1 32 ssa_1225 = iadd ssa_1186, ssa_1224 vec1 32 ssa_1227 = ushr ssa_1225, ssa_1226 vec4 32 ssa_1228 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1229 = deref_cast (SSBO *)ssa_1228 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1230 = deref_struct &ssa_1229->field0 (ssbo uint[]) /* &((SSBO *)ssa_1228)->field0 */ vec4 32 ssa_1231 = deref_array &(*ssa_1230)[ssa_1227] (ssbo uint) /* &((SSBO *)ssa_1228)->field0[ssa_1227] */ vec1 32 ssa_1232 = intrinsic load_deref (ssa_1231) (access=16) vec1 32 ssa_1234 = iadd ssa_1227, ssa_1233 vec4 32 ssa_1235 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1236 = deref_cast (SSBO *)ssa_1235 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1237 = deref_struct &ssa_1236->field0 (ssbo uint[]) /* &((SSBO *)ssa_1235)->field0 */ vec4 32 ssa_1238 = deref_array &(*ssa_1237)[ssa_1234] (ssbo uint) /* &((SSBO *)ssa_1235)->field0[ssa_1234] */ vec1 32 ssa_1239 = intrinsic load_deref (ssa_1238) (access=16) vec1 32 ssa_1241 = iadd ssa_1227, ssa_1240 vec4 32 ssa_1242 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1243 = deref_cast (SSBO *)ssa_1242 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1244 
= deref_struct &ssa_1243->field0 (ssbo uint[]) /* &((SSBO *)ssa_1242)->field0 */ vec4 32 ssa_1245 = deref_array &(*ssa_1244)[ssa_1241] (ssbo uint) /* &((SSBO *)ssa_1242)->field0[ssa_1241] */ vec1 32 ssa_1246 = intrinsic load_deref (ssa_1245) (access=16) vec1 32 ssa_1248 = iadd ssa_1227, ssa_1247 vec4 32 ssa_1249 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1250 = deref_cast (SSBO *)ssa_1249 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1251 = deref_struct &ssa_1250->field0 (ssbo uint[]) /* &((SSBO *)ssa_1249)->field0 */ vec4 32 ssa_1252 = deref_array &(*ssa_1251)[ssa_1248] (ssbo uint) /* &((SSBO *)ssa_1249)->field0[ssa_1248] */ vec1 32 ssa_1253 = intrinsic load_deref (ssa_1252) (access=16) vec1 32 ssa_1264 = iadd ssa_1186, ssa_1263 vec1 32 ssa_1266 = ushr ssa_1264, ssa_1265 vec4 32 ssa_1267 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1268 = deref_cast (SSBO *)ssa_1267 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1269 = deref_struct &ssa_1268->field0 (ssbo uint[]) /* &((SSBO *)ssa_1267)->field0 */ vec4 32 ssa_1270 = deref_array &(*ssa_1269)[ssa_1266] (ssbo uint) /* &((SSBO *)ssa_1267)->field0[ssa_1266] */ vec1 32 ssa_1271 = intrinsic load_deref (ssa_1270) (access=16) vec1 32 ssa_1273 = iadd ssa_1266, ssa_1272 vec4 32 ssa_1274 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1275 = deref_cast (SSBO *)ssa_1274 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1276 = deref_struct &ssa_1275->field0 (ssbo uint[]) /* &((SSBO *)ssa_1274)->field0 */ vec4 32 ssa_1277 = deref_array &(*ssa_1276)[ssa_1273] (ssbo uint) /* &((SSBO *)ssa_1274)->field0[ssa_1273] */ vec1 32 ssa_1278 = intrinsic load_deref (ssa_1277) (access=16) vec1 32 ssa_1280 = iadd ssa_1266, ssa_1279 vec4 32 ssa_1281 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1282 = deref_cast (SSBO *)ssa_1281 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1283 = deref_struct &ssa_1282->field0 (ssbo uint[]) /* &((SSBO *)ssa_1281)->field0 */ vec4 32 ssa_1284 = deref_array &(*ssa_1283)[ssa_1280] (ssbo uint) /* &((SSBO *)ssa_1281)->field0[ssa_1280] */ vec1 32 ssa_1285 = intrinsic load_deref (ssa_1284) (access=16) vec1 32 ssa_1287 = iadd ssa_1266, ssa_1286 vec4 32 ssa_1288 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1289 = deref_cast (SSBO *)ssa_1288 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1290 = deref_struct &ssa_1289->field0 (ssbo uint[]) /* &((SSBO *)ssa_1288)->field0 */ vec4 32 ssa_1291 = deref_array &(*ssa_1290)[ssa_1287] (ssbo uint) /* &((SSBO *)ssa_1288)->field0[ssa_1287] */ vec1 32 ssa_1292 = intrinsic load_deref (ssa_1291) (access=16) vec1 32 ssa_1302 = imul ssa_124.w, ssa_34.y vec1 32 ssa_1303 = iadd ssa_1302, ssa_29.x vec1 32 ssa_1305 = ushr ssa_1303, ssa_1304 vec4 32 ssa_1306 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1307 = deref_cast (SSBO *)ssa_1306 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1308 = deref_struct &ssa_1307->field0 (ssbo uint[]) /* &((SSBO *)ssa_1306)->field0 */ vec4 32 ssa_1309 = deref_array &(*ssa_1308)[ssa_1305] (ssbo uint) /* &((SSBO *)ssa_1306)->field0[ssa_1305] */ vec1 32 ssa_1310 = intrinsic load_deref (ssa_1309) (access=16) vec1 32 ssa_1312 = iadd ssa_1305, ssa_1311 vec4 32 ssa_1313 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1314 = 
deref_cast (SSBO *)ssa_1313 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1315 = deref_struct &ssa_1314->field0 (ssbo uint[]) /* &((SSBO *)ssa_1313)->field0 */ vec4 32 ssa_1316 = deref_array &(*ssa_1315)[ssa_1312] (ssbo uint) /* &((SSBO *)ssa_1313)->field0[ssa_1312] */ vec1 32 ssa_1317 = intrinsic load_deref (ssa_1316) (access=16) vec1 32 ssa_1319 = iadd ssa_1305, ssa_1318 vec4 32 ssa_1320 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1321 = deref_cast (SSBO *)ssa_1320 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1322 = deref_struct &ssa_1321->field0 (ssbo uint[]) /* &((SSBO *)ssa_1320)->field0 */ vec4 32 ssa_1323 = deref_array &(*ssa_1322)[ssa_1319] (ssbo uint) /* &((SSBO *)ssa_1320)->field0[ssa_1319] */ vec1 32 ssa_1324 = intrinsic load_deref (ssa_1323) (access=16) vec1 32 ssa_1326 = iadd ssa_1305, ssa_1325 vec4 32 ssa_1327 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1328 = deref_cast (SSBO *)ssa_1327 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1329 = deref_struct &ssa_1328->field0 (ssbo uint[]) /* &((SSBO *)ssa_1327)->field0 */ vec4 32 ssa_1330 = deref_array &(*ssa_1329)[ssa_1326] (ssbo uint) /* &((SSBO *)ssa_1327)->field0[ssa_1326] */ vec1 32 ssa_1331 = intrinsic load_deref (ssa_1330) (access=16) vec1 32 ssa_1342 = iadd ssa_1303, ssa_1341 vec1 32 ssa_1344 = ushr ssa_1342, ssa_1343 vec4 32 ssa_1345 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1346 = deref_cast (SSBO *)ssa_1345 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1347 = deref_struct &ssa_1346->field0 (ssbo uint[]) /* &((SSBO *)ssa_1345)->field0 */ vec4 32 ssa_1348 = deref_array &(*ssa_1347)[ssa_1344] (ssbo uint) /* &((SSBO *)ssa_1345)->field0[ssa_1344] */ vec1 32 ssa_1349 = intrinsic load_deref (ssa_1348) (access=16) vec1 32 ssa_1351 = iadd ssa_1344, ssa_1350 vec4 32 ssa_1352 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1353 = deref_cast (SSBO *)ssa_1352 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1354 = deref_struct &ssa_1353->field0 (ssbo uint[]) /* &((SSBO *)ssa_1352)->field0 */ vec4 32 ssa_1355 = deref_array &(*ssa_1354)[ssa_1351] (ssbo uint) /* &((SSBO *)ssa_1352)->field0[ssa_1351] */ vec1 32 ssa_1356 = intrinsic load_deref (ssa_1355) (access=16) vec1 32 ssa_1358 = iadd ssa_1344, ssa_1357 vec4 32 ssa_1359 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1360 = deref_cast (SSBO *)ssa_1359 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1361 = deref_struct &ssa_1360->field0 (ssbo uint[]) /* &((SSBO *)ssa_1359)->field0 */ vec4 32 ssa_1362 = deref_array &(*ssa_1361)[ssa_1358] (ssbo uint) /* &((SSBO *)ssa_1359)->field0[ssa_1358] */ vec1 32 ssa_1363 = intrinsic load_deref (ssa_1362) (access=16) vec1 32 ssa_1365 = iadd ssa_1344, ssa_1364 vec4 32 ssa_1366 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1367 = deref_cast (SSBO *)ssa_1366 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1368 = deref_struct &ssa_1367->field0 (ssbo uint[]) /* &((SSBO *)ssa_1366)->field0 */ vec4 32 ssa_1369 = deref_array &(*ssa_1368)[ssa_1365] (ssbo uint) /* &((SSBO *)ssa_1366)->field0[ssa_1365] */ vec1 32 ssa_1370 = intrinsic load_deref (ssa_1369) (access=16) vec1 32 ssa_1381 = iadd ssa_1303, ssa_1380 vec1 32 ssa_1383 = ushr ssa_1381, ssa_1382 vec4 32 ssa_1384 = intrinsic 
load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1385 = deref_cast (SSBO *)ssa_1384 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1386 = deref_struct &ssa_1385->field0 (ssbo uint[]) /* &((SSBO *)ssa_1384)->field0 */ vec4 32 ssa_1387 = deref_array &(*ssa_1386)[ssa_1383] (ssbo uint) /* &((SSBO *)ssa_1384)->field0[ssa_1383] */ vec1 32 ssa_1388 = intrinsic load_deref (ssa_1387) (access=16) vec1 32 ssa_1390 = iadd ssa_1383, ssa_1389 vec4 32 ssa_1391 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1392 = deref_cast (SSBO *)ssa_1391 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1393 = deref_struct &ssa_1392->field0 (ssbo uint[]) /* &((SSBO *)ssa_1391)->field0 */ vec4 32 ssa_1394 = deref_array &(*ssa_1393)[ssa_1390] (ssbo uint) /* &((SSBO *)ssa_1391)->field0[ssa_1390] */ vec1 32 ssa_1395 = intrinsic load_deref (ssa_1394) (access=16) vec1 32 ssa_1397 = iadd ssa_1383, ssa_1396 vec4 32 ssa_1398 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1399 = deref_cast (SSBO *)ssa_1398 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1400 = deref_struct &ssa_1399->field0 (ssbo uint[]) /* &((SSBO *)ssa_1398)->field0 */ vec4 32 ssa_1401 = deref_array &(*ssa_1400)[ssa_1397] (ssbo uint) /* &((SSBO *)ssa_1398)->field0[ssa_1397] */ vec1 32 ssa_1402 = intrinsic load_deref (ssa_1401) (access=16) vec1 32 ssa_1404 = iadd ssa_1383, ssa_1403 vec4 32 ssa_1405 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1406 = deref_cast (SSBO *)ssa_1405 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1407 = deref_struct &ssa_1406->field0 (ssbo uint[]) /* &((SSBO *)ssa_1405)->field0 */ vec4 32 ssa_1408 = deref_array &(*ssa_1407)[ssa_1404] (ssbo uint) /* &((SSBO *)ssa_1405)->field0[ssa_1404] */ vec1 32 ssa_1409 = intrinsic load_deref (ssa_1408) (access=16) vec1 32 ssa_1419 = fmul! ssa_1193, ssa_346 vec1 32 ssa_1420 = fmul! ssa_1232, ssa_346 vec1 32 ssa_1421 = fmul! ssa_1271, ssa_346 vec1 32 ssa_1422 = fmul! ssa_1200, ssa_346 vec1 32 ssa_1423 = fmul! ssa_1239, ssa_346 vec1 32 ssa_1424 = fmul! ssa_1278, ssa_346 vec1 32 ssa_1425 = fmul! ssa_1207, ssa_346 vec1 32 ssa_1426 = fmul! ssa_1246, ssa_346 vec1 32 ssa_1427 = fmul! ssa_1285, ssa_346 vec1 32 ssa_1428 = fmul! ssa_1214, ssa_346 vec1 32 ssa_1429 = fmul! ssa_1253, ssa_346 vec1 32 ssa_1430 = fmul! ssa_1292, ssa_346 vec1 32 ssa_1431 = fadd! ssa_1173, ssa_1419 vec1 32 ssa_1432 = fadd! ssa_1174, ssa_1420 vec1 32 ssa_1433 = fadd! ssa_1175, ssa_1421 vec1 32 ssa_1434 = fadd! ssa_1176, ssa_1422 vec1 32 ssa_1435 = fadd! ssa_1177, ssa_1423 vec1 32 ssa_1436 = fadd! ssa_1178, ssa_1424 vec1 32 ssa_1437 = fadd! ssa_1179, ssa_1425 vec1 32 ssa_1438 = fadd! ssa_1180, ssa_1426 vec1 32 ssa_1439 = fadd! ssa_1181, ssa_1427 vec1 32 ssa_1440 = fadd! ssa_1182, ssa_1428 vec1 32 ssa_1441 = fadd! ssa_1183, ssa_1429 vec1 32 ssa_1442 = fadd! ssa_1184, ssa_1430 vec1 32 ssa_1443 = fmul! ssa_1310, ssa_350 vec1 32 ssa_1444 = fmul! ssa_1349, ssa_350 vec1 32 ssa_1445 = fmul! ssa_1388, ssa_350 vec1 32 ssa_1446 = fmul! ssa_1317, ssa_350 vec1 32 ssa_1447 = fmul! ssa_1356, ssa_350 vec1 32 ssa_1448 = fmul! ssa_1395, ssa_350 vec1 32 ssa_1449 = fmul! ssa_1324, ssa_350 vec1 32 ssa_1450 = fmul! ssa_1363, ssa_350 vec1 32 ssa_1451 = fmul! ssa_1402, ssa_350 vec1 32 ssa_1452 = fmul! ssa_1331, ssa_350 vec1 32 ssa_1453 = fmul! ssa_1370, ssa_350 vec1 32 ssa_1454 = fmul! ssa_1409, ssa_350 vec1 32 ssa_1455 = fadd! 
ssa_1431, ssa_1443
vec1 32 ssa_1456 = fadd! ssa_1432, ssa_1444
vec1 32 ssa_1457 = fadd! ssa_1433, ssa_1445
vec1 32 ssa_1458 = fadd! ssa_1434, ssa_1446
vec1 32 ssa_1459 = fadd! ssa_1435, ssa_1447
vec1 32 ssa_1460 = fadd! ssa_1436, ssa_1448
vec1 32 ssa_1461 = fadd! ssa_1437, ssa_1449
vec1 32 ssa_1462 = fadd! ssa_1438, ssa_1450
vec1 32 ssa_1463 = fadd! ssa_1439, ssa_1451
vec1 32 ssa_1464 = fadd! ssa_1440, ssa_1452
vec1 32 ssa_1465 = fadd! ssa_1441, ssa_1453
vec1 32 ssa_1466 = fadd! ssa_1442, ssa_1454
vec1 32 ssa_1467 = fmul! ssa_1455, ssa_315
vec1 32 ssa_1468 = ffma! ssa_316, ssa_1458, ssa_1467
vec1 32 ssa_1469 = ffma! ssa_317, ssa_1461, ssa_1468
vec1 32 ssa_1470 = fadd! ssa_1469, ssa_1464
vec1 32 ssa_1471 = fmul! ssa_1456, ssa_315
vec1 32 ssa_1472 = ffma! ssa_316, ssa_1459, ssa_1471
vec1 32 ssa_1473 = ffma! ssa_317, ssa_1462, ssa_1472
vec1 32 ssa_1474 = fadd! ssa_1473, ssa_1465
vec1 32 ssa_1475 = fmul! ssa_1457, ssa_315
vec1 32 ssa_1476 = ffma! ssa_316, ssa_1460, ssa_1475
vec1 32 ssa_1477 = ffma! ssa_317, ssa_1463, ssa_1476
vec1 32 ssa_1478 = fadd! ssa_1477, ssa_1466
vec1 1 ssa_1480 = flt! ssa_1479, ssa_323.y
vec1 1 ssa_1481 = inot! ssa_1480
vec1 1 ssa_1483 = flt! ssa_1482, ssa_323.z
vec1 1 ssa_1484 = inot! ssa_1483
vec1 1 ssa_1485 = iand ssa_1481, ssa_1484
vec1 32 ssa_3622 = deref_var &phi@12 (function_temp float)
intrinsic store_deref (ssa_3622, ssa_1478) (wrmask=x /*1*/, access=0)
vec1 32 ssa_3620 = deref_var &phi@11 (function_temp float)
intrinsic store_deref (ssa_3620, ssa_1474) (wrmask=x /*1*/, access=0)
vec1 32 ssa_3618 = deref_var &phi@10 (function_temp float)
intrinsic store_deref (ssa_3618, ssa_1470) (wrmask=x /*1*/, access=0)
/* succs: block_1 block_28 */
if ssa_1485 {
  block block_1:
  /* preds: block_0 */
  vec1 32 ssa_1487 = f2u32 ssa_323.x
  vec1 32 ssa_1489 = umin ssa_1487, ssa_1488
  vec1 1 ssa_1491 = ieq ssa_1489, ssa_1490
  vec1 32 ssa_3616 = deref_var &phi@9 (function_temp float)
  intrinsic store_deref (ssa_3616, ssa_1478) (wrmask=x /*1*/, access=0)
  vec1 32 ssa_3614 = deref_var &phi@8 (function_temp float)
  intrinsic store_deref (ssa_3614, ssa_1474) (wrmask=x /*1*/, access=0)
  vec1 32 ssa_3612 = deref_var &phi@7 (function_temp float)
  intrinsic store_deref (ssa_3612, ssa_1470) (wrmask=x /*1*/, access=0)
  /* succs: block_2 block_3 */
  if ssa_1491 {
    block block_2:
    /* preds: block_1 */
    /* succs: block_27 */
  } else {
    block block_3:
    /* preds: block_1 */
    vec1 32 ssa_3601 = deref_var &phi (function_temp uint)
    intrinsic store_deref (ssa_3601, ssa_3600) (wrmask=x /*1*/, access=0)
    vec1 1 ssa_1492 = load_const (false)
    vec1 32 ssa_1493 = deref_var &loop_break (function_temp bool)
    intrinsic store_deref (ssa_1493, ssa_1492) (wrmask=x /*1*/, access=0)
    /* succs: block_4 */
    loop {
      block block_4:
      /* preds: block_3 block_25 */
      vec1 1 ssa_1494 = load_const (false)
      vec1 32 ssa_1495 = deref_var &loop_continue (function_temp bool)
      intrinsic store_deref (ssa_1495, ssa_1494) (wrmask=x /*1*/, access=0)
      vec1 32 ssa_1496 = deref_var &phi (function_temp uint)
      vec1 32 ssa_1497 = intrinsic load_deref (ssa_1496) (access=0)
      vec1 32 ssa_1499 = ishl ssa_1497, ssa_1498
      vec4 32 ssa_1500 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/)
      vec4 32 ssa_1501 = deref_cast (BindlessCBV *)ssa_1500 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */
      vec4 32 ssa_1502 = deref_struct &ssa_1501->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1500)->field0 */
      vec4 32 ssa_1503 = deref_array &(*ssa_1502)[ssa_1499] (ssbo vec4) /* &((BindlessCBV *)ssa_1500)->field0[ssa_1499] */
      vec4 32 ssa_1504 = intrinsic load_deref (ssa_1503) (access=16)
      vec2 32 ssa_1512 = unpack_half_2x16! ssa_1504.x
      vec1 32 ssa_1515 = ushr! ssa_1504.x, ssa_1514
      vec2 32 ssa_1516 = unpack_half_2x16! ssa_1515
      vec2 32 ssa_1518 = unpack_half_2x16! ssa_1504.y
      vec1 32 ssa_1521 = ushr! ssa_1504.y, ssa_1520
      vec2 32 ssa_1522 = unpack_half_2x16! ssa_1521
      vec2 32 ssa_1526 = unpack_half_2x16 ssa_1504.z
      vec1 32 ssa_1529 = ushr ssa_1504.z, ssa_1528
      vec2 32 ssa_1530 = unpack_half_2x16 ssa_1529
      vec2 32 ssa_1532 = unpack_half_2x16 ssa_1504.w
      vec1 32 ssa_1535 = ushr ssa_1504.w, ssa_1534
      vec2 32 ssa_1536 = unpack_half_2x16 ssa_1535
      vec1 32 ssa_1539 = ior ssa_1499, ssa_1538
      vec4 32 ssa_1540 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/)
      vec4 32 ssa_1541 = deref_cast (BindlessCBV *)ssa_1540 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */
      vec4 32 ssa_1542 = deref_struct &ssa_1541->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1540)->field0 */
      vec4 32 ssa_1543 = deref_array &(*ssa_1542)[ssa_1539] (ssbo vec4) /* &((BindlessCBV *)ssa_1540)->field0[ssa_1539] */
      vec4 32 ssa_1544 = intrinsic load_deref (ssa_1543) (access=16)
      vec1 32 ssa_1550 = ior ssa_1499, ssa_1549
      vec4 32 ssa_1551 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/)
      vec4 32 ssa_1552 = deref_cast (BindlessCBV *)ssa_1551 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */
      vec4 32 ssa_1553 = deref_struct &ssa_1552->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1551)->field0 */
      vec4 32 ssa_1554 = deref_array &(*ssa_1553)[ssa_1550] (ssbo vec4) /* &((BindlessCBV *)ssa_1551)->field0[ssa_1550] */
      vec4 32 ssa_1555 = intrinsic load_deref (ssa_1554) (access=16)
      vec1 32 ssa_1560 = ior ssa_1499, ssa_1559
      vec4 32 ssa_1561 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/)
      vec4 32 ssa_1562 = deref_cast (BindlessCBV *)ssa_1561 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */
      vec4 32 ssa_1563 = deref_struct &ssa_1562->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1561)->field0 */
      vec4 32 ssa_1564 = deref_array &(*ssa_1563)[ssa_1560] (ssbo vec4) /* &((BindlessCBV *)ssa_1561)->field0[ssa_1560] */
      vec4 32 ssa_1565 = intrinsic load_deref (ssa_1564) (access=16)
      vec1 1 ssa_1573 = ieq ssa_1565.w, ssa_1572
      vec1 32 ssa_1574 = fsub! ssa_315, ssa_1544.x
      vec1 32 ssa_1575 = fsub! ssa_316, ssa_1544.y
      vec1 32 ssa_1576 = fsub! ssa_317, ssa_1544.z
      vec1 32 ssa_1577 = fsub! ssa_1555.x, ssa_1544.x
      vec1 32 ssa_1578 = fsub! ssa_1555.y, ssa_1544.y
      vec1 32 ssa_1579 = fsub! ssa_1555.z, ssa_1544.z
      vec3 32 ssa_1580 = vec3! ssa_1574, ssa_1575, ssa_1576
      vec3 32 ssa_1581 = vec3! ssa_1577, ssa_1578, ssa_1579
      vec1 32 ssa_1582 = fdot3! ssa_1580, ssa_1581
      vec3 32 ssa_1583 = vec3! ssa_1577, ssa_1578, ssa_1579
      vec3 32 ssa_1584 = vec3! ssa_1577, ssa_1578, ssa_1579
      vec1 32 ssa_1585 = fdot3! ssa_1583, ssa_1584
      vec1 32 ssa_1586 = fdiv! ssa_1582, ssa_1585
      vec1 32 ssa_1587 = fmul! ssa_1586, ssa_1577
      vec1 32 ssa_1588 = fmul! ssa_1586, ssa_1578
      vec1 32 ssa_1589 = fmul! ssa_1586, ssa_1579
      vec1 32 ssa_1590 = fsub ssa_1544.x, ssa_315
      vec1 32 ssa_1591 = fadd ssa_1590, ssa_1587
      vec1 32 ssa_1592 = fsub ssa_1544.y, ssa_316
      vec1 32 ssa_1593 = fadd ssa_1592, ssa_1588
      vec1 32 ssa_1594 = fsub ssa_1544.z, ssa_317
      vec1 32 ssa_1595 = fadd ssa_1594, ssa_1589
      vec3 32 ssa_1596 = vec3 ssa_1591, ssa_1593, ssa_1595
      vec3 32 ssa_1597 = vec3 ssa_1591, ssa_1593, ssa_1595
      vec1 32 ssa_1598 = fdot3 ssa_1596, ssa_1597
      vec1 32 ssa_1599 = fmul ssa_1544.w, ssa_1544.w
      vec1 1 ssa_1600 = fge! ssa_1599, ssa_1598
      vec4 32 ssa_1601 = vec4 ssa_1526.x, ssa_1530.x, ssa_1532.x, ssa_1536.x
      vec4 32 ssa_1603 = vec4 ssa_315, ssa_316, ssa_317, ssa_1602
      vec1 32 ssa_1604 = fdot4 ssa_1601, ssa_1603
      /* succs: block_5 block_12 */
      if ssa_1573 {
        block block_5:
        /* preds: block_4 */
        vec4 32 ssa_1605 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/)
        vec4 32 ssa_1606 = deref_cast (BindlessCBV *)ssa_1605 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */
        vec4 32 ssa_1607 = deref_struct &ssa_1606->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1605)->field0 */
        vec4 32 ssa_1608 = deref_array &(*ssa_1607)[ssa_1560] (ssbo vec4) /* &((BindlessCBV *)ssa_1605)->field0[ssa_1560] */
        vec4 32 ssa_1609 = intrinsic load_deref (ssa_1608) (access=16)
        vec4 32 ssa_1610 = vec4 ssa_1512.x, ssa_1516.x, ssa_1518.x, ssa_1522.x
        vec4 32 ssa_1612 = vec4 ssa_315, ssa_316, ssa_317, ssa_1611
        vec1 32 ssa_1613 = fdot4 ssa_1610, ssa_1612
        vec1 1 ssa_1615 = flt! ssa_1614, ssa_1613
        vec1 1 ssa_1616 = iand ssa_1600, ssa_1615
        vec1 1 ssa_1618 = flt! ssa_1617, ssa_1604
        vec1 1 ssa_1619 = iand ssa_1616, ssa_1618
        /* succs: block_6 block_7 */
        if ssa_1619 {
          block block_6:
          /* preds: block_5 */
          vec4 32 ssa_1624 = vec4! ssa_315, ssa_316, ssa_317, ssa_1623
          vec4 32 ssa_1625 = vec4! ssa_1512.x, ssa_1516.x, ssa_1518.x, ssa_1522.x
          vec1 32 ssa_1626 = fdot4! ssa_1624, ssa_1625
          vec1 32 ssa_1627 = fmul! ssa_1512.x, ssa_1626
          vec1 32 ssa_1629 = fsub! ssa_1628, ssa_1627
          vec1 32 ssa_1630 = fmul! ssa_1516.x, ssa_1626
          vec1 32 ssa_1632 = fsub! ssa_1631, ssa_1630
          vec1 32 ssa_1633 = fmul! ssa_1518.x, ssa_1626
          vec1 32 ssa_1635 = fsub! ssa_1634, ssa_1633
          vec3 32 ssa_1636 = vec3! ssa_1629, ssa_1632, ssa_1635
          vec3 32 ssa_1637 = vec3! ssa_1629, ssa_1632, ssa_1635
          vec1 32 ssa_1638 = fdot3! ssa_1636, ssa_1637
          vec1 32 ssa_1640 = fmul! ssa_1638, ssa_1639
          vec1 32 ssa_1643 = fmax! ssa_1640, ssa_1641
          vec1 32 ssa_1644 = fmin! ssa_1643, ssa_1642
          vec1 32 ssa_1645 = fsub! ssa_1609.x, ssa_1470
          vec1 32 ssa_1646 = fsub! ssa_1609.y, ssa_1474
          vec1 32 ssa_1647 = fsub! ssa_1609.z, ssa_1478
          vec1 32 ssa_1648 = fmul! ssa_1644, ssa_1645
          vec1 32 ssa_1649 = fmul! ssa_1644, ssa_1646
          vec1 32 ssa_1650 = fmul! ssa_1644, ssa_1647
          vec1 32 ssa_1651 = fadd! ssa_1648, ssa_1470
          vec1 32 ssa_1652 = fadd! ssa_1649, ssa_1474
          vec1 32 ssa_1653 = fadd! ssa_1650, ssa_1478
          vec1 32 ssa_3610 = deref_var &phi@6 (function_temp float)
          intrinsic store_deref (ssa_3610, ssa_1653) (wrmask=x /*1*/, access=0)
          vec1 32 ssa_3607 = deref_var &phi@5 (function_temp float)
          intrinsic store_deref (ssa_3607, ssa_1652) (wrmask=x /*1*/, access=0)
          vec1 32 ssa_3604 = deref_var &phi@4 (function_temp float)
          intrinsic store_deref (ssa_3604, ssa_1651) (wrmask=x /*1*/, access=0)
          break
          /* succs: block_26 */
        } else {
          block block_7:
          /* preds: block_5 */
          /* succs: block_8 */
        }
        block block_8:
        /* preds: block_7 */
        vec1 32 ssa_1654 = deref_var &loop_break (function_temp bool)
        vec1 1 ssa_1655 = intrinsic load_deref (ssa_1654) (access=0)
        /* succs: block_9 block_10 */
        if ssa_1655 {
          block block_9:
          /* preds: block_8 */
          break
          /* succs: block_26 */
        } else {
          block block_10:
          /* preds: block_8 */
          /* succs: block_11 */
        }
        block block_11:
        /* preds: block_10 */
        /* succs: block_19 */
      } else {
        block block_12:
        /* preds: block_4 */
        vec1 1 ssa_1657 = fge! ssa_1604, ssa_1656
        vec1 1 ssa_1658 = iand ssa_1600, ssa_1657
        /* succs: block_13 block_14 */
        if ssa_1658 {
          block block_13:
          /* preds: block_12 */
          vec1 32 ssa_1659 = fadd! ssa_1587, ssa_1544.x
          vec1 32 ssa_1660 = fadd! ssa_1588, ssa_1544.y
          vec1 32 ssa_1661 = fadd! ssa_1589, ssa_1544.z
          vec4 32 ssa_1663 = vec4! ssa_315, ssa_316, ssa_317, ssa_1662
          vec4 32 ssa_1664 = vec4! ssa_1512.x, ssa_1516.x, ssa_1518.x, ssa_1522.x
          vec1 32 ssa_1665 = fdot4! ssa_1663, ssa_1664
          vec1 32 ssa_1666 = fmul! ssa_1665, ssa_1512.x
          vec1 32 ssa_1667 = fmul! ssa_1665, ssa_1516.x
          vec1 32 ssa_1668 = fmul! ssa_1665, ssa_1518.x
          vec1 32 ssa_1669 = fsub! ssa_315, ssa_1666
          vec1 32 ssa_1670 = fsub! ssa_316, ssa_1667
          vec1 32 ssa_1671 = fsub! ssa_317, ssa_1668
          vec1 32 ssa_1672 = fsub! ssa_1669, ssa_1659
          vec1 32 ssa_1673 = fsub! ssa_1670, ssa_1660
          vec1 32 ssa_1674 = fsub! ssa_1671, ssa_1661
          vec3 32 ssa_1675 = vec3! ssa_1672, ssa_1673, ssa_1674
          vec3 32 ssa_1676 = vec3! ssa_1672, ssa_1673, ssa_1674
          vec1 32 ssa_1677 = fdot3! ssa_1675, ssa_1676
          vec1 32 ssa_1678 = frsq! ssa_1677
          vec1 32 ssa_1679 = fmul! ssa_1678, ssa_1544.w
          vec1 32 ssa_1680 = fmul! ssa_1679, ssa_1672
          vec1 32 ssa_1681 = fmul! ssa_1679, ssa_1673
          vec1 32 ssa_1682 = fmul! ssa_1679, ssa_1674
          vec1 32 ssa_1683 = fadd! ssa_1680, ssa_1659
          vec1 32 ssa_1684 = fadd! ssa_1681, ssa_1660
          vec1 32 ssa_1685 = fadd! ssa_1682, ssa_1661
          vec1 32 ssa_1686 = fmul! ssa_1683, ssa_1455
          vec1 32 ssa_1687 = ffma! ssa_1684, ssa_1458, ssa_1686
          vec1 32 ssa_1688 = ffma! ssa_1685, ssa_1461, ssa_1687
          vec1 32 ssa_1689 = fadd! ssa_1688, ssa_1464
          vec1 32 ssa_1690 = fmul! ssa_1683, ssa_1456
          vec1 32 ssa_1691 = ffma! ssa_1684, ssa_1459, ssa_1690
          vec1 32 ssa_1692 = ffma! ssa_1685, ssa_1462, ssa_1691
          vec1 32 ssa_1693 = fadd! ssa_1692, ssa_1465
          vec1 32 ssa_1694 = fmul! ssa_1683, ssa_1457
          vec1 32 ssa_1695 = ffma! ssa_1684, ssa_1460, ssa_1694
          vec1 32 ssa_1696 = ffma! ssa_1685, ssa_1463, ssa_1695
          vec1 32 ssa_1697 = fadd! ssa_1696, ssa_1466
          vec1 32 ssa_3611 = deref_var &phi@6 (function_temp float)
          intrinsic store_deref (ssa_3611, ssa_1697) (wrmask=x /*1*/, access=0)
          vec1 32 ssa_3608 = deref_var &phi@5 (function_temp float)
          intrinsic store_deref (ssa_3608, ssa_1693) (wrmask=x /*1*/, access=0)
          vec1 32 ssa_3605 = deref_var &phi@4 (function_temp float)
          intrinsic store_deref (ssa_3605, ssa_1689) (wrmask=x /*1*/, access=0)
          break
          /* succs: block_26 */
        } else {
          block block_14:
          /* preds: block_12 */
          /* succs: block_15 */
        }
        block block_15:
        /* preds: block_14 */
        vec1 32 ssa_1698 = deref_var &loop_break (function_temp bool)
        vec1 1 ssa_1699 = intrinsic load_deref (ssa_1698) (access=0)
        /* succs: block_16 block_17 */
        if ssa_1699 {
          block block_16:
          /* preds: block_15 */
          break
          /* succs: block_26 */
        } else {
          block block_17:
          /* preds: block_15 */
          /* succs: block_18 */
        }
        block block_18:
        /* preds: block_17 */
        /* succs: block_19 */
      }
      block block_19:
      /* preds: block_11 block_18 */
      vec1 32 ssa_1700 = deref_var &loop_break (function_temp bool)
      vec1 1 ssa_1701 = intrinsic load_deref (ssa_1700) (access=0)
      /* succs: block_20 block_21 */
      if ssa_1701 {
        block block_20:
        /* preds: block_19 */
        break
        /* succs: block_26 */
      } else {
        block block_21:
        /* preds: block_19 */
        /* succs: block_22 */
      }
      block block_22:
      /* preds: block_21 */
      vec1 32 ssa_1703 = iadd ssa_1497, ssa_1702
      vec1 1 ssa_1704 = ult ssa_1703, ssa_1489
      vec1 32 ssa_3609 = deref_var &phi@6 (function_temp float)
      intrinsic store_deref (ssa_3609, ssa_1478) (wrmask=x /*1*/, access=0)
      vec1 32 ssa_3606 = deref_var &phi@5 (function_temp float)
      intrinsic store_deref (ssa_3606, ssa_1474) (wrmask=x /*1*/, access=0)
      vec1 32 ssa_3603 = deref_var &phi@4 (function_temp float)
      intrinsic store_deref (ssa_3603, ssa_1470) (wrmask=x /*1*/, access=0)
      vec1 32 ssa_3602 = deref_var &phi (function_temp uint)
      intrinsic store_deref (ssa_3602, ssa_1703) (wrmask=x /*1*/, access=0)
      /* succs: block_23 block_24 */
      if ssa_1704 {
        block block_23:
        /* preds: block_22 */
        /* succs: block_25 */
      } else {
        block block_24:
        /* preds: block_22 */
        break
        /* succs: block_26 */
      }
      block block_25:
      /* preds: block_23 */
      continue
      /* succs: block_4 */
    }
    block block_26:
    /* preds: block_6 block_9 block_13 block_16 block_20 block_24 */
    vec1 32 ssa_1705 = deref_var &phi@4 (function_temp float)
    vec1 32 ssa_1706 = intrinsic load_deref (ssa_1705) (access=0)
    vec1 32 ssa_1707 = deref_var &phi@5 (function_temp float)
    vec1 32 ssa_1708 = intrinsic load_deref (ssa_1707) (access=0)
    vec1 32 ssa_1709 = deref_var &phi@6 (function_temp float)
    vec1 32 ssa_1710 = intrinsic load_deref (ssa_1709) (access=0)
    vec1 32 ssa_3617 = deref_var &phi@9 (function_temp float)
    intrinsic store_deref (ssa_3617, ssa_1710) (wrmask=x /*1*/, access=0)
    vec1 32 ssa_3615 = deref_var &phi@8 (function_temp float)
    intrinsic store_deref (ssa_3615, ssa_1708) (wrmask=x /*1*/, access=0)
    vec1 32 ssa_3613 = deref_var &phi@7 (function_temp float)
    intrinsic store_deref (ssa_3613, ssa_1706) (wrmask=x /*1*/, access=0)
    /* succs: block_27 */
  }
  block block_27:
  /* preds: block_2 block_26 */
  vec1 32 ssa_1711 = deref_var &phi@7 (function_temp float)
  vec1 32 ssa_1712 = intrinsic load_deref (ssa_1711) (access=0)
  vec1 32 ssa_1713 = deref_var &phi@8 (function_temp float)
  vec1 32 ssa_1714 = intrinsic load_deref (ssa_1713) (access=0)
  vec1 32 ssa_1715 = deref_var &phi@9 (function_temp float)
  vec1 32 ssa_1716 = intrinsic load_deref (ssa_1715) (access=0)
  vec1 32 ssa_3623 = deref_var &phi@12 (function_temp float)
  intrinsic store_deref (ssa_3623, ssa_1716) (wrmask=x /*1*/, access=0)
  vec1 32 ssa_3621 = deref_var &phi@11 (function_temp float)
  intrinsic store_deref (ssa_3621, ssa_1714) (wrmask=x /*1*/, access=0)
  vec1 32 ssa_3619 = deref_var &phi@10 (function_temp float)
  intrinsic store_deref (ssa_3619, ssa_1712) (wrmask=x /*1*/, access=0)
  /* succs: block_29 */
} else {
  block block_28:
  /* preds: block_0 */
  /* succs: block_29 */
}
block block_29:
/* preds: block_27 block_28 */
vec1 32 ssa_1717 = deref_var &phi@10 (function_temp float)
vec1 32 ssa_1718 = intrinsic load_deref (ssa_1717) (access=0)
vec1 32 ssa_1719 = deref_var &phi@11 (function_temp float)
vec1 32 ssa_1720 = intrinsic load_deref (ssa_1719) (access=0)
vec1 32 ssa_1721 = deref_var &phi@12 (function_temp float)
vec1 32 ssa_1722 = intrinsic load_deref (ssa_1721) (access=0)
vec1 32 ssa_1723 = fmul! ssa_1718, ssa_186.x
vec1 32 ssa_1724 = ffma! ssa_1720, ssa_193.y, ssa_1723
vec1 32 ssa_1725 = ffma! ssa_1722, ssa_200.z, ssa_1724
vec1 32 ssa_1726 = fadd! ssa_1725, ssa_289
vec1 32 ssa_1727 = fmul! ssa_1718, ssa_214.x
vec1 32 ssa_1728 = ffma! ssa_1720, ssa_221.y, ssa_1727
vec1 32 ssa_1729 = ffma! ssa_1722, ssa_228.z, ssa_1728
vec1 32 ssa_1730 = fadd! ssa_1729, ssa_291
vec1 32 ssa_1731 = fmul! ssa_1718, ssa_242.x
vec1 32 ssa_1732 = ffma! ssa_1720, ssa_249.y, ssa_1731
vec1 32 ssa_1733 = ffma! ssa_1722, ssa_256.z, ssa_1732
vec1 32 ssa_1734 = fadd!
ssa_1733, ssa_293 vec4 32 ssa_1735 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1736 = deref_cast (BindlessCBV *)ssa_1735 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1737 = deref_struct &ssa_1736->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1735)->field0 */ vec1 32 ssa_1738 = load_const (0x0000001a = 0.000000) vec4 32 ssa_1739 = deref_array &(*ssa_1737)[26] (ssbo vec4) /* &((BindlessCBV *)ssa_1735)->field0[26] */ vec4 32 ssa_1740 = intrinsic load_deref (ssa_1739) (access=16) vec1 32 ssa_1745 = fmul ssa_1740.x, ssa_1726 vec1 32 ssa_1746 = ffma ssa_1730, ssa_1740.y, ssa_1745 vec1 32 ssa_1747 = ffma ssa_1734, ssa_1740.z, ssa_1746 vec1 32 ssa_1748 = fadd ssa_1747, ssa_1740.w vec4 32 ssa_1749 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1750 = deref_cast (BindlessCBV *)ssa_1749 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1751 = deref_struct &ssa_1750->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1749)->field0 */ vec1 32 ssa_1752 = load_const (0x00000025 = 0.000000) vec4 32 ssa_1753 = deref_array &(*ssa_1751)[37] (ssbo vec4) /* &((BindlessCBV *)ssa_1749)->field0[37] */ vec4 32 ssa_1754 = intrinsic load_deref (ssa_1753) (access=16) vec1 32 ssa_1758 = fadd ssa_1754.x, ssa_1726 vec1 32 ssa_1759 = fadd ssa_1754.y, ssa_1730 vec1 32 ssa_1760 = fadd ssa_1754.z, ssa_1734 vec4 32 ssa_1761 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1762 = deref_cast (BindlessCBV *)ssa_1761 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1763 = deref_struct &ssa_1762->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1761)->field0 */ vec1 32 ssa_1764 = load_const (0x0000001c = 0.000000) vec4 32 ssa_1765 = deref_array &(*ssa_1763)[28] (ssbo vec4) /* &((BindlessCBV *)ssa_1761)->field0[28] */ vec4 32 ssa_1766 = intrinsic load_deref (ssa_1765) (access=16) vec4 32 ssa_1771 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1772 = deref_cast (BindlessCBV *)ssa_1771 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1773 = deref_struct &ssa_1772->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1771)->field0 */ vec1 32 ssa_1774 = load_const (0x0000001d = 0.000000) vec4 32 ssa_1775 = deref_array &(*ssa_1773)[29] (ssbo vec4) /* &((BindlessCBV *)ssa_1771)->field0[29] */ vec4 32 ssa_1776 = intrinsic load_deref (ssa_1775) (access=16) vec4 32 ssa_1781 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1782 = deref_cast (BindlessCBV *)ssa_1781 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1783 = deref_struct &ssa_1782->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1781)->field0 */ vec1 32 ssa_1784 = load_const (0x0000001e = 0.000000) vec4 32 ssa_1785 = deref_array &(*ssa_1783)[30] (ssbo vec4) /* &((BindlessCBV *)ssa_1781)->field0[30] */ vec4 32 ssa_1786 = intrinsic load_deref (ssa_1785) (access=16) vec4 32 ssa_1791 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_1792 = deref_cast (BindlessCBV *)ssa_1791 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1793 = deref_struct &ssa_1792->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1791)->field0 */ vec1 32 ssa_1794 = load_const (0x0000001f = 0.000000) vec4 32 ssa_1795 = deref_array &(*ssa_1793)[31] (ssbo vec4) /* &((BindlessCBV *)ssa_1791)->field0[31] */ vec4 32 ssa_1796 = intrinsic load_deref 
(ssa_1795) (access=16) vec1 32 ssa_1801 = fmul! ssa_1766.x, ssa_1726 vec1 32 ssa_1802 = ffma! ssa_1730, ssa_1766.y, ssa_1801 vec1 32 ssa_1803 = ffma! ssa_1734, ssa_1766.z, ssa_1802 vec1 32 ssa_1804 = fadd! ssa_1803, ssa_1766.w vec1 32 ssa_1805 = fmul! ssa_1776.x, ssa_1726 vec1 32 ssa_1806 = ffma! ssa_1730, ssa_1776.y, ssa_1805 vec1 32 ssa_1807 = ffma! ssa_1734, ssa_1776.z, ssa_1806 vec1 32 ssa_1808 = fadd! ssa_1807, ssa_1776.w vec1 32 ssa_1809 = fmul! ssa_1786.x, ssa_1726 vec1 32 ssa_1810 = ffma! ssa_1730, ssa_1786.y, ssa_1809 vec1 32 ssa_1811 = ffma! ssa_1734, ssa_1786.z, ssa_1810 vec1 32 ssa_1812 = fadd! ssa_1811, ssa_1786.w vec1 32 ssa_1813 = fmul! ssa_1796.x, ssa_1726 vec1 32 ssa_1814 = ffma! ssa_1730, ssa_1796.y, ssa_1813 vec1 32 ssa_1815 = ffma! ssa_1734, ssa_1796.z, ssa_1814 vec1 32 ssa_1816 = fadd! ssa_1815, ssa_1796.w vec1 32 ssa_1817 = iadd ssa_351, ssa_39.z vec1 32 ssa_1819 = ushr ssa_1817, ssa_1818 vec4 32 ssa_1820 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1821 = deref_cast (SSBO *)ssa_1820 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1822 = deref_struct &ssa_1821->field0 (ssbo uint[]) /* &((SSBO *)ssa_1820)->field0 */ vec4 32 ssa_1823 = deref_array &(*ssa_1822)[ssa_1819] (ssbo uint) /* &((SSBO *)ssa_1820)->field0[ssa_1819] */ vec1 32 ssa_1824 = intrinsic load_deref (ssa_1823) (access=16) vec1 32 ssa_1826 = iadd ssa_1819, ssa_1825 vec4 32 ssa_1827 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1828 = deref_cast (SSBO *)ssa_1827 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1829 = deref_struct &ssa_1828->field0 (ssbo uint[]) /* &((SSBO *)ssa_1827)->field0 */ vec4 32 ssa_1830 = deref_array &(*ssa_1829)[ssa_1826] (ssbo uint) /* &((SSBO *)ssa_1827)->field0[ssa_1826] */ vec1 32 ssa_1831 = intrinsic load_deref (ssa_1830) (access=16) vec1 32 ssa_1833 = iadd ssa_1819, ssa_1832 vec4 32 ssa_1834 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1835 = deref_cast (SSBO *)ssa_1834 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1836 = deref_struct &ssa_1835->field0 (ssbo uint[]) /* &((SSBO *)ssa_1834)->field0 */ vec4 32 ssa_1837 = deref_array &(*ssa_1836)[ssa_1833] (ssbo uint) /* &((SSBO *)ssa_1834)->field0[ssa_1833] */ vec1 32 ssa_1838 = intrinsic load_deref (ssa_1837) (access=16) vec1 32 ssa_1840 = iadd ssa_1819, ssa_1839 vec4 32 ssa_1841 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1842 = deref_cast (SSBO *)ssa_1841 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1843 = deref_struct &ssa_1842->field0 (ssbo uint[]) /* &((SSBO *)ssa_1841)->field0 */ vec4 32 ssa_1844 = deref_array &(*ssa_1843)[ssa_1840] (ssbo uint) /* &((SSBO *)ssa_1841)->field0[ssa_1840] */ vec1 32 ssa_1845 = intrinsic load_deref (ssa_1844) (access=16) vec1 32 ssa_1856 = iadd ssa_1817, ssa_1855 vec1 32 ssa_1858 = ushr ssa_1856, ssa_1857 vec4 32 ssa_1859 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1860 = deref_cast (SSBO *)ssa_1859 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1861 = deref_struct &ssa_1860->field0 (ssbo uint[]) /* &((SSBO *)ssa_1859)->field0 */ vec4 32 ssa_1862 = deref_array &(*ssa_1861)[ssa_1858] (ssbo uint) /* &((SSBO *)ssa_1859)->field0[ssa_1858] */ vec1 32 ssa_1863 = intrinsic load_deref (ssa_1862) (access=16) vec1 32 ssa_1865 = iadd ssa_1858, ssa_1864 vec4 32 ssa_1866 = intrinsic load_vulkan_descriptor (ssa_5) 
(desc_type=SSBO /*7*/) vec4 32 ssa_1867 = deref_cast (SSBO *)ssa_1866 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1868 = deref_struct &ssa_1867->field0 (ssbo uint[]) /* &((SSBO *)ssa_1866)->field0 */ vec4 32 ssa_1869 = deref_array &(*ssa_1868)[ssa_1865] (ssbo uint) /* &((SSBO *)ssa_1866)->field0[ssa_1865] */ vec1 32 ssa_1870 = intrinsic load_deref (ssa_1869) (access=16) vec1 32 ssa_1872 = iadd ssa_1858, ssa_1871 vec4 32 ssa_1873 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1874 = deref_cast (SSBO *)ssa_1873 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1875 = deref_struct &ssa_1874->field0 (ssbo uint[]) /* &((SSBO *)ssa_1873)->field0 */ vec4 32 ssa_1876 = deref_array &(*ssa_1875)[ssa_1872] (ssbo uint) /* &((SSBO *)ssa_1873)->field0[ssa_1872] */ vec1 32 ssa_1877 = intrinsic load_deref (ssa_1876) (access=16) vec1 32 ssa_1879 = iadd ssa_1858, ssa_1878 vec4 32 ssa_1880 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1881 = deref_cast (SSBO *)ssa_1880 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1882 = deref_struct &ssa_1881->field0 (ssbo uint[]) /* &((SSBO *)ssa_1880)->field0 */ vec4 32 ssa_1883 = deref_array &(*ssa_1882)[ssa_1879] (ssbo uint) /* &((SSBO *)ssa_1880)->field0[ssa_1879] */ vec1 32 ssa_1884 = intrinsic load_deref (ssa_1883) (access=16) vec1 32 ssa_1895 = iadd ssa_1817, ssa_1894 vec1 32 ssa_1897 = ushr ssa_1895, ssa_1896 vec4 32 ssa_1898 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1899 = deref_cast (SSBO *)ssa_1898 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1900 = deref_struct &ssa_1899->field0 (ssbo uint[]) /* &((SSBO *)ssa_1898)->field0 */ vec4 32 ssa_1901 = deref_array &(*ssa_1900)[ssa_1897] (ssbo uint) /* &((SSBO *)ssa_1898)->field0[ssa_1897] */ vec1 32 ssa_1902 = intrinsic load_deref (ssa_1901) (access=16) vec1 32 ssa_1904 = iadd ssa_1897, ssa_1903 vec4 32 ssa_1905 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1906 = deref_cast (SSBO *)ssa_1905 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1907 = deref_struct &ssa_1906->field0 (ssbo uint[]) /* &((SSBO *)ssa_1905)->field0 */ vec4 32 ssa_1908 = deref_array &(*ssa_1907)[ssa_1904] (ssbo uint) /* &((SSBO *)ssa_1905)->field0[ssa_1904] */ vec1 32 ssa_1909 = intrinsic load_deref (ssa_1908) (access=16) vec1 32 ssa_1911 = iadd ssa_1897, ssa_1910 vec4 32 ssa_1912 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1913 = deref_cast (SSBO *)ssa_1912 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1914 = deref_struct &ssa_1913->field0 (ssbo uint[]) /* &((SSBO *)ssa_1912)->field0 */ vec4 32 ssa_1915 = deref_array &(*ssa_1914)[ssa_1911] (ssbo uint) /* &((SSBO *)ssa_1912)->field0[ssa_1911] */ vec1 32 ssa_1916 = intrinsic load_deref (ssa_1915) (access=16) vec1 32 ssa_1918 = iadd ssa_1897, ssa_1917 vec4 32 ssa_1919 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1920 = deref_cast (SSBO *)ssa_1919 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1921 = deref_struct &ssa_1920->field0 (ssbo uint[]) /* &((SSBO *)ssa_1919)->field0 */ vec4 32 ssa_1922 = deref_array &(*ssa_1921)[ssa_1918] (ssbo uint) /* &((SSBO *)ssa_1919)->field0[ssa_1918] */ vec1 32 ssa_1923 = intrinsic load_deref (ssa_1922) (access=16) vec1 32 ssa_1933 = iadd ssa_468, ssa_39.z vec1 32 ssa_1935 = ushr ssa_1933, ssa_1934 
vec4 32 ssa_1936 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1937 = deref_cast (SSBO *)ssa_1936 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1938 = deref_struct &ssa_1937->field0 (ssbo uint[]) /* &((SSBO *)ssa_1936)->field0 */ vec4 32 ssa_1939 = deref_array &(*ssa_1938)[ssa_1935] (ssbo uint) /* &((SSBO *)ssa_1936)->field0[ssa_1935] */ vec1 32 ssa_1940 = intrinsic load_deref (ssa_1939) (access=16) vec1 32 ssa_1942 = iadd ssa_1935, ssa_1941 vec4 32 ssa_1943 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1944 = deref_cast (SSBO *)ssa_1943 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1945 = deref_struct &ssa_1944->field0 (ssbo uint[]) /* &((SSBO *)ssa_1943)->field0 */ vec4 32 ssa_1946 = deref_array &(*ssa_1945)[ssa_1942] (ssbo uint) /* &((SSBO *)ssa_1943)->field0[ssa_1942] */ vec1 32 ssa_1947 = intrinsic load_deref (ssa_1946) (access=16) vec1 32 ssa_1949 = iadd ssa_1935, ssa_1948 vec4 32 ssa_1950 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1951 = deref_cast (SSBO *)ssa_1950 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1952 = deref_struct &ssa_1951->field0 (ssbo uint[]) /* &((SSBO *)ssa_1950)->field0 */ vec4 32 ssa_1953 = deref_array &(*ssa_1952)[ssa_1949] (ssbo uint) /* &((SSBO *)ssa_1950)->field0[ssa_1949] */ vec1 32 ssa_1954 = intrinsic load_deref (ssa_1953) (access=16) vec1 32 ssa_1956 = iadd ssa_1935, ssa_1955 vec4 32 ssa_1957 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1958 = deref_cast (SSBO *)ssa_1957 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1959 = deref_struct &ssa_1958->field0 (ssbo uint[]) /* &((SSBO *)ssa_1957)->field0 */ vec4 32 ssa_1960 = deref_array &(*ssa_1959)[ssa_1956] (ssbo uint) /* &((SSBO *)ssa_1957)->field0[ssa_1956] */ vec1 32 ssa_1961 = intrinsic load_deref (ssa_1960) (access=16) vec1 32 ssa_1972 = iadd ssa_1933, ssa_1971 vec1 32 ssa_1974 = ushr ssa_1972, ssa_1973 vec4 32 ssa_1975 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1976 = deref_cast (SSBO *)ssa_1975 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1977 = deref_struct &ssa_1976->field0 (ssbo uint[]) /* &((SSBO *)ssa_1975)->field0 */ vec4 32 ssa_1978 = deref_array &(*ssa_1977)[ssa_1974] (ssbo uint) /* &((SSBO *)ssa_1975)->field0[ssa_1974] */ vec1 32 ssa_1979 = intrinsic load_deref (ssa_1978) (access=16) vec1 32 ssa_1981 = iadd ssa_1974, ssa_1980 vec4 32 ssa_1982 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1983 = deref_cast (SSBO *)ssa_1982 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1984 = deref_struct &ssa_1983->field0 (ssbo uint[]) /* &((SSBO *)ssa_1982)->field0 */ vec4 32 ssa_1985 = deref_array &(*ssa_1984)[ssa_1981] (ssbo uint) /* &((SSBO *)ssa_1982)->field0[ssa_1981] */ vec1 32 ssa_1986 = intrinsic load_deref (ssa_1985) (access=16) vec1 32 ssa_1988 = iadd ssa_1974, ssa_1987 vec4 32 ssa_1989 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1990 = deref_cast (SSBO *)ssa_1989 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1991 = deref_struct &ssa_1990->field0 (ssbo uint[]) /* &((SSBO *)ssa_1989)->field0 */ vec4 32 ssa_1992 = deref_array &(*ssa_1991)[ssa_1988] (ssbo uint) /* &((SSBO *)ssa_1989)->field0[ssa_1988] */ vec1 32 ssa_1993 = intrinsic load_deref (ssa_1992) (access=16) vec1 32 ssa_1995 = iadd 
ssa_1974, ssa_1994 vec4 32 ssa_1996 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_1997 = deref_cast (SSBO *)ssa_1996 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1998 = deref_struct &ssa_1997->field0 (ssbo uint[]) /* &((SSBO *)ssa_1996)->field0 */ vec4 32 ssa_1999 = deref_array &(*ssa_1998)[ssa_1995] (ssbo uint) /* &((SSBO *)ssa_1996)->field0[ssa_1995] */ vec1 32 ssa_2000 = intrinsic load_deref (ssa_1999) (access=16) vec1 32 ssa_2011 = iadd ssa_1933, ssa_2010 vec1 32 ssa_2013 = ushr ssa_2011, ssa_2012 vec4 32 ssa_2014 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2015 = deref_cast (SSBO *)ssa_2014 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2016 = deref_struct &ssa_2015->field0 (ssbo uint[]) /* &((SSBO *)ssa_2014)->field0 */ vec4 32 ssa_2017 = deref_array &(*ssa_2016)[ssa_2013] (ssbo uint) /* &((SSBO *)ssa_2014)->field0[ssa_2013] */ vec1 32 ssa_2018 = intrinsic load_deref (ssa_2017) (access=16) vec1 32 ssa_2020 = iadd ssa_2013, ssa_2019 vec4 32 ssa_2021 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2022 = deref_cast (SSBO *)ssa_2021 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2023 = deref_struct &ssa_2022->field0 (ssbo uint[]) /* &((SSBO *)ssa_2021)->field0 */ vec4 32 ssa_2024 = deref_array &(*ssa_2023)[ssa_2020] (ssbo uint) /* &((SSBO *)ssa_2021)->field0[ssa_2020] */ vec1 32 ssa_2025 = intrinsic load_deref (ssa_2024) (access=16) vec1 32 ssa_2027 = iadd ssa_2013, ssa_2026 vec4 32 ssa_2028 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2029 = deref_cast (SSBO *)ssa_2028 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2030 = deref_struct &ssa_2029->field0 (ssbo uint[]) /* &((SSBO *)ssa_2028)->field0 */ vec4 32 ssa_2031 = deref_array &(*ssa_2030)[ssa_2027] (ssbo uint) /* &((SSBO *)ssa_2028)->field0[ssa_2027] */ vec1 32 ssa_2032 = intrinsic load_deref (ssa_2031) (access=16) vec1 32 ssa_2034 = iadd ssa_2013, ssa_2033 vec4 32 ssa_2035 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2036 = deref_cast (SSBO *)ssa_2035 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2037 = deref_struct &ssa_2036->field0 (ssbo uint[]) /* &((SSBO *)ssa_2035)->field0 */ vec4 32 ssa_2038 = deref_array &(*ssa_2037)[ssa_2034] (ssbo uint) /* &((SSBO *)ssa_2035)->field0[ssa_2034] */ vec1 32 ssa_2039 = intrinsic load_deref (ssa_2038) (access=16) vec1 32 ssa_2049 = fmul ssa_1824, ssa_343 vec1 32 ssa_2050 = fmul ssa_1863, ssa_343 vec1 32 ssa_2051 = fmul ssa_1902, ssa_343 vec1 32 ssa_2052 = fmul ssa_1831, ssa_343 vec1 32 ssa_2053 = fmul ssa_1870, ssa_343 vec1 32 ssa_2054 = fmul ssa_1909, ssa_343 vec1 32 ssa_2055 = fmul ssa_1838, ssa_343 vec1 32 ssa_2056 = fmul ssa_1877, ssa_343 vec1 32 ssa_2057 = fmul ssa_1916, ssa_343 vec1 32 ssa_2058 = fmul ssa_1845, ssa_343 vec1 32 ssa_2059 = fmul ssa_1884, ssa_343 vec1 32 ssa_2060 = fmul ssa_1923, ssa_343 vec1 32 ssa_2061 = fmul ssa_1940, ssa_347 vec1 32 ssa_2062 = fmul ssa_1979, ssa_347 vec1 32 ssa_2063 = fmul ssa_2018, ssa_347 vec1 32 ssa_2064 = fmul ssa_1947, ssa_347 vec1 32 ssa_2065 = fmul ssa_1986, ssa_347 vec1 32 ssa_2066 = fmul ssa_2025, ssa_347 vec1 32 ssa_2067 = fmul ssa_1954, ssa_347 vec1 32 ssa_2068 = fmul ssa_1993, ssa_347 vec1 32 ssa_2069 = fmul ssa_2032, ssa_347 vec1 32 ssa_2070 = fmul ssa_1961, ssa_347 vec1 32 ssa_2071 = fmul ssa_2000, ssa_347 vec1 32 ssa_2072 = fmul ssa_2039, ssa_347 vec1 32 
ssa_2073 = fadd ssa_2061, ssa_2049 vec1 32 ssa_2074 = fadd ssa_2062, ssa_2050 vec1 32 ssa_2075 = fadd ssa_2063, ssa_2051 vec1 32 ssa_2076 = fadd ssa_2064, ssa_2052 vec1 32 ssa_2077 = fadd ssa_2065, ssa_2053 vec1 32 ssa_2078 = fadd ssa_2066, ssa_2054 vec1 32 ssa_2079 = fadd ssa_2067, ssa_2055 vec1 32 ssa_2080 = fadd ssa_2068, ssa_2056 vec1 32 ssa_2081 = fadd ssa_2069, ssa_2057 vec1 32 ssa_2082 = fadd ssa_2070, ssa_2058 vec1 32 ssa_2083 = fadd ssa_2071, ssa_2059 vec1 32 ssa_2084 = fadd ssa_2072, ssa_2060 vec1 32 ssa_2085 = iadd ssa_621, ssa_39.z vec1 32 ssa_2087 = ushr ssa_2085, ssa_2086 vec4 32 ssa_2088 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2089 = deref_cast (SSBO *)ssa_2088 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2090 = deref_struct &ssa_2089->field0 (ssbo uint[]) /* &((SSBO *)ssa_2088)->field0 */ vec4 32 ssa_2091 = deref_array &(*ssa_2090)[ssa_2087] (ssbo uint) /* &((SSBO *)ssa_2088)->field0[ssa_2087] */ vec1 32 ssa_2092 = intrinsic load_deref (ssa_2091) (access=16) vec1 32 ssa_2094 = iadd ssa_2087, ssa_2093 vec4 32 ssa_2095 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2096 = deref_cast (SSBO *)ssa_2095 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2097 = deref_struct &ssa_2096->field0 (ssbo uint[]) /* &((SSBO *)ssa_2095)->field0 */ vec4 32 ssa_2098 = deref_array &(*ssa_2097)[ssa_2094] (ssbo uint) /* &((SSBO *)ssa_2095)->field0[ssa_2094] */ vec1 32 ssa_2099 = intrinsic load_deref (ssa_2098) (access=16) vec1 32 ssa_2101 = iadd ssa_2087, ssa_2100 vec4 32 ssa_2102 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2103 = deref_cast (SSBO *)ssa_2102 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2104 = deref_struct &ssa_2103->field0 (ssbo uint[]) /* &((SSBO *)ssa_2102)->field0 */ vec4 32 ssa_2105 = deref_array &(*ssa_2104)[ssa_2101] (ssbo uint) /* &((SSBO *)ssa_2102)->field0[ssa_2101] */ vec1 32 ssa_2106 = intrinsic load_deref (ssa_2105) (access=16) vec1 32 ssa_2108 = iadd ssa_2087, ssa_2107 vec4 32 ssa_2109 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2110 = deref_cast (SSBO *)ssa_2109 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2111 = deref_struct &ssa_2110->field0 (ssbo uint[]) /* &((SSBO *)ssa_2109)->field0 */ vec4 32 ssa_2112 = deref_array &(*ssa_2111)[ssa_2108] (ssbo uint) /* &((SSBO *)ssa_2109)->field0[ssa_2108] */ vec1 32 ssa_2113 = intrinsic load_deref (ssa_2112) (access=16) vec1 32 ssa_2124 = iadd ssa_2085, ssa_2123 vec1 32 ssa_2126 = ushr ssa_2124, ssa_2125 vec4 32 ssa_2127 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2128 = deref_cast (SSBO *)ssa_2127 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2129 = deref_struct &ssa_2128->field0 (ssbo uint[]) /* &((SSBO *)ssa_2127)->field0 */ vec4 32 ssa_2130 = deref_array &(*ssa_2129)[ssa_2126] (ssbo uint) /* &((SSBO *)ssa_2127)->field0[ssa_2126] */ vec1 32 ssa_2131 = intrinsic load_deref (ssa_2130) (access=16) vec1 32 ssa_2133 = iadd ssa_2126, ssa_2132 vec4 32 ssa_2134 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2135 = deref_cast (SSBO *)ssa_2134 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2136 = deref_struct &ssa_2135->field0 (ssbo uint[]) /* &((SSBO *)ssa_2134)->field0 */ vec4 32 ssa_2137 = deref_array &(*ssa_2136)[ssa_2133] (ssbo uint) /* &((SSBO *)ssa_2134)->field0[ssa_2133] */ 
vec1 32 ssa_2138 = intrinsic load_deref (ssa_2137) (access=16) vec1 32 ssa_2140 = iadd ssa_2126, ssa_2139 vec4 32 ssa_2141 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2142 = deref_cast (SSBO *)ssa_2141 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2143 = deref_struct &ssa_2142->field0 (ssbo uint[]) /* &((SSBO *)ssa_2141)->field0 */ vec4 32 ssa_2144 = deref_array &(*ssa_2143)[ssa_2140] (ssbo uint) /* &((SSBO *)ssa_2141)->field0[ssa_2140] */ vec1 32 ssa_2145 = intrinsic load_deref (ssa_2144) (access=16) vec1 32 ssa_2147 = iadd ssa_2126, ssa_2146 vec4 32 ssa_2148 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2149 = deref_cast (SSBO *)ssa_2148 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2150 = deref_struct &ssa_2149->field0 (ssbo uint[]) /* &((SSBO *)ssa_2148)->field0 */ vec4 32 ssa_2151 = deref_array &(*ssa_2150)[ssa_2147] (ssbo uint) /* &((SSBO *)ssa_2148)->field0[ssa_2147] */ vec1 32 ssa_2152 = intrinsic load_deref (ssa_2151) (access=16) vec1 32 ssa_2163 = iadd ssa_2085, ssa_2162 vec1 32 ssa_2165 = ushr ssa_2163, ssa_2164 vec4 32 ssa_2166 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2167 = deref_cast (SSBO *)ssa_2166 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2168 = deref_struct &ssa_2167->field0 (ssbo uint[]) /* &((SSBO *)ssa_2166)->field0 */ vec4 32 ssa_2169 = deref_array &(*ssa_2168)[ssa_2165] (ssbo uint) /* &((SSBO *)ssa_2166)->field0[ssa_2165] */ vec1 32 ssa_2170 = intrinsic load_deref (ssa_2169) (access=16) vec1 32 ssa_2172 = iadd ssa_2165, ssa_2171 vec4 32 ssa_2173 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2174 = deref_cast (SSBO *)ssa_2173 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2175 = deref_struct &ssa_2174->field0 (ssbo uint[]) /* &((SSBO *)ssa_2173)->field0 */ vec4 32 ssa_2176 = deref_array &(*ssa_2175)[ssa_2172] (ssbo uint) /* &((SSBO *)ssa_2173)->field0[ssa_2172] */ vec1 32 ssa_2177 = intrinsic load_deref (ssa_2176) (access=16) vec1 32 ssa_2179 = iadd ssa_2165, ssa_2178 vec4 32 ssa_2180 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2181 = deref_cast (SSBO *)ssa_2180 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2182 = deref_struct &ssa_2181->field0 (ssbo uint[]) /* &((SSBO *)ssa_2180)->field0 */ vec4 32 ssa_2183 = deref_array &(*ssa_2182)[ssa_2179] (ssbo uint) /* &((SSBO *)ssa_2180)->field0[ssa_2179] */ vec1 32 ssa_2184 = intrinsic load_deref (ssa_2183) (access=16) vec1 32 ssa_2186 = iadd ssa_2165, ssa_2185 vec4 32 ssa_2187 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2188 = deref_cast (SSBO *)ssa_2187 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2189 = deref_struct &ssa_2188->field0 (ssbo uint[]) /* &((SSBO *)ssa_2187)->field0 */ vec4 32 ssa_2190 = deref_array &(*ssa_2189)[ssa_2186] (ssbo uint) /* &((SSBO *)ssa_2187)->field0[ssa_2186] */ vec1 32 ssa_2191 = intrinsic load_deref (ssa_2190) (access=16) vec1 32 ssa_2201 = iadd ssa_738, ssa_39.z vec1 32 ssa_2203 = ushr ssa_2201, ssa_2202 vec4 32 ssa_2204 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2205 = deref_cast (SSBO *)ssa_2204 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2206 = deref_struct &ssa_2205->field0 (ssbo uint[]) /* &((SSBO *)ssa_2204)->field0 */ vec4 32 ssa_2207 = deref_array 
&(*ssa_2206)[ssa_2203] (ssbo uint) /* &((SSBO *)ssa_2204)->field0[ssa_2203] */ vec1 32 ssa_2208 = intrinsic load_deref (ssa_2207) (access=16) vec1 32 ssa_2210 = iadd ssa_2203, ssa_2209 vec4 32 ssa_2211 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2212 = deref_cast (SSBO *)ssa_2211 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2213 = deref_struct &ssa_2212->field0 (ssbo uint[]) /* &((SSBO *)ssa_2211)->field0 */ vec4 32 ssa_2214 = deref_array &(*ssa_2213)[ssa_2210] (ssbo uint) /* &((SSBO *)ssa_2211)->field0[ssa_2210] */ vec1 32 ssa_2215 = intrinsic load_deref (ssa_2214) (access=16) vec1 32 ssa_2217 = iadd ssa_2203, ssa_2216 vec4 32 ssa_2218 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2219 = deref_cast (SSBO *)ssa_2218 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2220 = deref_struct &ssa_2219->field0 (ssbo uint[]) /* &((SSBO *)ssa_2218)->field0 */ vec4 32 ssa_2221 = deref_array &(*ssa_2220)[ssa_2217] (ssbo uint) /* &((SSBO *)ssa_2218)->field0[ssa_2217] */ vec1 32 ssa_2222 = intrinsic load_deref (ssa_2221) (access=16) vec1 32 ssa_2224 = iadd ssa_2203, ssa_2223 vec4 32 ssa_2225 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2226 = deref_cast (SSBO *)ssa_2225 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2227 = deref_struct &ssa_2226->field0 (ssbo uint[]) /* &((SSBO *)ssa_2225)->field0 */ vec4 32 ssa_2228 = deref_array &(*ssa_2227)[ssa_2224] (ssbo uint) /* &((SSBO *)ssa_2225)->field0[ssa_2224] */ vec1 32 ssa_2229 = intrinsic load_deref (ssa_2228) (access=16) vec1 32 ssa_2240 = iadd ssa_2201, ssa_2239 vec1 32 ssa_2242 = ushr ssa_2240, ssa_2241 vec4 32 ssa_2243 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2244 = deref_cast (SSBO *)ssa_2243 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2245 = deref_struct &ssa_2244->field0 (ssbo uint[]) /* &((SSBO *)ssa_2243)->field0 */ vec4 32 ssa_2246 = deref_array &(*ssa_2245)[ssa_2242] (ssbo uint) /* &((SSBO *)ssa_2243)->field0[ssa_2242] */ vec1 32 ssa_2247 = intrinsic load_deref (ssa_2246) (access=16) vec1 32 ssa_2249 = iadd ssa_2242, ssa_2248 vec4 32 ssa_2250 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2251 = deref_cast (SSBO *)ssa_2250 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2252 = deref_struct &ssa_2251->field0 (ssbo uint[]) /* &((SSBO *)ssa_2250)->field0 */ vec4 32 ssa_2253 = deref_array &(*ssa_2252)[ssa_2249] (ssbo uint) /* &((SSBO *)ssa_2250)->field0[ssa_2249] */ vec1 32 ssa_2254 = intrinsic load_deref (ssa_2253) (access=16) vec1 32 ssa_2256 = iadd ssa_2242, ssa_2255 vec4 32 ssa_2257 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2258 = deref_cast (SSBO *)ssa_2257 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2259 = deref_struct &ssa_2258->field0 (ssbo uint[]) /* &((SSBO *)ssa_2257)->field0 */ vec4 32 ssa_2260 = deref_array &(*ssa_2259)[ssa_2256] (ssbo uint) /* &((SSBO *)ssa_2257)->field0[ssa_2256] */ vec1 32 ssa_2261 = intrinsic load_deref (ssa_2260) (access=16) vec1 32 ssa_2263 = iadd ssa_2242, ssa_2262 vec4 32 ssa_2264 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2265 = deref_cast (SSBO *)ssa_2264 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2266 = deref_struct &ssa_2265->field0 (ssbo uint[]) /* &((SSBO *)ssa_2264)->field0 */ vec4 32 
ssa_2267 = deref_array &(*ssa_2266)[ssa_2263] (ssbo uint) /* &((SSBO *)ssa_2264)->field0[ssa_2263] */ vec1 32 ssa_2268 = intrinsic load_deref (ssa_2267) (access=16) vec1 32 ssa_2279 = iadd ssa_2201, ssa_2278 vec1 32 ssa_2281 = ushr ssa_2279, ssa_2280 vec4 32 ssa_2282 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2283 = deref_cast (SSBO *)ssa_2282 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2284 = deref_struct &ssa_2283->field0 (ssbo uint[]) /* &((SSBO *)ssa_2282)->field0 */ vec4 32 ssa_2285 = deref_array &(*ssa_2284)[ssa_2281] (ssbo uint) /* &((SSBO *)ssa_2282)->field0[ssa_2281] */ vec1 32 ssa_2286 = intrinsic load_deref (ssa_2285) (access=16) vec1 32 ssa_2288 = iadd ssa_2281, ssa_2287 vec4 32 ssa_2289 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2290 = deref_cast (SSBO *)ssa_2289 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2291 = deref_struct &ssa_2290->field0 (ssbo uint[]) /* &((SSBO *)ssa_2289)->field0 */ vec4 32 ssa_2292 = deref_array &(*ssa_2291)[ssa_2288] (ssbo uint) /* &((SSBO *)ssa_2289)->field0[ssa_2288] */ vec1 32 ssa_2293 = intrinsic load_deref (ssa_2292) (access=16) vec1 32 ssa_2295 = iadd ssa_2281, ssa_2294 vec4 32 ssa_2296 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2297 = deref_cast (SSBO *)ssa_2296 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2298 = deref_struct &ssa_2297->field0 (ssbo uint[]) /* &((SSBO *)ssa_2296)->field0 */ vec4 32 ssa_2299 = deref_array &(*ssa_2298)[ssa_2295] (ssbo uint) /* &((SSBO *)ssa_2296)->field0[ssa_2295] */ vec1 32 ssa_2300 = intrinsic load_deref (ssa_2299) (access=16) vec1 32 ssa_2302 = iadd ssa_2281, ssa_2301 vec4 32 ssa_2303 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2304 = deref_cast (SSBO *)ssa_2303 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2305 = deref_struct &ssa_2304->field0 (ssbo uint[]) /* &((SSBO *)ssa_2303)->field0 */ vec4 32 ssa_2306 = deref_array &(*ssa_2305)[ssa_2302] (ssbo uint) /* &((SSBO *)ssa_2303)->field0[ssa_2302] */ vec1 32 ssa_2307 = intrinsic load_deref (ssa_2306) (access=16) vec1 32 ssa_2317 = fmul ssa_2092, ssa_344 vec1 32 ssa_2318 = fmul ssa_2131, ssa_344 vec1 32 ssa_2319 = fmul ssa_2170, ssa_344 vec1 32 ssa_2320 = fmul ssa_2099, ssa_344 vec1 32 ssa_2321 = fmul ssa_2138, ssa_344 vec1 32 ssa_2322 = fmul ssa_2177, ssa_344 vec1 32 ssa_2323 = fmul ssa_2106, ssa_344 vec1 32 ssa_2324 = fmul ssa_2145, ssa_344 vec1 32 ssa_2325 = fmul ssa_2184, ssa_344 vec1 32 ssa_2326 = fmul ssa_2113, ssa_344 vec1 32 ssa_2327 = fmul ssa_2152, ssa_344 vec1 32 ssa_2328 = fmul ssa_2191, ssa_344 vec1 32 ssa_2329 = fadd ssa_2073, ssa_2317 vec1 32 ssa_2330 = fadd ssa_2074, ssa_2318 vec1 32 ssa_2331 = fadd ssa_2075, ssa_2319 vec1 32 ssa_2332 = fadd ssa_2076, ssa_2320 vec1 32 ssa_2333 = fadd ssa_2077, ssa_2321 vec1 32 ssa_2334 = fadd ssa_2078, ssa_2322 vec1 32 ssa_2335 = fadd ssa_2079, ssa_2323 vec1 32 ssa_2336 = fadd ssa_2080, ssa_2324 vec1 32 ssa_2337 = fadd ssa_2081, ssa_2325 vec1 32 ssa_2338 = fadd ssa_2082, ssa_2326 vec1 32 ssa_2339 = fadd ssa_2083, ssa_2327 vec1 32 ssa_2340 = fadd ssa_2084, ssa_2328 vec1 32 ssa_2341 = fmul ssa_2208, ssa_348 vec1 32 ssa_2342 = fmul ssa_2247, ssa_348 vec1 32 ssa_2343 = fmul ssa_2286, ssa_348 vec1 32 ssa_2344 = fmul ssa_2215, ssa_348 vec1 32 ssa_2345 = fmul ssa_2254, ssa_348 vec1 32 ssa_2346 = fmul ssa_2293, ssa_348 vec1 32 ssa_2347 = fmul ssa_2222, ssa_348 vec1 32 ssa_2348 
= fmul ssa_2261, ssa_348 vec1 32 ssa_2349 = fmul ssa_2300, ssa_348 vec1 32 ssa_2350 = fmul ssa_2229, ssa_348 vec1 32 ssa_2351 = fmul ssa_2268, ssa_348 vec1 32 ssa_2352 = fmul ssa_2307, ssa_348 vec1 32 ssa_2353 = fadd ssa_2329, ssa_2341 vec1 32 ssa_2354 = fadd ssa_2330, ssa_2342 vec1 32 ssa_2355 = fadd ssa_2331, ssa_2343 vec1 32 ssa_2356 = fadd ssa_2332, ssa_2344 vec1 32 ssa_2357 = fadd ssa_2333, ssa_2345 vec1 32 ssa_2358 = fadd ssa_2334, ssa_2346 vec1 32 ssa_2359 = fadd ssa_2335, ssa_2347 vec1 32 ssa_2360 = fadd ssa_2336, ssa_2348 vec1 32 ssa_2361 = fadd ssa_2337, ssa_2349 vec1 32 ssa_2362 = fadd ssa_2338, ssa_2350 vec1 32 ssa_2363 = fadd ssa_2339, ssa_2351 vec1 32 ssa_2364 = fadd ssa_2340, ssa_2352 vec1 32 ssa_2365 = iadd ssa_903, ssa_39.z vec1 32 ssa_2367 = ushr ssa_2365, ssa_2366 vec4 32 ssa_2368 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2369 = deref_cast (SSBO *)ssa_2368 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2370 = deref_struct &ssa_2369->field0 (ssbo uint[]) /* &((SSBO *)ssa_2368)->field0 */ vec4 32 ssa_2371 = deref_array &(*ssa_2370)[ssa_2367] (ssbo uint) /* &((SSBO *)ssa_2368)->field0[ssa_2367] */ vec1 32 ssa_2372 = intrinsic load_deref (ssa_2371) (access=16) vec1 32 ssa_2374 = iadd ssa_2367, ssa_2373 vec4 32 ssa_2375 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2376 = deref_cast (SSBO *)ssa_2375 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2377 = deref_struct &ssa_2376->field0 (ssbo uint[]) /* &((SSBO *)ssa_2375)->field0 */ vec4 32 ssa_2378 = deref_array &(*ssa_2377)[ssa_2374] (ssbo uint) /* &((SSBO *)ssa_2375)->field0[ssa_2374] */ vec1 32 ssa_2379 = intrinsic load_deref (ssa_2378) (access=16) vec1 32 ssa_2381 = iadd ssa_2367, ssa_2380 vec4 32 ssa_2382 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2383 = deref_cast (SSBO *)ssa_2382 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2384 = deref_struct &ssa_2383->field0 (ssbo uint[]) /* &((SSBO *)ssa_2382)->field0 */ vec4 32 ssa_2385 = deref_array &(*ssa_2384)[ssa_2381] (ssbo uint) /* &((SSBO *)ssa_2382)->field0[ssa_2381] */ vec1 32 ssa_2386 = intrinsic load_deref (ssa_2385) (access=16) vec1 32 ssa_2388 = iadd ssa_2367, ssa_2387 vec4 32 ssa_2389 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2390 = deref_cast (SSBO *)ssa_2389 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2391 = deref_struct &ssa_2390->field0 (ssbo uint[]) /* &((SSBO *)ssa_2389)->field0 */ vec4 32 ssa_2392 = deref_array &(*ssa_2391)[ssa_2388] (ssbo uint) /* &((SSBO *)ssa_2389)->field0[ssa_2388] */ vec1 32 ssa_2393 = intrinsic load_deref (ssa_2392) (access=16) vec1 32 ssa_2404 = iadd ssa_2365, ssa_2403 vec1 32 ssa_2406 = ushr ssa_2404, ssa_2405 vec4 32 ssa_2407 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2408 = deref_cast (SSBO *)ssa_2407 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2409 = deref_struct &ssa_2408->field0 (ssbo uint[]) /* &((SSBO *)ssa_2407)->field0 */ vec4 32 ssa_2410 = deref_array &(*ssa_2409)[ssa_2406] (ssbo uint) /* &((SSBO *)ssa_2407)->field0[ssa_2406] */ vec1 32 ssa_2411 = intrinsic load_deref (ssa_2410) (access=16) vec1 32 ssa_2413 = iadd ssa_2406, ssa_2412 vec4 32 ssa_2414 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2415 = deref_cast (SSBO *)ssa_2414 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 
ssa_2416 = deref_struct &ssa_2415->field0 (ssbo uint[]) /* &((SSBO *)ssa_2414)->field0 */ vec4 32 ssa_2417 = deref_array &(*ssa_2416)[ssa_2413] (ssbo uint) /* &((SSBO *)ssa_2414)->field0[ssa_2413] */ vec1 32 ssa_2418 = intrinsic load_deref (ssa_2417) (access=16) vec1 32 ssa_2420 = iadd ssa_2406, ssa_2419 vec4 32 ssa_2421 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2422 = deref_cast (SSBO *)ssa_2421 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2423 = deref_struct &ssa_2422->field0 (ssbo uint[]) /* &((SSBO *)ssa_2421)->field0 */ vec4 32 ssa_2424 = deref_array &(*ssa_2423)[ssa_2420] (ssbo uint) /* &((SSBO *)ssa_2421)->field0[ssa_2420] */ vec1 32 ssa_2425 = intrinsic load_deref (ssa_2424) (access=16) vec1 32 ssa_2427 = iadd ssa_2406, ssa_2426 vec4 32 ssa_2428 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2429 = deref_cast (SSBO *)ssa_2428 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2430 = deref_struct &ssa_2429->field0 (ssbo uint[]) /* &((SSBO *)ssa_2428)->field0 */ vec4 32 ssa_2431 = deref_array &(*ssa_2430)[ssa_2427] (ssbo uint) /* &((SSBO *)ssa_2428)->field0[ssa_2427] */ vec1 32 ssa_2432 = intrinsic load_deref (ssa_2431) (access=16) vec1 32 ssa_2443 = iadd ssa_2365, ssa_2442 vec1 32 ssa_2445 = ushr ssa_2443, ssa_2444 vec4 32 ssa_2446 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2447 = deref_cast (SSBO *)ssa_2446 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2448 = deref_struct &ssa_2447->field0 (ssbo uint[]) /* &((SSBO *)ssa_2446)->field0 */ vec4 32 ssa_2449 = deref_array &(*ssa_2448)[ssa_2445] (ssbo uint) /* &((SSBO *)ssa_2446)->field0[ssa_2445] */ vec1 32 ssa_2450 = intrinsic load_deref (ssa_2449) (access=16) vec1 32 ssa_2452 = iadd ssa_2445, ssa_2451 vec4 32 ssa_2453 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2454 = deref_cast (SSBO *)ssa_2453 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2455 = deref_struct &ssa_2454->field0 (ssbo uint[]) /* &((SSBO *)ssa_2453)->field0 */ vec4 32 ssa_2456 = deref_array &(*ssa_2455)[ssa_2452] (ssbo uint) /* &((SSBO *)ssa_2453)->field0[ssa_2452] */ vec1 32 ssa_2457 = intrinsic load_deref (ssa_2456) (access=16) vec1 32 ssa_2459 = iadd ssa_2445, ssa_2458 vec4 32 ssa_2460 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2461 = deref_cast (SSBO *)ssa_2460 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2462 = deref_struct &ssa_2461->field0 (ssbo uint[]) /* &((SSBO *)ssa_2460)->field0 */ vec4 32 ssa_2463 = deref_array &(*ssa_2462)[ssa_2459] (ssbo uint) /* &((SSBO *)ssa_2460)->field0[ssa_2459] */ vec1 32 ssa_2464 = intrinsic load_deref (ssa_2463) (access=16) vec1 32 ssa_2466 = iadd ssa_2445, ssa_2465 vec4 32 ssa_2467 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2468 = deref_cast (SSBO *)ssa_2467 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2469 = deref_struct &ssa_2468->field0 (ssbo uint[]) /* &((SSBO *)ssa_2467)->field0 */ vec4 32 ssa_2470 = deref_array &(*ssa_2469)[ssa_2466] (ssbo uint) /* &((SSBO *)ssa_2467)->field0[ssa_2466] */ vec1 32 ssa_2471 = intrinsic load_deref (ssa_2470) (access=16) vec1 32 ssa_2481 = iadd ssa_1020, ssa_39.z vec1 32 ssa_2483 = ushr ssa_2481, ssa_2482 vec4 32 ssa_2484 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2485 = deref_cast (SSBO *)ssa_2484 (ssbo 
SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2486 = deref_struct &ssa_2485->field0 (ssbo uint[]) /* &((SSBO *)ssa_2484)->field0 */ vec4 32 ssa_2487 = deref_array &(*ssa_2486)[ssa_2483] (ssbo uint) /* &((SSBO *)ssa_2484)->field0[ssa_2483] */ vec1 32 ssa_2488 = intrinsic load_deref (ssa_2487) (access=16) vec1 32 ssa_2490 = iadd ssa_2483, ssa_2489 vec4 32 ssa_2491 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2492 = deref_cast (SSBO *)ssa_2491 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2493 = deref_struct &ssa_2492->field0 (ssbo uint[]) /* &((SSBO *)ssa_2491)->field0 */ vec4 32 ssa_2494 = deref_array &(*ssa_2493)[ssa_2490] (ssbo uint) /* &((SSBO *)ssa_2491)->field0[ssa_2490] */ vec1 32 ssa_2495 = intrinsic load_deref (ssa_2494) (access=16) vec1 32 ssa_2497 = iadd ssa_2483, ssa_2496 vec4 32 ssa_2498 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2499 = deref_cast (SSBO *)ssa_2498 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2500 = deref_struct &ssa_2499->field0 (ssbo uint[]) /* &((SSBO *)ssa_2498)->field0 */ vec4 32 ssa_2501 = deref_array &(*ssa_2500)[ssa_2497] (ssbo uint) /* &((SSBO *)ssa_2498)->field0[ssa_2497] */ vec1 32 ssa_2502 = intrinsic load_deref (ssa_2501) (access=16) vec1 32 ssa_2504 = iadd ssa_2483, ssa_2503 vec4 32 ssa_2505 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2506 = deref_cast (SSBO *)ssa_2505 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2507 = deref_struct &ssa_2506->field0 (ssbo uint[]) /* &((SSBO *)ssa_2505)->field0 */ vec4 32 ssa_2508 = deref_array &(*ssa_2507)[ssa_2504] (ssbo uint) /* &((SSBO *)ssa_2505)->field0[ssa_2504] */ vec1 32 ssa_2509 = intrinsic load_deref (ssa_2508) (access=16) vec1 32 ssa_2520 = iadd ssa_2481, ssa_2519 vec1 32 ssa_2522 = ushr ssa_2520, ssa_2521 vec4 32 ssa_2523 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2524 = deref_cast (SSBO *)ssa_2523 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2525 = deref_struct &ssa_2524->field0 (ssbo uint[]) /* &((SSBO *)ssa_2523)->field0 */ vec4 32 ssa_2526 = deref_array &(*ssa_2525)[ssa_2522] (ssbo uint) /* &((SSBO *)ssa_2523)->field0[ssa_2522] */ vec1 32 ssa_2527 = intrinsic load_deref (ssa_2526) (access=16) vec1 32 ssa_2529 = iadd ssa_2522, ssa_2528 vec4 32 ssa_2530 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2531 = deref_cast (SSBO *)ssa_2530 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2532 = deref_struct &ssa_2531->field0 (ssbo uint[]) /* &((SSBO *)ssa_2530)->field0 */ vec4 32 ssa_2533 = deref_array &(*ssa_2532)[ssa_2529] (ssbo uint) /* &((SSBO *)ssa_2530)->field0[ssa_2529] */ vec1 32 ssa_2534 = intrinsic load_deref (ssa_2533) (access=16) vec1 32 ssa_2536 = iadd ssa_2522, ssa_2535 vec4 32 ssa_2537 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2538 = deref_cast (SSBO *)ssa_2537 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2539 = deref_struct &ssa_2538->field0 (ssbo uint[]) /* &((SSBO *)ssa_2537)->field0 */ vec4 32 ssa_2540 = deref_array &(*ssa_2539)[ssa_2536] (ssbo uint) /* &((SSBO *)ssa_2537)->field0[ssa_2536] */ vec1 32 ssa_2541 = intrinsic load_deref (ssa_2540) (access=16) vec1 32 ssa_2543 = iadd ssa_2522, ssa_2542 vec4 32 ssa_2544 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2545 = deref_cast (SSBO 
*)ssa_2544 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2546 = deref_struct &ssa_2545->field0 (ssbo uint[]) /* &((SSBO *)ssa_2544)->field0 */ vec4 32 ssa_2547 = deref_array &(*ssa_2546)[ssa_2543] (ssbo uint) /* &((SSBO *)ssa_2544)->field0[ssa_2543] */ vec1 32 ssa_2548 = intrinsic load_deref (ssa_2547) (access=16) vec1 32 ssa_2559 = iadd ssa_2481, ssa_2558 vec1 32 ssa_2561 = ushr ssa_2559, ssa_2560 vec4 32 ssa_2562 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2563 = deref_cast (SSBO *)ssa_2562 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2564 = deref_struct &ssa_2563->field0 (ssbo uint[]) /* &((SSBO *)ssa_2562)->field0 */ vec4 32 ssa_2565 = deref_array &(*ssa_2564)[ssa_2561] (ssbo uint) /* &((SSBO *)ssa_2562)->field0[ssa_2561] */ vec1 32 ssa_2566 = intrinsic load_deref (ssa_2565) (access=16) vec1 32 ssa_2568 = iadd ssa_2561, ssa_2567 vec4 32 ssa_2569 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2570 = deref_cast (SSBO *)ssa_2569 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2571 = deref_struct &ssa_2570->field0 (ssbo uint[]) /* &((SSBO *)ssa_2569)->field0 */ vec4 32 ssa_2572 = deref_array &(*ssa_2571)[ssa_2568] (ssbo uint) /* &((SSBO *)ssa_2569)->field0[ssa_2568] */ vec1 32 ssa_2573 = intrinsic load_deref (ssa_2572) (access=16) vec1 32 ssa_2575 = iadd ssa_2561, ssa_2574 vec4 32 ssa_2576 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2577 = deref_cast (SSBO *)ssa_2576 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2578 = deref_struct &ssa_2577->field0 (ssbo uint[]) /* &((SSBO *)ssa_2576)->field0 */ vec4 32 ssa_2579 = deref_array &(*ssa_2578)[ssa_2575] (ssbo uint) /* &((SSBO *)ssa_2576)->field0[ssa_2575] */ vec1 32 ssa_2580 = intrinsic load_deref (ssa_2579) (access=16) vec1 32 ssa_2582 = iadd ssa_2561, ssa_2581 vec4 32 ssa_2583 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2584 = deref_cast (SSBO *)ssa_2583 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2585 = deref_struct &ssa_2584->field0 (ssbo uint[]) /* &((SSBO *)ssa_2583)->field0 */ vec4 32 ssa_2586 = deref_array &(*ssa_2585)[ssa_2582] (ssbo uint) /* &((SSBO *)ssa_2583)->field0[ssa_2582] */ vec1 32 ssa_2587 = intrinsic load_deref (ssa_2586) (access=16) vec1 32 ssa_2597 = fmul ssa_2372, ssa_345 vec1 32 ssa_2598 = fmul ssa_2411, ssa_345 vec1 32 ssa_2599 = fmul ssa_2450, ssa_345 vec1 32 ssa_2600 = fmul ssa_2379, ssa_345 vec1 32 ssa_2601 = fmul ssa_2418, ssa_345 vec1 32 ssa_2602 = fmul ssa_2457, ssa_345 vec1 32 ssa_2603 = fmul ssa_2386, ssa_345 vec1 32 ssa_2604 = fmul ssa_2425, ssa_345 vec1 32 ssa_2605 = fmul ssa_2464, ssa_345 vec1 32 ssa_2606 = fmul ssa_2393, ssa_345 vec1 32 ssa_2607 = fmul ssa_2432, ssa_345 vec1 32 ssa_2608 = fmul ssa_2471, ssa_345 vec1 32 ssa_2609 = fadd ssa_2353, ssa_2597 vec1 32 ssa_2610 = fadd ssa_2354, ssa_2598 vec1 32 ssa_2611 = fadd ssa_2355, ssa_2599 vec1 32 ssa_2612 = fadd ssa_2356, ssa_2600 vec1 32 ssa_2613 = fadd ssa_2357, ssa_2601 vec1 32 ssa_2614 = fadd ssa_2358, ssa_2602 vec1 32 ssa_2615 = fadd ssa_2359, ssa_2603 vec1 32 ssa_2616 = fadd ssa_2360, ssa_2604 vec1 32 ssa_2617 = fadd ssa_2361, ssa_2605 vec1 32 ssa_2618 = fadd ssa_2362, ssa_2606 vec1 32 ssa_2619 = fadd ssa_2363, ssa_2607 vec1 32 ssa_2620 = fadd ssa_2364, ssa_2608 vec1 32 ssa_2621 = fmul ssa_2488, ssa_349 vec1 32 ssa_2622 = fmul ssa_2527, ssa_349 vec1 32 ssa_2623 = fmul ssa_2566, ssa_349 vec1 32 
ssa_2624 = fmul ssa_2495, ssa_349 vec1 32 ssa_2625 = fmul ssa_2534, ssa_349 vec1 32 ssa_2626 = fmul ssa_2573, ssa_349 vec1 32 ssa_2627 = fmul ssa_2502, ssa_349 vec1 32 ssa_2628 = fmul ssa_2541, ssa_349 vec1 32 ssa_2629 = fmul ssa_2580, ssa_349 vec1 32 ssa_2630 = fmul ssa_2509, ssa_349 vec1 32 ssa_2631 = fmul ssa_2548, ssa_349 vec1 32 ssa_2632 = fmul ssa_2587, ssa_349 vec1 32 ssa_2633 = fadd ssa_2609, ssa_2621 vec1 32 ssa_2634 = fadd ssa_2610, ssa_2622 vec1 32 ssa_2635 = fadd ssa_2611, ssa_2623 vec1 32 ssa_2636 = fadd ssa_2612, ssa_2624 vec1 32 ssa_2637 = fadd ssa_2613, ssa_2625 vec1 32 ssa_2638 = fadd ssa_2614, ssa_2626 vec1 32 ssa_2639 = fadd ssa_2615, ssa_2627 vec1 32 ssa_2640 = fadd ssa_2616, ssa_2628 vec1 32 ssa_2641 = fadd ssa_2617, ssa_2629 vec1 32 ssa_2642 = fadd ssa_2618, ssa_2630 vec1 32 ssa_2643 = fadd ssa_2619, ssa_2631 vec1 32 ssa_2644 = fadd ssa_2620, ssa_2632 vec1 32 ssa_2645 = iadd ssa_1185, ssa_39.z vec1 32 ssa_2647 = ushr ssa_2645, ssa_2646 vec4 32 ssa_2648 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2649 = deref_cast (SSBO *)ssa_2648 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2650 = deref_struct &ssa_2649->field0 (ssbo uint[]) /* &((SSBO *)ssa_2648)->field0 */ vec4 32 ssa_2651 = deref_array &(*ssa_2650)[ssa_2647] (ssbo uint) /* &((SSBO *)ssa_2648)->field0[ssa_2647] */ vec1 32 ssa_2652 = intrinsic load_deref (ssa_2651) (access=16) vec1 32 ssa_2654 = iadd ssa_2647, ssa_2653 vec4 32 ssa_2655 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2656 = deref_cast (SSBO *)ssa_2655 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2657 = deref_struct &ssa_2656->field0 (ssbo uint[]) /* &((SSBO *)ssa_2655)->field0 */ vec4 32 ssa_2658 = deref_array &(*ssa_2657)[ssa_2654] (ssbo uint) /* &((SSBO *)ssa_2655)->field0[ssa_2654] */ vec1 32 ssa_2659 = intrinsic load_deref (ssa_2658) (access=16) vec1 32 ssa_2661 = iadd ssa_2647, ssa_2660 vec4 32 ssa_2662 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2663 = deref_cast (SSBO *)ssa_2662 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2664 = deref_struct &ssa_2663->field0 (ssbo uint[]) /* &((SSBO *)ssa_2662)->field0 */ vec4 32 ssa_2665 = deref_array &(*ssa_2664)[ssa_2661] (ssbo uint) /* &((SSBO *)ssa_2662)->field0[ssa_2661] */ vec1 32 ssa_2666 = intrinsic load_deref (ssa_2665) (access=16) vec1 32 ssa_2668 = iadd ssa_2647, ssa_2667 vec4 32 ssa_2669 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2670 = deref_cast (SSBO *)ssa_2669 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2671 = deref_struct &ssa_2670->field0 (ssbo uint[]) /* &((SSBO *)ssa_2669)->field0 */ vec4 32 ssa_2672 = deref_array &(*ssa_2671)[ssa_2668] (ssbo uint) /* &((SSBO *)ssa_2669)->field0[ssa_2668] */ vec1 32 ssa_2673 = intrinsic load_deref (ssa_2672) (access=16) vec1 32 ssa_2684 = iadd ssa_2645, ssa_2683 vec1 32 ssa_2686 = ushr ssa_2684, ssa_2685 vec4 32 ssa_2687 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2688 = deref_cast (SSBO *)ssa_2687 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2689 = deref_struct &ssa_2688->field0 (ssbo uint[]) /* &((SSBO *)ssa_2687)->field0 */ vec4 32 ssa_2690 = deref_array &(*ssa_2689)[ssa_2686] (ssbo uint) /* &((SSBO *)ssa_2687)->field0[ssa_2686] */ vec1 32 ssa_2691 = intrinsic load_deref (ssa_2690) (access=16) vec1 32 ssa_2693 = iadd ssa_2686, ssa_2692 vec4 32 ssa_2694 = 
intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2695 = deref_cast (SSBO *)ssa_2694 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2696 = deref_struct &ssa_2695->field0 (ssbo uint[]) /* &((SSBO *)ssa_2694)->field0 */ vec4 32 ssa_2697 = deref_array &(*ssa_2696)[ssa_2693] (ssbo uint) /* &((SSBO *)ssa_2694)->field0[ssa_2693] */ vec1 32 ssa_2698 = intrinsic load_deref (ssa_2697) (access=16) vec1 32 ssa_2700 = iadd ssa_2686, ssa_2699 vec4 32 ssa_2701 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2702 = deref_cast (SSBO *)ssa_2701 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2703 = deref_struct &ssa_2702->field0 (ssbo uint[]) /* &((SSBO *)ssa_2701)->field0 */ vec4 32 ssa_2704 = deref_array &(*ssa_2703)[ssa_2700] (ssbo uint) /* &((SSBO *)ssa_2701)->field0[ssa_2700] */ vec1 32 ssa_2705 = intrinsic load_deref (ssa_2704) (access=16) vec1 32 ssa_2707 = iadd ssa_2686, ssa_2706 vec4 32 ssa_2708 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2709 = deref_cast (SSBO *)ssa_2708 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2710 = deref_struct &ssa_2709->field0 (ssbo uint[]) /* &((SSBO *)ssa_2708)->field0 */ vec4 32 ssa_2711 = deref_array &(*ssa_2710)[ssa_2707] (ssbo uint) /* &((SSBO *)ssa_2708)->field0[ssa_2707] */ vec1 32 ssa_2712 = intrinsic load_deref (ssa_2711) (access=16) vec1 32 ssa_2723 = iadd ssa_2645, ssa_2722 vec1 32 ssa_2725 = ushr ssa_2723, ssa_2724 vec4 32 ssa_2726 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2727 = deref_cast (SSBO *)ssa_2726 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2728 = deref_struct &ssa_2727->field0 (ssbo uint[]) /* &((SSBO *)ssa_2726)->field0 */ vec4 32 ssa_2729 = deref_array &(*ssa_2728)[ssa_2725] (ssbo uint) /* &((SSBO *)ssa_2726)->field0[ssa_2725] */ vec1 32 ssa_2730 = intrinsic load_deref (ssa_2729) (access=16) vec1 32 ssa_2732 = iadd ssa_2725, ssa_2731 vec4 32 ssa_2733 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2734 = deref_cast (SSBO *)ssa_2733 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2735 = deref_struct &ssa_2734->field0 (ssbo uint[]) /* &((SSBO *)ssa_2733)->field0 */ vec4 32 ssa_2736 = deref_array &(*ssa_2735)[ssa_2732] (ssbo uint) /* &((SSBO *)ssa_2733)->field0[ssa_2732] */ vec1 32 ssa_2737 = intrinsic load_deref (ssa_2736) (access=16) vec1 32 ssa_2739 = iadd ssa_2725, ssa_2738 vec4 32 ssa_2740 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2741 = deref_cast (SSBO *)ssa_2740 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2742 = deref_struct &ssa_2741->field0 (ssbo uint[]) /* &((SSBO *)ssa_2740)->field0 */ vec4 32 ssa_2743 = deref_array &(*ssa_2742)[ssa_2739] (ssbo uint) /* &((SSBO *)ssa_2740)->field0[ssa_2739] */ vec1 32 ssa_2744 = intrinsic load_deref (ssa_2743) (access=16) vec1 32 ssa_2746 = iadd ssa_2725, ssa_2745 vec4 32 ssa_2747 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2748 = deref_cast (SSBO *)ssa_2747 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2749 = deref_struct &ssa_2748->field0 (ssbo uint[]) /* &((SSBO *)ssa_2747)->field0 */ vec4 32 ssa_2750 = deref_array &(*ssa_2749)[ssa_2746] (ssbo uint) /* &((SSBO *)ssa_2747)->field0[ssa_2746] */ vec1 32 ssa_2751 = intrinsic load_deref (ssa_2750) (access=16) vec1 32 ssa_2761 = iadd ssa_1302, ssa_39.z 
vec1 32 ssa_2763 = ushr ssa_2761, ssa_2762 vec4 32 ssa_2764 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2765 = deref_cast (SSBO *)ssa_2764 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2766 = deref_struct &ssa_2765->field0 (ssbo uint[]) /* &((SSBO *)ssa_2764)->field0 */ vec4 32 ssa_2767 = deref_array &(*ssa_2766)[ssa_2763] (ssbo uint) /* &((SSBO *)ssa_2764)->field0[ssa_2763] */ vec1 32 ssa_2768 = intrinsic load_deref (ssa_2767) (access=16) vec1 32 ssa_2770 = iadd ssa_2763, ssa_2769 vec4 32 ssa_2771 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2772 = deref_cast (SSBO *)ssa_2771 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2773 = deref_struct &ssa_2772->field0 (ssbo uint[]) /* &((SSBO *)ssa_2771)->field0 */ vec4 32 ssa_2774 = deref_array &(*ssa_2773)[ssa_2770] (ssbo uint) /* &((SSBO *)ssa_2771)->field0[ssa_2770] */ vec1 32 ssa_2775 = intrinsic load_deref (ssa_2774) (access=16) vec1 32 ssa_2777 = iadd ssa_2763, ssa_2776 vec4 32 ssa_2778 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2779 = deref_cast (SSBO *)ssa_2778 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2780 = deref_struct &ssa_2779->field0 (ssbo uint[]) /* &((SSBO *)ssa_2778)->field0 */ vec4 32 ssa_2781 = deref_array &(*ssa_2780)[ssa_2777] (ssbo uint) /* &((SSBO *)ssa_2778)->field0[ssa_2777] */ vec1 32 ssa_2782 = intrinsic load_deref (ssa_2781) (access=16) vec1 32 ssa_2784 = iadd ssa_2763, ssa_2783 vec4 32 ssa_2785 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2786 = deref_cast (SSBO *)ssa_2785 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2787 = deref_struct &ssa_2786->field0 (ssbo uint[]) /* &((SSBO *)ssa_2785)->field0 */ vec4 32 ssa_2788 = deref_array &(*ssa_2787)[ssa_2784] (ssbo uint) /* &((SSBO *)ssa_2785)->field0[ssa_2784] */ vec1 32 ssa_2789 = intrinsic load_deref (ssa_2788) (access=16) vec1 32 ssa_2800 = iadd ssa_2761, ssa_2799 vec1 32 ssa_2802 = ushr ssa_2800, ssa_2801 vec4 32 ssa_2803 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2804 = deref_cast (SSBO *)ssa_2803 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2805 = deref_struct &ssa_2804->field0 (ssbo uint[]) /* &((SSBO *)ssa_2803)->field0 */ vec4 32 ssa_2806 = deref_array &(*ssa_2805)[ssa_2802] (ssbo uint) /* &((SSBO *)ssa_2803)->field0[ssa_2802] */ vec1 32 ssa_2807 = intrinsic load_deref (ssa_2806) (access=16) vec1 32 ssa_2809 = iadd ssa_2802, ssa_2808 vec4 32 ssa_2810 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2811 = deref_cast (SSBO *)ssa_2810 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2812 = deref_struct &ssa_2811->field0 (ssbo uint[]) /* &((SSBO *)ssa_2810)->field0 */ vec4 32 ssa_2813 = deref_array &(*ssa_2812)[ssa_2809] (ssbo uint) /* &((SSBO *)ssa_2810)->field0[ssa_2809] */ vec1 32 ssa_2814 = intrinsic load_deref (ssa_2813) (access=16) vec1 32 ssa_2816 = iadd ssa_2802, ssa_2815 vec4 32 ssa_2817 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2818 = deref_cast (SSBO *)ssa_2817 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2819 = deref_struct &ssa_2818->field0 (ssbo uint[]) /* &((SSBO *)ssa_2817)->field0 */ vec4 32 ssa_2820 = deref_array &(*ssa_2819)[ssa_2816] (ssbo uint) /* &((SSBO *)ssa_2817)->field0[ssa_2816] */ vec1 32 ssa_2821 = intrinsic load_deref 
(ssa_2820) (access=16) vec1 32 ssa_2823 = iadd ssa_2802, ssa_2822 vec4 32 ssa_2824 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2825 = deref_cast (SSBO *)ssa_2824 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2826 = deref_struct &ssa_2825->field0 (ssbo uint[]) /* &((SSBO *)ssa_2824)->field0 */ vec4 32 ssa_2827 = deref_array &(*ssa_2826)[ssa_2823] (ssbo uint) /* &((SSBO *)ssa_2824)->field0[ssa_2823] */ vec1 32 ssa_2828 = intrinsic load_deref (ssa_2827) (access=16) vec1 32 ssa_2839 = iadd ssa_2761, ssa_2838 vec1 32 ssa_2841 = ushr ssa_2839, ssa_2840 vec4 32 ssa_2842 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2843 = deref_cast (SSBO *)ssa_2842 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2844 = deref_struct &ssa_2843->field0 (ssbo uint[]) /* &((SSBO *)ssa_2842)->field0 */ vec4 32 ssa_2845 = deref_array &(*ssa_2844)[ssa_2841] (ssbo uint) /* &((SSBO *)ssa_2842)->field0[ssa_2841] */ vec1 32 ssa_2846 = intrinsic load_deref (ssa_2845) (access=16) vec1 32 ssa_2848 = iadd ssa_2841, ssa_2847 vec4 32 ssa_2849 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2850 = deref_cast (SSBO *)ssa_2849 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2851 = deref_struct &ssa_2850->field0 (ssbo uint[]) /* &((SSBO *)ssa_2849)->field0 */ vec4 32 ssa_2852 = deref_array &(*ssa_2851)[ssa_2848] (ssbo uint) /* &((SSBO *)ssa_2849)->field0[ssa_2848] */ vec1 32 ssa_2853 = intrinsic load_deref (ssa_2852) (access=16) vec1 32 ssa_2855 = iadd ssa_2841, ssa_2854 vec4 32 ssa_2856 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2857 = deref_cast (SSBO *)ssa_2856 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2858 = deref_struct &ssa_2857->field0 (ssbo uint[]) /* &((SSBO *)ssa_2856)->field0 */ vec4 32 ssa_2859 = deref_array &(*ssa_2858)[ssa_2855] (ssbo uint) /* &((SSBO *)ssa_2856)->field0[ssa_2855] */ vec1 32 ssa_2860 = intrinsic load_deref (ssa_2859) (access=16) vec1 32 ssa_2862 = iadd ssa_2841, ssa_2861 vec4 32 ssa_2863 = intrinsic load_vulkan_descriptor (ssa_5) (desc_type=SSBO /*7*/) vec4 32 ssa_2864 = deref_cast (SSBO *)ssa_2863 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2865 = deref_struct &ssa_2864->field0 (ssbo uint[]) /* &((SSBO *)ssa_2863)->field0 */ vec4 32 ssa_2866 = deref_array &(*ssa_2865)[ssa_2862] (ssbo uint) /* &((SSBO *)ssa_2863)->field0[ssa_2862] */ vec1 32 ssa_2867 = intrinsic load_deref (ssa_2866) (access=16) vec1 32 ssa_2877 = fmul ssa_2652, ssa_346 vec1 32 ssa_2878 = fmul ssa_2691, ssa_346 vec1 32 ssa_2879 = fmul ssa_2730, ssa_346 vec1 32 ssa_2880 = fmul ssa_2659, ssa_346 vec1 32 ssa_2881 = fmul ssa_2698, ssa_346 vec1 32 ssa_2882 = fmul ssa_2737, ssa_346 vec1 32 ssa_2883 = fmul ssa_2666, ssa_346 vec1 32 ssa_2884 = fmul ssa_2705, ssa_346 vec1 32 ssa_2885 = fmul ssa_2744, ssa_346 vec1 32 ssa_2886 = fmul ssa_2673, ssa_346 vec1 32 ssa_2887 = fmul ssa_2712, ssa_346 vec1 32 ssa_2888 = fmul ssa_2751, ssa_346 vec1 32 ssa_2889 = fadd ssa_2633, ssa_2877 vec1 32 ssa_2890 = fadd ssa_2634, ssa_2878 vec1 32 ssa_2891 = fadd ssa_2635, ssa_2879 vec1 32 ssa_2892 = fadd ssa_2636, ssa_2880 vec1 32 ssa_2893 = fadd ssa_2637, ssa_2881 vec1 32 ssa_2894 = fadd ssa_2638, ssa_2882 vec1 32 ssa_2895 = fadd ssa_2639, ssa_2883 vec1 32 ssa_2896 = fadd ssa_2640, ssa_2884 vec1 32 ssa_2897 = fadd ssa_2641, ssa_2885 vec1 32 ssa_2898 = fadd ssa_2642, ssa_2886 vec1 32 ssa_2899 = fadd ssa_2643, 
ssa_2887 vec1 32 ssa_2900 = fadd ssa_2644, ssa_2888 vec1 32 ssa_2901 = fmul ssa_2768, ssa_350 vec1 32 ssa_2902 = fmul ssa_2807, ssa_350 vec1 32 ssa_2903 = fmul ssa_2846, ssa_350 vec1 32 ssa_2904 = fmul ssa_2775, ssa_350 vec1 32 ssa_2905 = fmul ssa_2814, ssa_350 vec1 32 ssa_2906 = fmul ssa_2853, ssa_350 vec1 32 ssa_2907 = fmul ssa_2782, ssa_350 vec1 32 ssa_2908 = fmul ssa_2821, ssa_350 vec1 32 ssa_2909 = fmul ssa_2860, ssa_350 vec1 32 ssa_2910 = fmul ssa_2789, ssa_350 vec1 32 ssa_2911 = fmul ssa_2828, ssa_350 vec1 32 ssa_2912 = fmul ssa_2867, ssa_350 vec1 32 ssa_2913 = fadd ssa_2889, ssa_2901 vec1 32 ssa_2914 = fadd ssa_2890, ssa_2902 vec1 32 ssa_2915 = fadd ssa_2891, ssa_2903 vec1 32 ssa_2916 = fadd ssa_2892, ssa_2904 vec1 32 ssa_2917 = fadd ssa_2893, ssa_2905 vec1 32 ssa_2918 = fadd ssa_2894, ssa_2906 vec1 32 ssa_2919 = fadd ssa_2895, ssa_2907 vec1 32 ssa_2920 = fadd ssa_2896, ssa_2908 vec1 32 ssa_2921 = fadd ssa_2897, ssa_2909 vec1 32 ssa_2922 = fadd ssa_2898, ssa_2910 vec1 32 ssa_2923 = fadd ssa_2899, ssa_2911 vec1 32 ssa_2924 = fadd ssa_2900, ssa_2912 vec1 32 ssa_2925 = fmul ssa_2913, ssa_315 vec1 32 ssa_2926 = ffma ssa_316, ssa_2916, ssa_2925 vec1 32 ssa_2927 = ffma ssa_317, ssa_2919, ssa_2926 vec1 32 ssa_2928 = fadd ssa_2927, ssa_2922 vec1 32 ssa_2929 = fmul ssa_2914, ssa_315 vec1 32 ssa_2930 = ffma ssa_316, ssa_2917, ssa_2929 vec1 32 ssa_2931 = ffma ssa_317, ssa_2920, ssa_2930 vec1 32 ssa_2932 = fadd ssa_2931, ssa_2923 vec1 32 ssa_2933 = fmul ssa_2915, ssa_315 vec1 32 ssa_2934 = ffma ssa_316, ssa_2918, ssa_2933 vec1 32 ssa_2935 = ffma ssa_317, ssa_2921, ssa_2934 vec1 32 ssa_2936 = fadd ssa_2935, ssa_2924 vec1 32 ssa_3646 = deref_var &phi@24 (function_temp float) intrinsic store_deref (ssa_3646, ssa_2936) (wrmask=x /*1*/, access=0) vec1 32 ssa_3644 = deref_var &phi@23 (function_temp float) intrinsic store_deref (ssa_3644, ssa_2932) (wrmask=x /*1*/, access=0) vec1 32 ssa_3642 = deref_var &phi@22 (function_temp float) intrinsic store_deref (ssa_3642, ssa_2928) (wrmask=x /*1*/, access=0) /* succs: block_30 block_57 */ if ssa_1485 { block block_30: /* preds: block_29 */ vec1 32 ssa_2938 = f2u32 ssa_323.x vec1 32 ssa_2940 = umin ssa_2938, ssa_2939 vec1 1 ssa_2942 = ieq ssa_2940, ssa_2941 vec1 32 ssa_3640 = deref_var &phi@21 (function_temp float) intrinsic store_deref (ssa_3640, ssa_2936) (wrmask=x /*1*/, access=0) vec1 32 ssa_3638 = deref_var &phi@20 (function_temp float) intrinsic store_deref (ssa_3638, ssa_2932) (wrmask=x /*1*/, access=0) vec1 32 ssa_3636 = deref_var &phi@19 (function_temp float) intrinsic store_deref (ssa_3636, ssa_2928) (wrmask=x /*1*/, access=0) /* succs: block_31 block_32 */ if ssa_2942 { block block_31: /* preds: block_30 */ /* succs: block_56 */ } else { block block_32: /* preds: block_30 */ vec1 32 ssa_3625 = deref_var &phi@15 (function_temp uint) intrinsic store_deref (ssa_3625, ssa_3624) (wrmask=x /*1*/, access=0) vec1 1 ssa_2943 = load_const (false) vec1 32 ssa_2944 = deref_var &loop_break@13 (function_temp bool) intrinsic store_deref (ssa_2944, ssa_2943) (wrmask=x /*1*/, access=0) /* succs: block_33 */ loop { block block_33: /* preds: block_32 block_54 */ vec1 1 ssa_2945 = load_const (false) vec1 32 ssa_2946 = deref_var &loop_continue@14 (function_temp bool) intrinsic store_deref (ssa_2946, ssa_2945) (wrmask=x /*1*/, access=0) vec1 32 ssa_2947 = deref_var &phi@15 (function_temp uint) vec1 32 ssa_2948 = intrinsic load_deref (ssa_2947) (access=0) vec1 32 ssa_2950 = ishl ssa_2948, ssa_2949 vec4 32 ssa_2951 = intrinsic load_vulkan_descriptor (ssa_9) 
(desc_type=SSBO /*7*/) vec4 32 ssa_2952 = deref_cast (BindlessCBV *)ssa_2951 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2953 = deref_struct &ssa_2952->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2951)->field0 */ vec4 32 ssa_2954 = deref_array &(*ssa_2953)[ssa_2950] (ssbo vec4) /* &((BindlessCBV *)ssa_2951)->field0[ssa_2950] */ vec4 32 ssa_2955 = intrinsic load_deref (ssa_2954) (access=16) vec2 32 ssa_2963 = unpack_half_2x16 ssa_2955.x vec1 32 ssa_2966 = ushr ssa_2955.x, ssa_2965 vec2 32 ssa_2967 = unpack_half_2x16 ssa_2966 vec2 32 ssa_2969 = unpack_half_2x16 ssa_2955.y vec1 32 ssa_2972 = ushr ssa_2955.y, ssa_2971 vec2 32 ssa_2973 = unpack_half_2x16 ssa_2972 vec2 32 ssa_2977 = unpack_half_2x16 ssa_2955.z vec1 32 ssa_2980 = ushr ssa_2955.z, ssa_2979 vec2 32 ssa_2981 = unpack_half_2x16 ssa_2980 vec2 32 ssa_2983 = unpack_half_2x16 ssa_2955.w vec1 32 ssa_2986 = ushr ssa_2955.w, ssa_2985 vec2 32 ssa_2987 = unpack_half_2x16 ssa_2986 vec1 32 ssa_2990 = ior ssa_2950, ssa_2989 vec4 32 ssa_2991 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/) vec4 32 ssa_2992 = deref_cast (BindlessCBV *)ssa_2991 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2993 = deref_struct &ssa_2992->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2991)->field0 */ vec4 32 ssa_2994 = deref_array &(*ssa_2993)[ssa_2990] (ssbo vec4) /* &((BindlessCBV *)ssa_2991)->field0[ssa_2990] */ vec4 32 ssa_2995 = intrinsic load_deref (ssa_2994) (access=16) vec1 32 ssa_3001 = ior ssa_2950, ssa_3000 vec4 32 ssa_3002 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/) vec4 32 ssa_3003 = deref_cast (BindlessCBV *)ssa_3002 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3004 = deref_struct &ssa_3003->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3002)->field0 */ vec4 32 ssa_3005 = deref_array &(*ssa_3004)[ssa_3001] (ssbo vec4) /* &((BindlessCBV *)ssa_3002)->field0[ssa_3001] */ vec4 32 ssa_3006 = intrinsic load_deref (ssa_3005) (access=16) vec1 32 ssa_3011 = ior ssa_2950, ssa_3010 vec4 32 ssa_3012 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/) vec4 32 ssa_3013 = deref_cast (BindlessCBV *)ssa_3012 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3014 = deref_struct &ssa_3013->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3012)->field0 */ vec4 32 ssa_3015 = deref_array &(*ssa_3014)[ssa_3011] (ssbo vec4) /* &((BindlessCBV *)ssa_3012)->field0[ssa_3011] */ vec4 32 ssa_3016 = intrinsic load_deref (ssa_3015) (access=16) vec1 1 ssa_3024 = ieq ssa_3016.w, ssa_3023 vec1 32 ssa_3025 = fsub ssa_315, ssa_2995.x vec1 32 ssa_3026 = fsub ssa_316, ssa_2995.y vec1 32 ssa_3027 = fsub ssa_317, ssa_2995.z vec1 32 ssa_3028 = fsub ssa_3006.x, ssa_2995.x vec1 32 ssa_3029 = fsub ssa_3006.y, ssa_2995.y vec1 32 ssa_3030 = fsub ssa_3006.z, ssa_2995.z vec3 32 ssa_3031 = vec3 ssa_3025, ssa_3026, ssa_3027 vec3 32 ssa_3032 = vec3 ssa_3028, ssa_3029, ssa_3030 vec1 32 ssa_3033 = fdot3 ssa_3031, ssa_3032 vec3 32 ssa_3034 = vec3 ssa_3028, ssa_3029, ssa_3030 vec3 32 ssa_3035 = vec3 ssa_3028, ssa_3029, ssa_3030 vec1 32 ssa_3036 = fdot3 ssa_3034, ssa_3035 vec1 32 ssa_3037 = fdiv ssa_3033, ssa_3036 vec1 32 ssa_3038 = fmul ssa_3037, ssa_3028 vec1 32 ssa_3039 = fmul ssa_3037, ssa_3029 vec1 32 ssa_3040 = fmul ssa_3037, ssa_3030 vec1 32 ssa_3041 = fsub ssa_2995.x, ssa_315 vec1 32 ssa_3042 = fadd ssa_3041, ssa_3038 vec1 32 ssa_3043 = fsub ssa_2995.y, ssa_316 vec1 32 ssa_3044 = fadd ssa_3043, ssa_3039 vec1 
32 ssa_3045 = fsub ssa_2995.z, ssa_317 vec1 32 ssa_3046 = fadd ssa_3045, ssa_3040 vec3 32 ssa_3047 = vec3 ssa_3042, ssa_3044, ssa_3046 vec3 32 ssa_3048 = vec3 ssa_3042, ssa_3044, ssa_3046 vec1 32 ssa_3049 = fdot3 ssa_3047, ssa_3048 vec1 32 ssa_3050 = fmul ssa_2995.w, ssa_2995.w vec1 1 ssa_3051 = fge! ssa_3050, ssa_3049 vec4 32 ssa_3052 = vec4 ssa_2977.x, ssa_2981.x, ssa_2983.x, ssa_2987.x vec4 32 ssa_3054 = vec4 ssa_315, ssa_316, ssa_317, ssa_3053 vec1 32 ssa_3055 = fdot4 ssa_3052, ssa_3054 /* succs: block_34 block_41 */ if ssa_3024 { block block_34: /* preds: block_33 */ vec4 32 ssa_3056 = intrinsic load_vulkan_descriptor (ssa_9) (desc_type=SSBO /*7*/) vec4 32 ssa_3057 = deref_cast (BindlessCBV *)ssa_3056 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3058 = deref_struct &ssa_3057->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3056)->field0 */ vec4 32 ssa_3059 = deref_array &(*ssa_3058)[ssa_3011] (ssbo vec4) /* &((BindlessCBV *)ssa_3056)->field0[ssa_3011] */ vec4 32 ssa_3060 = intrinsic load_deref (ssa_3059) (access=16) vec4 32 ssa_3061 = vec4 ssa_2963.x, ssa_2967.x, ssa_2969.x, ssa_2973.x vec4 32 ssa_3063 = vec4 ssa_315, ssa_316, ssa_317, ssa_3062 vec1 32 ssa_3064 = fdot4 ssa_3061, ssa_3063 vec1 1 ssa_3066 = flt! ssa_3065, ssa_3064 vec1 1 ssa_3067 = iand ssa_3051, ssa_3066 vec1 1 ssa_3069 = flt! ssa_3068, ssa_3055 vec1 1 ssa_3070 = iand ssa_3067, ssa_3069 /* succs: block_35 block_36 */ if ssa_3070 { block block_35: /* preds: block_34 */ vec4 32 ssa_3075 = vec4 ssa_315, ssa_316, ssa_317, ssa_3074 vec4 32 ssa_3076 = vec4 ssa_2963.x, ssa_2967.x, ssa_2969.x, ssa_2973.x vec1 32 ssa_3077 = fdot4 ssa_3075, ssa_3076 vec1 32 ssa_3078 = fmul ssa_2963.x, ssa_3077 vec1 32 ssa_3080 = fsub ssa_3079, ssa_3078 vec1 32 ssa_3081 = fmul ssa_2967.x, ssa_3077 vec1 32 ssa_3083 = fsub ssa_3082, ssa_3081 vec1 32 ssa_3084 = fmul ssa_2969.x, ssa_3077 vec1 32 ssa_3086 = fsub ssa_3085, ssa_3084 vec3 32 ssa_3087 = vec3 ssa_3080, ssa_3083, ssa_3086 vec3 32 ssa_3088 = vec3 ssa_3080, ssa_3083, ssa_3086 vec1 32 ssa_3089 = fdot3 ssa_3087, ssa_3088 vec1 32 ssa_3091 = fmul ssa_3089, ssa_3090 vec1 32 ssa_3094 = fmax! ssa_3091, ssa_3092 vec1 32 ssa_3095 = fmin! 
ssa_3094, ssa_3093 vec1 32 ssa_3096 = fsub ssa_3060.x, ssa_2928 vec1 32 ssa_3097 = fsub ssa_3060.y, ssa_2932 vec1 32 ssa_3098 = fsub ssa_3060.z, ssa_2936 vec1 32 ssa_3099 = fmul ssa_3095, ssa_3096 vec1 32 ssa_3100 = fmul ssa_3095, ssa_3097 vec1 32 ssa_3101 = fmul ssa_3095, ssa_3098 vec1 32 ssa_3102 = fadd ssa_3099, ssa_2928 vec1 32 ssa_3103 = fadd ssa_3100, ssa_2932 vec1 32 ssa_3104 = fadd ssa_3101, ssa_2936 vec1 32 ssa_3634 = deref_var &phi@18 (function_temp float) intrinsic store_deref (ssa_3634, ssa_3104) (wrmask=x /*1*/, access=0) vec1 32 ssa_3631 = deref_var &phi@17 (function_temp float) intrinsic store_deref (ssa_3631, ssa_3103) (wrmask=x /*1*/, access=0) vec1 32 ssa_3628 = deref_var &phi@16 (function_temp float) intrinsic store_deref (ssa_3628, ssa_3102) (wrmask=x /*1*/, access=0) break /* succs: block_55 */ } else { block block_36: /* preds: block_34 */ /* succs: block_37 */ } block block_37: /* preds: block_36 */ vec1 32 ssa_3105 = deref_var &loop_break@13 (function_temp bool) vec1 1 ssa_3106 = intrinsic load_deref (ssa_3105) (access=0) /* succs: block_38 block_39 */ if ssa_3106 { block block_38: /* preds: block_37 */ break /* succs: block_55 */ } else { block block_39: /* preds: block_37 */ /* succs: block_40 */ } block block_40: /* preds: block_39 */ /* succs: block_48 */ } else { block block_41: /* preds: block_33 */ vec1 1 ssa_3108 = fge! ssa_3055, ssa_3107 vec1 1 ssa_3109 = iand ssa_3051, ssa_3108 /* succs: block_42 block_43 */ if ssa_3109 { block block_42: /* preds: block_41 */ vec1 32 ssa_3110 = fadd ssa_3038, ssa_2995.x vec1 32 ssa_3111 = fadd ssa_3039, ssa_2995.y vec1 32 ssa_3112 = fadd ssa_3040, ssa_2995.z vec4 32 ssa_3114 = vec4 ssa_315, ssa_316, ssa_317, ssa_3113 vec4 32 ssa_3115 = vec4 ssa_2963.x, ssa_2967.x, ssa_2969.x, ssa_2973.x vec1 32 ssa_3116 = fdot4 ssa_3114, ssa_3115 vec1 32 ssa_3117 = fmul ssa_3116, ssa_2963.x vec1 32 ssa_3118 = fmul ssa_3116, ssa_2967.x vec1 32 ssa_3119 = fmul ssa_3116, ssa_2969.x vec1 32 ssa_3120 = fsub ssa_315, ssa_3117 vec1 32 ssa_3121 = fsub ssa_316, ssa_3118 vec1 32 ssa_3122 = fsub ssa_317, ssa_3119 vec1 32 ssa_3123 = fsub ssa_3120, ssa_3110 vec1 32 ssa_3124 = fsub ssa_3121, ssa_3111 vec1 32 ssa_3125 = fsub ssa_3122, ssa_3112 vec3 32 ssa_3126 = vec3 ssa_3123, ssa_3124, ssa_3125 vec3 32 ssa_3127 = vec3 ssa_3123, ssa_3124, ssa_3125 vec1 32 ssa_3128 = fdot3 ssa_3126, ssa_3127 vec1 32 ssa_3129 = frsq ssa_3128 vec1 32 ssa_3130 = fmul ssa_3129, ssa_2995.w vec1 32 ssa_3131 = fmul ssa_3130, ssa_3123 vec1 32 ssa_3132 = fmul ssa_3130, ssa_3124 vec1 32 ssa_3133 = fmul ssa_3130, ssa_3125 vec1 32 ssa_3134 = fadd ssa_3131, ssa_3110 vec1 32 ssa_3135 = fadd ssa_3132, ssa_3111 vec1 32 ssa_3136 = fadd ssa_3133, ssa_3112 vec1 32 ssa_3137 = fmul ssa_3134, ssa_2913 vec1 32 ssa_3138 = ffma ssa_3135, ssa_2916, ssa_3137 vec1 32 ssa_3139 = ffma ssa_3136, ssa_2919, ssa_3138 vec1 32 ssa_3140 = fadd ssa_3139, ssa_2922 vec1 32 ssa_3141 = fmul ssa_3134, ssa_2914 vec1 32 ssa_3142 = ffma ssa_3135, ssa_2917, ssa_3141 vec1 32 ssa_3143 = ffma ssa_3136, ssa_2920, ssa_3142 vec1 32 ssa_3144 = fadd ssa_3143, ssa_2923 vec1 32 ssa_3145 = fmul ssa_3134, ssa_2915 vec1 32 ssa_3146 = ffma ssa_3135, ssa_2918, ssa_3145 vec1 32 ssa_3147 = ffma ssa_3136, ssa_2921, ssa_3146 vec1 32 ssa_3148 = fadd ssa_3147, ssa_2924 vec1 32 ssa_3635 = deref_var &phi@18 (function_temp float) intrinsic store_deref (ssa_3635, ssa_3148) (wrmask=x /*1*/, access=0) vec1 32 ssa_3632 = deref_var &phi@17 (function_temp float) intrinsic store_deref (ssa_3632, ssa_3144) (wrmask=x /*1*/, access=0) vec1 32 ssa_3629 
= deref_var &phi@16 (function_temp float) intrinsic store_deref (ssa_3629, ssa_3140) (wrmask=x /*1*/, access=0) break /* succs: block_55 */ } else { block block_43: /* preds: block_41 */ /* succs: block_44 */ } block block_44: /* preds: block_43 */ vec1 32 ssa_3149 = deref_var &loop_break@13 (function_temp bool) vec1 1 ssa_3150 = intrinsic load_deref (ssa_3149) (access=0) /* succs: block_45 block_46 */ if ssa_3150 { block block_45: /* preds: block_44 */ break /* succs: block_55 */ } else { block block_46: /* preds: block_44 */ /* succs: block_47 */ } block block_47: /* preds: block_46 */ /* succs: block_48 */ } block block_48: /* preds: block_40 block_47 */ vec1 32 ssa_3151 = deref_var &loop_break@13 (function_temp bool) vec1 1 ssa_3152 = intrinsic load_deref (ssa_3151) (access=0) /* succs: block_49 block_50 */ if ssa_3152 { block block_49: /* preds: block_48 */ break /* succs: block_55 */ } else { block block_50: /* preds: block_48 */ /* succs: block_51 */ } block block_51: /* preds: block_50 */ vec1 32 ssa_3154 = iadd ssa_2948, ssa_3153 vec1 1 ssa_3155 = ult ssa_3154, ssa_2940 vec1 32 ssa_3633 = deref_var &phi@18 (function_temp float) intrinsic store_deref (ssa_3633, ssa_2936) (wrmask=x /*1*/, access=0) vec1 32 ssa_3630 = deref_var &phi@17 (function_temp float) intrinsic store_deref (ssa_3630, ssa_2932) (wrmask=x /*1*/, access=0) vec1 32 ssa_3627 = deref_var &phi@16 (function_temp float) intrinsic store_deref (ssa_3627, ssa_2928) (wrmask=x /*1*/, access=0) vec1 32 ssa_3626 = deref_var &phi@15 (function_temp uint) intrinsic store_deref (ssa_3626, ssa_3154) (wrmask=x /*1*/, access=0) /* succs: block_52 block_53 */ if ssa_3155 { block block_52: /* preds: block_51 */ /* succs: block_54 */ } else { block block_53: /* preds: block_51 */ break /* succs: block_55 */ } block block_54: /* preds: block_52 */ continue /* succs: block_33 */ } block block_55: /* preds: block_35 block_38 block_42 block_45 block_49 block_53 */ vec1 32 ssa_3156 = deref_var &phi@16 (function_temp float) vec1 32 ssa_3157 = intrinsic load_deref (ssa_3156) (access=0) vec1 32 ssa_3158 = deref_var &phi@17 (function_temp float) vec1 32 ssa_3159 = intrinsic load_deref (ssa_3158) (access=0) vec1 32 ssa_3160 = deref_var &phi@18 (function_temp float) vec1 32 ssa_3161 = intrinsic load_deref (ssa_3160) (access=0) vec1 32 ssa_3641 = deref_var &phi@21 (function_temp float) intrinsic store_deref (ssa_3641, ssa_3161) (wrmask=x /*1*/, access=0) vec1 32 ssa_3639 = deref_var &phi@20 (function_temp float) intrinsic store_deref (ssa_3639, ssa_3159) (wrmask=x /*1*/, access=0) vec1 32 ssa_3637 = deref_var &phi@19 (function_temp float) intrinsic store_deref (ssa_3637, ssa_3157) (wrmask=x /*1*/, access=0) /* succs: block_56 */ } block block_56: /* preds: block_31 block_55 */ vec1 32 ssa_3162 = deref_var &phi@19 (function_temp float) vec1 32 ssa_3163 = intrinsic load_deref (ssa_3162) (access=0) vec1 32 ssa_3164 = deref_var &phi@20 (function_temp float) vec1 32 ssa_3165 = intrinsic load_deref (ssa_3164) (access=0) vec1 32 ssa_3166 = deref_var &phi@21 (function_temp float) vec1 32 ssa_3167 = intrinsic load_deref (ssa_3166) (access=0) vec1 32 ssa_3647 = deref_var &phi@24 (function_temp float) intrinsic store_deref (ssa_3647, ssa_3167) (wrmask=x /*1*/, access=0) vec1 32 ssa_3645 = deref_var &phi@23 (function_temp float) intrinsic store_deref (ssa_3645, ssa_3165) (wrmask=x /*1*/, access=0) vec1 32 ssa_3643 = deref_var &phi@22 (function_temp float) intrinsic store_deref (ssa_3643, ssa_3163) (wrmask=x /*1*/, access=0) /* succs: block_58 */ } else { block 
block_57: /* preds: block_29 */ /* succs: block_58 */ } block block_58: /* preds: block_56 block_57 */ vec1 32 ssa_3168 = deref_var &phi@22 (function_temp float) vec1 32 ssa_3169 = intrinsic load_deref (ssa_3168) (access=0) vec1 32 ssa_3170 = deref_var &phi@23 (function_temp float) vec1 32 ssa_3171 = intrinsic load_deref (ssa_3170) (access=0) vec1 32 ssa_3172 = deref_var &phi@24 (function_temp float) vec1 32 ssa_3173 = intrinsic load_deref (ssa_3172) (access=0) vec4 32 ssa_3174 = intrinsic load_vulkan_descriptor (ssa_23) (desc_type=SSBO /*7*/) vec4 32 ssa_3175 = deref_cast (BindlessCBV *)ssa_3174 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3176 = deref_struct &ssa_3175->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3174)->field0 */ vec1 32 ssa_3177 = load_const (0x00000002 = 0.000000) vec4 32 ssa_3178 = deref_array &(*ssa_3176)[2] (ssbo vec4) /* &((BindlessCBV *)ssa_3174)->field0[2] */ vec4 32 ssa_3179 = intrinsic load_deref (ssa_3178) (access=16) vec4 32 ssa_3184 = intrinsic load_vulkan_descriptor (ssa_23) (desc_type=SSBO /*7*/) vec4 32 ssa_3185 = deref_cast (BindlessCBV *)ssa_3184 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3186 = deref_struct &ssa_3185->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3184)->field0 */ vec1 32 ssa_3187 = load_const (0x00000003 = 0.000000) vec4 32 ssa_3188 = deref_array &(*ssa_3186)[3] (ssbo vec4) /* &((BindlessCBV *)ssa_3184)->field0[3] */ vec4 32 ssa_3189 = intrinsic load_deref (ssa_3188) (access=16) vec4 32 ssa_3194 = intrinsic load_vulkan_descriptor (ssa_23) (desc_type=SSBO /*7*/) vec4 32 ssa_3195 = deref_cast (BindlessCBV *)ssa_3194 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3196 = deref_struct &ssa_3195->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3194)->field0 */ vec1 32 ssa_3197 = load_const (0x00000004 = 0.000000) vec4 32 ssa_3198 = deref_array &(*ssa_3196)[4] (ssbo vec4) /* &((BindlessCBV *)ssa_3194)->field0[4] */ vec4 32 ssa_3199 = intrinsic load_deref (ssa_3198) (access=16) vec4 32 ssa_3204 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3205 = deref_cast (BindlessCBV *)ssa_3204 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3206 = deref_struct &ssa_3205->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3204)->field0 */ vec1 32 ssa_3207 = load_const (0x00000026 = 0.000000) vec4 32 ssa_3208 = deref_array &(*ssa_3206)[38] (ssbo vec4) /* &((BindlessCBV *)ssa_3204)->field0[38] */ vec4 32 ssa_3209 = intrinsic load_deref (ssa_3208) (access=16) vec1 32 ssa_3221 = isub ssa_3179.w, ssa_3209.x vec1 32 ssa_3222 = isub ssa_3189.w, ssa_3209.y vec1 32 ssa_3223 = isub ssa_3199.w, ssa_3209.z vec1 32 ssa_3224 = i2f32 ssa_3221 vec1 32 ssa_3225 = i2f32 ssa_3222 vec1 32 ssa_3226 = i2f32 ssa_3223 vec1 32 ssa_3228 = fmul ssa_3224, ssa_3227 vec1 32 ssa_3230 = fmul ssa_3225, ssa_3229 vec1 32 ssa_3232 = fmul ssa_3226, ssa_3231 vec1 32 ssa_3233 = fmul ssa_3179.x, ssa_3169 vec1 32 ssa_3234 = ffma ssa_3179.y, ssa_3171, ssa_3233 vec1 32 ssa_3235 = ffma ssa_3179.z, ssa_3173, ssa_3234 vec1 32 ssa_3236 = fadd ssa_3228, ssa_3235 vec1 32 ssa_3237 = fmul ssa_3189.x, ssa_3169 vec1 32 ssa_3238 = ffma ssa_3189.y, ssa_3171, ssa_3237 vec1 32 ssa_3239 = ffma ssa_3189.z, ssa_3173, ssa_3238 vec1 32 ssa_3240 = fadd ssa_3230, ssa_3239 vec1 32 ssa_3241 = fmul ssa_3199.x, ssa_3169 vec1 32 ssa_3242 = ffma ssa_3199.y, ssa_3171, ssa_3241 vec1 32 ssa_3243 = ffma ssa_3199.z, ssa_3173, ssa_3242 vec1 32 ssa_3244 = fadd 
ssa_3243, ssa_3232 vec4 32 ssa_3245 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3246 = deref_cast (BindlessCBV *)ssa_3245 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3247 = deref_struct &ssa_3246->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3245)->field0 */ vec1 32 ssa_3248 = load_const (0x00000010 = 0.000000) vec4 32 ssa_3249 = deref_array &(*ssa_3247)[16] (ssbo vec4) /* &((BindlessCBV *)ssa_3245)->field0[16] */ vec4 32 ssa_3250 = intrinsic load_deref (ssa_3249) (access=16) vec4 32 ssa_3255 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3256 = deref_cast (BindlessCBV *)ssa_3255 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3257 = deref_struct &ssa_3256->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3255)->field0 */ vec1 32 ssa_3258 = load_const (0x00000011 = 0.000000) vec4 32 ssa_3259 = deref_array &(*ssa_3257)[17] (ssbo vec4) /* &((BindlessCBV *)ssa_3255)->field0[17] */ vec4 32 ssa_3260 = intrinsic load_deref (ssa_3259) (access=16) vec4 32 ssa_3265 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3266 = deref_cast (BindlessCBV *)ssa_3265 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3267 = deref_struct &ssa_3266->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3265)->field0 */ vec1 32 ssa_3268 = load_const (0x00000012 = 0.000000) vec4 32 ssa_3269 = deref_array &(*ssa_3267)[18] (ssbo vec4) /* &((BindlessCBV *)ssa_3265)->field0[18] */ vec4 32 ssa_3270 = intrinsic load_deref (ssa_3269) (access=16) vec4 32 ssa_3275 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3276 = deref_cast (BindlessCBV *)ssa_3275 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3277 = deref_struct &ssa_3276->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3275)->field0 */ vec1 32 ssa_3278 = load_const (0x00000013 = 0.000000) vec4 32 ssa_3279 = deref_array &(*ssa_3277)[19] (ssbo vec4) /* &((BindlessCBV *)ssa_3275)->field0[19] */ vec4 32 ssa_3280 = intrinsic load_deref (ssa_3279) (access=16) vec1 32 ssa_3285 = fmul ssa_3250.x, ssa_3236 vec1 32 ssa_3286 = ffma ssa_3240, ssa_3250.y, ssa_3285 vec1 32 ssa_3287 = ffma ssa_3244, ssa_3250.z, ssa_3286 vec1 32 ssa_3288 = fadd ssa_3287, ssa_3250.w vec1 32 ssa_3289 = fmul ssa_3260.x, ssa_3236 vec1 32 ssa_3290 = ffma ssa_3240, ssa_3260.y, ssa_3289 vec1 32 ssa_3291 = ffma ssa_3244, ssa_3260.z, ssa_3290 vec1 32 ssa_3292 = fadd ssa_3291, ssa_3260.w vec1 32 ssa_3293 = fmul ssa_3270.x, ssa_3236 vec1 32 ssa_3294 = ffma ssa_3240, ssa_3270.y, ssa_3293 vec1 32 ssa_3295 = ffma ssa_3244, ssa_3270.z, ssa_3294 vec1 32 ssa_3296 = fadd ssa_3295, ssa_3270.w vec1 32 ssa_3297 = fmul ssa_3280.x, ssa_3236 vec1 32 ssa_3298 = ffma ssa_3240, ssa_3280.y, ssa_3297 vec1 32 ssa_3299 = ffma ssa_3244, ssa_3280.z, ssa_3298 vec1 32 ssa_3300 = fadd ssa_3299, ssa_3280.w vec1 32 ssa_3301 = fmul ssa_1455, ssa_186.x vec1 32 ssa_3302 = ffma ssa_1456, ssa_193.y, ssa_3301 vec1 32 ssa_3303 = ffma ssa_1457, ssa_200.z, ssa_3302 vec1 32 ssa_3304 = fmul ssa_1455, ssa_214.x vec1 32 ssa_3305 = ffma ssa_1456, ssa_221.y, ssa_3304 vec1 32 ssa_3306 = ffma ssa_1457, ssa_228.z, ssa_3305 vec1 32 ssa_3307 = fmul ssa_1455, ssa_242.x vec1 32 ssa_3308 = ffma ssa_1456, ssa_249.y, ssa_3307 vec1 32 ssa_3309 = ffma ssa_1457, ssa_256.z, ssa_3308 vec1 32 ssa_3310 = fmul ssa_1458, ssa_186.x vec1 32 ssa_3311 = ffma ssa_1459, ssa_193.y, ssa_3310 vec1 32 ssa_3312 = ffma ssa_1460, 
ssa_200.z, ssa_3311 vec1 32 ssa_3313 = fmul ssa_1458, ssa_214.x vec1 32 ssa_3314 = ffma ssa_1459, ssa_221.y, ssa_3313 vec1 32 ssa_3315 = ffma ssa_1460, ssa_228.z, ssa_3314 vec1 32 ssa_3316 = fmul ssa_1458, ssa_242.x vec1 32 ssa_3317 = ffma ssa_1459, ssa_249.y, ssa_3316 vec1 32 ssa_3318 = ffma ssa_1460, ssa_256.z, ssa_3317 vec1 32 ssa_3319 = fmul ssa_1461, ssa_186.x vec1 32 ssa_3320 = ffma ssa_1462, ssa_193.y, ssa_3319 vec1 32 ssa_3321 = ffma ssa_1463, ssa_200.z, ssa_3320 vec1 32 ssa_3322 = fmul ssa_1461, ssa_214.x vec1 32 ssa_3323 = ffma ssa_1462, ssa_221.y, ssa_3322 vec1 32 ssa_3324 = ffma ssa_1463, ssa_228.z, ssa_3323 vec1 32 ssa_3325 = fmul ssa_1461, ssa_242.x vec1 32 ssa_3326 = ffma ssa_1462, ssa_249.y, ssa_3325 vec1 32 ssa_3327 = ffma ssa_1463, ssa_256.z, ssa_3326 vec1 32 ssa_3329 = fmul ssa_64.x, ssa_3328 vec1 32 ssa_3331 = fmul ssa_69.y, ssa_3330 vec1 32 ssa_3333 = fmul ssa_74.z, ssa_3332 vec1 32 ssa_3335 = fadd ssa_3329, ssa_3334 vec1 32 ssa_3337 = fadd ssa_3331, ssa_3336 vec1 32 ssa_3339 = fadd ssa_3333, ssa_3338 vec1 32 ssa_3341 = fmul ssa_44.x, ssa_3340 vec1 32 ssa_3343 = fmul ssa_49.y, ssa_3342 vec1 32 ssa_3345 = fmul ssa_54.z, ssa_3344 vec1 32 ssa_3347 = fadd ssa_3341, ssa_3346 vec1 32 ssa_3349 = fadd ssa_3343, ssa_3348 vec1 32 ssa_3351 = fadd ssa_3345, ssa_3350 vec1 32 ssa_3353 = fmul ssa_59.w, ssa_3352 vec1 32 ssa_3355 = fadd ssa_3353, ssa_3354 vec1 32 ssa_3356 = fmul ssa_3337, ssa_3351 vec1 32 ssa_3357 = fmul ssa_3339, ssa_3349 vec1 32 ssa_3358 = fsub ssa_3356, ssa_3357 vec1 32 ssa_3359 = fmul ssa_3339, ssa_3347 vec1 32 ssa_3360 = fmul ssa_3335, ssa_3351 vec1 32 ssa_3361 = fsub ssa_3359, ssa_3360 vec1 32 ssa_3362 = fmul ssa_3335, ssa_3349 vec1 32 ssa_3363 = fmul ssa_3337, ssa_3347 vec1 32 ssa_3364 = fsub ssa_3362, ssa_3363 vec1 32 ssa_3365 = fmul ssa_3358, ssa_3355 vec1 32 ssa_3366 = fmul ssa_3361, ssa_3355 vec1 32 ssa_3367 = fmul ssa_3364, ssa_3355 vec1 32 ssa_3368 = fmul ssa_3303, ssa_3347 vec1 32 ssa_3369 = ffma ssa_3349, ssa_3312, ssa_3368 vec1 32 ssa_3370 = ffma ssa_3351, ssa_3321, ssa_3369 vec1 32 ssa_3371 = fmul ssa_3306, ssa_3347 vec1 32 ssa_3372 = ffma ssa_3349, ssa_3315, ssa_3371 vec1 32 ssa_3373 = ffma ssa_3351, ssa_3324, ssa_3372 vec1 32 ssa_3374 = fmul ssa_3309, ssa_3347 vec1 32 ssa_3375 = ffma ssa_3349, ssa_3318, ssa_3374 vec1 32 ssa_3376 = ffma ssa_3351, ssa_3327, ssa_3375 vec1 32 ssa_3377 = fmul ssa_3303, ssa_3365 vec1 32 ssa_3378 = ffma ssa_3366, ssa_3312, ssa_3377 vec1 32 ssa_3379 = ffma ssa_3367, ssa_3321, ssa_3378 vec1 32 ssa_3380 = fmul ssa_3306, ssa_3365 vec1 32 ssa_3381 = ffma ssa_3366, ssa_3315, ssa_3380 vec1 32 ssa_3382 = ffma ssa_3367, ssa_3324, ssa_3381 vec1 32 ssa_3383 = fmul ssa_3309, ssa_3365 vec1 32 ssa_3384 = ffma ssa_3366, ssa_3318, ssa_3383 vec1 32 ssa_3385 = ffma ssa_3367, ssa_3327, ssa_3384 vec1 32 ssa_3386 = fmul ssa_3303, ssa_3335 vec1 32 ssa_3387 = ffma ssa_3337, ssa_3312, ssa_3386 vec1 32 ssa_3388 = ffma ssa_3339, ssa_3321, ssa_3387 vec1 32 ssa_3389 = fmul ssa_3306, ssa_3335 vec1 32 ssa_3390 = ffma ssa_3337, ssa_3315, ssa_3389 vec1 32 ssa_3391 = ffma ssa_3339, ssa_3324, ssa_3390 vec1 32 ssa_3392 = fmul ssa_3309, ssa_3335 vec1 32 ssa_3393 = ffma ssa_3337, ssa_3318, ssa_3392 vec1 32 ssa_3394 = ffma ssa_3339, ssa_3327, ssa_3393 vec3 32 ssa_3395 = vec3 ssa_3388, ssa_3391, ssa_3394 vec3 32 ssa_3396 = vec3 ssa_3388, ssa_3391, ssa_3394 vec1 32 ssa_3397 = fdot3 ssa_3395, ssa_3396 vec1 32 ssa_3398 = frsq ssa_3397 vec1 32 ssa_3399 = fmul ssa_3398, ssa_3388 vec1 32 ssa_3400 = fmul ssa_3398, ssa_3391 vec1 32 ssa_3401 = fmul ssa_3398, ssa_3394 
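(Annotation: the ssa_3395..ssa_3401 sequence that closes the block above is a 3-component vector normalization written out in scalar ALU ops: the vector is dotted with itself — the duplicated vec3 sources are how the dump prints a self dot product — frsq takes the reciprocal square root, and three fmuls rescale the components. The same fdot3/frsq/fmul triple repeats for ssa_3402..ssa_3408 and ssa_3409..ssa_3415 on the next line. A minimal C sketch of the arithmetic, assuming IEEE float semantics for fdot3/frsq/fmul; the helper name is illustrative, not from the dump.)

    #include <math.h>

    /* normalize(v) as the IR spells it: self dot product, reciprocal
     * square root, then one multiply per component. */
    static void normalize3(const float v[3], float out[3])
    {
        float len2 = v[0]*v[0] + v[1]*v[1] + v[2]*v[2]; /* fdot3 (ssa_3397) */
        float inv  = 1.0f / sqrtf(len2);                /* frsq  (ssa_3398) */
        out[0] = inv * v[0];                            /* fmul  (ssa_3399) */
        out[1] = inv * v[1];                            /* fmul  (ssa_3400) */
        out[2] = inv * v[2];                            /* fmul  (ssa_3401) */
    }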
vec3 32 ssa_3402 = vec3 ssa_3379, ssa_3382, ssa_3385 vec3 32 ssa_3403 = vec3 ssa_3379, ssa_3382, ssa_3385 vec1 32 ssa_3404 = fdot3 ssa_3402, ssa_3403 vec1 32 ssa_3405 = frsq ssa_3404 vec1 32 ssa_3406 = fmul ssa_3405, ssa_3379 vec1 32 ssa_3407 = fmul ssa_3405, ssa_3382 vec1 32 ssa_3408 = fmul ssa_3405, ssa_3385 vec3 32 ssa_3409 = vec3 ssa_3370, ssa_3373, ssa_3376 vec3 32 ssa_3410 = vec3 ssa_3370, ssa_3373, ssa_3376 vec1 32 ssa_3411 = fdot3 ssa_3409, ssa_3410 vec1 32 ssa_3412 = frsq ssa_3411 vec1 32 ssa_3413 = fmul ssa_3412, ssa_3370 vec1 32 ssa_3414 = fmul ssa_3412, ssa_3373 vec1 32 ssa_3415 = fmul ssa_3412, ssa_3376 vec4 32 ssa_3416 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3417 = deref_cast (BindlessCBV *)ssa_3416 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3418 = deref_struct &ssa_3417->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3416)->field0 */ vec1 32 ssa_3419 = load_const (0x00000033 = 0.000000) vec4 32 ssa_3420 = deref_array &(*ssa_3418)[51] (ssbo vec4) /* &((BindlessCBV *)ssa_3416)->field0[51] */ vec4 32 ssa_3421 = intrinsic load_deref (ssa_3420) (access=16) vec1 32 ssa_3424 = fmul ssa_3421.x, ssa_1816 vec1 32 ssa_3425 = fmul ssa_3421.y, ssa_1816 vec1 32 ssa_3426 = fsub ssa_1804, ssa_3424 vec1 32 ssa_3427 = fsub ssa_1808, ssa_3425 vec4 32 ssa_3428 = intrinsic load_vulkan_descriptor (ssa_19) (desc_type=SSBO /*7*/) vec4 32 ssa_3429 = deref_cast (BindlessCBV *)ssa_3428 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_3430 = deref_struct &ssa_3429->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_3428)->field0 */ vec1 32 ssa_3431 = load_const (0x00000032 = 0.000000) vec4 32 ssa_3432 = deref_array &(*ssa_3430)[50] (ssbo vec4) /* &((BindlessCBV *)ssa_3428)->field0[50] */ vec4 32 ssa_3433 = intrinsic load_deref (ssa_3432) (access=16) vec4 32 ssa_3440 = vec4 ssa_1758, ssa_1759, ssa_1760, ssa_3439 vec1 32 ssa_3441 = fdot4 ssa_3433, ssa_3440 vec1 32 ssa_3442 = deref_var &SV_Position (shader_out vec4) vec4 32 ssa_3445 = intrinsic load_deref (ssa_3442) (access=0) vec4 32 ssa_3446 = vec4! ssa_1804, ssa_3445.y, ssa_3445.z, ssa_3445.w intrinsic store_deref (ssa_3442, ssa_3446) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3447 = deref_var &SV_Position (shader_out vec4) vec4 32 ssa_3450 = intrinsic load_deref (ssa_3447) (access=0) vec4 32 ssa_3451 = vec4! ssa_3450.x, ssa_1808, ssa_3450.z, ssa_3450.w intrinsic store_deref (ssa_3447, ssa_3451) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3452 = deref_var &SV_Position (shader_out vec4) vec4 32 ssa_3455 = intrinsic load_deref (ssa_3452) (access=0) vec4 32 ssa_3456 = vec4! ssa_3455.x, ssa_3455.y, ssa_1812, ssa_3455.w intrinsic store_deref (ssa_3452, ssa_3456) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3457 = deref_var &SV_Position (shader_out vec4) vec4 32 ssa_3460 = intrinsic load_deref (ssa_3457) (access=0) vec4 32 ssa_3461 = vec4! 
ssa_3460.x, ssa_3460.y, ssa_3460.z, ssa_1816 intrinsic store_deref (ssa_3457, ssa_3461) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3462 = deref_var &TEXCOORD@2 (shader_out vec4) vec4 32 ssa_3465 = intrinsic load_deref (ssa_3462) (access=0) vec4 32 ssa_3466 = vec4 ssa_79.x, ssa_3465.y, ssa_3465.z, ssa_3465.w intrinsic store_deref (ssa_3462, ssa_3466) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3467 = deref_var &TEXCOORD@2 (shader_out vec4) vec4 32 ssa_3470 = intrinsic load_deref (ssa_3467) (access=0) vec4 32 ssa_3471 = vec4 ssa_3470.x, ssa_84.y, ssa_3470.z, ssa_3470.w intrinsic store_deref (ssa_3467, ssa_3471) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3472 = deref_var &TEXCOORD@2 (shader_out vec4) vec4 32 ssa_3475 = intrinsic load_deref (ssa_3472) (access=0) vec4 32 ssa_3476 = vec4 ssa_3475.x, ssa_3475.y, ssa_3399, ssa_3475.w intrinsic store_deref (ssa_3472, ssa_3476) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3477 = deref_var &TEXCOORD@2 (shader_out vec4) vec4 32 ssa_3480 = intrinsic load_deref (ssa_3477) (access=0) vec4 32 ssa_3481 = vec4 ssa_3480.x, ssa_3480.y, ssa_3480.z, ssa_3400 intrinsic store_deref (ssa_3477, ssa_3481) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3482 = deref_var &TEXCOORD_1 (shader_out vec4) vec4 32 ssa_3485 = intrinsic load_deref (ssa_3482) (access=0) vec4 32 ssa_3486 = vec4 ssa_3401, ssa_3485.y, ssa_3485.z, ssa_3485.w intrinsic store_deref (ssa_3482, ssa_3486) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3487 = deref_var &TEXCOORD_1 (shader_out vec4) vec4 32 ssa_3490 = intrinsic load_deref (ssa_3487) (access=0) vec4 32 ssa_3491 = vec4 ssa_3490.x, ssa_3406, ssa_3490.z, ssa_3490.w intrinsic store_deref (ssa_3487, ssa_3491) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3492 = deref_var &TEXCOORD_1 (shader_out vec4) vec4 32 ssa_3495 = intrinsic load_deref (ssa_3492) (access=0) vec4 32 ssa_3496 = vec4 ssa_3495.x, ssa_3495.y, ssa_3407, ssa_3495.w intrinsic store_deref (ssa_3492, ssa_3496) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3497 = deref_var &TEXCOORD_1 (shader_out vec4) vec4 32 ssa_3500 = intrinsic load_deref (ssa_3497) (access=0) vec4 32 ssa_3501 = vec4 ssa_3500.x, ssa_3500.y, ssa_3500.z, ssa_3408 intrinsic store_deref (ssa_3497, ssa_3501) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3502 = deref_var &TEXCOORD_2 (shader_out vec4) vec4 32 ssa_3505 = intrinsic load_deref (ssa_3502) (access=0) vec4 32 ssa_3506 = vec4 ssa_3413, ssa_3505.y, ssa_3505.z, ssa_3505.w intrinsic store_deref (ssa_3502, ssa_3506) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3507 = deref_var &TEXCOORD_2 (shader_out vec4) vec4 32 ssa_3510 = intrinsic load_deref (ssa_3507) (access=0) vec4 32 ssa_3511 = vec4 ssa_3510.x, ssa_3414, ssa_3510.z, ssa_3510.w intrinsic store_deref (ssa_3507, ssa_3511) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3512 = deref_var &TEXCOORD_2 (shader_out vec4) vec4 32 ssa_3515 = intrinsic load_deref (ssa_3512) (access=0) vec4 32 ssa_3516 = vec4 ssa_3515.x, ssa_3515.y, ssa_3415, ssa_3515.w intrinsic store_deref (ssa_3512, ssa_3516) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3517 = deref_var &TEXCOORD_2 (shader_out vec4) vec4 32 ssa_3520 = intrinsic load_deref (ssa_3517) (access=0) vec4 32 ssa_3521 = vec4 ssa_3520.x, ssa_3520.y, ssa_3520.z, ssa_3335 intrinsic store_deref (ssa_3517, ssa_3521) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3522 = deref_var &TEXCOORD_3 (shader_out vec4) vec4 32 ssa_3525 = intrinsic load_deref (ssa_3522) (access=0) vec4 32 ssa_3526 = vec4 ssa_3337, ssa_3525.y, ssa_3525.z, ssa_3525.w intrinsic store_deref (ssa_3522, ssa_3526) (wrmask=xyzw /*15*/, access=0) 
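(Annotation: every shader_out update in this stretch has the same read-modify-write shape: load_deref fetches the whole output vector, a vec4 or vec3 constructor rebuilds it with exactly one lane replaced, and store_deref writes it back with the full wrmask. A hedged C sketch of one such triplet — the ssa_3462..ssa_3466 write to TEXCOORD@2 — with an illustrative struct and function name standing in for the varying.)

    typedef struct { float x, y, z, w; } vec4f; /* stand-in for the varying type */

    /* One load_deref / vec4 / store_deref triplet: replace only the .x
     * lane, re-storing the other three lanes unchanged. */
    static void write_x_lane(vec4f *texcoord2, float new_x)
    {
        vec4f t = *texcoord2;                 /* intrinsic load_deref     */
        t = (vec4f){ new_x, t.y, t.z, t.w };  /* vec4 with one new lane   */
        *texcoord2 = t;                       /* store_deref, wrmask=xyzw */
    }

(Copy propagation and I/O vectorization in later NIR passes typically collapse these chains into single masked stores; what is dumped here is the unoptimized NIR straight from SPIR-V.)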
vec1 32 ssa_3527 = deref_var &TEXCOORD_3 (shader_out vec4) vec4 32 ssa_3530 = intrinsic load_deref (ssa_3527) (access=0) vec4 32 ssa_3531 = vec4 ssa_3530.x, ssa_3339, ssa_3530.z, ssa_3530.w intrinsic store_deref (ssa_3527, ssa_3531) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3532 = deref_var &TEXCOORD_3 (shader_out vec4) vec4 32 ssa_3535 = intrinsic load_deref (ssa_3532) (access=0) vec4 32 ssa_3536 = vec4 ssa_3535.x, ssa_3535.y, ssa_1758, ssa_3535.w intrinsic store_deref (ssa_3532, ssa_3536) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3537 = deref_var &TEXCOORD_3 (shader_out vec4) vec4 32 ssa_3540 = intrinsic load_deref (ssa_3537) (access=0) vec4 32 ssa_3541 = vec4 ssa_3540.x, ssa_3540.y, ssa_3540.z, ssa_1759 intrinsic store_deref (ssa_3537, ssa_3541) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3542 = deref_var &TEXCOORD_4 (shader_out vec4) vec4 32 ssa_3545 = intrinsic load_deref (ssa_3542) (access=0) vec4 32 ssa_3546 = vec4 ssa_1760, ssa_3545.y, ssa_3545.z, ssa_3545.w intrinsic store_deref (ssa_3542, ssa_3546) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3547 = deref_var &TEXCOORD_4 (shader_out vec4) vec4 32 ssa_3550 = intrinsic load_deref (ssa_3547) (access=0) vec4 32 ssa_3551 = vec4 ssa_3550.x, ssa_1748, ssa_3550.z, ssa_3550.w intrinsic store_deref (ssa_3547, ssa_3551) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3552 = deref_var &TEXCOORD_4 (shader_out vec4) vec4 32 ssa_3555 = intrinsic load_deref (ssa_3552) (access=0) vec4 32 ssa_3556 = vec4 ssa_3555.x, ssa_3555.y, ssa_3426, ssa_3555.w intrinsic store_deref (ssa_3552, ssa_3556) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3557 = deref_var &TEXCOORD_4 (shader_out vec4) vec4 32 ssa_3560 = intrinsic load_deref (ssa_3557) (access=0) vec4 32 ssa_3561 = vec4 ssa_3560.x, ssa_3560.y, ssa_3560.z, ssa_3427 intrinsic store_deref (ssa_3557, ssa_3561) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3562 = deref_var &TEXCOORD_5 (shader_out vec4) vec4 32 ssa_3565 = intrinsic load_deref (ssa_3562) (access=0) vec4 32 ssa_3566 = vec4 ssa_1812, ssa_3565.y, ssa_3565.z, ssa_3565.w intrinsic store_deref (ssa_3562, ssa_3566) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3567 = deref_var &TEXCOORD_5 (shader_out vec4) vec4 32 ssa_3570 = intrinsic load_deref (ssa_3567) (access=0) vec4 32 ssa_3571 = vec4 ssa_3570.x, ssa_1816, ssa_3570.z, ssa_3570.w intrinsic store_deref (ssa_3567, ssa_3571) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3572 = deref_var &TEXCOORD_5 (shader_out vec4) vec4 32 ssa_3575 = intrinsic load_deref (ssa_3572) (access=0) vec4 32 ssa_3576 = vec4 ssa_3575.x, ssa_3575.y, ssa_3288, ssa_3575.w intrinsic store_deref (ssa_3572, ssa_3576) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3577 = deref_var &TEXCOORD_5 (shader_out vec4) vec4 32 ssa_3580 = intrinsic load_deref (ssa_3577) (access=0) vec4 32 ssa_3581 = vec4 ssa_3580.x, ssa_3580.y, ssa_3580.z, ssa_3292 intrinsic store_deref (ssa_3577, ssa_3581) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_3582 = deref_var &TEXCOORD_6 (shader_out vec3) vec3 32 ssa_3585 = intrinsic load_deref (ssa_3582) (access=0) vec3 32 ssa_3586 = vec3 ssa_3296, ssa_3585.y, ssa_3585.z intrinsic store_deref (ssa_3582, ssa_3586) (wrmask=xyz /*7*/, access=0) vec1 32 ssa_3587 = deref_var &TEXCOORD_6 (shader_out vec3) vec3 32 ssa_3590 = intrinsic load_deref (ssa_3587) (access=0) vec3 32 ssa_3591 = vec3 ssa_3590.x, ssa_3300, ssa_3590.z intrinsic store_deref (ssa_3587, ssa_3591) (wrmask=xyz /*7*/, access=0) vec1 32 ssa_3592 = deref_var &TEXCOORD_6 (shader_out vec3) vec3 32 ssa_3595 = intrinsic load_deref (ssa_3592) (access=0) vec3 32 ssa_3596 = vec3 
ssa_3595.x, ssa_3595.y, ssa_25 intrinsic store_deref (ssa_3592, ssa_3596) (wrmask=xyz /*7*/, access=0) vec1 32 ssa_3597 = deref_var &@3 (shader_out float[1]) vec1 32 ssa_3598 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3599 = deref_array &(*ssa_3597)[0] (shader_out float) /* &@3[0] */ intrinsic store_deref (ssa_3599, ssa_3441) (wrmask=x /*1*/, access=0) /* succs: block_59 */ block block_59: } NIR (from SPIR-V) for MESA_SHADER_FRAGMENT shader: shader: MESA_SHADER_FRAGMENT source_sha1: {0x54969a6b, 0xf3bf0d03, 0x19f918a7, 0x34f0be6e, 0x8aacdd2a} stage: 4 next_stage: 0 num_ssbos: 5 subgroup_size: 2 origin_upper_left: true inputs: 0 outputs: 0 uniforms: 108 decl_var push_const INTERP_MODE_NONE RootConstants registers decl_var uniform INTERP_MODE_NONE restrict texture2D[] @0 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict texture2DArray[] @1 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict texture3D[] @2 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict utexture2D[] @3 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @4 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @5 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @6 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @7 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict BindlessCBV[] @8 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict sampler[] @9 (~0, 0, 0) decl_var system INTERP_MODE_NONE vec4 SV_Position decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD (VARYING_SLOT_VAR1.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD_1 (VARYING_SLOT_VAR2.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD_2 (VARYING_SLOT_VAR3.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD_3 (VARYING_SLOT_VAR4.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD_4 (VARYING_SLOT_VAR5.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec4 TEXCOORD_5 (VARYING_SLOT_VAR6.xyzw, 0, 0) decl_var shader_in INTERP_MODE_NONE vec3 TEXCOORD_6 (VARYING_SLOT_VAR7.xyz, 0, 0) decl_var system INTERP_MODE_NONE bool SV_IsFrontFace decl_var shader_out INTERP_MODE_NONE vec4 SV_Target (FRAG_RESULT_DATA0.xyzw, 0, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_1 (FRAG_RESULT_DATA1.xyzw, 0, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_2 (FRAG_RESULT_DATA2.xyzw, 0, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_3 (FRAG_RESULT_DATA3.xyzw, 0, 0) decl_function main (0 params) impl main { decl_var INTERP_MODE_NONE bool loop_break decl_var INTERP_MODE_NONE bool loop_continue decl_var INTERP_MODE_NONE float phi decl_var INTERP_MODE_NONE float phi@10 decl_var INTERP_MODE_NONE float phi@11 decl_var INTERP_MODE_NONE float phi@12 decl_var INTERP_MODE_NONE float phi@13 decl_var INTERP_MODE_NONE float phi@14 decl_var INTERP_MODE_NONE float phi@15 decl_var INTERP_MODE_NONE float phi@16 decl_var INTERP_MODE_NONE float phi@17 decl_var INTERP_MODE_NONE float phi@18 decl_var INTERP_MODE_NONE float phi@19 decl_var INTERP_MODE_NONE float phi@20 decl_var INTERP_MODE_NONE uint phi@21 decl_var INTERP_MODE_NONE float phi@22 decl_var INTERP_MODE_NONE float phi@23 decl_var INTERP_MODE_NONE float phi@24 decl_var INTERP_MODE_NONE float phi@25 decl_var INTERP_MODE_NONE float phi@26 decl_var INTERP_MODE_NONE float phi@27 decl_var INTERP_MODE_NONE float phi@28 decl_var INTERP_MODE_NONE float phi@29 decl_var INTERP_MODE_NONE float phi@30 decl_var INTERP_MODE_NONE float phi@31 decl_var INTERP_MODE_NONE float phi@32 decl_var INTERP_MODE_NONE float phi@33 decl_var 
INTERP_MODE_NONE float phi@34 decl_var INTERP_MODE_NONE float phi@35 decl_var INTERP_MODE_NONE float phi@36 decl_var INTERP_MODE_NONE float phi@37 decl_var INTERP_MODE_NONE float phi@38 decl_var INTERP_MODE_NONE float phi@39 decl_var INTERP_MODE_NONE float phi@40 decl_var INTERP_MODE_NONE float phi@41 decl_var INTERP_MODE_NONE float phi@42 decl_var INTERP_MODE_NONE float phi@43 decl_var INTERP_MODE_NONE float phi@44 decl_var INTERP_MODE_NONE float phi@45 decl_var INTERP_MODE_NONE float phi@46 decl_var INTERP_MODE_NONE float phi@47 decl_var INTERP_MODE_NONE float phi@48 decl_var INTERP_MODE_NONE float phi@49 decl_var INTERP_MODE_NONE float phi@50 decl_var INTERP_MODE_NONE float phi@51 decl_var INTERP_MODE_NONE float phi@52 decl_var INTERP_MODE_NONE float phi@53 decl_var INTERP_MODE_NONE float phi@54 decl_var INTERP_MODE_NONE float phi@55 decl_var INTERP_MODE_NONE float phi@56 decl_var INTERP_MODE_NONE float phi@57 decl_var INTERP_MODE_NONE uint phi@58 decl_var INTERP_MODE_NONE float phi@59 decl_var INTERP_MODE_NONE float phi@60 decl_var INTERP_MODE_NONE float phi@61 decl_var INTERP_MODE_NONE float phi@62 decl_var INTERP_MODE_NONE float phi@63 decl_var INTERP_MODE_NONE float phi@64 decl_var INTERP_MODE_NONE float phi@65 decl_var INTERP_MODE_NONE float phi@66 decl_var INTERP_MODE_NONE float phi@67 decl_var INTERP_MODE_NONE float phi@68 decl_var INTERP_MODE_NONE bool loop_break@69 decl_var INTERP_MODE_NONE bool loop_continue@70 decl_var INTERP_MODE_NONE float phi@71 decl_var INTERP_MODE_NONE float phi@72 decl_var INTERP_MODE_NONE float phi@73 decl_var INTERP_MODE_NONE bool phi@74 decl_var INTERP_MODE_NONE bool loop_break@75 decl_var INTERP_MODE_NONE bool loop_continue@76 decl_var INTERP_MODE_NONE float phi@77 decl_var INTERP_MODE_NONE uint phi@78 decl_var INTERP_MODE_NONE float phi@79 decl_var INTERP_MODE_NONE float phi@80 decl_var INTERP_MODE_NONE uint phi@81 decl_var INTERP_MODE_NONE float phi@82 decl_var INTERP_MODE_NONE float phi@83 decl_var INTERP_MODE_NONE float phi@84 decl_var INTERP_MODE_NONE float phi@85 decl_var INTERP_MODE_NONE float phi@86 decl_var INTERP_MODE_NONE float phi@87 decl_var INTERP_MODE_NONE float phi@88 decl_var INTERP_MODE_NONE float phi@89 decl_var INTERP_MODE_NONE float phi@90 decl_var INTERP_MODE_NONE float phi@91 decl_var INTERP_MODE_NONE float phi@92 decl_var INTERP_MODE_NONE float phi@93 decl_var INTERP_MODE_NONE float phi@94 decl_var INTERP_MODE_NONE float phi@95 decl_var INTERP_MODE_NONE float phi@96 decl_var INTERP_MODE_NONE float phi@97 decl_var INTERP_MODE_NONE float phi@98 decl_var INTERP_MODE_NONE float phi@99 decl_var INTERP_MODE_NONE float phi@100 decl_var INTERP_MODE_NONE float phi@101 decl_var INTERP_MODE_NONE float phi@102 decl_var INTERP_MODE_NONE float phi@103 decl_var INTERP_MODE_NONE float phi@104 decl_var INTERP_MODE_NONE float phi@105 decl_var INTERP_MODE_NONE float phi@106 decl_var INTERP_MODE_NONE float phi@107 decl_var INTERP_MODE_NONE float phi@108 decl_var INTERP_MODE_NONE float phi@109 decl_var INTERP_MODE_NONE float phi@110 decl_var INTERP_MODE_NONE float phi@111 decl_var INTERP_MODE_NONE float phi@112 decl_var INTERP_MODE_NONE float phi@113 block block_0: /* preds: */ vec1 32 ssa_0 = load_const (0x00000000 = 0.000000) vec1 32 ssa_1 = load_const (0x00000000 = 0.000000) vec1 32 ssa_2 = load_const (0x00000000 = 0.000000) vec1 32 ssa_3 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_4 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_5 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_6 = load_const (0x00000000 = 0.000000) vec1 32 ssa_7 = 
load_const (0x00000000 = 0.000000) vec1 32 ssa_8 = load_const (0x00000000 = 0.000000) vec1 32 ssa_9 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_10 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_11 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_12 = load_const (0x00000000 = 0.000000) vec1 32 ssa_13 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_14 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_15 = load_const (0x00000000 = 0.000000) vec1 32 ssa_16 = load_const (0x00000000 = 0.000000) vec1 32 ssa_17 = load_const (0x00000000 = 0.000000) vec1 1 ssa_18 = load_const (true) vec1 32 ssa_19 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_20 = load_const (0x00000000 = 0.000000) vec1 32 ssa_21 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_22 = load_const (0x00000000 = 0.000000) vec1 32 ssa_23 = load_const (0x00000000 = 0.000000) vec1 32 ssa_24 = load_const (0x00000000 = 0.000000) vec1 32 ssa_25 = load_const (0x00000000 = 0.000000) vec1 32 ssa_26 = load_const (0x00000000 = 0.000000) vec1 32 ssa_27 = load_const (0x00000000 = 0.000000) vec1 32 ssa_28 = load_const (0x00000000 = 0.000000) vec1 32 ssa_29 = load_const (0x00000000 = 0.000000) vec1 32 ssa_30 = load_const (0x00000000 = 0.000000) vec1 32 ssa_31 = load_const (0x00000000 = 0.000000) vec1 32 ssa_32 = load_const (0x00000000 = 0.000000) vec1 32 ssa_33 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_34 = load_const (0x00000000 = 0.000000) vec1 32 ssa_35 = load_const (0x00000000 = 0.000000) vec1 32 ssa_36 = load_const (0x00000000 = 0.000000) vec1 32 ssa_37 = load_const (0x00000000 = 0.000000) vec1 32 ssa_38 = load_const (0x00000000 = 0.000000) vec1 32 ssa_39 = load_const (0x00000000 = 0.000000) vec1 32 ssa_40 = load_const (0x00000000 = 0.000000) vec1 32 ssa_41 = load_const (0x00000000 = 0.000000) vec1 32 ssa_42 = load_const (0x00000000 = 0.000000) vec1 32 ssa_43 = load_const (0x00000000 = 0.000000) vec1 32 ssa_44 = load_const (0x00000000 = 0.000000) vec1 32 ssa_45 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_46 = load_const (0x3eaaaaab = 0.333333) vec1 32 ssa_47 = load_const (0x00000000 = 0.000000) vec1 32 ssa_48 = load_const (0x447a0000 = 1000.000000) vec1 32 ssa_49 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_50 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_51 = load_const (0x3b808081 = 0.003922) vec1 32 ssa_52 = load_const (0x427c0000 = 63.000000) vec1 32 ssa_53 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_54 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_55 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_56 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_57 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_58 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_59 = load_const (0x3eaaaaab = 0.333333) vec1 32 ssa_60 = load_const (0x40000000 = 2.000000) vec1 32 ssa_61 = load_const (0x40000000 = 2.000000) vec1 32 ssa_62 = load_const (0x40000000 = 2.000000) vec1 32 ssa_63 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_64 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_65 = load_const (0x00000000 = 0.000000) vec1 32 ssa_66 = load_const (0x411ffffe = 9.999998) vec1 32 ssa_67 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_68 = load_const (0x40d55558 = 6.666668) vec1 32 ssa_69 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_70 = load_const (0x00000000 = 0.000000) vec1 32 ssa_71 = load_const (0x3e199998 = 0.150000) vec1 32 ssa_72 = load_const (0x3ecccccd = 0.400000) vec1 32 ssa_73 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_74 = load_const (0x00000000 = 0.000000) vec1 32 ssa_75 = load_const (0x411ffffe = 
9.999998) vec1 32 ssa_76 = load_const (0xbf400000 = -0.750000) vec1 32 ssa_77 = load_const (0x41700000 = 15.000000) vec1 32 ssa_78 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_79 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_80 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_81 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_82 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_83 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_84 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_85 = load_const (0x00000000 = 0.000000) vec1 32 ssa_86 = load_const (0xc0000000 = -2.000000) vec1 32 ssa_87 = load_const (0xbf400000 = -0.750000) vec1 32 ssa_88 = load_const (0x3e7d70a4 = 0.247500) vec1 32 ssa_89 = load_const (0x3c75c290 = 0.015000) vec1 32 ssa_90 = load_const (0x3ea8f5c3 = 0.330000) vec1 32 ssa_91 = load_const (0x3e7d70a4 = 0.247500) vec1 32 ssa_92 = load_const (0x3f400000 = 0.750000) vec1 32 ssa_93 = load_const (0x3e800000 = 0.250000) vec1 32 ssa_94 = load_const (0x3f400000 = 0.750000) vec1 32 ssa_95 = load_const (0x3dcccccd = 0.100000) vec1 32 ssa_96 = load_const (0x41f00000 = 30.000000) vec1 32 ssa_97 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_98 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_99 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_100 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_101 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_102 = load_const (0x3e800000 = 0.250000) vec1 32 ssa_103 = load_const (0x3e800000 = 0.250000) vec1 32 ssa_104 = load_const (0x3e800000 = 0.250000) vec1 32 ssa_105 = load_const (0x41800000 = 16.000000) vec1 32 ssa_106 = load_const (0x3fc00000 = 1.500000) vec1 32 ssa_107 = load_const (0x3fc00000 = 1.500000) vec1 32 ssa_108 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_109 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_110 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_111 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_112 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_113 = load_const (0x00000000 = 0.000000) vec1 32 ssa_114 = load_const (0x40a00001 = 5.000000) vec1 32 ssa_115 = load_const (0xbe99999a = -0.300000) vec1 32 ssa_116 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_117 = load_const (0x00000000 = 0.000000) vec1 32 ssa_118 = load_const (0x411ffffe = 9.999998) vec1 32 ssa_119 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_120 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_121 = load_const (0x00000000 = 0.000000) vec1 32 ssa_122 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_123 = load_const (0x3d23d70a = 0.040000) vec1 32 ssa_124 = load_const (0x3d23d70a = 0.040000) vec1 32 ssa_125 = load_const (0x3e4ccccd = 0.200000) vec1 32 ssa_126 = load_const (0x3e4ccccd = 0.200000) vec1 32 ssa_127 = load_const (0x42700000 = 60.000000) vec1 32 ssa_128 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_129 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_130 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_131 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_132 = load_const (0x00000000 = 0.000000) vec1 32 ssa_133 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_134 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_135 = load_const (0x00000000 = 0.000000) vec1 32 ssa_136 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_137 = load_const (0x42f00000 = 120.000000) vec1 32 ssa_138 = load_const (0x3a83126f = 0.001000) vec1 32 ssa_139 = load_const (0x3c23d70a = 0.010000) vec1 32 ssa_140 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_141 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_142 = load_const (0x3f800000 = 1.000000) 
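(Annotation on how these constants print: load_const shows the raw 32-bit pattern followed by that pattern reinterpreted as a float32. Integer immediates such as 0x0000003f therefore read as "0.000000" — the bit pattern is a subnormal that rounds to zero at six decimal places — and all-ones patterns such as 0xffffffff or 0xffffff7e read as "-nan": sign bit set, exponent all ones, non-zero mantissa. A small self-contained C sketch of the reinterpretation.)

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Bit-cast a 32-bit pattern to float32, the way the dump's
     * "(0x3f800000 = 1.000000)" annotations are produced. */
    static float bits_to_float(uint32_t bits)
    {
        float f;
        memcpy(&f, &bits, sizeof f); /* avoids strict-aliasing UB */
        return f;
    }

    int main(void)
    {
        printf("0x3f800000 = %f\n", bits_to_float(0x3f800000u)); /* 1.000000 */
        printf("0x0000003f = %f\n", bits_to_float(0x0000003fu)); /* 0.000000 (subnormal) */
        printf("0xffffffff = %f\n", bits_to_float(0xffffffffu)); /* -nan on glibc */
        return 0;
    }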
vec1 32 ssa_143 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_144 = load_const (0x00000000 = 0.000000) vec1 32 ssa_145 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_146 = load_const (0x00000000 = 0.000000) vec1 32 ssa_147 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_148 = load_const (0x00000000 = 0.000000) vec1 32 ssa_149 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_150 = load_const (0x00000000 = 0.000000) vec1 32 ssa_151 = load_const (0x43008000 = 128.500000) vec1 32 ssa_152 = load_const (0x43008000 = 128.500000) vec1 32 ssa_153 = load_const (0x41aaaaab = 21.333334) vec1 32 ssa_154 = load_const (0x41aaaaab = 21.333334) vec1 32 ssa_155 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_156 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_157 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_158 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_159 = load_const (0x42800000 = 64.000000) vec1 32 ssa_160 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_161 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_162 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_163 = load_const (0x3b000080 = 0.001953) vec1 32 ssa_164 = load_const (0x3d400000 = 0.046875) vec1 32 ssa_165 = load_const (0x41aaaaab = 21.333334) vec1 32 ssa_166 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_167 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_168 = load_const (0x00000000 = 0.000000) vec1 32 ssa_169 = load_const (0x00000000 = 0.000000) vec1 32 ssa_170 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_171 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_172 = load_const (0x3daaaaab = 0.083333) vec1 32 ssa_173 = load_const (0x3daaaaab = 0.083333) vec1 32 ssa_174 = load_const (0x3d400000 = 0.046875) vec1 32 ssa_175 = load_const (0x3d400000 = 0.046875) vec1 32 ssa_176 = load_const (0x41aaaaab = 21.333334) vec1 32 ssa_177 = load_const (0x41aaaaab = 21.333334) vec1 32 ssa_178 = load_const (0x3dcccccd = 0.100000) vec1 32 ssa_179 = load_const (0x3dcccccd = 0.100000) vec1 32 ssa_180 = load_const (0x3dcccccd = 0.100000) vec1 32 ssa_181 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_182 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_183 = load_const (0x00000000 = 0.000000) vec1 32 ssa_184 = load_const (0x00000000 = 0.000000) vec1 32 ssa_185 = load_const (0x00000000 = 0.000000) vec1 32 ssa_186 = load_const (0x3e800000 = 0.250000) vec1 32 ssa_187 = load_const (0x00000004 = 0.000000) vec1 32 ssa_188 = load_const (0x00000001 = 0.000000) vec1 32 ssa_189 = load_const (0x00000000 = 0.000000) vec1 32 ssa_190 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_191 = load_const (0x3883126f = 0.000063) vec1 32 ssa_192 = load_const (0x45fa0000 = 8000.000000) vec1 32 ssa_193 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_194 = load_const (0x40000000 = 2.000000) vec1 32 ssa_195 = load_const (0x00000001 = 0.000000) vec1 32 ssa_196 = load_const (0x00000002 = 0.000000) vec1 32 ssa_197 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_198 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_199 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_200 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_201 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_202 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_203 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_204 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_205 = load_const (0x00000000 = 0.000000) vec1 32 ssa_206 = load_const (0x467a0000 = 16000.000000) vec1 32 ssa_207 = load_const (0x00000000 = 0.000000) vec1 32 ssa_208 = load_const (0x45fa0000 = 8000.000000) vec1 32 ssa_209 = load_const 
(0x3f800000 = 1.000000) vec1 32 ssa_210 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_211 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_212 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_213 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_214 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_215 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_216 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_217 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_218 = load_const (0x00000000 = 0.000000) vec1 32 ssa_219 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_220 = load_const (0x00000000 = 0.000000) vec1 32 ssa_221 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_222 = load_const (0x00000000 = 0.000000) vec1 32 ssa_223 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_224 = load_const (0x00000000 = 0.000000) vec1 32 ssa_225 = load_const (0x41200000 = 10.000000) vec1 32 ssa_226 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_227 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_228 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_229 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_230 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_231 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_232 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_233 = load_const (0x00000000 = 0.000000) vec1 32 ssa_234 = load_const (0x37a7c5ac = 0.000020) vec1 32 ssa_235 = load_const (0x80000000 = -0.000000) vec1 32 ssa_236 = load_const (0x80000000 = -0.000000) vec1 32 ssa_237 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_238 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_239 = load_const (0x00000000 = 0.000000) vec1 32 ssa_240 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_241 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_242 = load_const (0x00000000 = 0.000000) vec1 32 ssa_243 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_244 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_245 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_246 = load_const (0x40000000 = 2.000000) vec1 32 ssa_247 = load_const (0x40000000 = 2.000000) vec1 32 ssa_248 = load_const (0xffffff7e = -nan) vec1 32 ssa_249 = load_const (0x00000082 = 0.000000) vec1 32 ssa_250 = load_const (0x00000010 = 0.000000) vec1 32 ssa_251 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_252 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_253 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_254 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_255 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_256 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_257 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_258 = load_const (0x00000000 = 0.000000) vec1 32 ssa_259 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_260 = load_const (0x00000000 = 0.000000) vec1 32 ssa_261 = load_const (0xffffff7e = -nan) vec1 32 ssa_262 = load_const (0x00000082 = 0.000000) vec1 32 ssa_263 = load_const (0x0000ffff = 0.000000) vec1 32 ssa_264 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_265 = load_const (0x00000000 = 0.000000) vec1 32 ssa_266 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_267 = load_const (0x00000000 = 0.000000) vec1 32 ssa_268 = load_const (0xffffff7e = -nan) vec1 32 ssa_269 = load_const (0x00000082 = 0.000000) vec1 32 ssa_270 = load_const (0x0000ffff = 0.000000) vec1 32 ssa_271 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_272 = load_const (0x00000000 = 0.000000) vec1 32 ssa_273 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_274 = load_const (0x00000000 = 0.000000) vec1 32 ssa_275 = load_const (0xffffff7e = -nan) vec1 32 ssa_276 = load_const 
(0x00000082 = 0.000000) vec1 32 ssa_277 = load_const (0x00000010 = 0.000000) vec1 32 ssa_278 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_279 = load_const (0x00000000 = 0.000000) vec1 32 ssa_280 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_281 = load_const (0x00000000 = 0.000000) vec1 32 ssa_282 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_283 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_284 = load_const (0x40000000 = 2.000000) vec1 32 ssa_285 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_286 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_287 = load_const (0x00000000 = 0.000000) vec1 32 ssa_288 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_289 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_290 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_291 = load_const (0x40000000 = 2.000000) vec1 32 ssa_292 = load_const (0x40000000 = 2.000000) vec1 32 ssa_293 = load_const (0xffffff7e = -nan) vec1 32 ssa_294 = load_const (0x00000082 = 0.000000) vec1 32 ssa_295 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_296 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_297 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_298 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_299 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_300 = load_const (0x00000000 = 0.000000) vec1 32 ssa_301 = load_const (0x0000001b = 0.000000) vec1 32 ssa_302 = load_const (0x00000020 = 0.000000) vec1 32 ssa_303 = load_const (0x0000001a = 0.000000) vec1 32 ssa_304 = load_const (0x00000020 = 0.000000) vec1 32 ssa_305 = load_const (0x00000019 = 0.000000) vec1 32 ssa_306 = load_const (0x00000020 = 0.000000) vec1 32 ssa_307 = load_const (0x00000018 = 0.000000) vec1 32 ssa_308 = load_const (0x00000020 = 0.000000) vec1 32 ssa_309 = load_const (0x0000000b = 0.000000) vec1 32 ssa_310 = load_const (0x00000010 = 0.000000) vec1 32 ssa_311 = load_const (0x0000000a = 0.000000) vec1 32 ssa_312 = load_const (0x00000010 = 0.000000) vec1 32 ssa_313 = load_const (0x00000013 = 0.000000) vec1 32 ssa_314 = load_const (0x00000020 = 0.000000) vec1 32 ssa_315 = load_const (0x00000012 = 0.000000) vec1 32 ssa_316 = load_const (0x00000020 = 0.000000) vec1 32 ssa_317 = load_const (0x00000011 = 0.000000) vec1 32 ssa_318 = load_const (0x00000020 = 0.000000) vec1 32 ssa_319 = load_const (0x00000010 = 0.000000) vec1 32 ssa_320 = load_const (0x00000020 = 0.000000) vec1 32 ssa_321 = load_const (0x0000000f = 0.000000) vec1 32 ssa_322 = load_const (0x00000020 = 0.000000) vec1 32 ssa_323 = load_const (0x0000000e = 0.000000) vec1 32 ssa_324 = load_const (0x00000020 = 0.000000) vec1 32 ssa_325 = load_const (0x0000000d = 0.000000) vec1 32 ssa_326 = load_const (0x00000020 = 0.000000) vec1 32 ssa_327 = load_const (0x0000000c = 0.000000) vec1 32 ssa_328 = load_const (0x00000020 = 0.000000) vec1 32 ssa_329 = load_const (0x0000000b = 0.000000) vec1 32 ssa_330 = load_const (0x00000020 = 0.000000) vec1 32 ssa_331 = load_const (0x0000000a = 0.000000) vec1 32 ssa_332 = load_const (0x00000020 = 0.000000) vec1 32 ssa_333 = load_const (0x00000009 = 0.000000) vec1 32 ssa_334 = load_const (0x00000020 = 0.000000) vec1 32 ssa_335 = load_const (0x00000008 = 0.000000) vec1 32 ssa_336 = load_const (0x00000020 = 0.000000) vec1 32 ssa_337 = load_const (0x00000007 = 0.000000) vec1 32 ssa_338 = load_const (0x00000020 = 0.000000) vec1 32 ssa_339 = load_const (0x00000006 = 0.000000) vec1 32 ssa_340 = load_const (0x00000020 = 0.000000) vec1 32 ssa_341 = load_const (0x00000005 = 0.000000) vec1 32 ssa_342 = load_const (0x00000020 = 0.000000) vec1 32 ssa_343 = load_const 
(0x00000004 = 0.000000) vec1 32 ssa_344 = load_const (0x00000020 = 0.000000) vec1 32 ssa_345 = load_const (0x00000003 = 0.000000) vec1 32 ssa_346 = load_const (0x00000020 = 0.000000) vec1 32 ssa_347 = load_const (0x00000002 = 0.000000) vec1 32 ssa_348 = load_const (0x00000001 = 0.000000) vec1 32 ssa_349 = load_const (0x00000020 = 0.000000) vec1 32 ssa_350 = load_const (0xffffffff = -nan) vec1 32 ssa_351 = load_const (0xffffffff = -nan) vec1 32 ssa_352 = load_const (0x0000001f = 0.000000) vec1 32 ssa_353 = load_const (0xffffffff = -nan) vec1 32 ssa_354 = load_const (0x0000001f = 0.000000) vec1 32 ssa_355 = load_const (0xffffffff = -nan) vec1 32 ssa_356 = load_const (0x00000000 = 0.000000) vec1 32 ssa_357 = load_const (0x00000000 = 0.000000) vec1 32 ssa_358 = load_const (0x00000000 = 0.000000) vec1 32 ssa_359 = load_const (0xffffffff = -nan) vec1 32 ssa_360 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_361 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_362 = load_const (0x40000000 = 2.000000) vec1 32 ssa_363 = load_const (0x40000000 = 2.000000) vec1 32 ssa_364 = load_const (0xffffff7e = -nan) vec1 32 ssa_365 = load_const (0x00000082 = 0.000000) vec1 32 ssa_366 = load_const (0x00000010 = 0.000000) vec1 32 ssa_367 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_368 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_369 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_370 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_371 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_372 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_373 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_374 = load_const (0x00000000 = 0.000000) vec1 32 ssa_375 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_376 = load_const (0x00000000 = 0.000000) vec1 32 ssa_377 = load_const (0xffffff7e = -nan) vec1 32 ssa_378 = load_const (0x00000082 = 0.000000) vec1 32 ssa_379 = load_const (0x0000ffff = 0.000000) vec1 32 ssa_380 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_381 = load_const (0x00000000 = 0.000000) vec1 32 ssa_382 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_383 = load_const (0x00000000 = 0.000000) vec1 32 ssa_384 = load_const (0xffffff7e = -nan) vec1 32 ssa_385 = load_const (0x00000082 = 0.000000) vec1 32 ssa_386 = load_const (0x0000ffff = 0.000000) vec1 32 ssa_387 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_388 = load_const (0x00000000 = 0.000000) vec1 32 ssa_389 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_390 = load_const (0x00000000 = 0.000000) vec1 32 ssa_391 = load_const (0xffffff7e = -nan) vec1 32 ssa_392 = load_const (0x00000082 = 0.000000) vec1 32 ssa_393 = load_const (0x00000010 = 0.000000) vec1 32 ssa_394 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_395 = load_const (0x00000000 = 0.000000) vec1 32 ssa_396 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_397 = load_const (0x00000000 = 0.000000) vec1 32 ssa_398 = load_const (0x3f000000 = 0.500000) vec1 32 ssa_399 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_400 = load_const (0x40000000 = 2.000000) vec1 32 ssa_401 = load_const (0xbf000000 = -0.500000) vec1 32 ssa_402 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_403 = load_const (0x00000000 = 0.000000) vec1 32 ssa_404 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_405 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_406 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_407 = load_const (0x40000000 = 2.000000) vec1 32 ssa_408 = load_const (0x40000000 = 2.000000) vec1 32 ssa_409 = load_const (0xffffff7e = -nan) vec1 32 ssa_410 = load_const (0x00000082 = 0.000000) vec1 32 
ssa_411 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_412 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_413 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_414 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_415 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_416 = load_const (0x00000000 = 0.000000) vec1 32 ssa_417 = load_const (0x0000001b = 0.000000) vec1 32 ssa_418 = load_const (0x00000020 = 0.000000) vec1 32 ssa_419 = load_const (0x0000001a = 0.000000) vec1 32 ssa_420 = load_const (0x00000020 = 0.000000) vec1 32 ssa_421 = load_const (0x00000019 = 0.000000) vec1 32 ssa_422 = load_const (0x00000020 = 0.000000) vec1 32 ssa_423 = load_const (0x00000018 = 0.000000) vec1 32 ssa_424 = load_const (0x00000020 = 0.000000) vec1 32 ssa_425 = load_const (0x0000000b = 0.000000) vec1 32 ssa_426 = load_const (0x00000010 = 0.000000) vec1 32 ssa_427 = load_const (0x0000000a = 0.000000) vec1 32 ssa_428 = load_const (0x00000010 = 0.000000) vec1 32 ssa_429 = load_const (0x00000013 = 0.000000) vec1 32 ssa_430 = load_const (0x00000020 = 0.000000) vec1 32 ssa_431 = load_const (0x00000012 = 0.000000) vec1 32 ssa_432 = load_const (0x00000020 = 0.000000) vec1 32 ssa_433 = load_const (0x00000011 = 0.000000) vec1 32 ssa_434 = load_const (0x00000020 = 0.000000) vec1 32 ssa_435 = load_const (0x00000010 = 0.000000) vec1 32 ssa_436 = load_const (0x00000020 = 0.000000) vec1 32 ssa_437 = load_const (0x0000000f = 0.000000) vec1 32 ssa_438 = load_const (0x00000020 = 0.000000) vec1 32 ssa_439 = load_const (0x0000000e = 0.000000) vec1 32 ssa_440 = load_const (0x00000020 = 0.000000) vec1 32 ssa_441 = load_const (0x0000000d = 0.000000) vec1 32 ssa_442 = load_const (0x00000020 = 0.000000) vec1 32 ssa_443 = load_const (0x0000000c = 0.000000) vec1 32 ssa_444 = load_const (0x00000020 = 0.000000) vec1 32 ssa_445 = load_const (0x0000000b = 0.000000) vec1 32 ssa_446 = load_const (0x00000020 = 0.000000) vec1 32 ssa_447 = load_const (0x0000000a = 0.000000) vec1 32 ssa_448 = load_const (0x00000020 = 0.000000) vec1 32 ssa_449 = load_const (0x00000009 = 0.000000) vec1 32 ssa_450 = load_const (0x00000020 = 0.000000) vec1 32 ssa_451 = load_const (0x00000008 = 0.000000) vec1 32 ssa_452 = load_const (0x00000020 = 0.000000) vec1 32 ssa_453 = load_const (0x00000007 = 0.000000) vec1 32 ssa_454 = load_const (0x00000020 = 0.000000) vec1 32 ssa_455 = load_const (0x00000006 = 0.000000) vec1 32 ssa_456 = load_const (0x00000020 = 0.000000) vec1 32 ssa_457 = load_const (0x00000005 = 0.000000) vec1 32 ssa_458 = load_const (0x00000020 = 0.000000) vec1 32 ssa_459 = load_const (0x00000004 = 0.000000) vec1 32 ssa_460 = load_const (0x00000020 = 0.000000) vec1 32 ssa_461 = load_const (0x00000003 = 0.000000) vec1 32 ssa_462 = load_const (0x00000020 = 0.000000) vec1 32 ssa_463 = load_const (0x00000002 = 0.000000) vec1 32 ssa_464 = load_const (0x00000001 = 0.000000) vec1 32 ssa_465 = load_const (0x00000020 = 0.000000) vec1 32 ssa_466 = load_const (0x00000000 = 0.000000) vec1 32 ssa_467 = load_const (0x00000000 = 0.000000) vec1 32 ssa_468 = load_const (0x0000000f = 0.000000) vec1 32 ssa_469 = load_const (0x0000000f = 0.000000) vec1 32 ssa_470 = load_const (0x00000018 = 0.000000) vec1 32 ssa_471 = load_const (0x00000014 = 0.000000) vec1 32 ssa_472 = load_const (0x000003ff = 0.000000) vec1 32 ssa_473 = load_const (0x000003ff = 0.000000) vec1 32 ssa_474 = load_const (0x0000000a = 0.000000) vec1 32 ssa_475 = load_const (0xffffffff = -nan) vec1 32 ssa_476 = load_const (0x00000000 = 0.000000) vec1 32 ssa_477 = load_const (0x00000000 = 0.000000) vec1 32 
ssa_478 = load_const (0x00000000 = 0.000000) vec1 32 ssa_479 = load_const (0x00000001 = 0.000000) vec1 32 ssa_480 = load_const (0x0000001f = 0.000000) vec1 32 ssa_481 = load_const (0xffffffff = -nan) vec1 32 ssa_482 = load_const (0xffffffff = -nan) vec1 32 ssa_483 = load_const (0x0000001f = 0.000000) vec1 32 ssa_484 = load_const (0xffffffff = -nan) vec1 32 ssa_485 = load_const (0x0000001f = 0.000000) vec1 32 ssa_486 = load_const (0xffffffff = -nan) vec1 32 ssa_487 = load_const (0x00000000 = 0.000000) vec1 32 ssa_488 = load_const (0xffffffff = -nan) vec1 32 ssa_489 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_490 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_491 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_492 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_493 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_494 = load_const (0x00000000 = 0.000000) vec1 32 ssa_495 = load_const (0x3f800000 = 1.000000) vec1 32 ssa_496 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_497 = load_const (0xbf800000 = -1.000000) vec1 32 ssa_498 = load_const (0x40000000 = 2.000000) vec1 32 ssa_499 = load_const (0x40000000 = 2.000000) vec1 32 ssa_500 = load_const (0x00000000 = 0.000000) vec1 32 ssa_501 = load_const (0x00000000 = 0.000000) vec1 32 ssa_502 = load_const (0x0000003f = 0.000000) vec1 32 ssa_503 = load_const (0x0000003f = 0.000000) vec1 32 ssa_504 = load_const (0x0000003f = 0.000000) vec1 32 ssa_505 = load_const (0x00000000 = 0.000000) vec1 32 ssa_506 = load_const (0xffffffff = -nan) vec1 32 ssa_507 = load_const (0x00000000 = 0.000000) vec1 32 ssa_508 = load_const (0x00000001 = 0.000000) vec1 32 ssa_509 = load_const (0x00000004 = 0.000000) vec1 32 ssa_510 = load_const (0x00000005 = 0.000000) vec1 32 ssa_511 = load_const (0x00000001 = 0.000000) vec1 32 ssa_512 = load_const (0x00000002 = 0.000000) vec1 32 ssa_513 = load_const (0x00000003 = 0.000000) vec1 32 ssa_514 = load_const (0x00000001 = 0.000000) vec1 32 ssa_515 = load_const (0x00000010 = 0.000000) vec1 32 ssa_516 = load_const (0x00000001 = 0.000000) vec1 32 ssa_517 = load_const (0x00000004 = 0.000000) vec1 32 ssa_518 = load_const (0x00000005 = 0.000000) vec1 32 ssa_519 = load_const (0x00000006 = 0.000000) vec1 32 ssa_520 = load_const (0x00000007 = 0.000000) vec1 32 ssa_521 = load_const (0x0000000e = 0.000000) vec1 32 ssa_522 = load_const (0x00000010 = 0.000000) vec1 32 ssa_523 = load_const (0x00000002 = 0.000000) vec1 32 ssa_524 = load_const (0x00000002 = 0.000000) vec1 32 ssa_525 = load_const (0x00000003 = 0.000000) vec1 32 ssa_526 = load_const (0x00000003 = 0.000000) vec1 32 ssa_527 = deref_var &registers (push_const RootConstants) vec1 32 ssa_528 = deref_struct &ssa_527->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_529 = intrinsic load_deref (ssa_528) (access=0) vec1 32 ssa_530 = iadd ssa_529, ssa_526 vec4 32 ssa_531 = intrinsic vulkan_resource_index (ssa_530) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_532 = deref_var &registers (push_const RootConstants) vec1 32 ssa_533 = deref_struct &ssa_532->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_534 = intrinsic load_deref (ssa_533) (access=0) vec1 32 ssa_535 = iadd ssa_534, ssa_525 vec4 32 ssa_536 = intrinsic vulkan_resource_index (ssa_535) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_537 = deref_var &registers (push_const RootConstants) vec1 32 ssa_538 = deref_struct &ssa_537->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_539 = intrinsic load_deref (ssa_538) (access=0) vec1 32 ssa_540 = iadd ssa_539, ssa_524 vec4 32 ssa_541 =
intrinsic vulkan_resource_index (ssa_540) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_542 = deref_var &registers (push_const RootConstants) vec1 32 ssa_543 = deref_struct &ssa_542->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_544 = intrinsic load_deref (ssa_543) (access=0) vec1 32 ssa_545 = iadd ssa_544, ssa_523 vec4 32 ssa_546 = intrinsic vulkan_resource_index (ssa_545) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_547 = deref_var &registers (push_const RootConstants) vec1 32 ssa_548 = deref_struct &ssa_547->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_549 = intrinsic load_deref (ssa_548) (access=0) vec1 32 ssa_550 = iadd ssa_549, ssa_522 vec1 32 ssa_551 = deref_var &registers (push_const RootConstants) vec1 32 ssa_552 = deref_struct &ssa_551->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_553 = intrinsic load_deref (ssa_552) (access=0) vec1 32 ssa_554 = iadd ssa_553, ssa_521 vec1 32 ssa_555 = deref_var &@1 (uniform texture2DArray[]) vec1 32 ssa_556 = deref_array &(*ssa_555)[ssa_554] (uniform texture2DArray) /* &@1[ssa_554] */ vec1 32 ssa_557 = deref_var &registers (push_const RootConstants) vec1 32 ssa_558 = deref_struct &ssa_557->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_559 = intrinsic load_deref (ssa_558) (access=0) vec1 32 ssa_560 = iadd ssa_559, ssa_520 vec1 32 ssa_561 = deref_var &registers (push_const RootConstants) vec1 32 ssa_562 = deref_struct &ssa_561->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_563 = intrinsic load_deref (ssa_562) (access=0) vec1 32 ssa_564 = iadd ssa_563, ssa_519 vec1 32 ssa_565 = deref_var &registers (push_const RootConstants) vec1 32 ssa_566 = deref_struct &ssa_565->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_567 = intrinsic load_deref (ssa_566) (access=0) vec1 32 ssa_568 = iadd ssa_567, ssa_518 vec1 32 ssa_569 = deref_var &registers (push_const RootConstants) vec1 32 ssa_570 = deref_struct &ssa_569->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_571 = intrinsic load_deref (ssa_570) (access=0) vec1 32 ssa_572 = iadd ssa_571, ssa_517 vec1 32 ssa_573 = deref_var &registers (push_const RootConstants) vec1 32 ssa_574 = deref_struct &ssa_573->field21 (push_const uint) /* &registers.field21 */ vec1 32 ssa_575 = intrinsic load_deref (ssa_574) (access=0) vec1 32 ssa_576 = iadd ssa_575, ssa_516 vec1 32 ssa_577 = deref_var &registers (push_const RootConstants) vec1 32 ssa_578 = deref_struct &ssa_577->field20 (push_const uint) /* &registers.field20 */ vec1 32 ssa_579 = intrinsic load_deref (ssa_578) (access=0) vec1 32 ssa_580 = iadd ssa_579, ssa_515 vec1 32 ssa_581 = deref_var &registers (push_const RootConstants) vec1 32 ssa_582 = deref_struct &ssa_581->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_583 = intrinsic load_deref (ssa_582) (access=0) vec1 32 ssa_584 = iadd ssa_583, ssa_514 vec1 32 ssa_585 = deref_var &registers (push_const RootConstants) vec1 32 ssa_586 = deref_struct &ssa_585->field18 (push_const uint) /* &registers.field18 */ vec1 32 ssa_587 = intrinsic load_deref (ssa_586) (access=0) vec1 32 ssa_588 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_589 = deref_array &(*ssa_588)[ssa_587] (uniform texture2D) /* &@0[ssa_587] */ vec1 32 ssa_590 = deref_var &registers (push_const RootConstants) vec1 32 ssa_591 = deref_struct &ssa_590->field25 (push_const uint) /* &registers.field25 */ vec1 32 ssa_592 = intrinsic load_deref (ssa_591) (access=0) vec1 32 ssa_593 = iadd ssa_592, ssa_513 vec1 32 ssa_594 = deref_var &registers (push_const RootConstants) vec1 32 ssa_595
= deref_struct &ssa_594->field25 (push_const uint) /* &registers.field25 */ vec1 32 ssa_596 = intrinsic load_deref (ssa_595) (access=0) vec1 32 ssa_597 = iadd ssa_596, ssa_512 vec1 32 ssa_598 = deref_var &registers (push_const RootConstants) vec1 32 ssa_599 = deref_struct &ssa_598->field25 (push_const uint) /* &registers.field25 */ vec1 32 ssa_600 = intrinsic load_deref (ssa_599) (access=0) vec1 32 ssa_601 = iadd ssa_600, ssa_511 vec1 32 ssa_602 = deref_var &registers (push_const RootConstants) vec1 32 ssa_603 = deref_struct &ssa_602->field25 (push_const uint) /* &registers.field25 */ vec1 32 ssa_604 = intrinsic load_deref (ssa_603) (access=0) vec1 32 ssa_605 = deref_var &registers (push_const RootConstants) vec1 32 ssa_606 = deref_struct &ssa_605->field24 (push_const uint) /* &registers.field24 */ vec1 32 ssa_607 = intrinsic load_deref (ssa_606) (access=0) vec1 32 ssa_608 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_609 = deref_array &(*ssa_608)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec1 32 ssa_610 = deref_var &registers (push_const RootConstants) vec1 32 ssa_611 = deref_struct &ssa_610->field17 (push_const uint) /* &registers.field17 */ vec1 32 ssa_612 = intrinsic load_deref (ssa_611) (access=0) vec1 32 ssa_613 = iadd ssa_612, ssa_510 vec4 32 ssa_614 = intrinsic vulkan_resource_index (ssa_613) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_615 = deref_var &registers (push_const RootConstants) vec1 32 ssa_616 = deref_struct &ssa_615->field14 (push_const uint) /* &registers.field14 */ vec1 32 ssa_617 = intrinsic load_deref (ssa_616) (access=0) vec4 32 ssa_618 = intrinsic vulkan_resource_index (ssa_617) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_619 = deref_var &registers (push_const RootConstants) vec1 32 ssa_620 = deref_struct &ssa_619->field17 (push_const uint) /* &registers.field17 */ vec1 32 ssa_621 = intrinsic load_deref (ssa_620) (access=0) vec1 32 ssa_622 = iadd ssa_621, ssa_509 vec4 32 ssa_623 = intrinsic vulkan_resource_index (ssa_622) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_624 = deref_var &registers (push_const RootConstants) vec1 32 ssa_625 = deref_struct &ssa_624->field13 (push_const uint) /* &registers.field13 */ vec1 32 ssa_626 = intrinsic load_deref (ssa_625) (access=0) vec1 32 ssa_627 = iadd ssa_626, ssa_508 vec4 32 ssa_628 = intrinsic vulkan_resource_index (ssa_627) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_629 = deref_var &registers (push_const RootConstants) vec1 32 ssa_630 = deref_struct &ssa_629->field13 (push_const uint) /* &registers.field13 */ vec1 32 ssa_631 = intrinsic load_deref (ssa_630) (access=0) vec4 32 ssa_632 = intrinsic vulkan_resource_index (ssa_631) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_633 = deref_var &registers (push_const RootConstants) vec1 32 ssa_634 = deref_struct &ssa_633->field16 (push_const uint) /* &registers.field16 */ vec1 32 ssa_635 = intrinsic load_deref (ssa_634) (access=0) vec4 32 ssa_636 = intrinsic vulkan_resource_index (ssa_635) (desc_set=1, binding=2, desc_type=SSBO /*7*/) vec1 32 ssa_637 = deref_var &SV_IsFrontFace (system bool) vec1 1 ssa_638 = intrinsic load_deref (ssa_637) (access=0) vec1 32 ssa_639 = bcsel ssa_638, ssa_506, ssa_507 vec1 1 ssa_640 = ieq ssa_639, ssa_505 vec1 32 ssa_641 = deref_var &TEXCOORD_6 (shader_in vec3) vec3 32 ssa_642 = intrinsic load_deref (ssa_641) (access=0) vec1 32 ssa_643 = deref_var &TEXCOORD_6 (shader_in vec3) vec3 32 ssa_644 = intrinsic load_deref (ssa_643) (access=0) vec1 32 ssa_645 = deref_var &TEXCOORD_6 (shader_in vec3) vec3 32 ssa_646 = intrinsic load_deref
(ssa_645) (access=0) vec1 32 ssa_647 = deref_var &TEXCOORD_5 (shader_in vec4) vec4 32 ssa_648 = intrinsic load_deref (ssa_647) (access=0) vec1 32 ssa_649 = deref_var &TEXCOORD_5 (shader_in vec4) vec4 32 ssa_650 = intrinsic load_deref (ssa_649) (access=0) vec1 32 ssa_651 = deref_var &TEXCOORD_5 (shader_in vec4) vec4 32 ssa_652 = intrinsic load_deref (ssa_651) (access=0) vec1 32 ssa_653 = deref_var &TEXCOORD_5 (shader_in vec4) vec4 32 ssa_654 = intrinsic load_deref (ssa_653) (access=0) vec1 32 ssa_655 = deref_var &TEXCOORD_4 (shader_in vec4) vec4 32 ssa_656 = intrinsic load_deref (ssa_655) (access=0) vec1 32 ssa_657 = deref_var &TEXCOORD_4 (shader_in vec4) vec4 32 ssa_658 = intrinsic load_deref (ssa_657) (access=0) vec1 32 ssa_659 = deref_var &TEXCOORD_4 (shader_in vec4) vec4 32 ssa_660 = intrinsic load_deref (ssa_659) (access=0) vec1 32 ssa_661 = deref_var &TEXCOORD_4 (shader_in vec4) vec4 32 ssa_662 = intrinsic load_deref (ssa_661) (access=0) vec1 32 ssa_663 = deref_var &TEXCOORD_3 (shader_in vec4) vec4 32 ssa_664 = intrinsic load_deref (ssa_663) (access=0) vec1 32 ssa_665 = deref_var &TEXCOORD_3 (shader_in vec4) vec4 32 ssa_666 = intrinsic load_deref (ssa_665) (access=0) vec1 32 ssa_667 = deref_var &TEXCOORD_2 (shader_in vec4) vec4 32 ssa_668 = intrinsic load_deref (ssa_667) (access=0) vec1 32 ssa_669 = deref_var &TEXCOORD_2 (shader_in vec4) vec4 32 ssa_670 = intrinsic load_deref (ssa_669) (access=0) vec1 32 ssa_671 = deref_var &TEXCOORD_2 (shader_in vec4) vec4 32 ssa_672 = intrinsic load_deref (ssa_671) (access=0) vec1 32 ssa_673 = deref_var &TEXCOORD_1 (shader_in vec4) vec4 32 ssa_674 = intrinsic load_deref (ssa_673) (access=0) vec1 32 ssa_675 = deref_var &TEXCOORD_1 (shader_in vec4) vec4 32 ssa_676 = intrinsic load_deref (ssa_675) (access=0) vec1 32 ssa_677 = deref_var &TEXCOORD_1 (shader_in vec4) vec4 32 ssa_678 = intrinsic load_deref (ssa_677) (access=0) vec1 32 ssa_679 = deref_var &TEXCOORD_1 (shader_in vec4) vec4 32 ssa_680 = intrinsic load_deref (ssa_679) (access=0) vec1 32 ssa_681 = deref_var &TEXCOORD (shader_in vec4) vec4 32 ssa_682 = intrinsic load_deref (ssa_681) (access=0) vec1 32 ssa_683 = deref_var &TEXCOORD (shader_in vec4) vec4 32 ssa_684 = intrinsic load_deref (ssa_683) (access=0) vec1 32 ssa_685 = deref_var &TEXCOORD (shader_in vec4) vec4 32 ssa_686 = intrinsic load_deref (ssa_685) (access=0) vec1 32 ssa_687 = deref_var &TEXCOORD (shader_in vec4) vec4 32 ssa_688 = intrinsic load_deref (ssa_687) (access=0) vec1 32 ssa_689 = deref_var &SV_Position (system vec4) vec4 32 ssa_690 = intrinsic load_deref (ssa_689) (access=0) vec1 32 ssa_691 = deref_var &SV_Position (system vec4) vec4 32 ssa_692 = intrinsic load_deref (ssa_691) (access=0) vec1 32 ssa_693 = f2u32 ssa_690.x vec1 32 ssa_694 = f2u32 ssa_692.y vec4 32 ssa_695 = intrinsic load_vulkan_descriptor (ssa_636) (desc_type=SSBO /*7*/) vec4 32 ssa_696 = deref_cast (BindlessCBV *)ssa_695 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_697 = deref_struct &ssa_696->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_695)->field0 */ vec1 32 ssa_698 = load_const (0x00000000 = 0.000000) vec4 32 ssa_699 = deref_array &(*ssa_697)[0] (ssbo vec4) /* &((BindlessCBV *)ssa_695)->field0[0] */ vec4 32 ssa_700 = intrinsic load_deref (ssa_699) (access=16) vec1 32 ssa_701 = iand ssa_693, ssa_504 vec1 32 ssa_702 = iand ssa_694, ssa_503 vec4 32 ssa_703 = intrinsic load_vulkan_descriptor (ssa_632) (desc_type=SSBO /*7*/) vec4 32 ssa_704 = deref_cast (BindlessCBV *)ssa_703 (ssbo BindlessCBV) /* ptr_stride=0, 
align_mul=4, align_offset=0 */ vec4 32 ssa_705 = deref_struct &ssa_704->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_703)->field0 */ vec1 32 ssa_706 = load_const (0x0000001c = 0.000000) vec4 32 ssa_707 = deref_array &(*ssa_705)[28] (ssbo vec4) /* &((BindlessCBV *)ssa_703)->field0[28] */ vec4 32 ssa_708 = intrinsic load_deref (ssa_707) (access=16) vec1 32 ssa_709 = iand ssa_708.y, ssa_502 vec3 32 ssa_710 = vec3 ssa_701, ssa_702, ssa_709 vec4 32 ssa_712 = (float32)txf ssa_556 (texture_deref), ssa_710 (coord), ssa_501 (lod) vec1 32 ssa_713 = fmul ssa_712.x, ssa_700.z vec1 32 ssa_714 = fadd ssa_713, ssa_700.w vec1 1 ssa_715 = flt! ssa_714, ssa_500 /* succs: block_1 block_2 */ if ssa_715 { block block_1: /* preds: block_0 */ intrinsic demote () () /* succs: block_3 */ } else { block block_2: /* preds: block_0 */ /* succs: block_3 */ } block block_3: /* preds: block_1 block_2 */ vec4 32 ssa_716 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_717 = deref_cast (BindlessCBV *)ssa_716 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_718 = deref_struct &ssa_717->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_716)->field0 */ vec1 32 ssa_719 = load_const (0x00000001 = 0.000000) vec4 32 ssa_720 = deref_array &(*ssa_718)[1] (ssbo vec4) /* &((BindlessCBV *)ssa_716)->field0[1] */ vec4 32 ssa_721 = intrinsic load_deref (ssa_720) (access=16) vec1 32 ssa_722 = fmul ssa_721.x, ssa_682.x vec1 32 ssa_723 = fmul ssa_721.y, ssa_684.y vec4 32 ssa_724 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_725 = deref_cast (BindlessCBV *)ssa_724 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_726 = deref_struct &ssa_725->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_724)->field0 */ vec1 32 ssa_727 = load_const (0x00000002 = 0.000000) vec4 32 ssa_728 = deref_array &(*ssa_726)[2] (ssbo vec4) /* &((BindlessCBV *)ssa_724)->field0[2] */ vec4 32 ssa_729 = intrinsic load_deref (ssa_728) (access=16) vec1 32 ssa_730 = fadd ssa_722, ssa_729.x vec1 32 ssa_731 = fadd ssa_723, ssa_729.y vec2 32 ssa_734 = vec2 ssa_730, ssa_731 vec4 32 ssa_737 = (float32)tex ssa_589 (texture_deref), ssa_609 (sampler_deref), ssa_734 (coord) vec1 32 ssa_738 = fmul ssa_737.x, ssa_499 vec1 32 ssa_739 = fmul ssa_737.y, ssa_498 vec1 32 ssa_740 = fadd ssa_738, ssa_497 vec1 32 ssa_741 = fadd ssa_739, ssa_496 vec2 32 ssa_742 = vec2 ssa_740, ssa_741 vec2 32 ssa_743 = vec2 ssa_740, ssa_741 vec1 32 ssa_744 = fdot2 ssa_742, ssa_743 vec1 32 ssa_745 = fsub ssa_495, ssa_744 vec1 32 ssa_746 = fmax! 
ssa_745, ssa_494 vec1 32 ssa_747 = fsqrt ssa_746 vec4 32 ssa_748 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_749 = deref_cast (BindlessCBV *)ssa_748 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_750 = deref_struct &ssa_749->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_748)->field0 */ vec1 32 ssa_751 = load_const (0x00000000 = 0.000000) vec4 32 ssa_752 = deref_array &(*ssa_750)[0] (ssbo vec4) /* &((BindlessCBV *)ssa_748)->field0[0] */ vec4 32 ssa_753 = intrinsic load_deref (ssa_752) (access=16) vec1 32 ssa_754 = fadd ssa_747, ssa_493 vec1 32 ssa_755 = fmul ssa_753.x, ssa_740 vec1 32 ssa_756 = fmul ssa_753.x, ssa_741 vec1 32 ssa_757 = fmul ssa_753.x, ssa_754 vec1 32 ssa_758 = fadd ssa_757, ssa_492 vec3 32 ssa_759 = vec3 ssa_755, ssa_756, ssa_758 vec3 32 ssa_760 = vec3 ssa_755, ssa_756, ssa_758 vec1 32 ssa_761 = fdot3 ssa_759, ssa_760 vec1 32 ssa_762 = frsq ssa_761 vec1 32 ssa_763 = fmul ssa_755, ssa_762 vec1 32 ssa_764 = fmul ssa_756, ssa_762 vec1 32 ssa_765 = fmul ssa_758, ssa_762 vec4 32 ssa_766 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_767 = deref_cast (BindlessCBV *)ssa_766 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_768 = deref_struct &ssa_767->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_766)->field0 */ vec1 32 ssa_769 = load_const (0x00000003 = 0.000000) vec4 32 ssa_770 = deref_array &(*ssa_768)[3] (ssbo vec4) /* &((BindlessCBV *)ssa_766)->field0[3] */ vec4 32 ssa_771 = intrinsic load_deref (ssa_770) (access=16) vec1 32 ssa_772 = f2u32 ssa_771.x vec4 32 ssa_773 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_774 = deref_cast (BindlessCBV *)ssa_773 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_775 = deref_struct &ssa_774->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_773)->field0 */ vec1 32 ssa_776 = load_const (0x00000004 = 0.000000) vec4 32 ssa_777 = deref_array &(*ssa_775)[4] (ssbo vec4) /* &((BindlessCBV *)ssa_773)->field0[4] */ vec4 32 ssa_778 = intrinsic load_deref (ssa_777) (access=16) vec4 32 ssa_779 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_780 = deref_cast (BindlessCBV *)ssa_779 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_781 = deref_struct &ssa_780->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_779)->field0 */ vec1 32 ssa_782 = load_const (0x00000005 = 0.000000) vec4 32 ssa_783 = deref_array &(*ssa_781)[5] (ssbo vec4) /* &((BindlessCBV *)ssa_779)->field0[5] */ vec4 32 ssa_784 = intrinsic load_deref (ssa_783) (access=16) vec4 32 ssa_785 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_786 = deref_cast (BindlessCBV *)ssa_785 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_787 = deref_struct &ssa_786->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_785)->field0 */ vec1 32 ssa_788 = load_const (0x00000006 = 0.000000) vec4 32 ssa_789 = deref_array &(*ssa_787)[6] (ssbo vec4) /* &((BindlessCBV *)ssa_785)->field0[6] */ vec4 32 ssa_790 = intrinsic load_deref (ssa_789) (access=16) vec4 32 ssa_791 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_792 = deref_cast (BindlessCBV *)ssa_791 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_793 = deref_struct &ssa_792->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_791)->field0 */ vec1 32 ssa_794 = load_const (0x00000007 = 
0.000000) vec4 32 ssa_795 = deref_array &(*ssa_793)[7] (ssbo vec4) /* &((BindlessCBV *)ssa_791)->field0[7] */ vec4 32 ssa_796 = intrinsic load_deref (ssa_795) (access=16) vec4 32 ssa_797 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_798 = deref_cast (BindlessCBV *)ssa_797 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_799 = deref_struct &ssa_798->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_797)->field0 */ vec1 32 ssa_800 = load_const (0x00000008 = 0.000000) vec4 32 ssa_801 = deref_array &(*ssa_799)[8] (ssbo vec4) /* &((BindlessCBV *)ssa_797)->field0[8] */ vec4 32 ssa_802 = intrinsic load_deref (ssa_801) (access=16) vec1 32 ssa_803 = f2u32 ssa_802.x vec1 32 ssa_804 = f2u32 ssa_802.y vec4 32 ssa_805 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_806 = deref_cast (BindlessCBV *)ssa_805 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_807 = deref_struct &ssa_806->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_805)->field0 */ vec1 32 ssa_808 = load_const (0x00000009 = 0.000000) vec4 32 ssa_809 = deref_array &(*ssa_807)[9] (ssbo vec4) /* &((BindlessCBV *)ssa_805)->field0[9] */ vec4 32 ssa_810 = intrinsic load_deref (ssa_809) (access=16) vec1 32 ssa_811 = fddx_coarse ssa_682.x vec1 32 ssa_812 = fddx_coarse ssa_684.y vec1 32 ssa_813 = fddy_coarse ssa_682.x vec1 32 ssa_814 = fddy_coarse ssa_684.y vec4 32 ssa_815 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_816 = deref_cast (BindlessCBV *)ssa_815 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_817 = deref_struct &ssa_816->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_815)->field0 */ vec1 32 ssa_818 = load_const (0x0000000a = 0.000000) vec4 32 ssa_819 = deref_array &(*ssa_817)[10] (ssbo vec4) /* &((BindlessCBV *)ssa_815)->field0[10] */ vec4 32 ssa_820 = intrinsic load_deref (ssa_819) (access=16) vec4 32 ssa_821 = intrinsic load_vulkan_descriptor (ssa_618) (desc_type=SSBO /*7*/) vec4 32 ssa_822 = deref_cast (BindlessCBV *)ssa_821 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_823 = deref_struct &ssa_822->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_821)->field0 */ vec1 32 ssa_824 = load_const (0x0000000b = 0.000000) vec4 32 ssa_825 = deref_array &(*ssa_823)[11] (ssbo vec4) /* &((BindlessCBV *)ssa_821)->field0[11] */ vec4 32 ssa_826 = intrinsic load_deref (ssa_825) (access=16) vec1 32 ssa_827 = ffract ssa_682.x vec1 32 ssa_828 = ffract ssa_684.y vec1 32 ssa_829 = fmul ssa_827, ssa_778.x vec1 32 ssa_830 = fsub ssa_491, ssa_684.y vec1 32 ssa_831 = ffract ssa_830 vec1 32 ssa_832 = fmul ssa_827, ssa_784.x vec1 32 ssa_833 = fmul ssa_831, ssa_784.y vec1 32 ssa_834 = f2i32 ssa_832 vec1 32 ssa_835 = f2i32 ssa_833 vec1 32 ssa_836 = fceil ssa_784.x vec1 32 ssa_837 = f2i32 ssa_836 vec1 32 ssa_838 = imul ssa_835, ssa_837 vec1 32 ssa_839 = iadd ssa_838, ssa_834 vec4 32 ssa_840 = intrinsic load_vulkan_descriptor (ssa_546) (desc_type=SSBO /*7*/) vec4 32 ssa_841 = deref_cast (SSBO *)ssa_840 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_842 = deref_struct &ssa_841->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_840)->field0 */ vec4 32 ssa_843 = deref_array &(*ssa_842)[ssa_839] (ssbo uvec2) /* &((SSBO *)ssa_840)->field0[ssa_839] */ vec2 32 ssa_844 = intrinsic load_deref (ssa_843) (access=16) vec1 32 ssa_845 = fceil ssa_784.y vec1 32 ssa_846 = f2u32 ssa_836 vec1 32 ssa_847 = f2u32 ssa_845 vec1 32 ssa_848 = fmul 
ssa_827, ssa_784.z vec1 32 ssa_849 = fmul ssa_831, ssa_784.w vec1 32 ssa_850 = f2i32 ssa_848 vec1 32 ssa_851 = f2i32 ssa_849 vec1 32 ssa_852 = fceil ssa_784.z vec1 32 ssa_853 = f2i32 ssa_852 vec1 32 ssa_854 = imul ssa_853, ssa_851 vec1 32 ssa_855 = imul ssa_847, ssa_846 vec1 32 ssa_856 = iadd ssa_855, ssa_850 vec1 32 ssa_857 = iadd ssa_856, ssa_854 vec4 32 ssa_858 = intrinsic load_vulkan_descriptor (ssa_546) (desc_type=SSBO /*7*/) vec4 32 ssa_859 = deref_cast (SSBO *)ssa_858 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_860 = deref_struct &ssa_859->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_858)->field0 */ vec4 32 ssa_861 = deref_array &(*ssa_860)[ssa_857] (ssbo uvec2) /* &((SSBO *)ssa_858)->field0[ssa_857] */ vec2 32 ssa_862 = intrinsic load_deref (ssa_861) (access=16) vec1 32 ssa_863 = ior ssa_862.y, ssa_844.y vec1 32 ssa_864 = iand ssa_863, ssa_810.x vec1 32 ssa_865 = intrinsic reduce (ssa_864) (reduction_op=ior /*300*/, cluster_size=0) vec1 32 ssa_866 = fdiv ssa_827, ssa_790.x vec1 32 ssa_867 = fdiv ssa_831, ssa_790.x vec1 32 ssa_868 = fdiv ssa_490, ssa_796.x vec1 32 ssa_869 = fdiv ssa_489, ssa_796.y vec1 32 ssa_870 = fmul ssa_868, ssa_790.x vec1 32 ssa_871 = fmul ssa_869, ssa_790.x vec1 32 ssa_872 = iadd ssa_865, ssa_488 vec1 32 ssa_873 = iand ssa_872, ssa_865 vec1 1 ssa_874 = ieq ssa_873, ssa_487 vec1 32 ssa_875 = deref_var &phi@58 (function_temp uint) intrinsic store_deref (ssa_875, ssa_865) (wrmask=x /*1*/, access=0) vec1 32 ssa_876 = deref_var &phi@57 (function_temp float) intrinsic store_deref (ssa_876, ssa_21) (wrmask=x /*1*/, access=0) vec1 32 ssa_877 = deref_var &phi@56 (function_temp float) intrinsic store_deref (ssa_877, ssa_22) (wrmask=x /*1*/, access=0) vec1 32 ssa_878 = deref_var &phi@55 (function_temp float) intrinsic store_deref (ssa_878, ssa_23) (wrmask=x /*1*/, access=0) vec1 32 ssa_879 = deref_var &phi@54 (function_temp float) intrinsic store_deref (ssa_879, ssa_24) (wrmask=x /*1*/, access=0) vec1 32 ssa_880 = deref_var &phi@53 (function_temp float) intrinsic store_deref (ssa_880, ssa_25) (wrmask=x /*1*/, access=0) vec1 32 ssa_881 = deref_var &phi@52 (function_temp float) intrinsic store_deref (ssa_881, ssa_26) (wrmask=x /*1*/, access=0) vec1 32 ssa_882 = deref_var &phi@51 (function_temp float) intrinsic store_deref (ssa_882, ssa_27) (wrmask=x /*1*/, access=0) vec1 32 ssa_883 = deref_var &phi@50 (function_temp float) intrinsic store_deref (ssa_883, ssa_28) (wrmask=x /*1*/, access=0) vec1 32 ssa_884 = deref_var &phi@49 (function_temp float) intrinsic store_deref (ssa_884, ssa_29) (wrmask=x /*1*/, access=0) vec1 32 ssa_885 = deref_var &phi@48 (function_temp float) intrinsic store_deref (ssa_885, ssa_30) (wrmask=x /*1*/, access=0) vec1 32 ssa_886 = deref_var &phi@47 (function_temp float) intrinsic store_deref (ssa_886, ssa_31) (wrmask=x /*1*/, access=0) vec1 32 ssa_887 = deref_var &phi@46 (function_temp float) intrinsic store_deref (ssa_887, ssa_32) (wrmask=x /*1*/, access=0) /* succs: block_4 block_5 */ if ssa_874 { block block_4: /* preds: block_3 */ /* succs: block_17 */ } else { block block_5: /* preds: block_3 */ vec1 32 ssa_888 = deref_var &phi@21 (function_temp uint) intrinsic store_deref (ssa_888, ssa_865) (wrmask=x /*1*/, access=0) vec1 32 ssa_889 = deref_var &phi@20 (function_temp float) intrinsic store_deref (ssa_889, ssa_33) (wrmask=x /*1*/, access=0) vec1 32 ssa_890 = deref_var &phi@19 (function_temp float) intrinsic store_deref (ssa_890, ssa_34) (wrmask=x /*1*/, access=0) vec1 32 ssa_891 = deref_var &phi@18 (function_temp float) 
intrinsic store_deref (ssa_891, ssa_35) (wrmask=x /*1*/, access=0) vec1 32 ssa_892 = deref_var &phi@17 (function_temp float) intrinsic store_deref (ssa_892, ssa_36) (wrmask=x /*1*/, access=0) vec1 32 ssa_893 = deref_var &phi@16 (function_temp float) intrinsic store_deref (ssa_893, ssa_37) (wrmask=x /*1*/, access=0) vec1 32 ssa_894 = deref_var &phi@15 (function_temp float) intrinsic store_deref (ssa_894, ssa_38) (wrmask=x /*1*/, access=0) vec1 32 ssa_895 = deref_var &phi@14 (function_temp float) intrinsic store_deref (ssa_895, ssa_39) (wrmask=x /*1*/, access=0) vec1 32 ssa_896 = deref_var &phi@13 (function_temp float) intrinsic store_deref (ssa_896, ssa_40) (wrmask=x /*1*/, access=0) vec1 32 ssa_897 = deref_var &phi@12 (function_temp float) intrinsic store_deref (ssa_897, ssa_41) (wrmask=x /*1*/, access=0) vec1 32 ssa_898 = deref_var &phi@11 (function_temp float) intrinsic store_deref (ssa_898, ssa_42) (wrmask=x /*1*/, access=0) vec1 32 ssa_899 = deref_var &phi@10 (function_temp float) intrinsic store_deref (ssa_899, ssa_43) (wrmask=x /*1*/, access=0) vec1 32 ssa_900 = deref_var &phi (function_temp float) intrinsic store_deref (ssa_900, ssa_44) (wrmask=x /*1*/, access=0) vec1 1 ssa_901 = load_const (false) vec1 32 ssa_902 = deref_var &loop_break (function_temp bool) intrinsic store_deref (ssa_902, ssa_901) (wrmask=x /*1*/, access=0) /* succs: block_6 */ loop { block block_6: /* preds: block_5 block_15 */ vec1 1 ssa_903 = load_const (false) vec1 32 ssa_904 = deref_var &loop_continue (function_temp bool) intrinsic store_deref (ssa_904, ssa_903) (wrmask=x /*1*/, access=0) vec1 32 ssa_905 = deref_var &phi (function_temp float) vec1 32 ssa_906 = intrinsic load_deref (ssa_905) (access=0) vec1 32 ssa_907 = deref_var &phi@10 (function_temp float) vec1 32 ssa_908 = intrinsic load_deref (ssa_907) (access=0) vec1 32 ssa_909 = deref_var &phi@11 (function_temp float) vec1 32 ssa_910 = intrinsic load_deref (ssa_909) (access=0) vec1 32 ssa_911 = deref_var &phi@12 (function_temp float) vec1 32 ssa_912 = intrinsic load_deref (ssa_911) (access=0) vec1 32 ssa_913 = deref_var &phi@13 (function_temp float) vec1 32 ssa_914 = intrinsic load_deref (ssa_913) (access=0) vec1 32 ssa_915 = deref_var &phi@14 (function_temp float) vec1 32 ssa_916 = intrinsic load_deref (ssa_915) (access=0) vec1 32 ssa_917 = deref_var &phi@15 (function_temp float) vec1 32 ssa_918 = intrinsic load_deref (ssa_917) (access=0) vec1 32 ssa_919 = deref_var &phi@16 (function_temp float) vec1 32 ssa_920 = intrinsic load_deref (ssa_919) (access=0) vec1 32 ssa_921 = deref_var &phi@17 (function_temp float) vec1 32 ssa_922 = intrinsic load_deref (ssa_921) (access=0) vec1 32 ssa_923 = deref_var &phi@18 (function_temp float) vec1 32 ssa_924 = intrinsic load_deref (ssa_923) (access=0) vec1 32 ssa_925 = deref_var &phi@19 (function_temp float) vec1 32 ssa_926 = intrinsic load_deref (ssa_925) (access=0) vec1 32 ssa_927 = deref_var &phi@20 (function_temp float) vec1 32 ssa_928 = intrinsic load_deref (ssa_927) (access=0) vec1 32 ssa_929 = deref_var &phi@21 (function_temp uint) vec1 32 ssa_930 = intrinsic load_deref (ssa_929) (access=0) vec1 32 ssa_931 = ufind_msb ssa_930 vec1 1 ssa_932 = ieq ssa_931, ssa_486 vec1 32 ssa_933 = isub ssa_485, ssa_931 vec1 32 ssa_934 = bcsel ssa_932, ssa_484, ssa_933 vec1 32 ssa_935 = isub ssa_483, ssa_934 vec1 1 ssa_936 = ieq ssa_934, ssa_482 vec1 32 ssa_937 = bcsel ssa_936, ssa_481, ssa_935 vec1 32 ssa_938 = iand ssa_937, ssa_480 vec1 32 ssa_939 = ishl ssa_479, ssa_938 vec1 32 ssa_940 = ixor ssa_939, ssa_930 vec1 1 ssa_941 = 
flt! ssa_478, ssa_928 vec1 32 ssa_942 = iand ssa_939, ssa_864 vec1 1 ssa_943 = ine ssa_942, ssa_477 vec1 1 ssa_944 = iand ssa_941, ssa_943 vec1 32 ssa_945 = deref_var &phi@45 (function_temp float) intrinsic store_deref (ssa_945, ssa_918) (wrmask=x /*1*/, access=0) vec1 32 ssa_946 = deref_var &phi@44 (function_temp float) intrinsic store_deref (ssa_946, ssa_928) (wrmask=x /*1*/, access=0) vec1 32 ssa_947 = deref_var &phi@43 (function_temp float) intrinsic store_deref (ssa_947, ssa_926) (wrmask=x /*1*/, access=0) vec1 32 ssa_948 = deref_var &phi@42 (function_temp float) intrinsic store_deref (ssa_948, ssa_924) (wrmask=x /*1*/, access=0) vec1 32 ssa_949 = deref_var &phi@41 (function_temp float) intrinsic store_deref (ssa_949, ssa_922) (wrmask=x /*1*/, access=0) vec1 32 ssa_950 = deref_var &phi@40 (function_temp float) intrinsic store_deref (ssa_950, ssa_920) (wrmask=x /*1*/, access=0) vec1 32 ssa_951 = deref_var &phi@39 (function_temp float) intrinsic store_deref (ssa_951, ssa_916) (wrmask=x /*1*/, access=0) vec1 32 ssa_952 = deref_var &phi@38 (function_temp float) intrinsic store_deref (ssa_952, ssa_912) (wrmask=x /*1*/, access=0) vec1 32 ssa_953 = deref_var &phi@37 (function_temp float) intrinsic store_deref (ssa_953, ssa_910) (wrmask=x /*1*/, access=0) vec1 32 ssa_954 = deref_var &phi@36 (function_temp float) intrinsic store_deref (ssa_954, ssa_908) (wrmask=x /*1*/, access=0) vec1 32 ssa_955 = deref_var &phi@35 (function_temp float) intrinsic store_deref (ssa_955, ssa_906) (wrmask=x /*1*/, access=0) vec1 32 ssa_956 = deref_var &phi@34 (function_temp float) intrinsic store_deref (ssa_956, ssa_914) (wrmask=x /*1*/, access=0) /* succs: block_7 block_11 */ if ssa_944 { block block_7: /* preds: block_6 */ vec1 32 ssa_957 = iand ssa_939, ssa_844.y vec1 1 ssa_958 = ine ssa_957, ssa_476 vec1 32 ssa_959 = bcsel ssa_958, ssa_844.x, ssa_862.x vec1 32 ssa_960 = bcsel ssa_958, ssa_844.y, ssa_862.y vec1 32 ssa_961 = iadd ssa_939, ssa_475 vec1 32 ssa_962 = iand ssa_960, ssa_961 vec1 32 ssa_963 = bit_count ssa_962 vec1 32 ssa_964 = iadd ssa_963, ssa_959 vec4 32 ssa_965 = intrinsic load_vulkan_descriptor (ssa_541) (desc_type=SSBO /*7*/) vec4 32 ssa_966 = deref_cast (SSBO *)ssa_965 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_967 = deref_struct &ssa_966->field0 (ssbo uint[]) /* &((SSBO *)ssa_965)->field0 */ vec4 32 ssa_968 = deref_array &(*ssa_967)[ssa_964] (ssbo uint) /* &((SSBO *)ssa_965)->field0[ssa_964] */ vec1 32 ssa_969 = intrinsic load_deref (ssa_968) (access=16) vec1 32 ssa_970 = ushr ssa_969, ssa_474 vec1 32 ssa_971 = iand ssa_969, ssa_473 vec1 32 ssa_972 = iand ssa_970, ssa_472 vec1 32 ssa_973 = u2f32 ssa_971 vec1 32 ssa_974 = u2f32 ssa_972 vec1 32 ssa_975 = ushr ssa_969, ssa_471 vec1 32 ssa_976 = ushr ssa_969, ssa_470 vec1 32 ssa_977 = iand ssa_975, ssa_469 vec1 32 ssa_978 = iand ssa_976, ssa_468 vec1 32 ssa_979 = ushr ssa_803, ssa_977 vec1 32 ssa_980 = ushr ssa_804, ssa_978 vec1 32 ssa_981 = u2f32 ssa_979 vec1 32 ssa_982 = u2f32 ssa_980 vec1 32 ssa_983 = fmul ssa_981, ssa_866 vec1 32 ssa_984 = fmul ssa_982, ssa_867 vec1 32 ssa_985 = ffract ssa_983 vec1 32 ssa_986 = ffract ssa_984 vec1 32 ssa_987 = fmul ssa_870, ssa_985 vec1 32 ssa_988 = fmul ssa_871, ssa_986 vec1 32 ssa_989 = fadd ssa_987, ssa_868 vec1 32 ssa_990 = fadd ssa_988, ssa_869 vec1 32 ssa_991 = fmul ssa_973, ssa_796.z vec1 32 ssa_992 = fmul ssa_974, ssa_796.w vec1 32 ssa_993 = fadd ssa_989, ssa_991 vec1 32 ssa_994 = fadd ssa_990, ssa_992 vec1 32 ssa_995 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_996 = 
deref_array &(*ssa_995)[ssa_584] (uniform texture2D) /* &@0[ssa_584] */ vec1 32 ssa_998 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_999 = deref_array &(*ssa_998)[ssa_593] (uniform sampler) /* &@9[ssa_593] */ vec2 32 ssa_1001 = vec2 ssa_993, ssa_994 vec4 32 ssa_1004 = (float32)txl ssa_996 (texture_deref), ssa_999 (sampler_deref), ssa_1001 (coord), ssa_467 (lod) vec1 1 ssa_1005 = flt! ssa_466, ssa_1004.x vec1 32 ssa_1006 = deref_var &phi@33 (function_temp float) intrinsic store_deref (ssa_1006, ssa_918) (wrmask=x /*1*/, access=0) vec1 32 ssa_1007 = deref_var &phi@32 (function_temp float) intrinsic store_deref (ssa_1007, ssa_928) (wrmask=x /*1*/, access=0) vec1 32 ssa_1008 = deref_var &phi@31 (function_temp float) intrinsic store_deref (ssa_1008, ssa_926) (wrmask=x /*1*/, access=0) vec1 32 ssa_1009 = deref_var &phi@30 (function_temp float) intrinsic store_deref (ssa_1009, ssa_924) (wrmask=x /*1*/, access=0) vec1 32 ssa_1010 = deref_var &phi@29 (function_temp float) intrinsic store_deref (ssa_1010, ssa_922) (wrmask=x /*1*/, access=0) vec1 32 ssa_1011 = deref_var &phi@28 (function_temp float) intrinsic store_deref (ssa_1011, ssa_920) (wrmask=x /*1*/, access=0) vec1 32 ssa_1012 = deref_var &phi@27 (function_temp float) intrinsic store_deref (ssa_1012, ssa_916) (wrmask=x /*1*/, access=0) vec1 32 ssa_1013 = deref_var &phi@26 (function_temp float) intrinsic store_deref (ssa_1013, ssa_912) (wrmask=x /*1*/, access=0) vec1 32 ssa_1014 = deref_var &phi@25 (function_temp float) intrinsic store_deref (ssa_1014, ssa_910) (wrmask=x /*1*/, access=0) vec1 32 ssa_1015 = deref_var &phi@24 (function_temp float) intrinsic store_deref (ssa_1015, ssa_908) (wrmask=x /*1*/, access=0) vec1 32 ssa_1016 = deref_var &phi@23 (function_temp float) intrinsic store_deref (ssa_1016, ssa_906) (wrmask=x /*1*/, access=0) vec1 32 ssa_1017 = deref_var &phi@22 (function_temp float) intrinsic store_deref (ssa_1017, ssa_914) (wrmask=x /*1*/, access=0) /* succs: block_8 block_9 */ if ssa_1005 { block block_8: /* preds: block_7 */ vec1 32 ssa_1018 = iadd ssa_937, ssa_772 vec1 32 ssa_1019 = imul ssa_1018, ssa_465 vec4 32 ssa_1020 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1021 = deref_cast (SSBO *)ssa_1020 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1022 = deref_struct &ssa_1021->field0 (ssbo uint[]) /* &((SSBO *)ssa_1020)->field0 */ vec4 32 ssa_1023 = deref_array &(*ssa_1022)[ssa_1019] (ssbo uint) /* &((SSBO *)ssa_1020)->field0[ssa_1019] */ vec1 32 ssa_1024 = intrinsic load_deref (ssa_1023) (access=16) vec1 32 ssa_1025 = iadd ssa_1019, ssa_464 vec4 32 ssa_1026 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1027 = deref_cast (SSBO *)ssa_1026 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1028 = deref_struct &ssa_1027->field0 (ssbo uint[]) /* &((SSBO *)ssa_1026)->field0 */ vec4 32 ssa_1029 = deref_array &(*ssa_1028)[ssa_1025] (ssbo uint) /* &((SSBO *)ssa_1026)->field0[ssa_1025] */ vec1 32 ssa_1030 = intrinsic load_deref (ssa_1029) (access=16) vec1 32 ssa_1031 = iadd ssa_1019, ssa_463 vec4 32 ssa_1032 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1033 = deref_cast (SSBO *)ssa_1032 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1034 = deref_struct &ssa_1033->field0 (ssbo uint[]) /* &((SSBO *)ssa_1032)->field0 */ vec4 32 ssa_1035 = deref_array &(*ssa_1034)[ssa_1031] (ssbo uint) /* &((SSBO *)ssa_1032)->field0[ssa_1031] */ vec1 32 ssa_1036 = intrinsic 
load_deref (ssa_1035) (access=16) vec1 32 ssa_1037 = imul ssa_1018, ssa_462 vec1 32 ssa_1038 = iadd ssa_1037, ssa_461 vec4 32 ssa_1039 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1040 = deref_cast (SSBO *)ssa_1039 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1041 = deref_struct &ssa_1040->field0 (ssbo uint[]) /* &((SSBO *)ssa_1039)->field0 */ vec4 32 ssa_1042 = deref_array &(*ssa_1041)[ssa_1038] (ssbo uint) /* &((SSBO *)ssa_1039)->field0[ssa_1038] */ vec1 32 ssa_1043 = intrinsic load_deref (ssa_1042) (access=16) vec1 32 ssa_1044 = imul ssa_1018, ssa_460 vec1 32 ssa_1045 = iadd ssa_1044, ssa_459 vec4 32 ssa_1046 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1047 = deref_cast (SSBO *)ssa_1046 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1048 = deref_struct &ssa_1047->field0 (ssbo uint[]) /* &((SSBO *)ssa_1046)->field0 */ vec4 32 ssa_1049 = deref_array &(*ssa_1048)[ssa_1045] (ssbo uint) /* &((SSBO *)ssa_1046)->field0[ssa_1045] */ vec1 32 ssa_1050 = intrinsic load_deref (ssa_1049) (access=16) vec1 32 ssa_1051 = imul ssa_1018, ssa_458 vec1 32 ssa_1052 = iadd ssa_1051, ssa_457 vec4 32 ssa_1053 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1054 = deref_cast (SSBO *)ssa_1053 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1055 = deref_struct &ssa_1054->field0 (ssbo uint[]) /* &((SSBO *)ssa_1053)->field0 */ vec4 32 ssa_1056 = deref_array &(*ssa_1055)[ssa_1052] (ssbo uint) /* &((SSBO *)ssa_1053)->field0[ssa_1052] */ vec1 32 ssa_1057 = intrinsic load_deref (ssa_1056) (access=16) vec1 32 ssa_1058 = imul ssa_1018, ssa_456 vec1 32 ssa_1059 = iadd ssa_1058, ssa_455 vec4 32 ssa_1060 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1061 = deref_cast (SSBO *)ssa_1060 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1062 = deref_struct &ssa_1061->field0 (ssbo uint[]) /* &((SSBO *)ssa_1060)->field0 */ vec4 32 ssa_1063 = deref_array &(*ssa_1062)[ssa_1059] (ssbo uint) /* &((SSBO *)ssa_1060)->field0[ssa_1059] */ vec1 32 ssa_1064 = intrinsic load_deref (ssa_1063) (access=16) vec1 32 ssa_1065 = imul ssa_1018, ssa_454 vec1 32 ssa_1066 = iadd ssa_1065, ssa_453 vec4 32 ssa_1067 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1068 = deref_cast (SSBO *)ssa_1067 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1069 = deref_struct &ssa_1068->field0 (ssbo uint[]) /* &((SSBO *)ssa_1067)->field0 */ vec4 32 ssa_1070 = deref_array &(*ssa_1069)[ssa_1066] (ssbo uint) /* &((SSBO *)ssa_1067)->field0[ssa_1066] */ vec1 32 ssa_1071 = intrinsic load_deref (ssa_1070) (access=16) vec1 32 ssa_1072 = imul ssa_1018, ssa_452 vec1 32 ssa_1073 = iadd ssa_1072, ssa_451 vec4 32 ssa_1074 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1075 = deref_cast (SSBO *)ssa_1074 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1076 = deref_struct &ssa_1075->field0 (ssbo uint[]) /* &((SSBO *)ssa_1074)->field0 */ vec4 32 ssa_1077 = deref_array &(*ssa_1076)[ssa_1073] (ssbo uint) /* &((SSBO *)ssa_1074)->field0[ssa_1073] */ vec1 32 ssa_1078 = intrinsic load_deref (ssa_1077) (access=16) vec1 32 ssa_1079 = imul ssa_1018, ssa_450 vec1 32 ssa_1080 = iadd ssa_1079, ssa_449 vec4 32 ssa_1081 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1082 = deref_cast (SSBO *)ssa_1081 (ssbo SSBO) /* 
ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1083 = deref_struct &ssa_1082->field0 (ssbo uint[]) /* &((SSBO *)ssa_1081)->field0 */ vec4 32 ssa_1084 = deref_array &(*ssa_1083)[ssa_1080] (ssbo uint) /* &((SSBO *)ssa_1081)->field0[ssa_1080] */ vec1 32 ssa_1085 = intrinsic load_deref (ssa_1084) (access=16) vec1 32 ssa_1086 = imul ssa_1018, ssa_448 vec1 32 ssa_1087 = iadd ssa_1086, ssa_447 vec4 32 ssa_1088 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1089 = deref_cast (SSBO *)ssa_1088 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1090 = deref_struct &ssa_1089->field0 (ssbo uint[]) /* &((SSBO *)ssa_1088)->field0 */ vec4 32 ssa_1091 = deref_array &(*ssa_1090)[ssa_1087] (ssbo uint) /* &((SSBO *)ssa_1088)->field0[ssa_1087] */ vec1 32 ssa_1092 = intrinsic load_deref (ssa_1091) (access=16) vec1 32 ssa_1093 = imul ssa_1018, ssa_446 vec1 32 ssa_1094 = iadd ssa_1093, ssa_445 vec4 32 ssa_1095 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1096 = deref_cast (SSBO *)ssa_1095 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1097 = deref_struct &ssa_1096->field0 (ssbo uint[]) /* &((SSBO *)ssa_1095)->field0 */ vec4 32 ssa_1098 = deref_array &(*ssa_1097)[ssa_1094] (ssbo uint) /* &((SSBO *)ssa_1095)->field0[ssa_1094] */ vec1 32 ssa_1099 = intrinsic load_deref (ssa_1098) (access=16) vec1 32 ssa_1100 = imul ssa_1018, ssa_444 vec1 32 ssa_1101 = iadd ssa_1100, ssa_443 vec4 32 ssa_1102 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1103 = deref_cast (SSBO *)ssa_1102 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1104 = deref_struct &ssa_1103->field0 (ssbo uint[]) /* &((SSBO *)ssa_1102)->field0 */ vec4 32 ssa_1105 = deref_array &(*ssa_1104)[ssa_1101] (ssbo uint) /* &((SSBO *)ssa_1102)->field0[ssa_1101] */ vec1 32 ssa_1106 = intrinsic load_deref (ssa_1105) (access=16) vec1 32 ssa_1107 = imul ssa_1018, ssa_442 vec1 32 ssa_1108 = iadd ssa_1107, ssa_441 vec4 32 ssa_1109 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1110 = deref_cast (SSBO *)ssa_1109 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1111 = deref_struct &ssa_1110->field0 (ssbo uint[]) /* &((SSBO *)ssa_1109)->field0 */ vec4 32 ssa_1112 = deref_array &(*ssa_1111)[ssa_1108] (ssbo uint) /* &((SSBO *)ssa_1109)->field0[ssa_1108] */ vec1 32 ssa_1113 = intrinsic load_deref (ssa_1112) (access=16) vec1 32 ssa_1114 = imul ssa_1018, ssa_440 vec1 32 ssa_1115 = iadd ssa_1114, ssa_439 vec4 32 ssa_1116 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1117 = deref_cast (SSBO *)ssa_1116 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1118 = deref_struct &ssa_1117->field0 (ssbo uint[]) /* &((SSBO *)ssa_1116)->field0 */ vec4 32 ssa_1119 = deref_array &(*ssa_1118)[ssa_1115] (ssbo uint) /* &((SSBO *)ssa_1116)->field0[ssa_1115] */ vec1 32 ssa_1120 = intrinsic load_deref (ssa_1119) (access=16) vec1 32 ssa_1121 = imul ssa_1018, ssa_438 vec1 32 ssa_1122 = iadd ssa_1121, ssa_437 vec4 32 ssa_1123 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1124 = deref_cast (SSBO *)ssa_1123 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1125 = deref_struct &ssa_1124->field0 (ssbo uint[]) /* &((SSBO *)ssa_1123)->field0 */ vec4 32 ssa_1126 = deref_array &(*ssa_1125)[ssa_1122] (ssbo uint) /* &((SSBO *)ssa_1123)->field0[ssa_1122] */ vec1 32 ssa_1127 = 
intrinsic load_deref (ssa_1126) (access=16) vec1 32 ssa_1128 = imul ssa_1018, ssa_436 vec1 32 ssa_1129 = iadd ssa_1128, ssa_435 vec4 32 ssa_1130 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1131 = deref_cast (SSBO *)ssa_1130 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1132 = deref_struct &ssa_1131->field0 (ssbo uint[]) /* &((SSBO *)ssa_1130)->field0 */ vec4 32 ssa_1133 = deref_array &(*ssa_1132)[ssa_1129] (ssbo uint) /* &((SSBO *)ssa_1130)->field0[ssa_1129] */ vec1 32 ssa_1134 = intrinsic load_deref (ssa_1133) (access=16) vec1 32 ssa_1135 = imul ssa_1018, ssa_434 vec1 32 ssa_1136 = iadd ssa_1135, ssa_433 vec4 32 ssa_1137 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1138 = deref_cast (SSBO *)ssa_1137 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1139 = deref_struct &ssa_1138->field0 (ssbo uint[]) /* &((SSBO *)ssa_1137)->field0 */ vec4 32 ssa_1140 = deref_array &(*ssa_1139)[ssa_1136] (ssbo uint) /* &((SSBO *)ssa_1137)->field0[ssa_1136] */ vec1 32 ssa_1141 = intrinsic load_deref (ssa_1140) (access=16) vec1 32 ssa_1142 = imul ssa_1018, ssa_432 vec1 32 ssa_1143 = iadd ssa_1142, ssa_431 vec4 32 ssa_1144 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1145 = deref_cast (SSBO *)ssa_1144 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1146 = deref_struct &ssa_1145->field0 (ssbo uint[]) /* &((SSBO *)ssa_1144)->field0 */ vec4 32 ssa_1147 = deref_array &(*ssa_1146)[ssa_1143] (ssbo uint) /* &((SSBO *)ssa_1144)->field0[ssa_1143] */ vec1 32 ssa_1148 = intrinsic load_deref (ssa_1147) (access=16) vec1 32 ssa_1149 = imul ssa_1018, ssa_430 vec1 32 ssa_1150 = iadd ssa_1149, ssa_429 vec4 32 ssa_1151 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1152 = deref_cast (SSBO *)ssa_1151 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1153 = deref_struct &ssa_1152->field0 (ssbo uint[]) /* &((SSBO *)ssa_1151)->field0 */ vec4 32 ssa_1154 = deref_array &(*ssa_1153)[ssa_1150] (ssbo uint) /* &((SSBO *)ssa_1151)->field0[ssa_1150] */ vec1 32 ssa_1155 = intrinsic load_deref (ssa_1154) (access=16) vec1 32 ssa_1156 = imul ssa_1018, ssa_428 vec1 32 ssa_1157 = iadd ssa_1156, ssa_427 vec4 32 ssa_1158 = intrinsic load_vulkan_descriptor (ssa_536) (desc_type=SSBO /*7*/) vec4 32 ssa_1159 = deref_cast (SSBO *)ssa_1158 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1160 = deref_struct &ssa_1159->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_1158)->field0 */ vec4 32 ssa_1161 = deref_array &(*ssa_1160)[ssa_1157] (ssbo uvec2) /* &((SSBO *)ssa_1158)->field0[ssa_1157] */ vec2 32 ssa_1162 = intrinsic load_deref (ssa_1161) (access=16) vec1 32 ssa_1163 = imul ssa_1018, ssa_426 vec1 32 ssa_1164 = iadd ssa_1163, ssa_425 vec4 32 ssa_1165 = intrinsic load_vulkan_descriptor (ssa_536) (desc_type=SSBO /*7*/) vec4 32 ssa_1166 = deref_cast (SSBO *)ssa_1165 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1167 = deref_struct &ssa_1166->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_1165)->field0 */ vec4 32 ssa_1168 = deref_array &(*ssa_1167)[ssa_1164] (ssbo uvec2) /* &((SSBO *)ssa_1165)->field0[ssa_1164] */ vec2 32 ssa_1169 = intrinsic load_deref (ssa_1168) (access=16) vec1 32 ssa_1170 = imul ssa_1018, ssa_424 vec1 32 ssa_1171 = iadd ssa_1170, ssa_423 vec4 32 ssa_1172 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1173 = deref_cast (SSBO *)ssa_1172 (ssbo 
SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1174 = deref_struct &ssa_1173->field0 (ssbo uint[]) /* &((SSBO *)ssa_1172)->field0 */ vec4 32 ssa_1175 = deref_array &(*ssa_1174)[ssa_1171] (ssbo uint) /* &((SSBO *)ssa_1172)->field0[ssa_1171] */ vec1 32 ssa_1176 = intrinsic load_deref (ssa_1175) (access=16) vec1 32 ssa_1177 = imul ssa_1018, ssa_422 vec1 32 ssa_1178 = iadd ssa_1177, ssa_421 vec4 32 ssa_1179 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1180 = deref_cast (SSBO *)ssa_1179 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1181 = deref_struct &ssa_1180->field0 (ssbo uint[]) /* &((SSBO *)ssa_1179)->field0 */ vec4 32 ssa_1182 = deref_array &(*ssa_1181)[ssa_1178] (ssbo uint) /* &((SSBO *)ssa_1179)->field0[ssa_1178] */ vec1 32 ssa_1183 = intrinsic load_deref (ssa_1182) (access=16) vec1 32 ssa_1184 = imul ssa_1018, ssa_420 vec1 32 ssa_1185 = iadd ssa_1184, ssa_419 vec4 32 ssa_1186 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1187 = deref_cast (SSBO *)ssa_1186 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1188 = deref_struct &ssa_1187->field0 (ssbo uint[]) /* &((SSBO *)ssa_1186)->field0 */ vec4 32 ssa_1189 = deref_array &(*ssa_1188)[ssa_1185] (ssbo uint) /* &((SSBO *)ssa_1186)->field0[ssa_1185] */ vec1 32 ssa_1190 = intrinsic load_deref (ssa_1189) (access=16) vec1 32 ssa_1191 = imul ssa_1018, ssa_418 vec1 32 ssa_1192 = iadd ssa_1191, ssa_417 vec4 32 ssa_1193 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1194 = deref_cast (SSBO *)ssa_1193 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1195 = deref_struct &ssa_1194->field0 (ssbo uint[]) /* &((SSBO *)ssa_1193)->field0 */ vec4 32 ssa_1196 = deref_array &(*ssa_1195)[ssa_1192] (ssbo uint) /* &((SSBO *)ssa_1193)->field0[ssa_1192] */ vec1 32 ssa_1197 = intrinsic load_deref (ssa_1196) (access=16) vec4 32 ssa_1198 = intrinsic load_vulkan_descriptor (ssa_623) (desc_type=SSBO /*7*/) vec4 32 ssa_1199 = deref_cast (BindlessCBV *)ssa_1198 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1200 = deref_struct &ssa_1199->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1198)->field0 */ vec1 32 ssa_1201 = load_const (0x0000004f = 0.000000) vec4 32 ssa_1202 = deref_array &(*ssa_1200)[79] (ssbo vec4) /* &((BindlessCBV *)ssa_1198)->field0[79] */ vec4 32 ssa_1203 = intrinsic load_deref (ssa_1202) (access=16) vec4 32 ssa_1204 = intrinsic load_vulkan_descriptor (ssa_623) (desc_type=SSBO /*7*/) vec4 32 ssa_1205 = deref_cast (BindlessCBV *)ssa_1204 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1206 = deref_struct &ssa_1205->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1204)->field0 */ vec1 32 ssa_1207 = load_const (0x00000050 = 0.000000) vec4 32 ssa_1208 = deref_array &(*ssa_1206)[80] (ssbo vec4) /* &((BindlessCBV *)ssa_1204)->field0[80] */ vec4 32 ssa_1209 = intrinsic load_deref (ssa_1208) (access=16) vec1 32 ssa_1210 = fmul ssa_1209.y, ssa_1203.z vec1 32 ssa_1211 = fmul ssa_1209.y, ssa_1203.w vec1 32 ssa_1212 = fmul ssa_1209.x, ssa_1209.w vec1 32 ssa_1213 = fmax! ssa_1212, ssa_416 vec1 32 ssa_1214 = fmin! 
ssa_1213, ssa_415 vec1 32 ssa_1215 = fadd ssa_1210, ssa_414 vec1 32 ssa_1216 = fadd ssa_1211, ssa_413 vec1 32 ssa_1217 = fmul ssa_1214, ssa_1215 vec1 32 ssa_1218 = fmul ssa_1214, ssa_1216 vec1 32 ssa_1219 = fadd ssa_1217, ssa_412 vec1 32 ssa_1220 = fadd ssa_1218, ssa_411 vec1 32 ssa_1221 = fmul ssa_1134, ssa_829 vec1 32 ssa_1222 = fmul ssa_1134, ssa_828 vec1 32 ssa_1223 = fadd ssa_1169.x, ssa_1221 vec1 32 ssa_1224 = fadd ssa_1169.y, ssa_1222 vec1 32 ssa_1225 = fmul ssa_1134, ssa_811 vec1 32 ssa_1226 = fmul ssa_1134, ssa_812 vec1 32 ssa_1227 = fmul ssa_1134, ssa_813 vec1 32 ssa_1228 = fmul ssa_1134, ssa_814 vec1 32 ssa_1229 = fmul ssa_1141, ssa_829 vec1 32 ssa_1230 = fmul ssa_1141, ssa_828 vec1 32 ssa_1231 = fadd ssa_1162.x, ssa_1229 vec1 32 ssa_1232 = fadd ssa_1162.y, ssa_1230 vec1 32 ssa_1233 = fmul ssa_1141, ssa_826.x vec1 32 ssa_1234 = fmul ssa_1233, ssa_812 vec1 32 ssa_1235 = fmul ssa_1233, ssa_814 vec1 32 ssa_1236 = fmul ssa_811, ssa_778.x vec1 32 ssa_1237 = fmul ssa_1236, ssa_1233 vec1 32 ssa_1238 = fmul ssa_1237, ssa_1219 vec1 32 ssa_1239 = fmul ssa_1234, ssa_1219 vec1 32 ssa_1240 = fmul ssa_813, ssa_778.x vec1 32 ssa_1241 = fmul ssa_1240, ssa_1233 vec1 32 ssa_1242 = fmul ssa_1241, ssa_1220 vec1 32 ssa_1243 = fmul ssa_1235, ssa_1220 vec1 32 ssa_1244 = iadd ssa_1183, ssa_410 vec1 32 ssa_1245 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1246 = deref_struct &ssa_1245->field26 (push_const uint) /* &registers.field26 */ vec1 32 ssa_1247 = intrinsic load_deref (ssa_1246) (access=0) vec1 32 ssa_1248 = iadd ssa_1247, ssa_409 vec1 32 ssa_1249 = iadd ssa_1248, ssa_1244 vec1 32 ssa_1250 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1251 = deref_array &(*ssa_1250)[ssa_1249] (uniform texture2D) /* &@0[ssa_1249] */ vec1 32 ssa_1253 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1254 = deref_array &(*ssa_1253)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1256 = vec2 ssa_1231, ssa_1232 vec2 32 ssa_1257 = vec2 ssa_1238, ssa_1239 vec2 32 ssa_1258 = vec2 ssa_1242, ssa_1243 vec4 32 ssa_1261 = (float32)txd ssa_1251 (texture_deref), ssa_1254 (sampler_deref), ssa_1256 (coord), ssa_1257 (ddx), ssa_1258 (ddy) vec1 32 ssa_1262 = fmul ssa_1261.x, ssa_408 vec1 32 ssa_1263 = fmul ssa_1261.y, ssa_407 vec1 32 ssa_1264 = fadd ssa_1262, ssa_406 vec1 32 ssa_1265 = fadd ssa_1263, ssa_405 vec1 32 ssa_1266 = fsub ssa_404, ssa_1261.w vec1 32 ssa_1267 = fsub ssa_1004.x, ssa_1266 vec1 32 ssa_1268 = fmul ssa_1267, ssa_1148 vec1 32 ssa_1269 = fadd ssa_1268, ssa_1266 vec1 32 ssa_1270 = fmax! ssa_1269, ssa_403 vec1 32 ssa_1271 = fmin! ssa_1270, ssa_402 vec1 32 ssa_1272 = fadd ssa_1271, ssa_401 vec1 32 ssa_1273 = fabs ssa_1272 vec1 32 ssa_1274 = fmul ssa_1273, ssa_400 vec1 32 ssa_1275 = fsub ssa_399, ssa_1274 vec1 32 ssa_1276 = flog2 ssa_1275 vec1 32 ssa_1277 = fmul ssa_1276, ssa_398 vec1 32 ssa_1278 = fexp2 ssa_1277 vec1 32 ssa_1279 = fmax! ssa_1278, ssa_397 vec1 32 ssa_1280 = fmin! ssa_1279, ssa_396 vec1 32 ssa_1281 = fmul ssa_1280, ssa_1043 vec1 32 ssa_1282 = fsub ssa_1281, ssa_906 vec1 32 ssa_1283 = fmax! ssa_1282, ssa_395 vec1 32 ssa_1284 = fmin!
ssa_1283, ssa_394 vec1 32 ssa_1285 = fmul ssa_1271, ssa_1043 vec1 32 ssa_1286 = fadd ssa_1285, ssa_906 vec1 32 ssa_1287 = fmul ssa_1264, ssa_1155 vec1 32 ssa_1288 = fmul ssa_1265, ssa_1155 vec1 32 ssa_1289 = fsub ssa_1287, ssa_910 vec1 32 ssa_1290 = fsub ssa_1288, ssa_912 vec1 32 ssa_1291 = fmul ssa_1284, ssa_1289 vec1 32 ssa_1292 = fmul ssa_1284, ssa_1290 vec1 32 ssa_1293 = fadd ssa_1291, ssa_910 vec1 32 ssa_1294 = fadd ssa_1292, ssa_912 vec1 32 ssa_1295 = fabs ssa_1155 vec1 32 ssa_1296 = fmul ssa_1295, ssa_1284 vec1 32 ssa_1297 = fmax! ssa_908, ssa_1296 vec1 32 ssa_1298 = fmin! ssa_928, ssa_1285 vec1 32 ssa_1299 = fsub ssa_928, ssa_1298 vec1 32 ssa_1300 = ushr ssa_1197, ssa_393 vec1 32 ssa_1301 = iadd ssa_1300, ssa_392 vec1 32 ssa_1302 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1303 = deref_struct &ssa_1302->field26 (push_const uint) /* &registers.field26 */ vec1 32 ssa_1304 = intrinsic load_deref (ssa_1303) (access=0) vec1 32 ssa_1305 = iadd ssa_1304, ssa_391 vec1 32 ssa_1306 = iadd ssa_1305, ssa_1301 vec1 32 ssa_1307 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1308 = deref_array &(*ssa_1307)[ssa_1306] (uniform texture2D) /* &@0[ssa_1306] */ vec1 32 ssa_1310 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1311 = deref_array &(*ssa_1310)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1313 = vec2 ssa_1223, ssa_1224 vec2 32 ssa_1314 = vec2 ssa_1225, ssa_1226 vec2 32 ssa_1315 = vec2 ssa_1227, ssa_1228 vec4 32 ssa_1318 = (float32)txd ssa_1308 (texture_deref), ssa_1311 (sampler_deref), ssa_1313 (coord), ssa_1314 (ddx), ssa_1315 (ddy) vec1 32 ssa_1319 = fmul ssa_1318.x, ssa_1078 vec1 32 ssa_1320 = fadd ssa_1319, ssa_1085 vec1 32 ssa_1321 = fmax! ssa_1320, ssa_390 vec1 32 ssa_1322 = fmin! ssa_1321, ssa_389 vec1 32 ssa_1323 = fmul ssa_1322, ssa_1092 vec1 32 ssa_1324 = fadd ssa_1323, ssa_1099 vec1 32 ssa_1325 = fmax! ssa_1324, ssa_388 vec1 32 ssa_1326 = fmin! ssa_1325, ssa_387 vec1 32 ssa_1327 = fmul ssa_1326, ssa_1298 vec1 32 ssa_1328 = fadd ssa_1327, ssa_918 vec1 32 ssa_1329 = iand ssa_1197, ssa_386 vec1 32 ssa_1330 = iadd ssa_1329, ssa_385 vec1 32 ssa_1331 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1332 = deref_struct &ssa_1331->field26 (push_const uint) /* &registers.field26 */ vec1 32 ssa_1333 = intrinsic load_deref (ssa_1332) (access=0) vec1 32 ssa_1334 = iadd ssa_1333, ssa_384 vec1 32 ssa_1335 = iadd ssa_1334, ssa_1330 vec1 32 ssa_1336 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1337 = deref_array &(*ssa_1336)[ssa_1335] (uniform texture2D) /* &@0[ssa_1335] */ vec1 32 ssa_1339 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1340 = deref_array &(*ssa_1339)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1342 = vec2 ssa_1223, ssa_1224 vec2 32 ssa_1343 = vec2 ssa_1225, ssa_1226 vec2 32 ssa_1344 = vec2 ssa_1227, ssa_1228 vec4 32 ssa_1347 = (float32)txd ssa_1337 (texture_deref), ssa_1340 (sampler_deref), ssa_1342 (coord), ssa_1343 (ddx), ssa_1344 (ddy) vec1 32 ssa_1348 = fmul ssa_1347.x, ssa_1050 vec1 32 ssa_1349 = fadd ssa_1348, ssa_1057 vec1 32 ssa_1350 = fmax! ssa_1349, ssa_383 vec1 32 ssa_1351 = fmin! ssa_1350, ssa_382 vec1 32 ssa_1352 = fmul ssa_1351, ssa_1064 vec1 32 ssa_1353 = fadd ssa_1352, ssa_1071 vec1 32 ssa_1354 = fmax! ssa_1353, ssa_381 vec1 32 ssa_1355 = fmin!
ssa_1354, ssa_380 vec1 32 ssa_1356 = fmul ssa_1355, ssa_1298 vec1 32 ssa_1357 = fadd ssa_1356, ssa_920 vec1 32 ssa_1358 = iand ssa_1190, ssa_379 vec1 32 ssa_1359 = iadd ssa_1358, ssa_378 vec1 32 ssa_1360 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1361 = deref_struct &ssa_1360->field26 (push_const uint) /* &registers.field26 */ vec1 32 ssa_1362 = intrinsic load_deref (ssa_1361) (access=0) vec1 32 ssa_1363 = iadd ssa_1362, ssa_377 vec1 32 ssa_1364 = iadd ssa_1363, ssa_1359 vec1 32 ssa_1365 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1366 = deref_array &(*ssa_1365)[ssa_1364] (uniform texture2D) /* &@0[ssa_1364] */ vec1 32 ssa_1368 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1369 = deref_array &(*ssa_1368)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1371 = vec2 ssa_1223, ssa_1224 vec2 32 ssa_1372 = vec2 ssa_1225, ssa_1226 vec2 32 ssa_1373 = vec2 ssa_1227, ssa_1228 vec4 32 ssa_1376 = (float32)txd ssa_1366 (texture_deref), ssa_1369 (sampler_deref), ssa_1371 (coord), ssa_1372 (ddx), ssa_1373 (ddy) vec1 32 ssa_1377 = fmul ssa_1347.x, ssa_1106 vec1 32 ssa_1378 = fadd ssa_1377, ssa_1113 vec1 32 ssa_1379 = fmax! ssa_1378, ssa_376 vec1 32 ssa_1380 = fmin! ssa_1379, ssa_375 vec1 32 ssa_1381 = fmul ssa_1380, ssa_1120 vec1 32 ssa_1382 = fadd ssa_1381, ssa_1127 vec1 32 ssa_1383 = fmax! ssa_1382, ssa_374 vec1 32 ssa_1384 = fmin! ssa_1383, ssa_373 vec1 32 ssa_1385 = fadd ssa_1024, ssa_372 vec1 32 ssa_1386 = fadd ssa_1030, ssa_371 vec1 32 ssa_1387 = fadd ssa_1036, ssa_370 vec1 32 ssa_1388 = fmul ssa_1384, ssa_1385 vec1 32 ssa_1389 = fmul ssa_1384, ssa_1386 vec1 32 ssa_1390 = fmul ssa_1384, ssa_1387 vec1 32 ssa_1391 = fadd ssa_1388, ssa_369 vec1 32 ssa_1392 = fadd ssa_1389, ssa_368 vec1 32 ssa_1393 = fadd ssa_1390, ssa_367 vec1 32 ssa_1394 = fmul ssa_1376.x, ssa_1298 vec1 32 ssa_1395 = fmul ssa_1394, ssa_1391 vec1 32 ssa_1396 = fmul ssa_1376.y, ssa_1298 vec1 32 ssa_1397 = fmul ssa_1396, ssa_1392 vec1 32 ssa_1398 = fmul ssa_1376.z, ssa_1298 vec1 32 ssa_1399 = fmul ssa_1398, ssa_1393 vec1 32 ssa_1400 = fadd ssa_1395, ssa_922 vec1 32 ssa_1401 = fadd ssa_1397, ssa_924 vec1 32 ssa_1402 = fadd ssa_1399, ssa_926 vec1 32 ssa_1403 = fmul ssa_1225, ssa_820.x vec1 32 ssa_1404 = fmul ssa_1226, ssa_820.x vec1 32 ssa_1405 = fmul ssa_1227, ssa_820.x vec1 32 ssa_1406 = fmul ssa_1228, ssa_820.x vec1 32 ssa_1407 = ushr ssa_1190, ssa_366 vec1 32 ssa_1408 = iadd ssa_1407, ssa_365 vec1 32 ssa_1409 = deref_var &registers (push_const RootConstants) vec1 32 ssa_1410 = deref_struct &ssa_1409->field26 (push_const uint) /* &registers.field26 */ vec1 32 ssa_1411 = intrinsic load_deref (ssa_1410) (access=0) vec1 32 ssa_1412 = iadd ssa_1411, ssa_364 vec1 32 ssa_1413 = iadd ssa_1412, ssa_1408 vec1 32 ssa_1414 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1415 = deref_array &(*ssa_1414)[ssa_1413] (uniform texture2D) /* &@0[ssa_1413] */ vec1 32 ssa_1417 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1418 = deref_array &(*ssa_1417)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1420 = vec2 ssa_1223, ssa_1224 vec2 32 ssa_1421 = vec2 ssa_1403, ssa_1404 vec2 32 ssa_1422 = vec2 ssa_1405, ssa_1406 vec4 32 ssa_1425 = (float32)txd ssa_1415 (texture_deref), ssa_1418 (sampler_deref), ssa_1420 (coord), ssa_1421 (ddx), ssa_1422 (ddy) vec1 32 ssa_1426 = fmul ssa_1425.x, ssa_363 vec1 32 ssa_1427 = fmul ssa_1425.y, ssa_362 vec1 32 ssa_1428 = fadd ssa_1426, ssa_361 vec1 32 ssa_1429 = fadd ssa_1427, ssa_360 vec1 32 ssa_1430 = fmul ssa_1298, ssa_1176 vec1 32 ssa_1431 = fmul ssa_1430, ssa_1428 vec1 32 ssa_1432 =
fmul ssa_1430, ssa_1429 vec1 32 ssa_1433 = fadd ssa_1431, ssa_914 vec1 32 ssa_1434 = fadd ssa_1432, ssa_916 vec1 32 ssa_1435 = deref_var &phi@33 (function_temp float) intrinsic store_deref (ssa_1435, ssa_1328) (wrmask=x /*1*/, access=0) vec1 32 ssa_1436 = deref_var &phi@32 (function_temp float) intrinsic store_deref (ssa_1436, ssa_1299) (wrmask=x /*1*/, access=0) vec1 32 ssa_1437 = deref_var &phi@31 (function_temp float) intrinsic store_deref (ssa_1437, ssa_1402) (wrmask=x /*1*/, access=0) vec1 32 ssa_1438 = deref_var &phi@30 (function_temp float) intrinsic store_deref (ssa_1438, ssa_1401) (wrmask=x /*1*/, access=0) vec1 32 ssa_1439 = deref_var &phi@29 (function_temp float) intrinsic store_deref (ssa_1439, ssa_1400) (wrmask=x /*1*/, access=0) vec1 32 ssa_1440 = deref_var &phi@28 (function_temp float) intrinsic store_deref (ssa_1440, ssa_1357) (wrmask=x /*1*/, access=0) vec1 32 ssa_1441 = deref_var &phi@27 (function_temp float) intrinsic store_deref (ssa_1441, ssa_1434) (wrmask=x /*1*/, access=0) vec1 32 ssa_1442 = deref_var &phi@26 (function_temp float) intrinsic store_deref (ssa_1442, ssa_1294) (wrmask=x /*1*/, access=0) vec1 32 ssa_1443 = deref_var &phi@25 (function_temp float) intrinsic store_deref (ssa_1443, ssa_1293) (wrmask=x /*1*/, access=0) vec1 32 ssa_1444 = deref_var &phi@24 (function_temp float) intrinsic store_deref (ssa_1444, ssa_1297) (wrmask=x /*1*/, access=0) vec1 32 ssa_1445 = deref_var &phi@23 (function_temp float) intrinsic store_deref (ssa_1445, ssa_1286) (wrmask=x /*1*/, access=0) vec1 32 ssa_1446 = deref_var &phi@22 (function_temp float) intrinsic store_deref (ssa_1446, ssa_1433) (wrmask=x /*1*/, access=0) /* succs: block_10 */ } else { block block_9: /* preds: block_7 */ /* succs: block_10 */ } block block_10: /* preds: block_8 block_9 */ vec1 32 ssa_1447 = deref_var &phi@22 (function_temp float) vec1 32 ssa_1448 = intrinsic load_deref (ssa_1447) (access=0) vec1 32 ssa_1449 = deref_var &phi@23 (function_temp float) vec1 32 ssa_1450 = intrinsic load_deref (ssa_1449) (access=0) vec1 32 ssa_1451 = deref_var &phi@24 (function_temp float) vec1 32 ssa_1452 = intrinsic load_deref (ssa_1451) (access=0) vec1 32 ssa_1453 = deref_var &phi@25 (function_temp float) vec1 32 ssa_1454 = intrinsic load_deref (ssa_1453) (access=0) vec1 32 ssa_1455 = deref_var &phi@26 (function_temp float) vec1 32 ssa_1456 = intrinsic load_deref (ssa_1455) (access=0) vec1 32 ssa_1457 = deref_var &phi@27 (function_temp float) vec1 32 ssa_1458 = intrinsic load_deref (ssa_1457) (access=0) vec1 32 ssa_1459 = deref_var &phi@28 (function_temp float) vec1 32 ssa_1460 = intrinsic load_deref (ssa_1459) (access=0) vec1 32 ssa_1461 = deref_var &phi@29 (function_temp float) vec1 32 ssa_1462 = intrinsic load_deref (ssa_1461) (access=0) vec1 32 ssa_1463 = deref_var &phi@30 (function_temp float) vec1 32 ssa_1464 = intrinsic load_deref (ssa_1463) (access=0) vec1 32 ssa_1465 = deref_var &phi@31 (function_temp float) vec1 32 ssa_1466 = intrinsic load_deref (ssa_1465) (access=0) vec1 32 ssa_1467 = deref_var &phi@32 (function_temp float) vec1 32 ssa_1468 = intrinsic load_deref (ssa_1467) (access=0) vec1 32 ssa_1469 = deref_var &phi@33 (function_temp float) vec1 32 ssa_1470 = intrinsic load_deref (ssa_1469) (access=0) vec1 32 ssa_1471 = deref_var &phi@45 (function_temp float) intrinsic store_deref (ssa_1471, ssa_1470) (wrmask=x /*1*/, access=0) vec1 32 ssa_1472 = deref_var &phi@44 (function_temp float) intrinsic store_deref (ssa_1472, ssa_1468) (wrmask=x /*1*/, access=0) vec1 32 ssa_1473 = deref_var &phi@43 (function_temp 
float) intrinsic store_deref (ssa_1473, ssa_1466) (wrmask=x /*1*/, access=0) vec1 32 ssa_1474 = deref_var &phi@42 (function_temp float) intrinsic store_deref (ssa_1474, ssa_1464) (wrmask=x /*1*/, access=0) vec1 32 ssa_1475 = deref_var &phi@41 (function_temp float) intrinsic store_deref (ssa_1475, ssa_1462) (wrmask=x /*1*/, access=0) vec1 32 ssa_1476 = deref_var &phi@40 (function_temp float) intrinsic store_deref (ssa_1476, ssa_1460) (wrmask=x /*1*/, access=0) vec1 32 ssa_1477 = deref_var &phi@39 (function_temp float) intrinsic store_deref (ssa_1477, ssa_1458) (wrmask=x /*1*/, access=0) vec1 32 ssa_1478 = deref_var &phi@38 (function_temp float) intrinsic store_deref (ssa_1478, ssa_1456) (wrmask=x /*1*/, access=0) vec1 32 ssa_1479 = deref_var &phi@37 (function_temp float) intrinsic store_deref (ssa_1479, ssa_1454) (wrmask=x /*1*/, access=0) vec1 32 ssa_1480 = deref_var &phi@36 (function_temp float) intrinsic store_deref (ssa_1480, ssa_1452) (wrmask=x /*1*/, access=0) vec1 32 ssa_1481 = deref_var &phi@35 (function_temp float) intrinsic store_deref (ssa_1481, ssa_1450) (wrmask=x /*1*/, access=0) vec1 32 ssa_1482 = deref_var &phi@34 (function_temp float) intrinsic store_deref (ssa_1482, ssa_1448) (wrmask=x /*1*/, access=0) /* succs: block_12 */ } else { block block_11: /* preds: block_6 */ /* succs: block_12 */ } block block_12: /* preds: block_10 block_11 */ vec1 32 ssa_1483 = deref_var &phi@34 (function_temp float) vec1 32 ssa_1484 = intrinsic load_deref (ssa_1483) (access=0) vec1 32 ssa_1485 = deref_var &phi@35 (function_temp float) vec1 32 ssa_1486 = intrinsic load_deref (ssa_1485) (access=0) vec1 32 ssa_1487 = deref_var &phi@36 (function_temp float) vec1 32 ssa_1488 = intrinsic load_deref (ssa_1487) (access=0) vec1 32 ssa_1489 = deref_var &phi@37 (function_temp float) vec1 32 ssa_1490 = intrinsic load_deref (ssa_1489) (access=0) vec1 32 ssa_1491 = deref_var &phi@38 (function_temp float) vec1 32 ssa_1492 = intrinsic load_deref (ssa_1491) (access=0) vec1 32 ssa_1493 = deref_var &phi@39 (function_temp float) vec1 32 ssa_1494 = intrinsic load_deref (ssa_1493) (access=0) vec1 32 ssa_1495 = deref_var &phi@40 (function_temp float) vec1 32 ssa_1496 = intrinsic load_deref (ssa_1495) (access=0) vec1 32 ssa_1497 = deref_var &phi@41 (function_temp float) vec1 32 ssa_1498 = intrinsic load_deref (ssa_1497) (access=0) vec1 32 ssa_1499 = deref_var &phi@42 (function_temp float) vec1 32 ssa_1500 = intrinsic load_deref (ssa_1499) (access=0) vec1 32 ssa_1501 = deref_var &phi@43 (function_temp float) vec1 32 ssa_1502 = intrinsic load_deref (ssa_1501) (access=0) vec1 32 ssa_1503 = deref_var &phi@44 (function_temp float) vec1 32 ssa_1504 = intrinsic load_deref (ssa_1503) (access=0) vec1 32 ssa_1505 = deref_var &phi@45 (function_temp float) vec1 32 ssa_1506 = intrinsic load_deref (ssa_1505) (access=0) vec1 32 ssa_1507 = iadd ssa_940, ssa_359 vec1 32 ssa_1508 = iand ssa_1507, ssa_940 vec1 1 ssa_1509 = ieq ssa_1508, ssa_358 vec1 32 ssa_1510 = deref_var &phi@21 (function_temp uint) intrinsic store_deref (ssa_1510, ssa_940) (wrmask=x /*1*/, access=0) vec1 32 ssa_1511 = deref_var &phi@20 (function_temp float) intrinsic store_deref (ssa_1511, ssa_1504) (wrmask=x /*1*/, access=0) vec1 32 ssa_1512 = deref_var &phi@19 (function_temp float) intrinsic store_deref (ssa_1512, ssa_1502) (wrmask=x /*1*/, access=0) vec1 32 ssa_1513 = deref_var &phi@18 (function_temp float) intrinsic store_deref (ssa_1513, ssa_1500) (wrmask=x /*1*/, access=0) vec1 32 ssa_1514 = deref_var &phi@17 (function_temp float) intrinsic store_deref 
(ssa_1514, ssa_1498) (wrmask=x /*1*/, access=0) vec1 32 ssa_1515 = deref_var &phi@16 (function_temp float) intrinsic store_deref (ssa_1515, ssa_1496) (wrmask=x /*1*/, access=0) vec1 32 ssa_1516 = deref_var &phi@15 (function_temp float) intrinsic store_deref (ssa_1516, ssa_1506) (wrmask=x /*1*/, access=0) vec1 32 ssa_1517 = deref_var &phi@14 (function_temp float) intrinsic store_deref (ssa_1517, ssa_1494) (wrmask=x /*1*/, access=0) vec1 32 ssa_1518 = deref_var &phi@13 (function_temp float) intrinsic store_deref (ssa_1518, ssa_1484) (wrmask=x /*1*/, access=0) vec1 32 ssa_1519 = deref_var &phi@12 (function_temp float) intrinsic store_deref (ssa_1519, ssa_1492) (wrmask=x /*1*/, access=0) vec1 32 ssa_1520 = deref_var &phi@11 (function_temp float) intrinsic store_deref (ssa_1520, ssa_1490) (wrmask=x /*1*/, access=0) vec1 32 ssa_1521 = deref_var &phi@10 (function_temp float) intrinsic store_deref (ssa_1521, ssa_1488) (wrmask=x /*1*/, access=0) vec1 32 ssa_1522 = deref_var &phi (function_temp float) intrinsic store_deref (ssa_1522, ssa_1486) (wrmask=x /*1*/, access=0) /* succs: block_13 block_14 */ if ssa_1509 { block block_13: /* preds: block_12 */ break /* succs: block_16 */ } else { block block_14: /* preds: block_12 */ /* succs: block_15 */ } block block_15: /* preds: block_14 */ continue /* succs: block_6 */ } block block_16: /* preds: block_13 */ vec1 32 ssa_1523 = deref_var &phi@58 (function_temp uint) intrinsic store_deref (ssa_1523, ssa_940) (wrmask=x /*1*/, access=0) vec1 32 ssa_1524 = deref_var &phi@57 (function_temp float) intrinsic store_deref (ssa_1524, ssa_1504) (wrmask=x /*1*/, access=0) vec1 32 ssa_1525 = deref_var &phi@56 (function_temp float) intrinsic store_deref (ssa_1525, ssa_1502) (wrmask=x /*1*/, access=0) vec1 32 ssa_1526 = deref_var &phi@55 (function_temp float) intrinsic store_deref (ssa_1526, ssa_1500) (wrmask=x /*1*/, access=0) vec1 32 ssa_1527 = deref_var &phi@54 (function_temp float) intrinsic store_deref (ssa_1527, ssa_1498) (wrmask=x /*1*/, access=0) vec1 32 ssa_1528 = deref_var &phi@53 (function_temp float) intrinsic store_deref (ssa_1528, ssa_1496) (wrmask=x /*1*/, access=0) vec1 32 ssa_1529 = deref_var &phi@52 (function_temp float) intrinsic store_deref (ssa_1529, ssa_1506) (wrmask=x /*1*/, access=0) vec1 32 ssa_1530 = deref_var &phi@51 (function_temp float) intrinsic store_deref (ssa_1530, ssa_1494) (wrmask=x /*1*/, access=0) vec1 32 ssa_1531 = deref_var &phi@50 (function_temp float) intrinsic store_deref (ssa_1531, ssa_1484) (wrmask=x /*1*/, access=0) vec1 32 ssa_1532 = deref_var &phi@49 (function_temp float) intrinsic store_deref (ssa_1532, ssa_1492) (wrmask=x /*1*/, access=0) vec1 32 ssa_1533 = deref_var &phi@48 (function_temp float) intrinsic store_deref (ssa_1533, ssa_1490) (wrmask=x /*1*/, access=0) vec1 32 ssa_1534 = deref_var &phi@47 (function_temp float) intrinsic store_deref (ssa_1534, ssa_1488) (wrmask=x /*1*/, access=0) vec1 32 ssa_1535 = deref_var &phi@46 (function_temp float) intrinsic store_deref (ssa_1535, ssa_1486) (wrmask=x /*1*/, access=0) /* succs: block_17 */ } block block_17: /* preds: block_4 block_16 */ vec1 32 ssa_1536 = deref_var &phi@46 (function_temp float) vec1 32 ssa_1537 = intrinsic load_deref (ssa_1536) (access=0) vec1 32 ssa_1538 = deref_var &phi@47 (function_temp float) vec1 32 ssa_1539 = intrinsic load_deref (ssa_1538) (access=0) vec1 32 ssa_1540 = deref_var &phi@48 (function_temp float) vec1 32 ssa_1541 = intrinsic load_deref (ssa_1540) (access=0) vec1 32 ssa_1542 = deref_var &phi@49 (function_temp float) vec1 32 ssa_1543 = 
intrinsic load_deref (ssa_1542) (access=0) vec1 32 ssa_1544 = deref_var &phi@50 (function_temp float) vec1 32 ssa_1545 = intrinsic load_deref (ssa_1544) (access=0) vec1 32 ssa_1546 = deref_var &phi@51 (function_temp float) vec1 32 ssa_1547 = intrinsic load_deref (ssa_1546) (access=0) vec1 32 ssa_1548 = deref_var &phi@52 (function_temp float) vec1 32 ssa_1549 = intrinsic load_deref (ssa_1548) (access=0) vec1 32 ssa_1550 = deref_var &phi@53 (function_temp float) vec1 32 ssa_1551 = intrinsic load_deref (ssa_1550) (access=0) vec1 32 ssa_1552 = deref_var &phi@54 (function_temp float) vec1 32 ssa_1553 = intrinsic load_deref (ssa_1552) (access=0) vec1 32 ssa_1554 = deref_var &phi@55 (function_temp float) vec1 32 ssa_1555 = intrinsic load_deref (ssa_1554) (access=0) vec1 32 ssa_1556 = deref_var &phi@56 (function_temp float) vec1 32 ssa_1557 = intrinsic load_deref (ssa_1556) (access=0) vec1 32 ssa_1558 = deref_var &phi@57 (function_temp float) vec1 32 ssa_1559 = intrinsic load_deref (ssa_1558) (access=0) vec1 32 ssa_1560 = deref_var &phi@58 (function_temp uint) vec1 32 ssa_1561 = intrinsic load_deref (ssa_1560) (access=0) vec1 1 ssa_1562 = ine ssa_1561, ssa_357 vec1 1 ssa_1563 = flt! ssa_356, ssa_1559 vec1 1 ssa_1564 = iand ssa_1563, ssa_1562 vec1 32 ssa_1565 = deref_var &phi@68 (function_temp float) intrinsic store_deref (ssa_1565, ssa_1557) (wrmask=x /*1*/, access=0) vec1 32 ssa_1566 = deref_var &phi@67 (function_temp float) intrinsic store_deref (ssa_1566, ssa_1555) (wrmask=x /*1*/, access=0) vec1 32 ssa_1567 = deref_var &phi@66 (function_temp float) intrinsic store_deref (ssa_1567, ssa_1553) (wrmask=x /*1*/, access=0) vec1 32 ssa_1568 = deref_var &phi@65 (function_temp float) intrinsic store_deref (ssa_1568, ssa_1551) (wrmask=x /*1*/, access=0) vec1 32 ssa_1569 = deref_var &phi@64 (function_temp float) intrinsic store_deref (ssa_1569, ssa_1549) (wrmask=x /*1*/, access=0) vec1 32 ssa_1570 = deref_var &phi@63 (function_temp float) intrinsic store_deref (ssa_1570, ssa_1547) (wrmask=x /*1*/, access=0) vec1 32 ssa_1571 = deref_var &phi@62 (function_temp float) intrinsic store_deref (ssa_1571, ssa_1545) (wrmask=x /*1*/, access=0) vec1 32 ssa_1572 = deref_var &phi@61 (function_temp float) intrinsic store_deref (ssa_1572, ssa_1543) (wrmask=x /*1*/, access=0) vec1 32 ssa_1573 = deref_var &phi@60 (function_temp float) intrinsic store_deref (ssa_1573, ssa_1541) (wrmask=x /*1*/, access=0) vec1 32 ssa_1574 = deref_var &phi@59 (function_temp float) intrinsic store_deref (ssa_1574, ssa_1539) (wrmask=x /*1*/, access=0) /* succs: block_18 block_19 */ if ssa_1564 { block block_18: /* preds: block_17 */ vec1 32 ssa_1575 = ufind_msb ssa_1561 vec1 1 ssa_1576 = ieq ssa_1575, ssa_355 vec1 32 ssa_1577 = isub ssa_354, ssa_1575 vec1 32 ssa_1578 = bcsel ssa_1576, ssa_353, ssa_1577 vec1 32 ssa_1579 = isub ssa_352, ssa_1578 vec1 1 ssa_1580 = ieq ssa_1578, ssa_351 vec1 32 ssa_1581 = bcsel ssa_1580, ssa_350, ssa_1579 vec1 32 ssa_1582 = iadd ssa_1581, ssa_772 vec1 32 ssa_1583 = imul ssa_1582, ssa_349 vec4 32 ssa_1584 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1585 = deref_cast (SSBO *)ssa_1584 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1586 = deref_struct &ssa_1585->field0 (ssbo uint[]) /* &((SSBO *)ssa_1584)->field0 */ vec4 32 ssa_1587 = deref_array &(*ssa_1586)[ssa_1583] (ssbo uint) /* &((SSBO *)ssa_1584)->field0[ssa_1583] */ vec1 32 ssa_1588 = intrinsic load_deref (ssa_1587) (access=16) vec1 32 ssa_1589 = iadd ssa_1583, ssa_348 vec4 32 ssa_1590 = intrinsic 
load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1591 = deref_cast (SSBO *)ssa_1590 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1592 = deref_struct &ssa_1591->field0 (ssbo uint[]) /* &((SSBO *)ssa_1590)->field0 */ vec4 32 ssa_1593 = deref_array &(*ssa_1592)[ssa_1589] (ssbo uint) /* &((SSBO *)ssa_1590)->field0[ssa_1589] */ vec1 32 ssa_1594 = intrinsic load_deref (ssa_1593) (access=16) vec1 32 ssa_1595 = iadd ssa_1583, ssa_347 vec4 32 ssa_1596 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1597 = deref_cast (SSBO *)ssa_1596 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1598 = deref_struct &ssa_1597->field0 (ssbo uint[]) /* &((SSBO *)ssa_1596)->field0 */ vec4 32 ssa_1599 = deref_array &(*ssa_1598)[ssa_1595] (ssbo uint) /* &((SSBO *)ssa_1596)->field0[ssa_1595] */ vec1 32 ssa_1600 = intrinsic load_deref (ssa_1599) (access=16) vec1 32 ssa_1601 = imul ssa_1582, ssa_346 vec1 32 ssa_1602 = iadd ssa_1601, ssa_345 vec4 32 ssa_1603 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1604 = deref_cast (SSBO *)ssa_1603 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1605 = deref_struct &ssa_1604->field0 (ssbo uint[]) /* &((SSBO *)ssa_1603)->field0 */ vec4 32 ssa_1606 = deref_array &(*ssa_1605)[ssa_1602] (ssbo uint) /* &((SSBO *)ssa_1603)->field0[ssa_1602] */ vec1 32 ssa_1607 = intrinsic load_deref (ssa_1606) (access=16) vec1 32 ssa_1608 = imul ssa_1582, ssa_344 vec1 32 ssa_1609 = iadd ssa_1608, ssa_343 vec4 32 ssa_1610 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1611 = deref_cast (SSBO *)ssa_1610 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1612 = deref_struct &ssa_1611->field0 (ssbo uint[]) /* &((SSBO *)ssa_1610)->field0 */ vec4 32 ssa_1613 = deref_array &(*ssa_1612)[ssa_1609] (ssbo uint) /* &((SSBO *)ssa_1610)->field0[ssa_1609] */ vec1 32 ssa_1614 = intrinsic load_deref (ssa_1613) (access=16) vec1 32 ssa_1615 = imul ssa_1582, ssa_342 vec1 32 ssa_1616 = iadd ssa_1615, ssa_341 vec4 32 ssa_1617 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1618 = deref_cast (SSBO *)ssa_1617 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1619 = deref_struct &ssa_1618->field0 (ssbo uint[]) /* &((SSBO *)ssa_1617)->field0 */ vec4 32 ssa_1620 = deref_array &(*ssa_1619)[ssa_1616] (ssbo uint) /* &((SSBO *)ssa_1617)->field0[ssa_1616] */ vec1 32 ssa_1621 = intrinsic load_deref (ssa_1620) (access=16) vec1 32 ssa_1622 = imul ssa_1582, ssa_340 vec1 32 ssa_1623 = iadd ssa_1622, ssa_339 vec4 32 ssa_1624 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1625 = deref_cast (SSBO *)ssa_1624 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1626 = deref_struct &ssa_1625->field0 (ssbo uint[]) /* &((SSBO *)ssa_1624)->field0 */ vec4 32 ssa_1627 = deref_array &(*ssa_1626)[ssa_1623] (ssbo uint) /* &((SSBO *)ssa_1624)->field0[ssa_1623] */ vec1 32 ssa_1628 = intrinsic load_deref (ssa_1627) (access=16) vec1 32 ssa_1629 = imul ssa_1582, ssa_338 vec1 32 ssa_1630 = iadd ssa_1629, ssa_337 vec4 32 ssa_1631 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1632 = deref_cast (SSBO *)ssa_1631 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1633 = deref_struct &ssa_1632->field0 (ssbo uint[]) /* &((SSBO *)ssa_1631)->field0 */ vec4 32 ssa_1634 = deref_array &(*ssa_1633)[ssa_1630] 
(ssbo uint) /* &((SSBO *)ssa_1631)->field0[ssa_1630] */ vec1 32 ssa_1635 = intrinsic load_deref (ssa_1634) (access=16) vec1 32 ssa_1636 = imul ssa_1582, ssa_336 vec1 32 ssa_1637 = iadd ssa_1636, ssa_335 vec4 32 ssa_1638 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1639 = deref_cast (SSBO *)ssa_1638 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1640 = deref_struct &ssa_1639->field0 (ssbo uint[]) /* &((SSBO *)ssa_1638)->field0 */ vec4 32 ssa_1641 = deref_array &(*ssa_1640)[ssa_1637] (ssbo uint) /* &((SSBO *)ssa_1638)->field0[ssa_1637] */ vec1 32 ssa_1642 = intrinsic load_deref (ssa_1641) (access=16) vec1 32 ssa_1643 = imul ssa_1582, ssa_334 vec1 32 ssa_1644 = iadd ssa_1643, ssa_333 vec4 32 ssa_1645 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1646 = deref_cast (SSBO *)ssa_1645 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1647 = deref_struct &ssa_1646->field0 (ssbo uint[]) /* &((SSBO *)ssa_1645)->field0 */ vec4 32 ssa_1648 = deref_array &(*ssa_1647)[ssa_1644] (ssbo uint) /* &((SSBO *)ssa_1645)->field0[ssa_1644] */ vec1 32 ssa_1649 = intrinsic load_deref (ssa_1648) (access=16) vec1 32 ssa_1650 = imul ssa_1582, ssa_332 vec1 32 ssa_1651 = iadd ssa_1650, ssa_331 vec4 32 ssa_1652 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1653 = deref_cast (SSBO *)ssa_1652 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1654 = deref_struct &ssa_1653->field0 (ssbo uint[]) /* &((SSBO *)ssa_1652)->field0 */ vec4 32 ssa_1655 = deref_array &(*ssa_1654)[ssa_1651] (ssbo uint) /* &((SSBO *)ssa_1652)->field0[ssa_1651] */ vec1 32 ssa_1656 = intrinsic load_deref (ssa_1655) (access=16) vec1 32 ssa_1657 = imul ssa_1582, ssa_330 vec1 32 ssa_1658 = iadd ssa_1657, ssa_329 vec4 32 ssa_1659 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1660 = deref_cast (SSBO *)ssa_1659 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1661 = deref_struct &ssa_1660->field0 (ssbo uint[]) /* &((SSBO *)ssa_1659)->field0 */ vec4 32 ssa_1662 = deref_array &(*ssa_1661)[ssa_1658] (ssbo uint) /* &((SSBO *)ssa_1659)->field0[ssa_1658] */ vec1 32 ssa_1663 = intrinsic load_deref (ssa_1662) (access=16) vec1 32 ssa_1664 = imul ssa_1582, ssa_328 vec1 32 ssa_1665 = iadd ssa_1664, ssa_327 vec4 32 ssa_1666 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1667 = deref_cast (SSBO *)ssa_1666 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1668 = deref_struct &ssa_1667->field0 (ssbo uint[]) /* &((SSBO *)ssa_1666)->field0 */ vec4 32 ssa_1669 = deref_array &(*ssa_1668)[ssa_1665] (ssbo uint) /* &((SSBO *)ssa_1666)->field0[ssa_1665] */ vec1 32 ssa_1670 = intrinsic load_deref (ssa_1669) (access=16) vec1 32 ssa_1671 = imul ssa_1582, ssa_326 vec1 32 ssa_1672 = iadd ssa_1671, ssa_325 vec4 32 ssa_1673 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1674 = deref_cast (SSBO *)ssa_1673 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1675 = deref_struct &ssa_1674->field0 (ssbo uint[]) /* &((SSBO *)ssa_1673)->field0 */ vec4 32 ssa_1676 = deref_array &(*ssa_1675)[ssa_1672] (ssbo uint) /* &((SSBO *)ssa_1673)->field0[ssa_1672] */ vec1 32 ssa_1677 = intrinsic load_deref (ssa_1676) (access=16) vec1 32 ssa_1678 = imul ssa_1582, ssa_324 vec1 32 ssa_1679 = iadd ssa_1678, ssa_323 vec4 32 ssa_1680 = intrinsic load_vulkan_descriptor (ssa_531) 
(desc_type=SSBO /*7*/) vec4 32 ssa_1681 = deref_cast (SSBO *)ssa_1680 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1682 = deref_struct &ssa_1681->field0 (ssbo uint[]) /* &((SSBO *)ssa_1680)->field0 */ vec4 32 ssa_1683 = deref_array &(*ssa_1682)[ssa_1679] (ssbo uint) /* &((SSBO *)ssa_1680)->field0[ssa_1679] */ vec1 32 ssa_1684 = intrinsic load_deref (ssa_1683) (access=16) vec1 32 ssa_1685 = imul ssa_1582, ssa_322 vec1 32 ssa_1686 = iadd ssa_1685, ssa_321 vec4 32 ssa_1687 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1688 = deref_cast (SSBO *)ssa_1687 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1689 = deref_struct &ssa_1688->field0 (ssbo uint[]) /* &((SSBO *)ssa_1687)->field0 */ vec4 32 ssa_1690 = deref_array &(*ssa_1689)[ssa_1686] (ssbo uint) /* &((SSBO *)ssa_1687)->field0[ssa_1686] */ vec1 32 ssa_1691 = intrinsic load_deref (ssa_1690) (access=16) vec1 32 ssa_1692 = imul ssa_1582, ssa_320 vec1 32 ssa_1693 = iadd ssa_1692, ssa_319 vec4 32 ssa_1694 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1695 = deref_cast (SSBO *)ssa_1694 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1696 = deref_struct &ssa_1695->field0 (ssbo uint[]) /* &((SSBO *)ssa_1694)->field0 */ vec4 32 ssa_1697 = deref_array &(*ssa_1696)[ssa_1693] (ssbo uint) /* &((SSBO *)ssa_1694)->field0[ssa_1693] */ vec1 32 ssa_1698 = intrinsic load_deref (ssa_1697) (access=16) vec1 32 ssa_1699 = imul ssa_1582, ssa_318 vec1 32 ssa_1700 = iadd ssa_1699, ssa_317 vec4 32 ssa_1701 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1702 = deref_cast (SSBO *)ssa_1701 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1703 = deref_struct &ssa_1702->field0 (ssbo uint[]) /* &((SSBO *)ssa_1701)->field0 */ vec4 32 ssa_1704 = deref_array &(*ssa_1703)[ssa_1700] (ssbo uint) /* &((SSBO *)ssa_1701)->field0[ssa_1700] */ vec1 32 ssa_1705 = intrinsic load_deref (ssa_1704) (access=16) vec1 32 ssa_1706 = imul ssa_1582, ssa_316 vec1 32 ssa_1707 = iadd ssa_1706, ssa_315 vec4 32 ssa_1708 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1709 = deref_cast (SSBO *)ssa_1708 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1710 = deref_struct &ssa_1709->field0 (ssbo uint[]) /* &((SSBO *)ssa_1708)->field0 */ vec4 32 ssa_1711 = deref_array &(*ssa_1710)[ssa_1707] (ssbo uint) /* &((SSBO *)ssa_1708)->field0[ssa_1707] */ vec1 32 ssa_1712 = intrinsic load_deref (ssa_1711) (access=16) vec1 32 ssa_1713 = imul ssa_1582, ssa_314 vec1 32 ssa_1714 = iadd ssa_1713, ssa_313 vec4 32 ssa_1715 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1716 = deref_cast (SSBO *)ssa_1715 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1717 = deref_struct &ssa_1716->field0 (ssbo uint[]) /* &((SSBO *)ssa_1715)->field0 */ vec4 32 ssa_1718 = deref_array &(*ssa_1717)[ssa_1714] (ssbo uint) /* &((SSBO *)ssa_1715)->field0[ssa_1714] */ vec1 32 ssa_1719 = intrinsic load_deref (ssa_1718) (access=16) vec1 32 ssa_1720 = imul ssa_1582, ssa_312 vec1 32 ssa_1721 = iadd ssa_1720, ssa_311 vec4 32 ssa_1722 = intrinsic load_vulkan_descriptor (ssa_536) (desc_type=SSBO /*7*/) vec4 32 ssa_1723 = deref_cast (SSBO *)ssa_1722 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1724 = deref_struct &ssa_1723->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_1722)->field0 */ vec4 32 ssa_1725 = deref_array 
&(*ssa_1724)[ssa_1721] (ssbo uvec2) /* &((SSBO *)ssa_1722)->field0[ssa_1721] */ vec2 32 ssa_1726 = intrinsic load_deref (ssa_1725) (access=16) vec1 32 ssa_1727 = imul ssa_1582, ssa_310 vec1 32 ssa_1728 = iadd ssa_1727, ssa_309 vec4 32 ssa_1729 = intrinsic load_vulkan_descriptor (ssa_536) (desc_type=SSBO /*7*/) vec4 32 ssa_1730 = deref_cast (SSBO *)ssa_1729 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1731 = deref_struct &ssa_1730->field0 (ssbo uvec2[]) /* &((SSBO *)ssa_1729)->field0 */ vec4 32 ssa_1732 = deref_array &(*ssa_1731)[ssa_1728] (ssbo uvec2) /* &((SSBO *)ssa_1729)->field0[ssa_1728] */ vec2 32 ssa_1733 = intrinsic load_deref (ssa_1732) (access=16) vec1 32 ssa_1734 = imul ssa_1582, ssa_308 vec1 32 ssa_1735 = iadd ssa_1734, ssa_307 vec4 32 ssa_1736 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1737 = deref_cast (SSBO *)ssa_1736 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1738 = deref_struct &ssa_1737->field0 (ssbo uint[]) /* &((SSBO *)ssa_1736)->field0 */ vec4 32 ssa_1739 = deref_array &(*ssa_1738)[ssa_1735] (ssbo uint) /* &((SSBO *)ssa_1736)->field0[ssa_1735] */ vec1 32 ssa_1740 = intrinsic load_deref (ssa_1739) (access=16) vec1 32 ssa_1741 = imul ssa_1582, ssa_306 vec1 32 ssa_1742 = iadd ssa_1741, ssa_305 vec4 32 ssa_1743 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1744 = deref_cast (SSBO *)ssa_1743 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1745 = deref_struct &ssa_1744->field0 (ssbo uint[]) /* &((SSBO *)ssa_1743)->field0 */ vec4 32 ssa_1746 = deref_array &(*ssa_1745)[ssa_1742] (ssbo uint) /* &((SSBO *)ssa_1743)->field0[ssa_1742] */ vec1 32 ssa_1747 = intrinsic load_deref (ssa_1746) (access=16) vec1 32 ssa_1748 = imul ssa_1582, ssa_304 vec1 32 ssa_1749 = iadd ssa_1748, ssa_303 vec4 32 ssa_1750 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1751 = deref_cast (SSBO *)ssa_1750 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1752 = deref_struct &ssa_1751->field0 (ssbo uint[]) /* &((SSBO *)ssa_1750)->field0 */ vec4 32 ssa_1753 = deref_array &(*ssa_1752)[ssa_1749] (ssbo uint) /* &((SSBO *)ssa_1750)->field0[ssa_1749] */ vec1 32 ssa_1754 = intrinsic load_deref (ssa_1753) (access=16) vec1 32 ssa_1755 = imul ssa_1582, ssa_302 vec1 32 ssa_1756 = iadd ssa_1755, ssa_301 vec4 32 ssa_1757 = intrinsic load_vulkan_descriptor (ssa_531) (desc_type=SSBO /*7*/) vec4 32 ssa_1758 = deref_cast (SSBO *)ssa_1757 (ssbo SSBO) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1759 = deref_struct &ssa_1758->field0 (ssbo uint[]) /* &((SSBO *)ssa_1757)->field0 */ vec4 32 ssa_1760 = deref_array &(*ssa_1759)[ssa_1756] (ssbo uint) /* &((SSBO *)ssa_1757)->field0[ssa_1756] */ vec1 32 ssa_1761 = intrinsic load_deref (ssa_1760) (access=16) vec4 32 ssa_1762 = intrinsic load_vulkan_descriptor (ssa_623) (desc_type=SSBO /*7*/) vec4 32 ssa_1763 = deref_cast (BindlessCBV *)ssa_1762 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1764 = deref_struct &ssa_1763->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1762)->field0 */ vec1 32 ssa_1765 = load_const (0x0000004f = 0.000000) vec4 32 ssa_1766 = deref_array &(*ssa_1764)[79] (ssbo vec4) /* &((BindlessCBV *)ssa_1762)->field0[79] */ vec4 32 ssa_1767 = intrinsic load_deref (ssa_1766) (access=16) vec4 32 ssa_1768 = intrinsic load_vulkan_descriptor (ssa_623) (desc_type=SSBO /*7*/) vec4 32 ssa_1769 = deref_cast (BindlessCBV 
*)ssa_1768 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_1770 = deref_struct &ssa_1769->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_1768)->field0 */ vec1 32 ssa_1771 = load_const (0x00000050 = 0.000000) vec4 32 ssa_1772 = deref_array &(*ssa_1770)[80] (ssbo vec4) /* &((BindlessCBV *)ssa_1768)->field0[80] */ vec4 32 ssa_1773 = intrinsic load_deref (ssa_1772) (access=16) vec1 32 ssa_1774 = fmul ssa_1773.y, ssa_1767.z vec1 32 ssa_1775 = fmul ssa_1773.y, ssa_1767.w vec1 32 ssa_1776 = fmul ssa_1773.x, ssa_1773.w vec1 32 ssa_1777 = fmax! ssa_1776, ssa_300 vec1 32 ssa_1778 = fmin! ssa_1777, ssa_299 vec1 32 ssa_1779 = fadd ssa_1774, ssa_298 vec1 32 ssa_1780 = fadd ssa_1775, ssa_297 vec1 32 ssa_1781 = fmul ssa_1778, ssa_1779 vec1 32 ssa_1782 = fmul ssa_1778, ssa_1780 vec1 32 ssa_1783 = fadd ssa_1781, ssa_296 vec1 32 ssa_1784 = fadd ssa_1782, ssa_295 vec1 32 ssa_1785 = fmul ssa_1698, ssa_829 vec1 32 ssa_1786 = fmul ssa_1698, ssa_828 vec1 32 ssa_1787 = fadd ssa_1733.x, ssa_1785 vec1 32 ssa_1788 = fadd ssa_1733.y, ssa_1786 vec1 32 ssa_1789 = fmul ssa_1698, ssa_811 vec1 32 ssa_1790 = fmul ssa_1698, ssa_812 vec1 32 ssa_1791 = fmul ssa_1698, ssa_813 vec1 32 ssa_1792 = fmul ssa_1698, ssa_814 vec1 32 ssa_1793 = fmul ssa_1705, ssa_829 vec1 32 ssa_1794 = fmul ssa_1705, ssa_828 vec1 32 ssa_1795 = fadd ssa_1726.x, ssa_1793 vec1 32 ssa_1796 = fadd ssa_1726.y, ssa_1794 vec1 32 ssa_1797 = fmul ssa_1705, ssa_826.x vec1 32 ssa_1798 = fmul ssa_1797, ssa_812 vec1 32 ssa_1799 = fmul ssa_1797, ssa_814 vec1 32 ssa_1800 = fmul ssa_811, ssa_778.x vec1 32 ssa_1801 = fmul ssa_1800, ssa_1797 vec1 32 ssa_1802 = fmul ssa_1801, ssa_1783 vec1 32 ssa_1803 = fmul ssa_1798, ssa_1783 vec1 32 ssa_1804 = fmul ssa_813, ssa_778.x vec1 32 ssa_1805 = fmul ssa_1804, ssa_1797 vec1 32 ssa_1806 = fmul ssa_1805, ssa_1784 vec1 32 ssa_1807 = fmul ssa_1799, ssa_1784 vec1 32 ssa_1808 = iadd ssa_1747, ssa_294 vec1 32 ssa_1809 = deref_var ®isters (push_const RootConstants) vec1 32 ssa_1810 = deref_struct &ssa_1809->field26 (push_const uint) /* ®isters.field26 */ vec1 32 ssa_1811 = intrinsic load_deref (ssa_1810) (access=0) vec1 32 ssa_1812 = iadd ssa_1811, ssa_293 vec1 32 ssa_1813 = iadd ssa_1812, ssa_1808 vec1 32 ssa_1814 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1815 = deref_array &(*ssa_1814)[ssa_1813] (uniform texture2D) /* &@0[ssa_1813] */ vec1 32 ssa_1817 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1818 = deref_array &(*ssa_1817)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1820 = vec2 ssa_1795, ssa_1796 vec2 32 ssa_1821 = vec2 ssa_1802, ssa_1803 vec2 32 ssa_1822 = vec2 ssa_1806, ssa_1807 vec4 32 ssa_1825 = (float32)txd ssa_1815 (texture_deref), ssa_1818 (sampler_deref), ssa_1820 (coord), ssa_1821 (ddx), ssa_1822 (ddy) vec1 32 ssa_1826 = fmul ssa_1825.x, ssa_292 vec1 32 ssa_1827 = fmul ssa_1825.y, ssa_291 vec1 32 ssa_1828 = fadd ssa_1826, ssa_290 vec1 32 ssa_1829 = fadd ssa_1827, ssa_289 vec1 32 ssa_1830 = fsub ssa_288, ssa_1825.w vec1 32 ssa_1831 = fmul ssa_1825.w, ssa_1712 vec1 32 ssa_1832 = fadd ssa_1830, ssa_1831 vec1 32 ssa_1833 = fmax! ssa_1832, ssa_287 vec1 32 ssa_1834 = fmin! ssa_1833, ssa_286 vec1 32 ssa_1835 = fadd ssa_1834, ssa_285 vec1 32 ssa_1836 = fabs ssa_1835 vec1 32 ssa_1837 = fmul ssa_1836, ssa_284 vec1 32 ssa_1838 = fsub ssa_283, ssa_1837 vec1 32 ssa_1839 = flog2 ssa_1838 vec1 32 ssa_1840 = fmul ssa_1839, ssa_282 vec1 32 ssa_1841 = fexp2 ssa_1840 vec1 32 ssa_1842 = fmax! ssa_1841, ssa_281 vec1 32 ssa_1843 = fmin! 
ssa_1842, ssa_280 vec1 32 ssa_1844 = fmul ssa_1843, ssa_1607 vec1 32 ssa_1845 = fsub ssa_1844, ssa_1537 vec1 32 ssa_1846 = fmax! ssa_1845, ssa_279 vec1 32 ssa_1847 = fmin! ssa_1846, ssa_278 vec1 32 ssa_1848 = fmul ssa_1834, ssa_1607 vec1 32 ssa_1849 = fmul ssa_1828, ssa_1719 vec1 32 ssa_1850 = fmul ssa_1829, ssa_1719 vec1 32 ssa_1851 = fsub ssa_1849, ssa_1541 vec1 32 ssa_1852 = fsub ssa_1850, ssa_1543 vec1 32 ssa_1853 = fmul ssa_1847, ssa_1851 vec1 32 ssa_1854 = fmul ssa_1847, ssa_1852 vec1 32 ssa_1855 = fadd ssa_1853, ssa_1541 vec1 32 ssa_1856 = fadd ssa_1854, ssa_1543 vec1 32 ssa_1857 = fabs ssa_1719 vec1 32 ssa_1858 = fmul ssa_1857, ssa_1847 vec1 32 ssa_1859 = fmax! ssa_1539, ssa_1858 vec1 32 ssa_1860 = fmin! ssa_1559, ssa_1848 vec1 32 ssa_1861 = ushr ssa_1761, ssa_277 vec1 32 ssa_1862 = iadd ssa_1861, ssa_276 vec1 32 ssa_1863 = deref_var ®isters (push_const RootConstants) vec1 32 ssa_1864 = deref_struct &ssa_1863->field26 (push_const uint) /* ®isters.field26 */ vec1 32 ssa_1865 = intrinsic load_deref (ssa_1864) (access=0) vec1 32 ssa_1866 = iadd ssa_1865, ssa_275 vec1 32 ssa_1867 = iadd ssa_1866, ssa_1862 vec1 32 ssa_1868 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1869 = deref_array &(*ssa_1868)[ssa_1867] (uniform texture2D) /* &@0[ssa_1867] */ vec1 32 ssa_1871 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1872 = deref_array &(*ssa_1871)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1874 = vec2 ssa_1787, ssa_1788 vec2 32 ssa_1875 = vec2 ssa_1789, ssa_1790 vec2 32 ssa_1876 = vec2 ssa_1791, ssa_1792 vec4 32 ssa_1879 = (float32)txd ssa_1869 (texture_deref), ssa_1872 (sampler_deref), ssa_1874 (coord), ssa_1875 (ddx), ssa_1876 (ddy) vec1 32 ssa_1880 = fmul ssa_1879.x, ssa_1642 vec1 32 ssa_1881 = fadd ssa_1880, ssa_1649 vec1 32 ssa_1882 = fmax! ssa_1881, ssa_274 vec1 32 ssa_1883 = fmin! ssa_1882, ssa_273 vec1 32 ssa_1884 = fmul ssa_1883, ssa_1656 vec1 32 ssa_1885 = fadd ssa_1884, ssa_1663 vec1 32 ssa_1886 = fmax! ssa_1885, ssa_272 vec1 32 ssa_1887 = fmin! ssa_1886, ssa_271 vec1 32 ssa_1888 = fmul ssa_1887, ssa_1860 vec1 32 ssa_1889 = fadd ssa_1888, ssa_1549 vec1 32 ssa_1890 = iand ssa_1761, ssa_270 vec1 32 ssa_1891 = iadd ssa_1890, ssa_269 vec1 32 ssa_1892 = deref_var ®isters (push_const RootConstants) vec1 32 ssa_1893 = deref_struct &ssa_1892->field26 (push_const uint) /* ®isters.field26 */ vec1 32 ssa_1894 = intrinsic load_deref (ssa_1893) (access=0) vec1 32 ssa_1895 = iadd ssa_1894, ssa_268 vec1 32 ssa_1896 = iadd ssa_1895, ssa_1891 vec1 32 ssa_1897 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1898 = deref_array &(*ssa_1897)[ssa_1896] (uniform texture2D) /* &@0[ssa_1896] */ vec1 32 ssa_1900 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1901 = deref_array &(*ssa_1900)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1903 = vec2 ssa_1787, ssa_1788 vec2 32 ssa_1904 = vec2 ssa_1789, ssa_1790 vec2 32 ssa_1905 = vec2 ssa_1791, ssa_1792 vec4 32 ssa_1908 = (float32)txd ssa_1898 (texture_deref), ssa_1901 (sampler_deref), ssa_1903 (coord), ssa_1904 (ddx), ssa_1905 (ddy) vec1 32 ssa_1909 = fmul ssa_1908.x, ssa_1614 vec1 32 ssa_1910 = fadd ssa_1909, ssa_1621 vec1 32 ssa_1911 = fmax! ssa_1910, ssa_267 vec1 32 ssa_1912 = fmin! ssa_1911, ssa_266 vec1 32 ssa_1913 = fmul ssa_1912, ssa_1628 vec1 32 ssa_1914 = fadd ssa_1913, ssa_1635 vec1 32 ssa_1915 = fmax! ssa_1914, ssa_265 vec1 32 ssa_1916 = fmin! 
ssa_1915, ssa_264 vec1 32 ssa_1917 = fmul ssa_1916, ssa_1860 vec1 32 ssa_1918 = fadd ssa_1917, ssa_1551 vec1 32 ssa_1919 = iand ssa_1754, ssa_263 vec1 32 ssa_1920 = iadd ssa_1919, ssa_262 vec1 32 ssa_1921 = deref_var ®isters (push_const RootConstants) vec1 32 ssa_1922 = deref_struct &ssa_1921->field26 (push_const uint) /* ®isters.field26 */ vec1 32 ssa_1923 = intrinsic load_deref (ssa_1922) (access=0) vec1 32 ssa_1924 = iadd ssa_1923, ssa_261 vec1 32 ssa_1925 = iadd ssa_1924, ssa_1920 vec1 32 ssa_1926 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1927 = deref_array &(*ssa_1926)[ssa_1925] (uniform texture2D) /* &@0[ssa_1925] */ vec1 32 ssa_1929 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1930 = deref_array &(*ssa_1929)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1932 = vec2 ssa_1787, ssa_1788 vec2 32 ssa_1933 = vec2 ssa_1789, ssa_1790 vec2 32 ssa_1934 = vec2 ssa_1791, ssa_1792 vec4 32 ssa_1937 = (float32)txd ssa_1927 (texture_deref), ssa_1930 (sampler_deref), ssa_1932 (coord), ssa_1933 (ddx), ssa_1934 (ddy) vec1 32 ssa_1938 = fmul ssa_1908.x, ssa_1670 vec1 32 ssa_1939 = fadd ssa_1938, ssa_1677 vec1 32 ssa_1940 = fmax! ssa_1939, ssa_260 vec1 32 ssa_1941 = fmin! ssa_1940, ssa_259 vec1 32 ssa_1942 = fmul ssa_1941, ssa_1684 vec1 32 ssa_1943 = fadd ssa_1942, ssa_1691 vec1 32 ssa_1944 = fmax! ssa_1943, ssa_258 vec1 32 ssa_1945 = fmin! ssa_1944, ssa_257 vec1 32 ssa_1946 = fadd ssa_1588, ssa_256 vec1 32 ssa_1947 = fadd ssa_1594, ssa_255 vec1 32 ssa_1948 = fadd ssa_1600, ssa_254 vec1 32 ssa_1949 = fmul ssa_1945, ssa_1946 vec1 32 ssa_1950 = fmul ssa_1945, ssa_1947 vec1 32 ssa_1951 = fmul ssa_1945, ssa_1948 vec1 32 ssa_1952 = fadd ssa_1949, ssa_253 vec1 32 ssa_1953 = fadd ssa_1950, ssa_252 vec1 32 ssa_1954 = fadd ssa_1951, ssa_251 vec1 32 ssa_1955 = fmul ssa_1937.x, ssa_1860 vec1 32 ssa_1956 = fmul ssa_1955, ssa_1952 vec1 32 ssa_1957 = fmul ssa_1937.y, ssa_1860 vec1 32 ssa_1958 = fmul ssa_1957, ssa_1953 vec1 32 ssa_1959 = fmul ssa_1937.z, ssa_1860 vec1 32 ssa_1960 = fmul ssa_1959, ssa_1954 vec1 32 ssa_1961 = fadd ssa_1956, ssa_1553 vec1 32 ssa_1962 = fadd ssa_1958, ssa_1555 vec1 32 ssa_1963 = fadd ssa_1960, ssa_1557 vec1 32 ssa_1964 = fmul ssa_1789, ssa_820.x vec1 32 ssa_1965 = fmul ssa_1790, ssa_820.x vec1 32 ssa_1966 = fmul ssa_1791, ssa_820.x vec1 32 ssa_1967 = fmul ssa_1792, ssa_820.x vec1 32 ssa_1968 = ushr ssa_1754, ssa_250 vec1 32 ssa_1969 = iadd ssa_1968, ssa_249 vec1 32 ssa_1970 = deref_var ®isters (push_const RootConstants) vec1 32 ssa_1971 = deref_struct &ssa_1970->field26 (push_const uint) /* ®isters.field26 */ vec1 32 ssa_1972 = intrinsic load_deref (ssa_1971) (access=0) vec1 32 ssa_1973 = iadd ssa_1972, ssa_248 vec1 32 ssa_1974 = iadd ssa_1973, ssa_1969 vec1 32 ssa_1975 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_1976 = deref_array &(*ssa_1975)[ssa_1974] (uniform texture2D) /* &@0[ssa_1974] */ vec1 32 ssa_1978 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_1979 = deref_array &(*ssa_1978)[ssa_607] (uniform sampler) /* &@9[ssa_607] */ vec2 32 ssa_1981 = vec2 ssa_1787, ssa_1788 vec2 32 ssa_1982 = vec2 ssa_1964, ssa_1965 vec2 32 ssa_1983 = vec2 ssa_1966, ssa_1967 vec4 32 ssa_1986 = (float32)txd ssa_1976 (texture_deref), ssa_1979 (sampler_deref), ssa_1981 (coord), ssa_1982 (ddx), ssa_1983 (ddy) vec1 32 ssa_1987 = fmul ssa_1986.x, ssa_247 vec1 32 ssa_1988 = fmul ssa_1986.y, ssa_246 vec1 32 ssa_1989 = fadd ssa_1987, ssa_245 vec1 32 ssa_1990 = fadd ssa_1988, ssa_244 vec1 32 ssa_1991 = fmul ssa_1860, ssa_1740 vec1 32 ssa_1992 = fmul ssa_1991, ssa_1989 vec1 32 ssa_1993 
= fmul ssa_1991, ssa_1990 vec1 32 ssa_1994 = fadd ssa_1992, ssa_1545 vec1 32 ssa_1995 = fadd ssa_1993, ssa_1547 vec1 32 ssa_1996 = deref_var &phi@68 (function_temp float) intrinsic store_deref (ssa_1996, ssa_1963) (wrmask=x /*1*/, access=0) vec1 32 ssa_1997 = deref_var &phi@67 (function_temp float) intrinsic store_deref (ssa_1997, ssa_1962) (wrmask=x /*1*/, access=0) vec1 32 ssa_1998 = deref_var &phi@66 (function_temp float) intrinsic store_deref (ssa_1998, ssa_1961) (wrmask=x /*1*/, access=0) vec1 32 ssa_1999 = deref_var &phi@65 (function_temp float) intrinsic store_deref (ssa_1999, ssa_1918) (wrmask=x /*1*/, access=0) vec1 32 ssa_2000 = deref_var &phi@64 (function_temp float) intrinsic store_deref (ssa_2000, ssa_1889) (wrmask=x /*1*/, access=0) vec1 32 ssa_2001 = deref_var &phi@63 (function_temp float) intrinsic store_deref (ssa_2001, ssa_1995) (wrmask=x /*1*/, access=0) vec1 32 ssa_2002 = deref_var &phi@62 (function_temp float) intrinsic store_deref (ssa_2002, ssa_1994) (wrmask=x /*1*/, access=0) vec1 32 ssa_2003 = deref_var &phi@61 (function_temp float) intrinsic store_deref (ssa_2003, ssa_1856) (wrmask=x /*1*/, access=0) vec1 32 ssa_2004 = deref_var &phi@60 (function_temp float) intrinsic store_deref (ssa_2004, ssa_1855) (wrmask=x /*1*/, access=0) vec1 32 ssa_2005 = deref_var &phi@59 (function_temp float) intrinsic store_deref (ssa_2005, ssa_1859) (wrmask=x /*1*/, access=0) /* succs: block_20 */ } else { block block_19: /* preds: block_17 */ /* succs: block_20 */ } block block_20: /* preds: block_18 block_19 */ vec1 32 ssa_2006 = deref_var &phi@59 (function_temp float) vec1 32 ssa_2007 = intrinsic load_deref (ssa_2006) (access=0) vec1 32 ssa_2008 = deref_var &phi@60 (function_temp float) vec1 32 ssa_2009 = intrinsic load_deref (ssa_2008) (access=0) vec1 32 ssa_2010 = deref_var &phi@61 (function_temp float) vec1 32 ssa_2011 = intrinsic load_deref (ssa_2010) (access=0) vec1 32 ssa_2012 = deref_var &phi@62 (function_temp float) vec1 32 ssa_2013 = intrinsic load_deref (ssa_2012) (access=0) vec1 32 ssa_2014 = deref_var &phi@63 (function_temp float) vec1 32 ssa_2015 = intrinsic load_deref (ssa_2014) (access=0) vec1 32 ssa_2016 = deref_var &phi@64 (function_temp float) vec1 32 ssa_2017 = intrinsic load_deref (ssa_2016) (access=0) vec1 32 ssa_2018 = deref_var &phi@65 (function_temp float) vec1 32 ssa_2019 = intrinsic load_deref (ssa_2018) (access=0) vec1 32 ssa_2020 = deref_var &phi@66 (function_temp float) vec1 32 ssa_2021 = intrinsic load_deref (ssa_2020) (access=0) vec1 32 ssa_2022 = deref_var &phi@67 (function_temp float) vec1 32 ssa_2023 = intrinsic load_deref (ssa_2022) (access=0) vec1 32 ssa_2024 = deref_var &phi@68 (function_temp float) vec1 32 ssa_2025 = intrinsic load_deref (ssa_2024) (access=0) vec2 32 ssa_2026 = vec2 ssa_2013, ssa_2015 vec2 32 ssa_2027 = vec2 ssa_2013, ssa_2015 vec1 32 ssa_2028 = fdot2 ssa_2026, ssa_2027 vec1 32 ssa_2029 = fsub ssa_243, ssa_2028 vec1 32 ssa_2030 = fmax! ssa_2029, ssa_242 vec1 32 ssa_2031 = fmin! ssa_2030, ssa_241 vec1 32 ssa_2032 = fsqrt ssa_2031 vec2 32 ssa_2033 = vec2 ssa_2009, ssa_2011 vec2 32 ssa_2034 = vec2 ssa_2009, ssa_2011 vec1 32 ssa_2035 = fdot2 ssa_2033, ssa_2034 vec1 32 ssa_2036 = fsub ssa_240, ssa_2035 vec1 32 ssa_2037 = fmax! ssa_2036, ssa_239 vec1 32 ssa_2038 = fmin! 
ssa_2037, ssa_238 vec1 32 ssa_2039 = fsqrt ssa_2038 vec1 32 ssa_2040 = fsub ssa_2009, ssa_2013 vec1 32 ssa_2041 = fsub ssa_2011, ssa_2015 vec1 32 ssa_2042 = fsub ssa_2039, ssa_2032 vec1 32 ssa_2043 = fmul ssa_2040, ssa_2007 vec1 32 ssa_2044 = fmul ssa_2041, ssa_2007 vec1 32 ssa_2045 = fmul ssa_2042, ssa_2007 vec1 32 ssa_2046 = fadd ssa_2013, ssa_2043 vec1 32 ssa_2047 = fadd ssa_2015, ssa_2044 vec1 32 ssa_2048 = fadd ssa_2045, ssa_2032 vec1 32 ssa_2049 = fadd ssa_765, ssa_237 vec1 32 ssa_2050 = fsub ssa_236, ssa_2046 vec1 32 ssa_2051 = fsub ssa_235, ssa_2047 vec3 32 ssa_2052 = vec3 ssa_763, ssa_764, ssa_2049 vec3 32 ssa_2053 = vec3 ssa_2050, ssa_2051, ssa_2048 vec1 32 ssa_2054 = fdot3 ssa_2052, ssa_2053 vec1 32 ssa_2055 = fmul ssa_2054, ssa_763 vec1 32 ssa_2056 = fmul ssa_2054, ssa_764 vec1 32 ssa_2057 = fmul ssa_2049, ssa_2050 vec1 32 ssa_2058 = fmul ssa_2049, ssa_2051 vec1 32 ssa_2059 = fsub ssa_2055, ssa_2057 vec1 32 ssa_2060 = fsub ssa_2056, ssa_2058 vec1 32 ssa_2061 = fsub ssa_2054, ssa_2048 vec1 32 ssa_2062 = fmul ssa_2061, ssa_2049 vec3 32 ssa_2063 = vec3 ssa_2059, ssa_2060, ssa_2062 vec3 32 ssa_2064 = vec3 ssa_2059, ssa_2060, ssa_2062 vec1 32 ssa_2065 = fdot3 ssa_2063, ssa_2064 vec1 32 ssa_2066 = frsq ssa_2065 vec1 32 ssa_2067 = fmul ssa_2059, ssa_2066 vec1 32 ssa_2068 = fmul ssa_2060, ssa_2066 vec1 32 ssa_2069 = fmul ssa_2062, ssa_2066 vec1 32 ssa_2070 = fadd ssa_646.z, ssa_234 vec1 32 ssa_2071 = fmax! ssa_2070, ssa_233 vec1 32 ssa_2072 = fmin! ssa_2071, ssa_232 vec1 32 ssa_2073 = fmul ssa_2067, ssa_668.x vec1 32 ssa_2074 = fmul ssa_2067, ssa_670.y vec1 32 ssa_2075 = fmul ssa_2067, ssa_672.z vec1 32 ssa_2076 = fmul ssa_2068, ssa_676.y vec1 32 ssa_2077 = fmul ssa_2068, ssa_678.z vec1 32 ssa_2078 = fmul ssa_2068, ssa_680.w vec1 32 ssa_2079 = fadd ssa_2073, ssa_2076 vec1 32 ssa_2080 = fadd ssa_2074, ssa_2077 vec1 32 ssa_2081 = fadd ssa_2075, ssa_2078 vec1 32 ssa_2082 = fmul ssa_2069, ssa_686.z vec1 32 ssa_2083 = fmul ssa_2069, ssa_688.w vec1 32 ssa_2084 = fmul ssa_2069, ssa_674.x vec1 32 ssa_2085 = fadd ssa_2079, ssa_2082 vec1 32 ssa_2086 = fadd ssa_2080, ssa_2083 vec1 32 ssa_2087 = fadd ssa_2081, ssa_2084 vec4 32 ssa_2088 = intrinsic load_vulkan_descriptor (ssa_636) (desc_type=SSBO /*7*/) vec4 32 ssa_2089 = deref_cast (BindlessCBV *)ssa_2088 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2090 = deref_struct &ssa_2089->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2088)->field0 */ vec1 32 ssa_2091 = load_const (0x00000006 = 0.000000) vec4 32 ssa_2092 = deref_array &(*ssa_2090)[6] (ssbo vec4) /* &((BindlessCBV *)ssa_2088)->field0[6] */ vec4 32 ssa_2093 = intrinsic load_deref (ssa_2092) (access=16) vec4 32 ssa_2094 = intrinsic load_vulkan_descriptor (ssa_632) (desc_type=SSBO /*7*/) vec4 32 ssa_2095 = deref_cast (BindlessCBV *)ssa_2094 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2096 = deref_struct &ssa_2095->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2094)->field0 */ vec1 32 ssa_2097 = load_const (0x00000008 = 0.000000) vec4 32 ssa_2098 = deref_array &(*ssa_2096)[8] (ssbo vec4) /* &((BindlessCBV *)ssa_2094)->field0[8] */ vec4 32 ssa_2099 = intrinsic load_deref (ssa_2098) (access=16) vec1 32 ssa_2100 = fmul ssa_686.z, ssa_231 vec1 32 ssa_2101 = fmul ssa_688.w, ssa_230 vec1 32 ssa_2102 = fmul ssa_674.x, ssa_229 vec1 32 ssa_2103 = fadd ssa_664.z, ssa_2100 vec1 32 ssa_2104 = fadd ssa_666.w, ssa_2101 vec1 32 ssa_2105 = fadd ssa_656.x, ssa_2102 vec4 32 ssa_2106 = intrinsic load_vulkan_descriptor (ssa_614) 
(desc_type=SSBO /*7*/) vec4 32 ssa_2107 = deref_cast (BindlessCBV *)ssa_2106 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2108 = deref_struct &ssa_2107->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2106)->field0 */ vec1 32 ssa_2109 = load_const (0x0000001d = 0.000000) vec4 32 ssa_2110 = deref_array &(*ssa_2108)[29] (ssbo vec4) /* &((BindlessCBV *)ssa_2106)->field0[29] */ vec4 32 ssa_2111 = intrinsic load_deref (ssa_2110) (access=16) vec1 32 ssa_2112 = fmul ssa_2111.x, ssa_2103 vec1 32 ssa_2113 = fmul ssa_2111.y, ssa_2104 vec1 32 ssa_2114 = fmul ssa_2111.z, ssa_2105 vec4 32 ssa_2115 = intrinsic load_vulkan_descriptor (ssa_614) (desc_type=SSBO /*7*/) vec4 32 ssa_2116 = deref_cast (BindlessCBV *)ssa_2115 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2117 = deref_struct &ssa_2116->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2115)->field0 */ vec1 32 ssa_2118 = load_const (0x0000001e = 0.000000) vec4 32 ssa_2119 = deref_array &(*ssa_2117)[30] (ssbo vec4) /* &((BindlessCBV *)ssa_2115)->field0[30] */ vec4 32 ssa_2120 = intrinsic load_deref (ssa_2119) (access=16) vec1 32 ssa_2121 = fadd ssa_2112, ssa_2120.x vec1 32 ssa_2122 = fadd ssa_2113, ssa_2120.y vec1 32 ssa_2123 = fadd ssa_2114, ssa_2120.z vec1 32 ssa_2124 = fsub ssa_228, ssa_2121 vec1 32 ssa_2125 = fsub ssa_227, ssa_2122 vec1 32 ssa_2126 = fsub ssa_226, ssa_2123 vec1 32 ssa_2127 = fmin! ssa_2121, ssa_2124 vec1 32 ssa_2128 = fmin! ssa_2122, ssa_2125 vec1 32 ssa_2129 = fmin! ssa_2123, ssa_2126 vec1 32 ssa_2130 = fmin! ssa_2128, ssa_2129 vec1 32 ssa_2131 = fmin! ssa_2127, ssa_2130 vec1 32 ssa_2132 = fmul ssa_2131, ssa_225 vec1 32 ssa_2133 = fmax! ssa_2132, ssa_224 vec1 32 ssa_2134 = fmin! ssa_2133, ssa_223 vec1 32 ssa_2135 = deref_var &@2 (uniform texture3D[]) vec1 32 ssa_2136 = deref_array &(*ssa_2135)[ssa_576] (uniform texture3D) /* &@2[ssa_576] */ vec1 32 ssa_2138 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2139 = deref_array &(*ssa_2138)[ssa_593] (uniform sampler) /* &@9[ssa_593] */ vec3 32 ssa_2141 = vec3 ssa_2121, ssa_2122, ssa_2123 vec4 32 ssa_2144 = (float32)txl ssa_2136 (texture_deref), ssa_2139 (sampler_deref), ssa_2141 (coord), ssa_222 (lod) vec1 32 ssa_2145 = fmul ssa_2144.x, ssa_2134 vec1 32 ssa_2146 = fsub ssa_221, ssa_2145 vec1 32 ssa_2147 = fmax! ssa_2146, ssa_220 vec1 32 ssa_2148 = fmin! ssa_2147, ssa_219 vec1 1 ssa_2149 = flt! 
ssa_218, ssa_2148 vec1 32 ssa_2150 = deref_var &phi@84 (function_temp float) intrinsic store_deref (ssa_2150, ssa_2148) (wrmask=x /*1*/, access=0) /* succs: block_21 block_50 */ if ssa_2149 { block block_21: /* preds: block_20 */ vec1 1 ssa_2151 = load_const (false) vec1 32 ssa_2152 = deref_var &loop_break@69 (function_temp bool) intrinsic store_deref (ssa_2152, ssa_2151) (wrmask=x /*1*/, access=0) /* succs: block_22 */ loop { block block_22: /* preds: block_21 */ vec1 1 ssa_2153 = load_const (false) vec1 32 ssa_2154 = deref_var &loop_continue@70 (function_temp bool) intrinsic store_deref (ssa_2154, ssa_2153) (wrmask=x /*1*/, access=0) vec4 32 ssa_2155 = intrinsic load_vulkan_descriptor (ssa_614) (desc_type=SSBO /*7*/) vec4 32 ssa_2156 = deref_cast (BindlessCBV *)ssa_2155 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2157 = deref_struct &ssa_2156->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2155)->field0 */ vec1 32 ssa_2158 = load_const (0x00000035 = 0.000000) vec4 32 ssa_2159 = deref_array &(*ssa_2157)[53] (ssbo vec4) /* &((BindlessCBV *)ssa_2155)->field0[53] */ vec4 32 ssa_2160 = intrinsic load_deref (ssa_2159) (access=16) vec1 32 ssa_2161 = fmul ssa_2160.x, ssa_2103 vec1 32 ssa_2162 = fmul ssa_2160.y, ssa_2104 vec1 32 ssa_2163 = fadd ssa_2161, ssa_2160.z vec1 32 ssa_2164 = fadd ssa_2162, ssa_2160.w vec1 32 ssa_2165 = fadd ssa_2163, ssa_217 vec1 32 ssa_2166 = fsub ssa_216, ssa_2164 vec1 32 ssa_2167 = fabs ssa_2165 vec1 32 ssa_2168 = fabs ssa_2166 vec1 1 ssa_2169 = flt! ssa_2167, ssa_215 vec1 1 ssa_2170 = flt! ssa_2168, ssa_214 vec1 1 ssa_2171 = iand ssa_2169, ssa_2170 vec1 32 ssa_2172 = deref_var &phi@73 (function_temp float) intrinsic store_deref (ssa_2172, ssa_20) (wrmask=x /*1*/, access=0) vec1 32 ssa_2173 = deref_var &phi@72 (function_temp float) intrinsic store_deref (ssa_2173, ssa_2163) (wrmask=x /*1*/, access=0) vec1 32 ssa_2174 = deref_var &phi@71 (function_temp float) intrinsic store_deref (ssa_2174, ssa_2164) (wrmask=x /*1*/, access=0) /* succs: block_23 block_24 */ if ssa_2171 { block block_23: /* preds: block_22 */ /* succs: block_31 */ } else { block block_24: /* preds: block_22 */ vec4 32 ssa_2175 = intrinsic load_vulkan_descriptor (ssa_614) (desc_type=SSBO /*7*/) vec4 32 ssa_2176 = deref_cast (BindlessCBV *)ssa_2175 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2177 = deref_struct &ssa_2176->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2175)->field0 */ vec1 32 ssa_2178 = load_const (0x00000036 = 0.000000) vec4 32 ssa_2179 = deref_array &(*ssa_2177)[54] (ssbo vec4) /* &((BindlessCBV *)ssa_2175)->field0[54] */ vec4 32 ssa_2180 = intrinsic load_deref (ssa_2179) (access=16) vec1 32 ssa_2181 = fmul ssa_2180.x, ssa_2103 vec1 32 ssa_2182 = fmul ssa_2180.y, ssa_2104 vec1 32 ssa_2183 = fadd ssa_2181, ssa_2180.z vec1 32 ssa_2184 = fadd ssa_2182, ssa_2180.w vec1 32 ssa_2185 = fadd ssa_2183, ssa_213 vec1 32 ssa_2186 = fsub ssa_212, ssa_2184 vec1 32 ssa_2187 = fabs ssa_2185 vec1 32 ssa_2188 = fabs ssa_2186 vec1 1 ssa_2189 = flt! ssa_2187, ssa_211 vec1 1 ssa_2190 = flt! 
ssa_2188, ssa_210 vec1 1 ssa_2191 = iand ssa_2189, ssa_2190 vec1 32 ssa_2192 = deref_var &phi@74 (function_temp bool) intrinsic store_deref (ssa_2192, ssa_18) (wrmask=x /*1*/, access=0) /* succs: block_25 block_26 */ if ssa_2191 { block block_25: /* preds: block_24 */ /* succs: block_27 */ } else { block block_26: /* preds: block_24 */ break /* succs: block_35 */ } block block_27: /* preds: block_25 */ vec1 32 ssa_2193 = deref_var &loop_break@69 (function_temp bool) vec1 1 ssa_2194 = intrinsic load_deref (ssa_2193) (access=0) /* succs: block_28 block_29 */ if ssa_2194 { block block_28: /* preds: block_27 */ break /* succs: block_35 */ } else { block block_29: /* preds: block_27 */ /* succs: block_30 */ } block block_30: /* preds: block_29 */ vec1 32 ssa_2195 = deref_var &phi@73 (function_temp float) intrinsic store_deref (ssa_2195, ssa_19) (wrmask=x /*1*/, access=0) vec1 32 ssa_2196 = deref_var &phi@72 (function_temp float) intrinsic store_deref (ssa_2196, ssa_2183) (wrmask=x /*1*/, access=0) vec1 32 ssa_2197 = deref_var &phi@71 (function_temp float) intrinsic store_deref (ssa_2197, ssa_2184) (wrmask=x /*1*/, access=0) /* succs: block_31 */ } block block_31: /* preds: block_23 block_30 */ vec1 32 ssa_2198 = deref_var &loop_break@69 (function_temp bool) vec1 1 ssa_2199 = intrinsic load_deref (ssa_2198) (access=0) /* succs: block_32 block_33 */ if ssa_2199 { block block_32: /* preds: block_31 */ break /* succs: block_35 */ } else { block block_33: /* preds: block_31 */ /* succs: block_34 */ } block block_34: /* preds: block_33 */ vec1 32 ssa_2200 = deref_var &phi@71 (function_temp float) vec1 32 ssa_2201 = intrinsic load_deref (ssa_2200) (access=0) vec1 32 ssa_2202 = deref_var &phi@72 (function_temp float) vec1 32 ssa_2203 = intrinsic load_deref (ssa_2202) (access=0) vec1 32 ssa_2204 = deref_var &phi@73 (function_temp float) vec1 32 ssa_2205 = intrinsic load_deref (ssa_2204) (access=0) vec1 32 ssa_2206 = fsub! ssa_209, ssa_2201 vec1 32 ssa_2207 = fadd ssa_2105, ssa_208 vec1 32 ssa_2208 = deref_var &@1 (uniform texture2DArray[]) vec1 32 ssa_2209 = deref_array &(*ssa_2208)[ssa_580] (uniform texture2DArray) /* &@1[ssa_580] */ vec1 32 ssa_2211 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2212 = deref_array &(*ssa_2211)[ssa_597] (uniform sampler) /* &@9[ssa_597] */ vec3 32 ssa_2214 = vec3 ssa_2203, ssa_2206, ssa_2205 vec4 32 ssa_2217 = (float32)txl ssa_2209 (texture_deref), ssa_2212 (sampler_deref), ssa_2214 (coord), ssa_207 (lod) vec1 32 ssa_2218 = fmul ssa_2217.x, ssa_206 vec1 32 ssa_2219 = fsub ssa_2207, ssa_2218 vec1 1 ssa_2220 = flt! 
ssa_2219, ssa_205 vec1 32 ssa_2221 = deref_var &phi@74 (function_temp bool) intrinsic store_deref (ssa_2221, ssa_2220) (wrmask=x /*1*/, access=0) break /* succs: block_35 */ } block block_35: /* preds: block_26 block_28 block_32 block_34 */ vec1 32 ssa_2222 = deref_var &phi@74 (function_temp bool) vec1 1 ssa_2223 = intrinsic load_deref (ssa_2222) (access=0) vec1 32 ssa_2224 = deref_var &phi@83 (function_temp float) intrinsic store_deref (ssa_2224, ssa_13) (wrmask=x /*1*/, access=0) /* succs: block_36 block_37 */ if ssa_2223 { block block_36: /* preds: block_35 */ /* succs: block_49 */ } else { block block_37: /* preds: block_35 */ vec1 32 ssa_2225 = deref_var &phi@78 (function_temp uint) intrinsic store_deref (ssa_2225, ssa_16) (wrmask=x /*1*/, access=0) vec1 32 ssa_2226 = deref_var &phi@77 (function_temp float) intrinsic store_deref (ssa_2226, ssa_17) (wrmask=x /*1*/, access=0) vec1 1 ssa_2227 = load_const (false) vec1 32 ssa_2228 = deref_var &loop_break@75 (function_temp bool) intrinsic store_deref (ssa_2228, ssa_2227) (wrmask=x /*1*/, access=0) /* succs: block_38 */ loop { block block_38: /* preds: block_37 block_47 */ vec1 1 ssa_2229 = load_const (false) vec1 32 ssa_2230 = deref_var &loop_continue@76 (function_temp bool) intrinsic store_deref (ssa_2230, ssa_2229) (wrmask=x /*1*/, access=0) vec1 32 ssa_2231 = deref_var &phi@77 (function_temp float) vec1 32 ssa_2232 = intrinsic load_deref (ssa_2231) (access=0) vec1 32 ssa_2233 = deref_var &phi@78 (function_temp uint) vec1 32 ssa_2234 = intrinsic load_deref (ssa_2233) (access=0) vec4 32 ssa_2235 = intrinsic load_vulkan_descriptor (ssa_614) (desc_type=SSBO /*7*/) vec4 32 ssa_2236 = deref_cast (BindlessCBV *)ssa_2235 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2237 = deref_struct &ssa_2236->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2235)->field0 */ vec1 32 ssa_2238 = load_const (0x00000035 = 0.000000) vec4 32 ssa_2239 = deref_array &(*ssa_2237)[53] (ssbo vec4) /* &((BindlessCBV *)ssa_2235)->field0[53] */ vec4 32 ssa_2240 = intrinsic load_deref (ssa_2239) (access=16) vec1 32 ssa_2241 = fmul ssa_2240.x, ssa_2103 vec1 32 ssa_2242 = fmul ssa_2240.y, ssa_2104 vec1 32 ssa_2243 = fadd ssa_2241, ssa_2240.z vec1 32 ssa_2244 = fadd ssa_2242, ssa_2240.w vec1 32 ssa_2245 = fadd ssa_2243, ssa_204 vec1 32 ssa_2246 = fsub ssa_203, ssa_2244 vec1 32 ssa_2247 = fabs ssa_2245 vec1 32 ssa_2248 = fabs ssa_2246 vec1 1 ssa_2249 = flt! ssa_2247, ssa_202 vec1 1 ssa_2250 = flt! 
ssa_2248, ssa_201 vec1 1 ssa_2251 = iand ssa_2249, ssa_2250 vec1 32 ssa_2252 = deref_var &phi@81 (function_temp uint) intrinsic store_deref (ssa_2252, ssa_15) (wrmask=x /*1*/, access=0) vec1 32 ssa_2253 = deref_var &phi@80 (function_temp float) intrinsic store_deref (ssa_2253, ssa_2243) (wrmask=x /*1*/, access=0) vec1 32 ssa_2254 = deref_var &phi@79 (function_temp float) intrinsic store_deref (ssa_2254, ssa_2244) (wrmask=x /*1*/, access=0) /* succs: block_39 block_40 */ if ssa_2251 { block block_39: /* preds: block_38 */ /* succs: block_41 */ } else { block block_40: /* preds: block_38 */ vec4 32 ssa_2255 = intrinsic load_vulkan_descriptor (ssa_614) (desc_type=SSBO /*7*/) vec4 32 ssa_2256 = deref_cast (BindlessCBV *)ssa_2255 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2257 = deref_struct &ssa_2256->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2255)->field0 */ vec1 32 ssa_2258 = load_const (0x00000036 = 0.000000) vec4 32 ssa_2259 = deref_array &(*ssa_2257)[54] (ssbo vec4) /* &((BindlessCBV *)ssa_2255)->field0[54] */ vec4 32 ssa_2260 = intrinsic load_deref (ssa_2259) (access=16) vec1 32 ssa_2261 = fmul ssa_2260.x, ssa_2103 vec1 32 ssa_2262 = fmul ssa_2260.y, ssa_2104 vec1 32 ssa_2263 = fadd ssa_2261, ssa_2260.z vec1 32 ssa_2264 = fadd ssa_2262, ssa_2260.w vec1 32 ssa_2265 = fadd ssa_2263, ssa_200 vec1 32 ssa_2266 = fsub ssa_199, ssa_2264 vec1 32 ssa_2267 = fabs ssa_2265 vec1 32 ssa_2268 = fabs ssa_2266 vec1 1 ssa_2269 = flt! ssa_2267, ssa_198 vec1 1 ssa_2270 = flt! ssa_2268, ssa_197 vec1 1 ssa_2271 = iand ssa_2269, ssa_2270 vec1 32 ssa_2272 = bcsel ssa_2271, ssa_195, ssa_196 vec1 32 ssa_2273 = deref_var &phi@81 (function_temp uint) intrinsic store_deref (ssa_2273, ssa_2272) (wrmask=x /*1*/, access=0) vec1 32 ssa_2274 = deref_var &phi@80 (function_temp float) intrinsic store_deref (ssa_2274, ssa_2263) (wrmask=x /*1*/, access=0) vec1 32 ssa_2275 = deref_var &phi@79 (function_temp float) intrinsic store_deref (ssa_2275, ssa_2264) (wrmask=x /*1*/, access=0) /* succs: block_41 */ } block block_41: /* preds: block_39 block_40 */ vec1 32 ssa_2276 = deref_var &phi@79 (function_temp float) vec1 32 ssa_2277 = intrinsic load_deref (ssa_2276) (access=0) vec1 32 ssa_2278 = deref_var &phi@80 (function_temp float) vec1 32 ssa_2279 = intrinsic load_deref (ssa_2278) (access=0) vec1 32 ssa_2280 = deref_var &phi@81 (function_temp uint) vec1 32 ssa_2281 = intrinsic load_deref (ssa_2280) (access=0) vec1 32 ssa_2282 = u2f32 ssa_2281 vec1 1 ssa_2283 = flt! ssa_2282, ssa_194 vec1 32 ssa_2284 = deref_var &phi@82 (function_temp float) intrinsic store_deref (ssa_2284, ssa_14) (wrmask=x /*1*/, access=0) /* succs: block_42 block_43 */ if ssa_2283 { block block_42: /* preds: block_41 */ vec1 32 ssa_2285 = fsub! 
ssa_193, ssa_2277 vec1 32 ssa_2286 = fadd ssa_2105, ssa_192 vec1 32 ssa_2287 = fmul ssa_2286, ssa_191 vec1 32 ssa_2288 = fsub ssa_190, ssa_2287 vec1 32 ssa_2289 = deref_var &@1 (uniform texture2DArray[]) vec1 32 ssa_2290 = deref_array &(*ssa_2289)[ssa_580] (uniform texture2DArray) /* &@1[ssa_580] */ vec1 32 ssa_2292 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2293 = deref_array &(*ssa_2292)[ssa_601] (uniform sampler) /* &@9[ssa_601] */ vec3 32 ssa_2295 = vec3 ssa_2279, ssa_2285, ssa_2282 vec1 32 ssa_2298 = (float32)txl ssa_2290 (texture_deref), ssa_2293 (sampler_deref), ssa_2295 (coord), ssa_2288 (comparator), ssa_189 (lod) vec1 32 ssa_2299 = deref_var &phi@82 (function_temp float) intrinsic store_deref (ssa_2299, ssa_2298) (wrmask=x /*1*/, access=0) /* succs: block_44 */ } else { block block_43: /* preds: block_41 */ /* succs: block_44 */ } block block_44: /* preds: block_42 block_43 */ vec1 32 ssa_2300 = deref_var &phi@82 (function_temp float) vec1 32 ssa_2301 = intrinsic load_deref (ssa_2300) (access=0) vec1 32 ssa_2302 = fadd ssa_2301, ssa_2232 vec1 32 ssa_2303 = iadd ssa_2234, ssa_188 vec1 1 ssa_2304 = ieq ssa_2303, ssa_187 vec1 32 ssa_2305 = deref_var &phi@78 (function_temp uint) intrinsic store_deref (ssa_2305, ssa_2303) (wrmask=x /*1*/, access=0) vec1 32 ssa_2306 = deref_var &phi@77 (function_temp float) intrinsic store_deref (ssa_2306, ssa_2302) (wrmask=x /*1*/, access=0) /* succs: block_45 block_46 */ if ssa_2304 { block block_45: /* preds: block_44 */ break /* succs: block_48 */ } else { block block_46: /* preds: block_44 */ /* succs: block_47 */ } block block_47: /* preds: block_46 */ continue /* succs: block_38 */ } block block_48: /* preds: block_45 */ vec1 32 ssa_2307 = fmul ssa_2302, ssa_186 vec1 32 ssa_2308 = deref_var &phi@83 (function_temp float) intrinsic store_deref (ssa_2308, ssa_2307) (wrmask=x /*1*/, access=0) /* succs: block_49 */ } block block_49: /* preds: block_36 block_48 */ vec1 32 ssa_2309 = deref_var &phi@83 (function_temp float) vec1 32 ssa_2310 = intrinsic load_deref (ssa_2309) (access=0) vec1 32 ssa_2311 = fmul ssa_2310, ssa_2148 vec1 32 ssa_2312 = deref_var &phi@84 (function_temp float) intrinsic store_deref (ssa_2312, ssa_2311) (wrmask=x /*1*/, access=0) /* succs: block_51 */ } else { block block_50: /* preds: block_20 */ /* succs: block_51 */ } block block_51: /* preds: block_49 block_50 */ vec1 32 ssa_2313 = deref_var &phi@84 (function_temp float) vec1 32 ssa_2314 = intrinsic load_deref (ssa_2313) (access=0) vec1 1 ssa_2315 = flt! ssa_185, ssa_2314 vec1 32 ssa_2316 = deref_var &phi@86 (function_temp float) intrinsic store_deref (ssa_2316, ssa_2314) (wrmask=x /*1*/, access=0) /* succs: block_52 block_56 */ if ssa_2315 { block block_52: /* preds: block_51 */ vec4 32 ssa_2317 = intrinsic load_vulkan_descriptor (ssa_628) (desc_type=SSBO /*7*/) vec4 32 ssa_2318 = deref_cast (BindlessCBV *)ssa_2317 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2319 = deref_struct &ssa_2318->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2317)->field0 */ vec1 32 ssa_2320 = load_const (0x00000034 = 0.000000) vec4 32 ssa_2321 = deref_array &(*ssa_2319)[52] (ssbo vec4) /* &((BindlessCBV *)ssa_2317)->field0[52] */ vec4 32 ssa_2322 = intrinsic load_deref (ssa_2321) (access=16) vec1 1 ssa_2323 = flt! ssa_184, ssa_2322.z vec1 1 ssa_2324 = flt! 
ssa_656.x, ssa_2322.w vec1 1 ssa_2325 = iand ssa_2323, ssa_2324 vec1 32 ssa_2326 = bcsel ssa_2325, ssa_182, ssa_183 vec1 32 ssa_2327 = fsub ssa_181, ssa_2326 vec1 32 ssa_2328 = fmul ssa_2327, ssa_2314 vec1 32 ssa_2329 = fmul ssa_686.z, ssa_180 vec1 32 ssa_2330 = fmul ssa_688.w, ssa_179 vec1 32 ssa_2331 = fmul ssa_674.x, ssa_178 vec1 32 ssa_2332 = fadd ssa_664.z, ssa_2329 vec1 32 ssa_2333 = fadd ssa_666.w, ssa_2330 vec1 32 ssa_2334 = fadd ssa_656.x, ssa_2331 vec4 32 ssa_2335 = intrinsic load_vulkan_descriptor (ssa_628) (desc_type=SSBO /*7*/) vec4 32 ssa_2336 = deref_cast (BindlessCBV *)ssa_2335 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2337 = deref_struct &ssa_2336->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2335)->field0 */ vec1 32 ssa_2338 = load_const (0x00000024 = 0.000000) vec4 32 ssa_2339 = deref_array &(*ssa_2337)[36] (ssbo vec4) /* &((BindlessCBV *)ssa_2335)->field0[36] */ vec4 32 ssa_2340 = intrinsic load_deref (ssa_2339) (access=16) vec1 32 ssa_2341 = fmul ssa_2340.x, ssa_177 vec1 32 ssa_2342 = fmul ssa_2340.y, ssa_176 vec1 32 ssa_2343 = ffloor ssa_2341 vec1 32 ssa_2344 = ffloor ssa_2342 vec1 32 ssa_2345 = fmul ssa_2343, ssa_175 vec1 32 ssa_2346 = fmul ssa_2344, ssa_174 vec1 32 ssa_2347 = fsub ssa_2332, ssa_2345 vec1 32 ssa_2348 = fsub ssa_2333, ssa_2346 vec1 32 ssa_2349 = fmul ssa_2347, ssa_173 vec1 32 ssa_2350 = fmul ssa_2348, ssa_172 vec1 32 ssa_2351 = fadd ssa_2349, ssa_171 vec1 32 ssa_2352 = fsub ssa_170, ssa_2350 vec1 1 ssa_2353 = flt! ssa_2351, ssa_169 vec1 1 ssa_2354 = flt! ssa_2352, ssa_168 vec1 1 ssa_2355 = ior ssa_2353, ssa_2354 vec1 1 ssa_2356 = flt! ssa_167, ssa_2351 vec1 1 ssa_2357 = flt! ssa_166, ssa_2352 vec1 1 ssa_2358 = ior ssa_2356, ssa_2357 vec1 1 ssa_2359 = ior ssa_2355, ssa_2358 vec1 32 ssa_2360 = deref_var &phi@85 (function_temp float) intrinsic store_deref (ssa_2360, ssa_12) (wrmask=x /*1*/, access=0) /* succs: block_53 block_54 */ if ssa_2359 { block block_53: /* preds: block_52 */ /* succs: block_55 */ } else { block block_54: /* preds: block_52 */ vec1 32 ssa_2361 = fmul ssa_2340.z, ssa_165 vec1 32 ssa_2362 = ffloor ssa_2361 vec1 32 ssa_2363 = fmul ssa_2362, ssa_164 vec1 32 ssa_2364 = deref_var &@3 (uniform utexture2D[]) vec1 32 ssa_2365 = deref_array &(*ssa_2364)[ssa_572] (uniform utexture2D) /* &@3[ssa_572] */ vec1 32 ssa_2367 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2368 = deref_array &(*ssa_2367)[ssa_604] (uniform sampler) /* &@9[ssa_604] */ vec2 32 ssa_2370 = vec2 ssa_2351, ssa_2352 vec4 32 ssa_2373 = (uint32)tg4 ssa_2365 (texture_deref), ssa_2368 (sampler_deref), ssa_2370 (coord), 0 (gather_component) vec1 32 ssa_2374 = deref_var &@3 (uniform utexture2D[]) vec1 32 ssa_2375 = deref_array &(*ssa_2374)[ssa_550] (uniform utexture2D) /* &@3[ssa_550] */ vec1 32 ssa_2377 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2378 = deref_array &(*ssa_2377)[ssa_604] (uniform sampler) /* &@9[ssa_604] */ vec2 32 ssa_2380 = vec2 ssa_2351, ssa_2352 vec4 32 ssa_2383 = (uint32)tg4 ssa_2375 (texture_deref), ssa_2378 (sampler_deref), ssa_2380 (coord), 0 (gather_component) vec1 32 ssa_2384 = u2f32 ssa_2373.x vec1 32 ssa_2385 = u2f32 ssa_2373.y vec1 32 ssa_2386 = u2f32 ssa_2373.z vec1 32 ssa_2387 = u2f32 ssa_2373.w vec1 32 ssa_2388 = fmul ssa_2384, ssa_163 vec1 32 ssa_2389 = fmul ssa_2385, ssa_162 vec1 32 ssa_2390 = fmul ssa_2386, ssa_161 vec1 32 ssa_2391 = fmul ssa_2387, ssa_160 vec1 32 ssa_2392 = fadd ssa_2363, ssa_159 vec1 32 ssa_2393 = fsub ssa_2392, ssa_2388 vec1 32 ssa_2394 = fsub ssa_2392, ssa_2389 vec1 32 ssa_2395 = 
fsub ssa_2392, ssa_2390 vec1 32 ssa_2396 = fsub ssa_2392, ssa_2391 vec1 32 ssa_2397 = u2f32 ssa_2383.x vec1 32 ssa_2398 = u2f32 ssa_2383.y vec1 32 ssa_2399 = u2f32 ssa_2383.z vec1 32 ssa_2400 = u2f32 ssa_2383.w vec1 32 ssa_2401 = fmul ssa_2397, ssa_158 vec1 32 ssa_2402 = fmul ssa_2398, ssa_157 vec1 32 ssa_2403 = fmul ssa_2399, ssa_156 vec1 32 ssa_2404 = fmul ssa_2400, ssa_155 vec1 32 ssa_2405 = fsub ssa_2392, ssa_2401 vec1 32 ssa_2406 = fsub ssa_2392, ssa_2402 vec1 32 ssa_2407 = fsub ssa_2392, ssa_2403 vec1 32 ssa_2408 = fsub ssa_2392, ssa_2404 vec1 32 ssa_2409 = fmul ssa_2347, ssa_154 vec1 32 ssa_2410 = fmul ssa_2348, ssa_153 vec1 32 ssa_2411 = fadd ssa_2409, ssa_152 vec1 32 ssa_2412 = fsub ssa_151, ssa_2410 vec1 32 ssa_2413 = ffract ssa_2411 vec1 32 ssa_2414 = ffract ssa_2412 vec1 1 ssa_2415 = flt! ssa_2334, ssa_2393 vec1 1 ssa_2416 = flt! ssa_2334, ssa_2394 vec1 1 ssa_2417 = flt! ssa_2334, ssa_2395 vec1 1 ssa_2418 = flt! ssa_2334, ssa_2396 vec1 1 ssa_2419 = flt! ssa_2405, ssa_2334 vec1 1 ssa_2420 = flt! ssa_2406, ssa_2334 vec1 1 ssa_2421 = flt! ssa_2407, ssa_2334 vec1 1 ssa_2422 = flt! ssa_2408, ssa_2334 vec1 1 ssa_2423 = iand ssa_2415, ssa_2419 vec1 1 ssa_2424 = iand ssa_2416, ssa_2420 vec1 1 ssa_2425 = iand ssa_2417, ssa_2421 vec1 1 ssa_2426 = iand ssa_2418, ssa_2422 vec1 32 ssa_2427 = bcsel ssa_2423, ssa_149, ssa_150 vec1 32 ssa_2428 = bcsel ssa_2424, ssa_147, ssa_148 vec1 32 ssa_2429 = bcsel ssa_2425, ssa_145, ssa_146 vec1 32 ssa_2430 = bcsel ssa_2426, ssa_143, ssa_144 vec1 32 ssa_2431 = fsub ssa_142, ssa_2413 vec1 32 ssa_2432 = fmul ssa_2427, ssa_2431 vec1 32 ssa_2433 = fmul ssa_2428, ssa_2413 vec1 32 ssa_2434 = fadd ssa_2432, ssa_2433 vec1 32 ssa_2435 = fmul ssa_2434, ssa_2414 vec1 32 ssa_2436 = fsub ssa_141, ssa_2414 vec1 32 ssa_2437 = fmul ssa_2436, ssa_2413 vec1 32 ssa_2438 = fmul ssa_2437, ssa_2429 vec1 32 ssa_2439 = fmul ssa_2436, ssa_2431 vec1 32 ssa_2440 = fmul ssa_2439, ssa_2430 vec1 32 ssa_2441 = fadd ssa_2440, ssa_2438 vec1 32 ssa_2442 = fadd ssa_2441, ssa_2435 vec1 32 ssa_2443 = deref_var &phi@85 (function_temp float) intrinsic store_deref (ssa_2443, ssa_2442) (wrmask=x /*1*/, access=0) /* succs: block_55 */ } block block_55: /* preds: block_53 block_54 */ vec1 32 ssa_2444 = deref_var &phi@85 (function_temp float) vec1 32 ssa_2445 = intrinsic load_deref (ssa_2444) (access=0) vec1 32 ssa_2446 = fsub ssa_140, ssa_2445 vec1 32 ssa_2447 = fmul ssa_2328, ssa_2446 vec1 32 ssa_2448 = deref_var &phi@86 (function_temp float) intrinsic store_deref (ssa_2448, ssa_2447) (wrmask=x /*1*/, access=0) /* succs: block_57 */ } else { block block_56: /* preds: block_51 */ /* succs: block_57 */ } block block_57: /* preds: block_55 block_56 */ vec1 32 ssa_2449 = deref_var &phi@86 (function_temp float) vec1 32 ssa_2450 = intrinsic load_deref (ssa_2449) (access=0) vec4 32 ssa_2451 = intrinsic load_vulkan_descriptor (ssa_632) (desc_type=SSBO /*7*/) vec4 32 ssa_2452 = deref_cast (BindlessCBV *)ssa_2451 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2453 = deref_struct &ssa_2452->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2451)->field0 */ vec1 32 ssa_2454 = load_const (0x00000000 = 0.000000) vec4 32 ssa_2455 = deref_array &(*ssa_2453)[0] (ssbo vec4) /* &((BindlessCBV *)ssa_2451)->field0[0] */ vec4 32 ssa_2456 = intrinsic load_deref (ssa_2455) (access=16) vec1 1 ssa_2457 = flt! ssa_2450, ssa_139 vec1 32 ssa_2458 = fadd ssa_2093.x, ssa_2093.y vec1 1 ssa_2459 = flt! 
ssa_2458, ssa_138 vec1 1 ssa_2460 = ior ssa_2459, ssa_2457 vec1 32 ssa_2461 = deref_var &phi@110 (function_temp float) intrinsic store_deref (ssa_2461, ssa_0) (wrmask=x /*1*/, access=0) vec1 32 ssa_2462 = deref_var &phi@109 (function_temp float) intrinsic store_deref (ssa_2462, ssa_1) (wrmask=x /*1*/, access=0) vec1 32 ssa_2463 = deref_var &phi@108 (function_temp float) intrinsic store_deref (ssa_2463, ssa_2) (wrmask=x /*1*/, access=0) vec1 32 ssa_2464 = deref_var &phi@107 (function_temp float) intrinsic store_deref (ssa_2464, ssa_3) (wrmask=x /*1*/, access=0) vec1 32 ssa_2465 = deref_var &phi@106 (function_temp float) intrinsic store_deref (ssa_2465, ssa_4) (wrmask=x /*1*/, access=0) vec1 32 ssa_2466 = deref_var &phi@105 (function_temp float) intrinsic store_deref (ssa_2466, ssa_5) (wrmask=x /*1*/, access=0) /* succs: block_58 block_59 */ if ssa_2460 { block block_58: /* preds: block_57 */ /* succs: block_72 */ } else { block block_59: /* preds: block_57 */ vec1 1 ssa_2467 = flt! ssa_658.y, ssa_137 vec1 32 ssa_2468 = deref_var &phi@88 (function_temp float) intrinsic store_deref (ssa_2468, ssa_10) (wrmask=x /*1*/, access=0) vec1 32 ssa_2469 = deref_var &phi@87 (function_temp float) intrinsic store_deref (ssa_2469, ssa_11) (wrmask=x /*1*/, access=0) /* succs: block_60 block_61 */ if ssa_2467 { block block_60: /* preds: block_59 */ vec1 32 ssa_2470 = fadd ssa_674.x, ssa_136 vec1 32 ssa_2471 = fmax! ssa_2470, ssa_135 vec1 32 ssa_2472 = fmin! ssa_2471, ssa_134 vec1 32 ssa_2473 = fmul ssa_2472, ssa_2450 vec1 32 ssa_2474 = fadd ssa_2093.y, ssa_133 vec1 32 ssa_2475 = fadd ssa_2474, ssa_2473 vec1 32 ssa_2476 = fmax! ssa_2475, ssa_132 vec1 32 ssa_2477 = fmin! ssa_2476, ssa_131 vec1 32 ssa_2478 = fmul ssa_2450, ssa_2099.w vec1 32 ssa_2479 = fmul ssa_2478, ssa_2477 vec1 32 ssa_2480 = fsub ssa_130, ssa_2479 vec1 32 ssa_2481 = fsub ssa_129, ssa_2450 vec1 32 ssa_2482 = fmul ssa_2481, ssa_2099.z vec1 32 ssa_2483 = fmul ssa_2482, ssa_2477 vec1 32 ssa_2484 = fsub ssa_128, ssa_2483 vec1 32 ssa_2485 = deref_var &phi@88 (function_temp float) intrinsic store_deref (ssa_2485, ssa_2484) (wrmask=x /*1*/, access=0) vec1 32 ssa_2486 = deref_var &phi@87 (function_temp float) intrinsic store_deref (ssa_2486, ssa_2480) (wrmask=x /*1*/, access=0) /* succs: block_62 */ } else { block block_61: /* preds: block_59 */ /* succs: block_62 */ } block block_62: /* preds: block_60 block_61 */ vec1 32 ssa_2487 = deref_var &phi@87 (function_temp float) vec1 32 ssa_2488 = intrinsic load_deref (ssa_2487) (access=0) vec1 32 ssa_2489 = deref_var &phi@88 (function_temp float) vec1 32 ssa_2490 = intrinsic load_deref (ssa_2489) (access=0) vec1 1 ssa_2491 = flt! 
ssa_658.y, ssa_127 vec1 32 ssa_2492 = deref_var &phi@94 (function_temp float) intrinsic store_deref (ssa_2492, ssa_6) (wrmask=x /*1*/, access=0) vec1 32 ssa_2493 = deref_var &phi@93 (function_temp float) intrinsic store_deref (ssa_2493, ssa_7) (wrmask=x /*1*/, access=0) vec1 32 ssa_2494 = deref_var &phi@92 (function_temp float) intrinsic store_deref (ssa_2494, ssa_8) (wrmask=x /*1*/, access=0) vec1 32 ssa_2495 = deref_var &phi@91 (function_temp float) intrinsic store_deref (ssa_2495, ssa_9) (wrmask=x /*1*/, access=0) vec1 32 ssa_2496 = deref_var &phi@90 (function_temp float) intrinsic store_deref (ssa_2496, ssa_2490) (wrmask=x /*1*/, access=0) vec1 32 ssa_2497 = deref_var &phi@89 (function_temp float) intrinsic store_deref (ssa_2497, ssa_2488) (wrmask=x /*1*/, access=0) /* succs: block_63 block_64 */ if ssa_2491 { block block_63: /* preds: block_62 */ vec1 32 ssa_2498 = fmul ssa_664.z, ssa_126 vec1 32 ssa_2499 = fmul ssa_666.w, ssa_125 vec1 32 ssa_2500 = ffract ssa_2498 vec1 32 ssa_2501 = ffract ssa_2499 vec1 32 ssa_2502 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_2503 = deref_array &(*ssa_2502)[ssa_560] (uniform texture2D) /* &@0[ssa_560] */ vec1 32 ssa_2505 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2506 = deref_array &(*ssa_2505)[ssa_597] (uniform sampler) /* &@9[ssa_597] */ vec2 32 ssa_2508 = vec2 ssa_2500, ssa_2501 vec4 32 ssa_2511 = (float32)tex ssa_2503 (texture_deref), ssa_2506 (sampler_deref), ssa_2508 (coord) vec1 32 ssa_2512 = fmul ssa_664.z, ssa_124 vec1 32 ssa_2513 = fmul ssa_666.w, ssa_123 vec1 32 ssa_2514 = ffract ssa_2512 vec1 32 ssa_2515 = ffract ssa_2513 vec2 32 ssa_2516 = vec2 ssa_2514, ssa_2515 vec4 32 ssa_2519 = (float32)tex ssa_2503 (texture_deref), ssa_2506 (sampler_deref), ssa_2516 (coord) vec1 32 ssa_2520 = fmul ssa_2519.w, ssa_2511.w vec1 32 ssa_2521 = fadd ssa_2520, ssa_122 vec1 32 ssa_2522 = fmax! ssa_2521, ssa_121 vec1 32 ssa_2523 = fmin! ssa_2522, ssa_120 vec1 32 ssa_2524 = fadd ssa_2523, ssa_119 vec1 32 ssa_2525 = fmul ssa_2524, ssa_118 vec1 32 ssa_2526 = fmax! ssa_2525, ssa_117 vec1 32 ssa_2527 = fmin! ssa_2526, ssa_116 vec1 32 ssa_2528 = fadd ssa_2523, ssa_115 vec1 32 ssa_2529 = fmul ssa_2528, ssa_114 vec1 32 ssa_2530 = fmax! ssa_2529, ssa_113 vec1 32 ssa_2531 = fmin! 
ssa_2530, ssa_112 vec1 32 ssa_2532 = fmul ssa_2450, ssa_2099.w vec1 32 ssa_2533 = fsub ssa_111, ssa_2532 vec1 32 ssa_2534 = fsub ssa_2533, ssa_2488 vec1 32 ssa_2535 = fmul ssa_2531, ssa_2534 vec1 32 ssa_2536 = fadd ssa_2535, ssa_2488 vec1 32 ssa_2537 = fsub ssa_2533, ssa_2536 vec1 32 ssa_2538 = fmul ssa_2537, ssa_2527 vec1 32 ssa_2539 = fadd ssa_2538, ssa_2536 vec1 32 ssa_2540 = fsub ssa_110, ssa_2450 vec1 32 ssa_2541 = fmul ssa_2540, ssa_2099.z vec1 32 ssa_2542 = fsub ssa_109, ssa_2541 vec1 32 ssa_2543 = fsub ssa_2542, ssa_2490 vec1 32 ssa_2544 = fmul ssa_2531, ssa_2543 vec1 32 ssa_2545 = fadd ssa_2544, ssa_2490 vec1 32 ssa_2546 = fmul ssa_2545, ssa_2527 vec1 32 ssa_2547 = fsub ssa_2545, ssa_2546 vec1 32 ssa_2548 = fsub ssa_108, ssa_2527 vec1 32 ssa_2549 = fmul ssa_664.z, ssa_107 vec1 32 ssa_2550 = fmul ssa_666.w, ssa_106 vec1 32 ssa_2551 = ffract ssa_2549 vec1 32 ssa_2552 = ffract ssa_2550 vec1 32 ssa_2553 = ffract ssa_2456.z vec1 32 ssa_2554 = fmul ssa_2553, ssa_105 vec1 32 ssa_2555 = ffloor ssa_2554 vec1 32 ssa_2556 = fmul ssa_2555, ssa_104 vec1 32 ssa_2557 = ffract ssa_2556 vec1 32 ssa_2558 = ffloor ssa_2556 vec1 32 ssa_2559 = ffract ssa_2551 vec1 32 ssa_2560 = fmul ssa_2559, ssa_103 vec1 32 ssa_2561 = ffract ssa_2552 vec1 32 ssa_2562 = fadd ssa_2560, ssa_2557 vec1 32 ssa_2563 = fsub ssa_2561, ssa_2558 vec1 32 ssa_2564 = fmul ssa_2563, ssa_102 vec1 32 ssa_2565 = ffract ssa_2562 vec1 32 ssa_2566 = ffract ssa_2564 vec1 32 ssa_2567 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_2568 = deref_array &(*ssa_2567)[ssa_568] (uniform texture2D) /* &@0[ssa_568] */ vec1 32 ssa_2570 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2571 = deref_array &(*ssa_2570)[ssa_597] (uniform sampler) /* &@9[ssa_597] */ vec2 32 ssa_2573 = vec2 ssa_2565, ssa_2566 vec4 32 ssa_2576 = (float32)tex ssa_2568 (texture_deref), ssa_2571 (sampler_deref), ssa_2573 (coord) vec1 32 ssa_2577 = fmul ssa_2093.x, ssa_101 vec1 32 ssa_2578 = fadd ssa_2576.x, ssa_100 vec1 32 ssa_2579 = fadd ssa_2576.y, ssa_99 vec1 32 ssa_2580 = fmul ssa_2578, ssa_2577 vec1 32 ssa_2581 = fmul ssa_2579, ssa_2577 vec1 32 ssa_2582 = fadd ssa_2580, ssa_98 vec1 32 ssa_2583 = fadd ssa_2581, ssa_97 vec1 32 ssa_2584 = fmul ssa_2582, ssa_2527 vec1 32 ssa_2585 = fmul ssa_2583, ssa_2527 vec1 32 ssa_2586 = deref_var &phi@94 (function_temp float) intrinsic store_deref (ssa_2586, ssa_2527) (wrmask=x /*1*/, access=0) vec1 32 ssa_2587 = deref_var &phi@93 (function_temp float) intrinsic store_deref (ssa_2587, ssa_2585) (wrmask=x /*1*/, access=0) vec1 32 ssa_2588 = deref_var &phi@92 (function_temp float) intrinsic store_deref (ssa_2588, ssa_2584) (wrmask=x /*1*/, access=0) vec1 32 ssa_2589 = deref_var &phi@91 (function_temp float) intrinsic store_deref (ssa_2589, ssa_2548) (wrmask=x /*1*/, access=0) vec1 32 ssa_2590 = deref_var &phi@90 (function_temp float) intrinsic store_deref (ssa_2590, ssa_2547) (wrmask=x /*1*/, access=0) vec1 32 ssa_2591 = deref_var &phi@89 (function_temp float) intrinsic store_deref (ssa_2591, ssa_2539) (wrmask=x /*1*/, access=0) /* succs: block_65 */ } else { block block_64: /* preds: block_62 */ /* succs: block_65 */ } block block_65: /* preds: block_63 block_64 */ vec1 32 ssa_2592 = deref_var &phi@89 (function_temp float) vec1 32 ssa_2593 = intrinsic load_deref (ssa_2592) (access=0) vec1 32 ssa_2594 = deref_var &phi@90 (function_temp float) vec1 32 ssa_2595 = intrinsic load_deref (ssa_2594) (access=0) vec1 32 ssa_2596 = deref_var &phi@91 (function_temp float) vec1 32 ssa_2597 = intrinsic load_deref (ssa_2596) (access=0) vec1 32 ssa_2598 
= deref_var &phi@92 (function_temp float) vec1 32 ssa_2599 = intrinsic load_deref (ssa_2598) (access=0) vec1 32 ssa_2600 = deref_var &phi@93 (function_temp float) vec1 32 ssa_2601 = intrinsic load_deref (ssa_2600) (access=0) vec1 32 ssa_2602 = deref_var &phi@94 (function_temp float) vec1 32 ssa_2603 = intrinsic load_deref (ssa_2602) (access=0) vec1 1 ssa_2604 = flt! ssa_658.y, ssa_96 vec1 32 ssa_2605 = deref_var &phi@98 (function_temp float) intrinsic store_deref (ssa_2605, ssa_2603) (wrmask=x /*1*/, access=0) vec1 32 ssa_2606 = deref_var &phi@97 (function_temp float) intrinsic store_deref (ssa_2606, ssa_2601) (wrmask=x /*1*/, access=0) vec1 32 ssa_2607 = deref_var &phi@96 (function_temp float) intrinsic store_deref (ssa_2607, ssa_2599) (wrmask=x /*1*/, access=0) vec1 32 ssa_2608 = deref_var &phi@95 (function_temp float) intrinsic store_deref (ssa_2608, ssa_2595) (wrmask=x /*1*/, access=0) /* succs: block_66 block_67 */ if ssa_2604 { block block_66: /* preds: block_65 */ vec1 32 ssa_2609 = fabs ssa_688.w vec1 32 ssa_2610 = fmul ssa_2456.z, ssa_95 vec1 32 ssa_2611 = fmul ssa_666.w, ssa_94 vec1 32 ssa_2612 = fmul ssa_656.x, ssa_93 vec1 32 ssa_2613 = fadd ssa_2610, ssa_2612 vec1 32 ssa_2614 = fmul ssa_664.z, ssa_92 vec1 32 ssa_2615 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_2616 = deref_array &(*ssa_2615)[ssa_564] (uniform texture2D) /* &@0[ssa_564] */ vec1 32 ssa_2618 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2619 = deref_array &(*ssa_2618)[ssa_597] (uniform sampler) /* &@9[ssa_597] */ vec2 32 ssa_2621 = vec2 ssa_2611, ssa_2613 vec4 32 ssa_2624 = (float32)tex ssa_2616 (texture_deref), ssa_2619 (sampler_deref), ssa_2621 (coord) vec2 32 ssa_2625 = vec2 ssa_2614, ssa_2613 vec4 32 ssa_2628 = (float32)tex ssa_2616 (texture_deref), ssa_2619 (sampler_deref), ssa_2625 (coord) vec1 32 ssa_2629 = fsub ssa_2628.x, ssa_2624.x vec1 32 ssa_2630 = fsub ssa_2628.y, ssa_2624.y vec1 32 ssa_2631 = fsub ssa_2628.z, ssa_2624.z vec1 32 ssa_2632 = fmul ssa_666.w, ssa_91 vec1 32 ssa_2633 = fmul ssa_2613, ssa_90 vec1 32 ssa_2634 = fmul ssa_2456.z, ssa_89 vec1 32 ssa_2635 = fadd ssa_2633, ssa_2634 vec1 32 ssa_2636 = fmul ssa_664.z, ssa_88 vec2 32 ssa_2637 = vec2 ssa_2632, ssa_2635 vec4 32 ssa_2640 = (float32)tex ssa_2616 (texture_deref), ssa_2619 (sampler_deref), ssa_2637 (coord) vec2 32 ssa_2641 = vec2 ssa_2636, ssa_2635 vec4 32 ssa_2644 = (float32)tex ssa_2616 (texture_deref), ssa_2619 (sampler_deref), ssa_2641 (coord) vec1 32 ssa_2645 = fsub ssa_2644.x, ssa_2640.x vec1 32 ssa_2646 = fsub ssa_2644.y, ssa_2640.y vec1 32 ssa_2647 = fsub ssa_2644.z, ssa_2640.z vec1 32 ssa_2648 = fadd ssa_2647, ssa_2631 vec1 32 ssa_2649 = fmul ssa_2648, ssa_2609 vec1 32 ssa_2650 = fadd ssa_2640.z, ssa_2624.z vec1 32 ssa_2651 = fadd ssa_2650, ssa_2649 vec1 32 ssa_2652 = fabs ssa_674.x vec1 32 ssa_2653 = fadd ssa_2652, ssa_87 vec1 32 ssa_2654 = fmul ssa_2653, ssa_86 vec1 32 ssa_2655 = fmax! ssa_2654, ssa_85 vec1 32 ssa_2656 = fmin! 
ssa_2655, ssa_84 vec1 32 ssa_2657 = fmul ssa_2093.x, ssa_2093.y vec1 32 ssa_2658 = fmul ssa_2657, ssa_2656 vec1 32 ssa_2659 = fmul ssa_2450, ssa_2099.z vec1 32 ssa_2660 = fsub ssa_83, ssa_2659 vec1 32 ssa_2661 = fadd ssa_2645, ssa_2629 vec1 32 ssa_2662 = fmul ssa_2661, ssa_2609 vec1 32 ssa_2663 = fadd ssa_2624.x, ssa_82 vec1 32 ssa_2664 = fadd ssa_2663, ssa_2640.x vec1 32 ssa_2665 = fadd ssa_2664, ssa_2662 vec1 32 ssa_2666 = fadd ssa_2646, ssa_2630 vec1 32 ssa_2667 = fmul ssa_2666, ssa_2609 vec1 32 ssa_2668 = fadd ssa_2624.y, ssa_81 vec1 32 ssa_2669 = fadd ssa_2668, ssa_2640.y vec1 32 ssa_2670 = fadd ssa_2669, ssa_2667 vec1 32 ssa_2671 = fmul ssa_2665, ssa_2660 vec1 32 ssa_2672 = fmul ssa_2670, ssa_2660 vec1 32 ssa_2673 = fsub ssa_2671, ssa_2599 vec1 32 ssa_2674 = fsub ssa_2672, ssa_2601 vec1 32 ssa_2675 = fmul ssa_2673, ssa_2658 vec1 32 ssa_2676 = fmul ssa_2674, ssa_2658 vec1 32 ssa_2677 = fmul ssa_2658, ssa_2603 vec1 32 ssa_2678 = fadd ssa_2675, ssa_2599 vec1 32 ssa_2679 = fadd ssa_2676, ssa_2601 vec1 32 ssa_2680 = fsub ssa_2603, ssa_2677 vec1 32 ssa_2681 = fsub ssa_80, ssa_2450 vec1 32 ssa_2682 = fmul ssa_2681, ssa_2099.z vec1 32 ssa_2683 = fsub ssa_79, ssa_2682 vec1 32 ssa_2684 = fsub ssa_2683, ssa_2595 vec1 32 ssa_2685 = fmul ssa_2684, ssa_78 vec1 32 ssa_2686 = fmul ssa_2685, ssa_2658 vec1 32 ssa_2687 = fmul ssa_2686, ssa_2651 vec1 32 ssa_2688 = fadd ssa_2687, ssa_2595 vec1 32 ssa_2689 = deref_var &phi@98 (function_temp float) intrinsic store_deref (ssa_2689, ssa_2680) (wrmask=x /*1*/, access=0) vec1 32 ssa_2690 = deref_var &phi@97 (function_temp float) intrinsic store_deref (ssa_2690, ssa_2679) (wrmask=x /*1*/, access=0) vec1 32 ssa_2691 = deref_var &phi@96 (function_temp float) intrinsic store_deref (ssa_2691, ssa_2678) (wrmask=x /*1*/, access=0) vec1 32 ssa_2692 = deref_var &phi@95 (function_temp float) intrinsic store_deref (ssa_2692, ssa_2688) (wrmask=x /*1*/, access=0) /* succs: block_68 */ } else { block block_67: /* preds: block_65 */ /* succs: block_68 */ } block block_68: /* preds: block_66 block_67 */ vec1 32 ssa_2693 = deref_var &phi@95 (function_temp float) vec1 32 ssa_2694 = intrinsic load_deref (ssa_2693) (access=0) vec1 32 ssa_2695 = deref_var &phi@96 (function_temp float) vec1 32 ssa_2696 = intrinsic load_deref (ssa_2695) (access=0) vec1 32 ssa_2697 = deref_var &phi@97 (function_temp float) vec1 32 ssa_2698 = intrinsic load_deref (ssa_2697) (access=0) vec1 32 ssa_2699 = deref_var &phi@98 (function_temp float) vec1 32 ssa_2700 = intrinsic load_deref (ssa_2699) (access=0) vec1 1 ssa_2701 = flt! ssa_658.y, ssa_77 vec1 32 ssa_2702 = deref_var &phi@104 (function_temp float) intrinsic store_deref (ssa_2702, ssa_2593) (wrmask=x /*1*/, access=0) vec1 32 ssa_2703 = deref_var &phi@103 (function_temp float) intrinsic store_deref (ssa_2703, ssa_2694) (wrmask=x /*1*/, access=0) vec1 32 ssa_2704 = deref_var &phi@102 (function_temp float) intrinsic store_deref (ssa_2704, ssa_2597) (wrmask=x /*1*/, access=0) vec1 32 ssa_2705 = deref_var &phi@101 (function_temp float) intrinsic store_deref (ssa_2705, ssa_2696) (wrmask=x /*1*/, access=0) vec1 32 ssa_2706 = deref_var &phi@100 (function_temp float) intrinsic store_deref (ssa_2706, ssa_2698) (wrmask=x /*1*/, access=0) vec1 32 ssa_2707 = deref_var &phi@99 (function_temp float) intrinsic store_deref (ssa_2707, ssa_2700) (wrmask=x /*1*/, access=0) /* succs: block_69 block_70 */ if ssa_2701 { block block_69: /* preds: block_68 */ vec1 32 ssa_2708 = fadd ssa_674.x, ssa_76 vec1 32 ssa_2709 = fmul ssa_2708, ssa_75 vec1 32 ssa_2710 = fmax! 
ssa_2709, ssa_74 vec1 32 ssa_2711 = fmin! ssa_2710, ssa_73 vec1 32 ssa_2712 = ffract ssa_664.z vec1 32 ssa_2713 = ffract ssa_666.w vec1 32 ssa_2714 = deref_var &@0 (uniform texture2D[]) vec1 32 ssa_2715 = deref_array &(*ssa_2714)[ssa_560] (uniform texture2D) /* &@0[ssa_560] */ vec1 32 ssa_2717 = deref_var &@9 (uniform sampler[]) vec1 32 ssa_2718 = deref_array &(*ssa_2717)[ssa_597] (uniform sampler) /* &@9[ssa_597] */ vec2 32 ssa_2720 = vec2 ssa_2712, ssa_2713 vec4 32 ssa_2723 = (float32)tex ssa_2715 (texture_deref), ssa_2718 (sampler_deref), ssa_2720 (coord) vec1 32 ssa_2724 = fmul ssa_2456.z, ssa_72 vec1 32 ssa_2725 = fadd ssa_2723.x, ssa_2724 vec1 32 ssa_2726 = ffract ssa_2725 vec1 32 ssa_2727 = fsub ssa_71, ssa_2726 vec1 32 ssa_2728 = fmax! ssa_2727, ssa_70 vec1 32 ssa_2729 = fmin! ssa_2728, ssa_69 vec1 32 ssa_2730 = fmul ssa_2093.x, ssa_2093.x vec1 32 ssa_2731 = fmul ssa_2730, ssa_68 vec1 32 ssa_2732 = fmul ssa_2731, ssa_2450 vec1 32 ssa_2733 = fmul ssa_2732, ssa_2711 vec1 32 ssa_2734 = fmul ssa_2733, ssa_2723.z vec1 32 ssa_2735 = fmul ssa_2734, ssa_2729 vec1 32 ssa_2736 = fadd ssa_2093.y, ssa_67 vec1 32 ssa_2737 = fadd ssa_2736, ssa_2723.y vec1 32 ssa_2738 = fmul ssa_2737, ssa_66 vec1 32 ssa_2739 = fmax! ssa_2738, ssa_65 vec1 32 ssa_2740 = fmin! ssa_2739, ssa_64 vec1 32 ssa_2741 = fmul ssa_2450, ssa_2099.w vec1 32 ssa_2742 = fsub ssa_63, ssa_2741 vec1 32 ssa_2743 = fsub ssa_2742, ssa_2593 vec1 32 ssa_2744 = fmul ssa_2740, ssa_2743 vec1 32 ssa_2745 = fmul ssa_2744, ssa_2735 vec1 32 ssa_2746 = fadd ssa_2745, ssa_2593 vec1 32 ssa_2747 = fmul ssa_2740, ssa_2694 vec1 32 ssa_2748 = fmul ssa_2747, ssa_2735 vec1 32 ssa_2749 = fsub ssa_2694, ssa_2748 vec1 32 ssa_2750 = deref_var &phi@104 (function_temp float) intrinsic store_deref (ssa_2750, ssa_2746) (wrmask=x /*1*/, access=0) vec1 32 ssa_2751 = deref_var &phi@103 (function_temp float) intrinsic store_deref (ssa_2751, ssa_2749) (wrmask=x /*1*/, access=0) vec1 32 ssa_2752 = deref_var &phi@102 (function_temp float) intrinsic store_deref (ssa_2752, ssa_2597) (wrmask=x /*1*/, access=0) vec1 32 ssa_2753 = deref_var &phi@101 (function_temp float) intrinsic store_deref (ssa_2753, ssa_2696) (wrmask=x /*1*/, access=0) vec1 32 ssa_2754 = deref_var &phi@100 (function_temp float) intrinsic store_deref (ssa_2754, ssa_2698) (wrmask=x /*1*/, access=0) vec1 32 ssa_2755 = deref_var &phi@99 (function_temp float) intrinsic store_deref (ssa_2755, ssa_2700) (wrmask=x /*1*/, access=0) /* succs: block_71 */ } else { block block_70: /* preds: block_68 */ /* succs: block_71 */ } block block_71: /* preds: block_69 block_70 */ vec1 32 ssa_2756 = deref_var &phi@99 (function_temp float) vec1 32 ssa_2757 = intrinsic load_deref (ssa_2756) (access=0) vec1 32 ssa_2758 = deref_var &phi@100 (function_temp float) vec1 32 ssa_2759 = intrinsic load_deref (ssa_2758) (access=0) vec1 32 ssa_2760 = deref_var &phi@101 (function_temp float) vec1 32 ssa_2761 = intrinsic load_deref (ssa_2760) (access=0) vec1 32 ssa_2762 = deref_var &phi@102 (function_temp float) vec1 32 ssa_2763 = intrinsic load_deref (ssa_2762) (access=0) vec1 32 ssa_2764 = deref_var &phi@103 (function_temp float) vec1 32 ssa_2765 = intrinsic load_deref (ssa_2764) (access=0) vec1 32 ssa_2766 = deref_var &phi@104 (function_temp float) vec1 32 ssa_2767 = intrinsic load_deref (ssa_2766) (access=0) vec1 32 ssa_2768 = deref_var &phi@110 (function_temp float) intrinsic store_deref (ssa_2768, ssa_2757) (wrmask=x /*1*/, access=0) vec1 32 ssa_2769 = deref_var &phi@109 (function_temp float) intrinsic store_deref (ssa_2769, 
ssa_2759) (wrmask=x /*1*/, access=0) vec1 32 ssa_2770 = deref_var &phi@108 (function_temp float) intrinsic store_deref (ssa_2770, ssa_2761) (wrmask=x /*1*/, access=0) vec1 32 ssa_2771 = deref_var &phi@107 (function_temp float) intrinsic store_deref (ssa_2771, ssa_2763) (wrmask=x /*1*/, access=0) vec1 32 ssa_2772 = deref_var &phi@106 (function_temp float) intrinsic store_deref (ssa_2772, ssa_2765) (wrmask=x /*1*/, access=0) vec1 32 ssa_2773 = deref_var &phi@105 (function_temp float) intrinsic store_deref (ssa_2773, ssa_2767) (wrmask=x /*1*/, access=0) /* succs: block_72 */ } block block_72: /* preds: block_58 block_71 */ vec1 32 ssa_2774 = deref_var &phi@105 (function_temp float) vec1 32 ssa_2775 = intrinsic load_deref (ssa_2774) (access=0) vec1 32 ssa_2776 = deref_var &phi@106 (function_temp float) vec1 32 ssa_2777 = intrinsic load_deref (ssa_2776) (access=0) vec1 32 ssa_2778 = deref_var &phi@107 (function_temp float) vec1 32 ssa_2779 = intrinsic load_deref (ssa_2778) (access=0) vec1 32 ssa_2780 = deref_var &phi@108 (function_temp float) vec1 32 ssa_2781 = intrinsic load_deref (ssa_2780) (access=0) vec1 32 ssa_2782 = deref_var &phi@109 (function_temp float) vec1 32 ssa_2783 = intrinsic load_deref (ssa_2782) (access=0) vec1 32 ssa_2784 = deref_var &phi@110 (function_temp float) vec1 32 ssa_2785 = intrinsic load_deref (ssa_2784) (access=0) vec1 32 ssa_2786 = fmul ssa_2775, ssa_2021 vec1 32 ssa_2787 = fmul ssa_2775, ssa_2023 vec1 32 ssa_2788 = fmul ssa_2775, ssa_2025 vec1 32 ssa_2789 = fmul ssa_2777, ssa_2019 vec1 32 ssa_2790 = fmul ssa_2779, ssa_2085 vec1 32 ssa_2791 = fmul ssa_2779, ssa_2086 vec1 32 ssa_2792 = fmul ssa_2779, ssa_2087 vec1 32 ssa_2793 = fadd ssa_2790, ssa_2781 vec1 32 ssa_2794 = fadd ssa_2783, ssa_2791 vec1 32 ssa_2795 = fadd ssa_2785, ssa_2792 vec1 32 ssa_2796 = deref_var &phi@113 (function_temp float) intrinsic store_deref (ssa_2796, ssa_2795) (wrmask=x /*1*/, access=0) vec1 32 ssa_2797 = deref_var &phi@112 (function_temp float) intrinsic store_deref (ssa_2797, ssa_2794) (wrmask=x /*1*/, access=0) vec1 32 ssa_2798 = deref_var &phi@111 (function_temp float) intrinsic store_deref (ssa_2798, ssa_2793) (wrmask=x /*1*/, access=0) /* succs: block_73 block_74 */ if ssa_640 { block block_73: /* preds: block_72 */ vec3 32 ssa_2799 = vec3 ssa_686.z, ssa_688.w, ssa_674.x vec3 32 ssa_2800 = vec3 ssa_2793, ssa_2794, ssa_2795 vec1 32 ssa_2801 = fdot3 ssa_2799, ssa_2800 vec1 32 ssa_2802 = fmul ssa_2793, ssa_62 vec1 32 ssa_2803 = fmul ssa_2802, ssa_2801 vec1 32 ssa_2804 = fmul ssa_2794, ssa_61 vec1 32 ssa_2805 = fmul ssa_2804, ssa_2801 vec1 32 ssa_2806 = fmul ssa_2795, ssa_60 vec1 32 ssa_2807 = fmul ssa_2806, ssa_2801 vec1 32 ssa_2808 = fsub ssa_686.z, ssa_2803 vec1 32 ssa_2809 = fsub ssa_688.w, ssa_2805 vec1 32 ssa_2810 = fsub ssa_674.x, ssa_2807 vec1 32 ssa_2811 = deref_var &phi@113 (function_temp float) intrinsic store_deref (ssa_2811, ssa_2810) (wrmask=x /*1*/, access=0) vec1 32 ssa_2812 = deref_var &phi@112 (function_temp float) intrinsic store_deref (ssa_2812, ssa_2809) (wrmask=x /*1*/, access=0) vec1 32 ssa_2813 = deref_var &phi@111 (function_temp float) intrinsic store_deref (ssa_2813, ssa_2808) (wrmask=x /*1*/, access=0) /* succs: block_75 */ } else { block block_74: /* preds: block_72 */ /* succs: block_75 */ } block block_75: /* preds: block_73 block_74 */ vec1 32 ssa_2814 = deref_var &phi@111 (function_temp float) vec1 32 ssa_2815 = intrinsic load_deref (ssa_2814) (access=0) vec1 32 ssa_2816 = deref_var &phi@112 (function_temp float) vec1 32 ssa_2817 = intrinsic load_deref 
(ssa_2816) (access=0) vec1 32 ssa_2818 = deref_var &phi@113 (function_temp float) vec1 32 ssa_2819 = intrinsic load_deref (ssa_2818) (access=0) vec4 32 ssa_2820 = intrinsic load_vulkan_descriptor (ssa_636) (desc_type=SSBO /*7*/) vec4 32 ssa_2821 = deref_cast (BindlessCBV *)ssa_2820 (ssbo BindlessCBV) /* ptr_stride=0, align_mul=4, align_offset=0 */ vec4 32 ssa_2822 = deref_struct &ssa_2821->field0 (ssbo vec4[4096]) /* &((BindlessCBV *)ssa_2820)->field0 */ vec1 32 ssa_2823 = load_const (0x00000000 = 0.000000) vec4 32 ssa_2824 = deref_array &(*ssa_2822)[0] (ssbo vec4) /* &((BindlessCBV *)ssa_2820)->field0[0] */ vec4 32 ssa_2825 = intrinsic load_deref (ssa_2824) (access=16) vec1 32 ssa_2826 = fmul ssa_2825.y, ssa_59 vec1 32 ssa_2827 = fsqrt ssa_2786 vec1 32 ssa_2828 = fsqrt ssa_2787 vec1 32 ssa_2829 = fsqrt ssa_2788 vec1 32 ssa_2830 = fabs ssa_2819 vec1 32 ssa_2831 = fabs ssa_2817 vec1 32 ssa_2832 = fmax! ssa_2831, ssa_2830 vec1 32 ssa_2833 = fabs ssa_2815 vec1 32 ssa_2834 = fmax! ssa_2833, ssa_2832 vec1 32 ssa_2835 = fdiv ssa_2815, ssa_2834 vec1 32 ssa_2836 = fdiv ssa_2817, ssa_2834 vec1 32 ssa_2837 = fdiv ssa_2819, ssa_2834 vec1 32 ssa_2838 = fmul ssa_2835, ssa_58 vec1 32 ssa_2839 = fmul ssa_2836, ssa_57 vec1 32 ssa_2840 = fmul ssa_2837, ssa_56 vec1 32 ssa_2841 = fadd ssa_2838, ssa_55 vec1 32 ssa_2842 = fadd ssa_2839, ssa_54 vec1 32 ssa_2843 = fadd ssa_2840, ssa_53 vec1 32 ssa_2844 = fmul ssa_2072, ssa_52 vec1 32 ssa_2845 = f2u32 ssa_2844 vec1 32 ssa_2846 = u2f32 ssa_2845 vec1 32 ssa_2847 = fmul ssa_2846, ssa_51 vec1 32 ssa_2848 = fdiv ssa_660.z, ssa_650.y vec1 32 ssa_2849 = fdiv ssa_662.w, ssa_650.y vec1 32 ssa_2850 = fdiv ssa_648.x, ssa_650.y vec1 32 ssa_2851 = fdiv ssa_652.z, ssa_644.y vec1 32 ssa_2852 = fdiv ssa_654.w, ssa_644.y vec1 32 ssa_2853 = fdiv ssa_642.x, ssa_644.y vec1 32 ssa_2854 = fsub ssa_2851, ssa_2848 vec1 32 ssa_2855 = fsub ssa_2852, ssa_2849 vec1 32 ssa_2856 = fsub ssa_2853, ssa_2850 vec1 32 ssa_2857 = fmul ssa_2854, ssa_50 vec1 32 ssa_2858 = fmul ssa_2855, ssa_49 vec1 32 ssa_2859 = fmul ssa_2856, ssa_48 vec1 32 ssa_2860 = deref_var &SV_Target (shader_out vec4) vec4 32 ssa_2861 = intrinsic load_deref (ssa_2860) (access=0) vec4 32 ssa_2862 = vec4 ssa_2827, ssa_2861.y, ssa_2861.z, ssa_2861.w intrinsic store_deref (ssa_2860, ssa_2862) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_2863 = deref_var &SV_Target (shader_out vec4) vec4 32 ssa_2864 = intrinsic load_deref (ssa_2863) (access=0) vec4 32 ssa_2865 = vec4 ssa_2864.x, ssa_2828, ssa_2864.z, ssa_2864.w intrinsic store_deref (ssa_2863, ssa_2865) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_2866 = deref_var &SV_Target (shader_out vec4) vec4 32 ssa_2867 = intrinsic load_deref (ssa_2866) (access=0) vec4 32 ssa_2868 = vec4 ssa_2867.x, ssa_2867.y, ssa_2829, ssa_2867.w intrinsic store_deref (ssa_2866, ssa_2868) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_2869 = deref_var &SV_Target (shader_out vec4) vec4 32 ssa_2870 = intrinsic load_deref (ssa_2869) (access=0) vec4 32 ssa_2871 = vec4 ssa_2870.x, ssa_2870.y, ssa_2870.z, ssa_2826 intrinsic store_deref (ssa_2869, ssa_2871) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_2872 = deref_var &SV_Target_1 (shader_out vec4) vec4 32 ssa_2873 = intrinsic load_deref (ssa_2872) (access=0) vec4 32 ssa_2874 = vec4 ssa_2841, ssa_2873.y, ssa_2873.z, ssa_2873.w intrinsic store_deref (ssa_2872, ssa_2874) (wrmask=xyzw /*15*/, access=0) vec1 32 ssa_2875 = deref_var &SV_Target_1 (shader_out vec4) vec4 32 ssa_2876 = intrinsic load_deref (ssa_2875) (access=0) vec4 32 ssa_2877 = vec4 ssa_2876.x, ssa_2842, ssa_2876.z, 
ssa_2876.w
intrinsic store_deref (ssa_2875, ssa_2877) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2878 = deref_var &SV_Target_1 (shader_out vec4)
vec4 32 ssa_2879 = intrinsic load_deref (ssa_2878) (access=0)
vec4 32 ssa_2880 = vec4 ssa_2879.x, ssa_2879.y, ssa_2843, ssa_2879.w
intrinsic store_deref (ssa_2878, ssa_2880) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2881 = deref_var &SV_Target_1 (shader_out vec4)
vec4 32 ssa_2882 = intrinsic load_deref (ssa_2881) (access=0)
vec4 32 ssa_2883 = vec4 ssa_2882.x, ssa_2882.y, ssa_2882.z, ssa_47
intrinsic store_deref (ssa_2881, ssa_2883) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2884 = deref_var &SV_Target_2 (shader_out vec4)
vec4 32 ssa_2885 = intrinsic load_deref (ssa_2884) (access=0)
vec4 32 ssa_2886 = vec4 ssa_2017, ssa_2885.y, ssa_2885.z, ssa_2885.w
intrinsic store_deref (ssa_2884, ssa_2886) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2887 = deref_var &SV_Target_2 (shader_out vec4)
vec4 32 ssa_2888 = intrinsic load_deref (ssa_2887) (access=0)
vec4 32 ssa_2889 = vec4 ssa_2888.x, ssa_2789, ssa_2888.z, ssa_2888.w
intrinsic store_deref (ssa_2887, ssa_2889) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2890 = deref_var &SV_Target_2 (shader_out vec4)
vec4 32 ssa_2891 = intrinsic load_deref (ssa_2890) (access=0)
vec4 32 ssa_2892 = vec4 ssa_2891.x, ssa_2891.y, ssa_46, ssa_2891.w
intrinsic store_deref (ssa_2890, ssa_2892) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2893 = deref_var &SV_Target_2 (shader_out vec4)
vec4 32 ssa_2894 = intrinsic load_deref (ssa_2893) (access=0)
vec4 32 ssa_2895 = vec4 ssa_2894.x, ssa_2894.y, ssa_2894.z, ssa_2847
intrinsic store_deref (ssa_2893, ssa_2895) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2896 = deref_var &SV_Target_3 (shader_out vec4)
vec4 32 ssa_2897 = intrinsic load_deref (ssa_2896) (access=0)
vec4 32 ssa_2898 = vec4 ssa_2857, ssa_2897.y, ssa_2897.z, ssa_2897.w
intrinsic store_deref (ssa_2896, ssa_2898) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2899 = deref_var &SV_Target_3 (shader_out vec4)
vec4 32 ssa_2900 = intrinsic load_deref (ssa_2899) (access=0)
vec4 32 ssa_2901 = vec4 ssa_2900.x, ssa_2858, ssa_2900.z, ssa_2900.w
intrinsic store_deref (ssa_2899, ssa_2901) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2902 = deref_var &SV_Target_3 (shader_out vec4)
vec4 32 ssa_2903 = intrinsic load_deref (ssa_2902) (access=0)
vec4 32 ssa_2904 = vec4 ssa_2903.x, ssa_2903.y, ssa_2859, ssa_2903.w
intrinsic store_deref (ssa_2902, ssa_2904) (wrmask=xyzw /*15*/, access=0)
vec1 32 ssa_2905 = deref_var &SV_Target_3 (shader_out vec4)
vec4 32 ssa_2906 = intrinsic load_deref (ssa_2905) (access=0)
vec4 32 ssa_2907 = vec4 ssa_2906.x, ssa_2906.y, ssa_2906.z, ssa_45
intrinsic store_deref (ssa_2905, ssa_2907) (wrmask=xyzw /*15*/, access=0)
/* succs: block_76 */
block block_76:
}

NIR (SSA form) for vertex shader:
shader: MESA_SHADER_VERTEX
source_sha1: {0x44ea7faa, 0x2435d792, 0xdd0f3938, 0x25cede04, 0x94926cc7}
stage: 0
next_stage: 0
num_ssbos: 2
inputs_read: 15-22,25-28,30
outputs_written: 0,17,24,33-39
subgroup_size: 2
clip_distance_array_size: 1
divergence_analysis_run: true
bit_sizes_float: 0x20
bit_sizes_int: 0x21
separate_shader: true
inputs: 0
outputs: 0
uniforms: 256
decl_var push_const INTERP_MODE_NONE RootConstants registers
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @0 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly BindlessCBV[] @1 (~0, 0, 2)
decl_var shader_in INTERP_MODE_NONE vec3 POSITION (VERT_ATTRIB_GENERIC0.xyz, 15, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES (VERT_ATTRIB_GENERIC1.xyzw, 16, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT (VERT_ATTRIB_GENERIC2.xyzw, 17, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES_1 (VERT_ATTRIB_GENERIC3.xyzw, 18, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT_1 (VERT_ATTRIB_GENERIC4.xyzw, 19, 0)
decl_var shader_in INTERP_MODE_NONE vec2 TEXCOORD (VERT_ATTRIB_GENERIC5.xy, 20, 0)
decl_var shader_in INTERP_MODE_NONE vec3 NORMAL (VERT_ATTRIB_GENERIC6.xyz, 21, 0)
decl_var shader_in INTERP_MODE_NONE vec4 TANGENT (VERT_ATTRIB_GENERIC7.xyzw, 22, 0)
decl_var shader_in INTERP_MODE_NONE vec4[3] INSTANCE_TRANSFORM (VERT_ATTRIB_GENERIC10.xyzw, 25, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 INSTANCE_SKINNING_DATA (VERT_ATTRIB_GENERIC13.xyzw, 28, 0)
decl_var shader_in INTERP_MODE_NONE float LIGHT_BLOCKER_INTENSITY (VERT_ATTRIB_GENERIC15.x, 30, 0)
decl_var invariant shader_out INTERP_MODE_NONE vec4 SV_Position (VARYING_SLOT_POS.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE float[1] @2 (VARYING_SLOT_CLIP_DIST0.x, 17, 0) compact
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD@3 (VARYING_SLOT_VAR1.xyzw, 33, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_1 (VARYING_SLOT_VAR2.xyzw, 34, 0)
decl_var shader_out INTERP_MODE_NONE vec3 TEXCOORD_2 (VARYING_SLOT_VAR3.xyz, 35, 0)
decl_var shader_out INTERP_MODE_NONE vec2 TEXCOORD_3 (VARYING_SLOT_VAR4.zw, 36, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_4 (VARYING_SLOT_VAR5.xyzw, 37, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_5 (VARYING_SLOT_VAR6.xyzw, 38, 0)
decl_var shader_out INTERP_MODE_NONE vec3 TEXCOORD_6 (VARYING_SLOT_VAR7.xyz, 39, 0)
decl_function main (0 params)

impl main {
block block_0:
/* preds: */
vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000)
vec1 32 con ssa_1 = load_const (0xbf800000 = -1.000000)
vec1 32 con ssa_2 = load_const (0x40000000 = 2.000000)
vec1 32 con ssa_3 = load_const (0x37000000 = 0.000008)
vec1 32 con ssa_4 = load_const (0x00000001 = 0.000000)
vec1 32 con ssa_5 = load_const (0x4479ffff = 999.999939)
vec1 32 con ssa_6 = load_const (0x00000003 = 0.000000)
vec1 32 con ssa_7 = load_const (0x00000002 = 0.000000)
vec1 32 con ssa_8 = load_const (0x00000010 = 0.000000)
vec1 32 con ssa_9 = load_const (0x00000008 = 0.000000)
vec1 32 con ssa_10 = load_const (0x00000020 = 0.000000)
vec1 32 con ssa_11 = load_const (0x3f000000 = 0.500000)
vec1 32 con ssa_12 = load_const (0x3727c5ac = 0.000010)
vec1 32 con ssa_13 = load_const (0x00000018 = 0.000000)
vec1 32 con ssa_14 = intrinsic load_uniform (ssa_13) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_15 = iadd ssa_14, ssa_7
vec1 32 con ssa_16 = load_const (0x000f423f = 0.000000)
vec1 32 con ssa_17 = umin ssa_15, ssa_16
vec1 32 con ssa_18 = intrinsic load_uniform (ssa_0) (base=252, range=4, dest_type=uint /*4*/)
vec1 32 con ssa_19 = load_const (0x00000006 = 0.000000)
vec1 32 con ssa_20 = ishl ssa_17, ssa_19
vec1 32 con ssa_21 = load_const (0x00000080 = 0.000000)
vec1 32 con ssa_22 = iadd3 ssa_21, ssa_20, ssa_18
vec1 32 con ssa_23 = load_const (0xdeaddeed = -6264355898823540736.000000)
vec1 32 con ssa_24 = intrinsic resource_intel (ssa_23, ssa_22, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_25 = intrinsic load_uniform (ssa_8) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_26 = umin ssa_25, ssa_16
vec1 32 con ssa_27 = ishl ssa_26, ssa_19
vec1 32 con ssa_28 = iadd3 ssa_21, ssa_27, ssa_18
vec1 32 con ssa_29 = intrinsic resource_intel (ssa_23, ssa_28, ssa_23) (desc_set=1, binding=2, resource_intel=bindless
/*1*/, resource_block_intel=-1) vec1 32 con ssa_30 = intrinsic load_uniform (ssa_9) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_31 = umin ssa_30, ssa_16 vec1 32 con ssa_32 = ishl ssa_31, ssa_19 vec1 32 con ssa_33 = iadd3 ssa_21, ssa_32, ssa_18 vec1 32 con ssa_34 = intrinsic resource_intel (ssa_23, ssa_33, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_35 = intrinsic load_uniform (ssa_0) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_36 = iadd ssa_35, ssa_4 vec1 32 con ssa_37 = umin ssa_36, ssa_16 vec1 32 con ssa_38 = ishl ssa_37, ssa_19 vec1 32 con ssa_39 = iadd3 ssa_21, ssa_38, ssa_18 vec1 32 con ssa_40 = intrinsic resource_intel (ssa_23, ssa_39, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_41 = load_const (0x0000000c = 0.000000) vec1 32 con ssa_42 = intrinsic load_uniform (ssa_41) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_43 = umin ssa_42, ssa_16 vec1 32 con ssa_44 = ishl ssa_43, ssa_19 vec1 32 con ssa_45 = iadd3 ssa_21, ssa_44, ssa_18 vec1 32 con ssa_46 = intrinsic resource_intel (ssa_23, ssa_45, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 div ssa_47 = intrinsic load_input (ssa_0) (base=12, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC15 slots=1 /*158*/) vec3 32 div ssa_48 = intrinsic load_input (ssa_0) (base=11, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC13 slots=1 /*156*/) vec4 32 div ssa_49 = intrinsic load_input (ssa_0) (base=7, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC7 slots=1 /*150*/) vec3 32 div ssa_50 = intrinsic load_input (ssa_0) (base=6, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC6 slots=1 /*149*/) vec2 32 div ssa_51 = intrinsic load_input (ssa_0) (base=5, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC5 slots=1 /*148*/) vec4 32 div ssa_52 = intrinsic load_input (ssa_0) (base=4, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC4 slots=1 /*147*/) vec4 32 div ssa_53 = intrinsic load_input (ssa_0) (base=3, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC3 slots=1 /*146*/) vec4 32 div ssa_54 = intrinsic load_input (ssa_0) (base=2, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC2 slots=1 /*145*/) vec4 32 div ssa_55 = intrinsic load_input (ssa_0) (base=1, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC1 slots=1 /*144*/) vec3 32 div ssa_56 = intrinsic load_input (ssa_0) (base=0, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC0 slots=1 /*143*/) vec4 32 div ssa_57 = intrinsic load_input (ssa_0) (base=8, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC10 slots=1 /*153*/) vec4 32 div ssa_58 = intrinsic load_input (ssa_0) (base=9, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC11 slots=1 /*154*/) vec4 32 div ssa_59 = intrinsic load_input (ssa_0) (base=10, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC12 slots=1 /*155*/) vec1 32 con ssa_60 = load_const (0x00000260 = 0.000000) vec4 32 con ssa_61 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_60) (access=80, align_mul=1073741824, align_offset=608) vec1 32 con ssa_62 = ineg! ssa_61.x vec1 32 div ssa_63 = iadd! ssa_57.w, ssa_62 vec1 32 con ssa_64 = ineg! ssa_61.y vec1 32 div ssa_65 = iadd! 
ssa_58.w, ssa_64 vec1 32 con ssa_66 = ineg! ssa_61.z vec1 32 div ssa_67 = iadd! ssa_59.w, ssa_66 vec1 32 div ssa_68 = i2f32! ssa_63 vec1 32 div ssa_69 = i2f32! ssa_65 vec1 32 div ssa_70 = i2f32! ssa_67 vec1 32 div ssa_71 = fmul! ssa_68, ssa_3 vec1 32 div ssa_72 = fmul! ssa_69, ssa_3 vec1 32 div ssa_73 = fmul! ssa_70, ssa_3 vec1 32 con ssa_74 = load_const (0x00000040 = 0.000000) vec8 32 con ssa_75 = intrinsic load_ssbo_uniform_block_intel (ssa_34, ssa_74) (access=80, align_mul=1073741824, align_offset=64) vec1 32 con ssa_76 = load_const (0x00000004 = 0.000000) vec1 32 div ssa_77 = fmul! ssa_75.a, ssa_56.x vec1 32 div ssa_78 = fmul! ssa_75.b, ssa_56.y vec1 32 div ssa_79 = fmul! ssa_75.c, ssa_56.z vec1 32 div ssa_80 = fadd! ssa_77, ssa_75.e vec1 32 div ssa_81 = fadd! ssa_78, ssa_75.f vec1 32 div ssa_82 = fadd! ssa_79, ssa_75.g vec4 32 con ssa_83 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_8) (access=80, align_mul=1073741824, align_offset=16) vec1 32 div ssa_84 = fadd! ssa_54.y, ssa_54.x vec1 32 div ssa_85 = fadd! ssa_54.z, ssa_84 vec1 32 div ssa_86 = fadd! ssa_54.w, ssa_85 vec1 32 div ssa_87 = fadd! ssa_52.y, ssa_52.x vec1 32 div ssa_88 = fadd! ssa_52.z, ssa_87 vec1 32 div ssa_89 = fadd! ssa_52.w, ssa_88 vec1 32 div ssa_90 = fadd! ssa_89, ssa_86 vec1 32 div ssa_91 = fmax! ssa_12, ssa_90 vec1 32 div ssa_92 = frcp! ssa_91 vec1 32 div ssa_93 = fmul! ssa_54.x, ssa_92 vec1 32 div ssa_94 = fmul! ssa_54.y, ssa_92 vec1 32 div ssa_95 = fmul! ssa_54.z, ssa_92 vec1 32 div ssa_96 = fmul! ssa_54.w, ssa_92 vec1 32 div ssa_97 = fmul! ssa_52.x, ssa_92 vec1 32 div ssa_98 = fmul! ssa_52.y, ssa_92 vec1 32 div ssa_99 = fmul! ssa_52.z, ssa_92 vec1 32 div ssa_100 = fmul! ssa_52.w, ssa_92 vec1 32 div ssa_101 = imul ssa_55.x, ssa_48.y vec1 32 div ssa_102 = iadd ssa_101, ssa_48.x vec1 32 con ssa_103 = load_const (0xfffffffc = -nan) vec1 32 div ssa_104 = iand ssa_102, ssa_103 vec1 32 div ssa_105 = intrinsic load_ssbo (ssa_24, ssa_104) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_106 = iadd ssa_104, ssa_76 vec1 32 div ssa_107 = intrinsic load_ssbo (ssa_24, ssa_106) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_108 = iadd ssa_104, ssa_9 vec1 32 div ssa_109 = intrinsic load_ssbo (ssa_24, ssa_108) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_110 = iadd ssa_104, ssa_41 vec1 32 div ssa_111 = intrinsic load_ssbo (ssa_24, ssa_110) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_112 = iadd ssa_104, ssa_8 vec1 32 div ssa_113 = intrinsic load_ssbo (ssa_24, ssa_112) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_114 = load_const (0x00000014 = 0.000000) vec1 32 div ssa_115 = iadd ssa_114, ssa_104 vec1 32 div ssa_116 = intrinsic load_ssbo (ssa_24, ssa_115) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_117 = iadd ssa_13, ssa_104 vec1 32 div ssa_118 = intrinsic load_ssbo (ssa_24, ssa_117) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_119 = load_const (0x0000001c = 0.000000) vec1 32 div ssa_120 = iadd ssa_119, ssa_104 vec1 32 div ssa_121 = intrinsic load_ssbo (ssa_24, ssa_120) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_122 = iadd ssa_104, ssa_10 vec1 32 div ssa_123 = intrinsic load_ssbo (ssa_24, ssa_122) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_124 = load_const (0x00000024 = 0.000000) vec1 32 div ssa_125 = iadd ssa_124, ssa_104 vec1 32 div ssa_126 = intrinsic load_ssbo (ssa_24, ssa_125) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_127 = load_const (0x00000028 = 0.000000) vec1 32 div ssa_128 = iadd ssa_127, 
ssa_104 vec1 32 div ssa_129 = intrinsic load_ssbo (ssa_24, ssa_128) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_130 = load_const (0x0000002c = 0.000000) vec1 32 div ssa_131 = iadd ssa_130, ssa_104 vec1 32 div ssa_132 = intrinsic load_ssbo (ssa_24, ssa_131) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_133 = imul ssa_53.x, ssa_48.y vec1 32 div ssa_134 = iadd ssa_133, ssa_48.x vec1 32 div ssa_135 = iand ssa_134, ssa_103 vec1 32 div ssa_136 = intrinsic load_ssbo (ssa_24, ssa_135) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_137 = iadd ssa_135, ssa_76 vec1 32 div ssa_138 = intrinsic load_ssbo (ssa_24, ssa_137) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_139 = iadd ssa_135, ssa_9 vec1 32 div ssa_140 = intrinsic load_ssbo (ssa_24, ssa_139) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_141 = iadd ssa_135, ssa_41 vec1 32 div ssa_142 = intrinsic load_ssbo (ssa_24, ssa_141) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_143 = iadd ssa_135, ssa_8 vec1 32 div ssa_144 = intrinsic load_ssbo (ssa_24, ssa_143) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_145 = iadd ssa_114, ssa_135 vec1 32 div ssa_146 = intrinsic load_ssbo (ssa_24, ssa_145) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_147 = iadd ssa_13, ssa_135 vec1 32 div ssa_148 = intrinsic load_ssbo (ssa_24, ssa_147) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_149 = iadd ssa_119, ssa_135 vec1 32 div ssa_150 = intrinsic load_ssbo (ssa_24, ssa_149) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_151 = iadd ssa_135, ssa_10 vec1 32 div ssa_152 = intrinsic load_ssbo (ssa_24, ssa_151) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_153 = iadd ssa_124, ssa_135 vec1 32 div ssa_154 = intrinsic load_ssbo (ssa_24, ssa_153) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_155 = iadd ssa_127, ssa_135 vec1 32 div ssa_156 = intrinsic load_ssbo (ssa_24, ssa_155) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_157 = iadd ssa_130, ssa_135 vec1 32 div ssa_158 = intrinsic load_ssbo (ssa_24, ssa_157) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_159 = fmul! ssa_105, ssa_93 vec1 32 div ssa_160 = fmul! ssa_113, ssa_93 vec1 32 div ssa_161 = fmul! ssa_123, ssa_93 vec1 32 div ssa_162 = fmul! ssa_107, ssa_93 vec1 32 div ssa_163 = fmul! ssa_116, ssa_93 vec1 32 div ssa_164 = fmul! ssa_126, ssa_93 vec1 32 div ssa_165 = fmul! ssa_109, ssa_93 vec1 32 div ssa_166 = fmul! ssa_118, ssa_93 vec1 32 div ssa_167 = fmul! ssa_129, ssa_93 vec1 32 div ssa_168 = fmul! ssa_111, ssa_93 vec1 32 div ssa_169 = fmul! ssa_121, ssa_93 vec1 32 div ssa_170 = fmul! ssa_132, ssa_93 vec1 32 div ssa_171 = fmul! ssa_136, ssa_97 vec1 32 div ssa_172 = fmul! ssa_144, ssa_97 vec1 32 div ssa_173 = fmul! ssa_152, ssa_97 vec1 32 div ssa_174 = fmul! ssa_138, ssa_97 vec1 32 div ssa_175 = fmul! ssa_146, ssa_97 vec1 32 div ssa_176 = fmul! ssa_154, ssa_97 vec1 32 div ssa_177 = fmul! ssa_140, ssa_97 vec1 32 div ssa_178 = fmul! ssa_148, ssa_97 vec1 32 div ssa_179 = fmul! ssa_156, ssa_97 vec1 32 div ssa_180 = fmul! ssa_142, ssa_97 vec1 32 div ssa_181 = fmul! ssa_150, ssa_97 vec1 32 div ssa_182 = fmul! ssa_158, ssa_97 vec1 32 div ssa_183 = fadd! ssa_171, ssa_159 vec1 32 div ssa_184 = fadd! ssa_172, ssa_160 vec1 32 div ssa_185 = fadd! ssa_173, ssa_161 vec1 32 div ssa_186 = fadd! ssa_174, ssa_162 vec1 32 div ssa_187 = fadd! ssa_175, ssa_163 vec1 32 div ssa_188 = fadd! ssa_176, ssa_164 vec1 32 div ssa_189 = fadd! ssa_177, ssa_165 vec1 32 div ssa_190 = fadd! 
ssa_178, ssa_166 vec1 32 div ssa_191 = fadd! ssa_179, ssa_167 vec1 32 div ssa_192 = fadd! ssa_180, ssa_168 vec1 32 div ssa_193 = fadd! ssa_181, ssa_169 vec1 32 div ssa_194 = fadd! ssa_182, ssa_170 vec1 32 div ssa_195 = imul ssa_55.y, ssa_48.y vec1 32 div ssa_196 = iadd ssa_195, ssa_48.x vec1 32 div ssa_197 = iand ssa_196, ssa_103 vec1 32 div ssa_198 = intrinsic load_ssbo (ssa_24, ssa_197) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_199 = iadd ssa_197, ssa_76 vec1 32 div ssa_200 = intrinsic load_ssbo (ssa_24, ssa_199) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_201 = iadd ssa_197, ssa_9 vec1 32 div ssa_202 = intrinsic load_ssbo (ssa_24, ssa_201) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_203 = iadd ssa_197, ssa_41 vec1 32 div ssa_204 = intrinsic load_ssbo (ssa_24, ssa_203) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_205 = iadd ssa_197, ssa_8 vec1 32 div ssa_206 = intrinsic load_ssbo (ssa_24, ssa_205) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_207 = iadd ssa_114, ssa_197 vec1 32 div ssa_208 = intrinsic load_ssbo (ssa_24, ssa_207) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_209 = iadd ssa_13, ssa_197 vec1 32 div ssa_210 = intrinsic load_ssbo (ssa_24, ssa_209) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_211 = iadd ssa_119, ssa_197 vec1 32 div ssa_212 = intrinsic load_ssbo (ssa_24, ssa_211) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_213 = iadd ssa_197, ssa_10 vec1 32 div ssa_214 = intrinsic load_ssbo (ssa_24, ssa_213) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_215 = iadd ssa_124, ssa_197 vec1 32 div ssa_216 = intrinsic load_ssbo (ssa_24, ssa_215) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_217 = iadd ssa_127, ssa_197 vec1 32 div ssa_218 = intrinsic load_ssbo (ssa_24, ssa_217) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_219 = iadd ssa_130, ssa_197 vec1 32 div ssa_220 = intrinsic load_ssbo (ssa_24, ssa_219) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_221 = imul ssa_53.y, ssa_48.y vec1 32 div ssa_222 = iadd ssa_221, ssa_48.x vec1 32 div ssa_223 = iand ssa_222, ssa_103 vec1 32 div ssa_224 = intrinsic load_ssbo (ssa_24, ssa_223) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_225 = iadd ssa_223, ssa_76 vec1 32 div ssa_226 = intrinsic load_ssbo (ssa_24, ssa_225) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_227 = iadd ssa_223, ssa_9 vec1 32 div ssa_228 = intrinsic load_ssbo (ssa_24, ssa_227) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_229 = iadd ssa_223, ssa_41 vec1 32 div ssa_230 = intrinsic load_ssbo (ssa_24, ssa_229) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_231 = iadd ssa_223, ssa_8 vec1 32 div ssa_232 = intrinsic load_ssbo (ssa_24, ssa_231) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_233 = iadd ssa_114, ssa_223 vec1 32 div ssa_234 = intrinsic load_ssbo (ssa_24, ssa_233) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_235 = iadd ssa_13, ssa_223 vec1 32 div ssa_236 = intrinsic load_ssbo (ssa_24, ssa_235) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_237 = iadd ssa_119, ssa_223 vec1 32 div ssa_238 = intrinsic load_ssbo (ssa_24, ssa_237) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_239 = iadd ssa_223, ssa_10 vec1 32 div ssa_240 = intrinsic load_ssbo (ssa_24, ssa_239) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_241 = iadd ssa_124, ssa_223 vec1 32 div ssa_242 = intrinsic load_ssbo (ssa_24, ssa_241) (access=80, align_mul=4, align_offset=0) 
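/* The surrounding run reads like classic matrix-palette skinning: each of
 * the eight blend indices (BLENDINDICES/BLENDINDICES_1, ssa_55/ssa_53) is
 * turned into a dword-aligned byte offset via INSTANCE_SKINNING_DATA
 * (ssa_48: imul by .y, iadd .x, iand 0xfffffffc), twelve consecutive dwords
 * (one 3x4 bone matrix) are loaded through the bindless SSBO handle ssa_24,
 * scaled by the matching normalized blend weight (ssa_93..ssa_100, each
 * weight times the reciprocal of the weight sum clamped to at least ~1e-5),
 * and accumulated. A minimal C sketch of that pattern; the names words,
 * idx, w, base and stride are illustrative, not from this dump:
 *
 *   static void skin_accumulate(const float *words, const unsigned idx[8],
 *                               const float w[8], unsigned base,
 *                               unsigned stride, float acc[12])
 *   {
 *       for (int j = 0; j < 12; ++j)
 *           acc[j] = 0.0f;                            // running 3x4 matrix
 *       for (int i = 0; i < 8; ++i) {                 // 8 bone influences
 *           unsigned byte_off = (idx[i] * stride + base) & 0xfffffffcu;
 *           const float *m = words + byte_off / 4;    // 12 dwords per bone
 *           for (int j = 0; j < 12; ++j)
 *               acc[j] += w[i] * m[j];                // weighted blend
 *       }
 *   }
 */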
vec1 32 div ssa_243 = iadd ssa_127, ssa_223 vec1 32 div ssa_244 = intrinsic load_ssbo (ssa_24, ssa_243) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_245 = iadd ssa_130, ssa_223 vec1 32 div ssa_246 = intrinsic load_ssbo (ssa_24, ssa_245) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_247 = fmul! ssa_198, ssa_94 vec1 32 div ssa_248 = fmul! ssa_206, ssa_94 vec1 32 div ssa_249 = fmul! ssa_214, ssa_94 vec1 32 div ssa_250 = fmul! ssa_200, ssa_94 vec1 32 div ssa_251 = fmul! ssa_208, ssa_94 vec1 32 div ssa_252 = fmul! ssa_216, ssa_94 vec1 32 div ssa_253 = fmul! ssa_202, ssa_94 vec1 32 div ssa_254 = fmul! ssa_210, ssa_94 vec1 32 div ssa_255 = fmul! ssa_218, ssa_94 vec1 32 div ssa_256 = fmul! ssa_204, ssa_94 vec1 32 div ssa_257 = fmul! ssa_212, ssa_94 vec1 32 div ssa_258 = fmul! ssa_220, ssa_94 vec1 32 div ssa_259 = fadd! ssa_183, ssa_247 vec1 32 div ssa_260 = fadd! ssa_184, ssa_248 vec1 32 div ssa_261 = fadd! ssa_185, ssa_249 vec1 32 div ssa_262 = fadd! ssa_186, ssa_250 vec1 32 div ssa_263 = fadd! ssa_187, ssa_251 vec1 32 div ssa_264 = fadd! ssa_188, ssa_252 vec1 32 div ssa_265 = fadd! ssa_189, ssa_253 vec1 32 div ssa_266 = fadd! ssa_190, ssa_254 vec1 32 div ssa_267 = fadd! ssa_191, ssa_255 vec1 32 div ssa_268 = fadd! ssa_192, ssa_256 vec1 32 div ssa_269 = fadd! ssa_193, ssa_257 vec1 32 div ssa_270 = fadd! ssa_194, ssa_258 vec1 32 div ssa_271 = fmul! ssa_224, ssa_98 vec1 32 div ssa_272 = fmul! ssa_232, ssa_98 vec1 32 div ssa_273 = fmul! ssa_240, ssa_98 vec1 32 div ssa_274 = fmul! ssa_226, ssa_98 vec1 32 div ssa_275 = fmul! ssa_234, ssa_98 vec1 32 div ssa_276 = fmul! ssa_242, ssa_98 vec1 32 div ssa_277 = fmul! ssa_228, ssa_98 vec1 32 div ssa_278 = fmul! ssa_236, ssa_98 vec1 32 div ssa_279 = fmul! ssa_244, ssa_98 vec1 32 div ssa_280 = fmul! ssa_230, ssa_98 vec1 32 div ssa_281 = fmul! ssa_238, ssa_98 vec1 32 div ssa_282 = fmul! ssa_246, ssa_98 vec1 32 div ssa_283 = fadd! ssa_259, ssa_271 vec1 32 div ssa_284 = fadd! ssa_260, ssa_272 vec1 32 div ssa_285 = fadd! ssa_261, ssa_273 vec1 32 div ssa_286 = fadd! ssa_262, ssa_274 vec1 32 div ssa_287 = fadd! ssa_263, ssa_275 vec1 32 div ssa_288 = fadd! ssa_264, ssa_276 vec1 32 div ssa_289 = fadd! ssa_265, ssa_277 vec1 32 div ssa_290 = fadd! ssa_266, ssa_278 vec1 32 div ssa_291 = fadd! ssa_267, ssa_279 vec1 32 div ssa_292 = fadd! ssa_268, ssa_280 vec1 32 div ssa_293 = fadd! ssa_269, ssa_281 vec1 32 div ssa_294 = fadd! 
ssa_270, ssa_282 vec1 32 div ssa_295 = imul ssa_55.z, ssa_48.y vec1 32 div ssa_296 = iadd ssa_295, ssa_48.x vec1 32 div ssa_297 = iand ssa_296, ssa_103 vec1 32 div ssa_298 = intrinsic load_ssbo (ssa_24, ssa_297) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_299 = iadd ssa_297, ssa_76 vec1 32 div ssa_300 = intrinsic load_ssbo (ssa_24, ssa_299) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_301 = iadd ssa_297, ssa_9 vec1 32 div ssa_302 = intrinsic load_ssbo (ssa_24, ssa_301) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_303 = iadd ssa_297, ssa_41 vec1 32 div ssa_304 = intrinsic load_ssbo (ssa_24, ssa_303) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_305 = iadd ssa_297, ssa_8 vec1 32 div ssa_306 = intrinsic load_ssbo (ssa_24, ssa_305) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_307 = iadd ssa_114, ssa_297 vec1 32 div ssa_308 = intrinsic load_ssbo (ssa_24, ssa_307) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_309 = iadd ssa_13, ssa_297 vec1 32 div ssa_310 = intrinsic load_ssbo (ssa_24, ssa_309) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_311 = iadd ssa_119, ssa_297 vec1 32 div ssa_312 = intrinsic load_ssbo (ssa_24, ssa_311) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_313 = iadd ssa_297, ssa_10 vec1 32 div ssa_314 = intrinsic load_ssbo (ssa_24, ssa_313) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_315 = iadd ssa_124, ssa_297 vec1 32 div ssa_316 = intrinsic load_ssbo (ssa_24, ssa_315) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_317 = iadd ssa_127, ssa_297 vec1 32 div ssa_318 = intrinsic load_ssbo (ssa_24, ssa_317) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_319 = iadd ssa_130, ssa_297 vec1 32 div ssa_320 = intrinsic load_ssbo (ssa_24, ssa_319) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_321 = imul ssa_53.z, ssa_48.y vec1 32 div ssa_322 = iadd ssa_321, ssa_48.x vec1 32 div ssa_323 = iand ssa_322, ssa_103 vec1 32 div ssa_324 = intrinsic load_ssbo (ssa_24, ssa_323) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_325 = iadd ssa_323, ssa_76 vec1 32 div ssa_326 = intrinsic load_ssbo (ssa_24, ssa_325) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_327 = iadd ssa_323, ssa_9 vec1 32 div ssa_328 = intrinsic load_ssbo (ssa_24, ssa_327) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_329 = iadd ssa_323, ssa_41 vec1 32 div ssa_330 = intrinsic load_ssbo (ssa_24, ssa_329) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_331 = iadd ssa_323, ssa_8 vec1 32 div ssa_332 = intrinsic load_ssbo (ssa_24, ssa_331) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_333 = iadd ssa_114, ssa_323 vec1 32 div ssa_334 = intrinsic load_ssbo (ssa_24, ssa_333) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_335 = iadd ssa_13, ssa_323 vec1 32 div ssa_336 = intrinsic load_ssbo (ssa_24, ssa_335) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_337 = iadd ssa_119, ssa_323 vec1 32 div ssa_338 = intrinsic load_ssbo (ssa_24, ssa_337) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_339 = iadd ssa_323, ssa_10 vec1 32 div ssa_340 = intrinsic load_ssbo (ssa_24, ssa_339) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_341 = iadd ssa_124, ssa_323 vec1 32 div ssa_342 = intrinsic load_ssbo (ssa_24, ssa_341) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_343 = iadd ssa_127, ssa_323 vec1 32 div ssa_344 = intrinsic load_ssbo (ssa_24, ssa_343) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_345 = iadd ssa_130, 
ssa_323 vec1 32 div ssa_346 = intrinsic load_ssbo (ssa_24, ssa_345) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_347 = fmul! ssa_298, ssa_95 vec1 32 div ssa_348 = fmul! ssa_306, ssa_95 vec1 32 div ssa_349 = fmul! ssa_314, ssa_95 vec1 32 div ssa_350 = fmul! ssa_300, ssa_95 vec1 32 div ssa_351 = fmul! ssa_308, ssa_95 vec1 32 div ssa_352 = fmul! ssa_316, ssa_95 vec1 32 div ssa_353 = fmul! ssa_302, ssa_95 vec1 32 div ssa_354 = fmul! ssa_310, ssa_95 vec1 32 div ssa_355 = fmul! ssa_318, ssa_95 vec1 32 div ssa_356 = fmul! ssa_304, ssa_95 vec1 32 div ssa_357 = fmul! ssa_312, ssa_95 vec1 32 div ssa_358 = fmul! ssa_320, ssa_95 vec1 32 div ssa_359 = fadd! ssa_283, ssa_347 vec1 32 div ssa_360 = fadd! ssa_284, ssa_348 vec1 32 div ssa_361 = fadd! ssa_285, ssa_349 vec1 32 div ssa_362 = fadd! ssa_286, ssa_350 vec1 32 div ssa_363 = fadd! ssa_287, ssa_351 vec1 32 div ssa_364 = fadd! ssa_288, ssa_352 vec1 32 div ssa_365 = fadd! ssa_289, ssa_353 vec1 32 div ssa_366 = fadd! ssa_290, ssa_354 vec1 32 div ssa_367 = fadd! ssa_291, ssa_355 vec1 32 div ssa_368 = fadd! ssa_292, ssa_356 vec1 32 div ssa_369 = fadd! ssa_293, ssa_357 vec1 32 div ssa_370 = fadd! ssa_294, ssa_358 vec1 32 div ssa_371 = fmul! ssa_324, ssa_99 vec1 32 div ssa_372 = fmul! ssa_332, ssa_99 vec1 32 div ssa_373 = fmul! ssa_340, ssa_99 vec1 32 div ssa_374 = fmul! ssa_326, ssa_99 vec1 32 div ssa_375 = fmul! ssa_334, ssa_99 vec1 32 div ssa_376 = fmul! ssa_342, ssa_99 vec1 32 div ssa_377 = fmul! ssa_328, ssa_99 vec1 32 div ssa_378 = fmul! ssa_336, ssa_99 vec1 32 div ssa_379 = fmul! ssa_344, ssa_99 vec1 32 div ssa_380 = fmul! ssa_330, ssa_99 vec1 32 div ssa_381 = fmul! ssa_338, ssa_99 vec1 32 div ssa_382 = fmul! ssa_346, ssa_99 vec1 32 div ssa_383 = fadd! ssa_359, ssa_371 vec1 32 div ssa_384 = fadd! ssa_360, ssa_372 vec1 32 div ssa_385 = fadd! ssa_361, ssa_373 vec1 32 div ssa_386 = fadd! ssa_362, ssa_374 vec1 32 div ssa_387 = fadd! ssa_363, ssa_375 vec1 32 div ssa_388 = fadd! ssa_364, ssa_376 vec1 32 div ssa_389 = fadd! ssa_365, ssa_377 vec1 32 div ssa_390 = fadd! ssa_366, ssa_378 vec1 32 div ssa_391 = fadd! ssa_367, ssa_379 vec1 32 div ssa_392 = fadd! ssa_368, ssa_380 vec1 32 div ssa_393 = fadd! ssa_369, ssa_381 vec1 32 div ssa_394 = fadd! 
ssa_370, ssa_382 vec1 32 div ssa_395 = imul ssa_55.w, ssa_48.y vec1 32 div ssa_396 = iadd ssa_395, ssa_48.x vec1 32 div ssa_397 = iand ssa_396, ssa_103 vec1 32 div ssa_398 = intrinsic load_ssbo (ssa_24, ssa_397) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_399 = iadd ssa_397, ssa_76 vec1 32 div ssa_400 = intrinsic load_ssbo (ssa_24, ssa_399) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_401 = iadd ssa_397, ssa_9 vec1 32 div ssa_402 = intrinsic load_ssbo (ssa_24, ssa_401) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_403 = iadd ssa_397, ssa_41 vec1 32 div ssa_404 = intrinsic load_ssbo (ssa_24, ssa_403) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_405 = iadd ssa_397, ssa_8 vec1 32 div ssa_406 = intrinsic load_ssbo (ssa_24, ssa_405) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_407 = iadd ssa_114, ssa_397 vec1 32 div ssa_408 = intrinsic load_ssbo (ssa_24, ssa_407) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_409 = iadd ssa_13, ssa_397 vec1 32 div ssa_410 = intrinsic load_ssbo (ssa_24, ssa_409) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_411 = iadd ssa_119, ssa_397 vec1 32 div ssa_412 = intrinsic load_ssbo (ssa_24, ssa_411) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_413 = iadd ssa_397, ssa_10 vec1 32 div ssa_414 = intrinsic load_ssbo (ssa_24, ssa_413) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_415 = iadd ssa_124, ssa_397 vec1 32 div ssa_416 = intrinsic load_ssbo (ssa_24, ssa_415) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_417 = iadd ssa_127, ssa_397 vec1 32 div ssa_418 = intrinsic load_ssbo (ssa_24, ssa_417) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_419 = iadd ssa_130, ssa_397 vec1 32 div ssa_420 = intrinsic load_ssbo (ssa_24, ssa_419) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_421 = imul ssa_53.w, ssa_48.y vec1 32 div ssa_422 = iadd ssa_421, ssa_48.x vec1 32 div ssa_423 = iand ssa_422, ssa_103 vec1 32 div ssa_424 = intrinsic load_ssbo (ssa_24, ssa_423) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_425 = iadd ssa_423, ssa_76 vec1 32 div ssa_426 = intrinsic load_ssbo (ssa_24, ssa_425) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_427 = iadd ssa_423, ssa_9 vec1 32 div ssa_428 = intrinsic load_ssbo (ssa_24, ssa_427) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_429 = iadd ssa_423, ssa_41 vec1 32 div ssa_430 = intrinsic load_ssbo (ssa_24, ssa_429) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_431 = iadd ssa_423, ssa_8 vec1 32 div ssa_432 = intrinsic load_ssbo (ssa_24, ssa_431) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_433 = iadd ssa_114, ssa_423 vec1 32 div ssa_434 = intrinsic load_ssbo (ssa_24, ssa_433) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_435 = iadd ssa_13, ssa_423 vec1 32 div ssa_436 = intrinsic load_ssbo (ssa_24, ssa_435) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_437 = iadd ssa_119, ssa_423 vec1 32 div ssa_438 = intrinsic load_ssbo (ssa_24, ssa_437) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_439 = iadd ssa_423, ssa_10 vec1 32 div ssa_440 = intrinsic load_ssbo (ssa_24, ssa_439) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_441 = iadd ssa_124, ssa_423 vec1 32 div ssa_442 = intrinsic load_ssbo (ssa_24, ssa_441) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_443 = iadd ssa_127, ssa_423 vec1 32 div ssa_444 = intrinsic load_ssbo (ssa_24, ssa_443) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_445 = iadd ssa_130, 
ssa_423 vec1 32 div ssa_446 = intrinsic load_ssbo (ssa_24, ssa_445) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_447 = fmul! ssa_398, ssa_96 vec1 32 div ssa_448 = fmul! ssa_406, ssa_96 vec1 32 div ssa_449 = fmul! ssa_414, ssa_96 vec1 32 div ssa_450 = fmul! ssa_400, ssa_96 vec1 32 div ssa_451 = fmul! ssa_408, ssa_96 vec1 32 div ssa_452 = fmul! ssa_416, ssa_96 vec1 32 div ssa_453 = fmul! ssa_402, ssa_96 vec1 32 div ssa_454 = fmul! ssa_410, ssa_96 vec1 32 div ssa_455 = fmul! ssa_418, ssa_96 vec1 32 div ssa_456 = fmul! ssa_404, ssa_96 vec1 32 div ssa_457 = fmul! ssa_412, ssa_96 vec1 32 div ssa_458 = fmul! ssa_420, ssa_96 vec1 32 div ssa_459 = fadd! ssa_383, ssa_447 vec1 32 div ssa_460 = fadd! ssa_384, ssa_448 vec1 32 div ssa_461 = fadd! ssa_385, ssa_449 vec1 32 div ssa_462 = fadd! ssa_386, ssa_450 vec1 32 div ssa_463 = fadd! ssa_387, ssa_451 vec1 32 div ssa_464 = fadd! ssa_388, ssa_452 vec1 32 div ssa_465 = fadd! ssa_389, ssa_453 vec1 32 div ssa_466 = fadd! ssa_390, ssa_454 vec1 32 div ssa_467 = fadd! ssa_391, ssa_455 vec1 32 div ssa_468 = fadd! ssa_392, ssa_456 vec1 32 div ssa_469 = fadd! ssa_393, ssa_457 vec1 32 div ssa_470 = fadd! ssa_394, ssa_458 vec1 32 div ssa_471 = fmul! ssa_424, ssa_100 vec1 32 div ssa_472 = fmul! ssa_432, ssa_100 vec1 32 div ssa_473 = fmul! ssa_440, ssa_100 vec1 32 div ssa_474 = fmul! ssa_426, ssa_100 vec1 32 div ssa_475 = fmul! ssa_434, ssa_100 vec1 32 div ssa_476 = fmul! ssa_442, ssa_100 vec1 32 div ssa_477 = fmul! ssa_428, ssa_100 vec1 32 div ssa_478 = fmul! ssa_436, ssa_100 vec1 32 div ssa_479 = fmul! ssa_444, ssa_100 vec1 32 div ssa_480 = fmul! ssa_430, ssa_100 vec1 32 div ssa_481 = fmul! ssa_438, ssa_100 vec1 32 div ssa_482 = fmul! ssa_446, ssa_100 vec1 32 div ssa_483 = fadd! ssa_459, ssa_471 vec1 32 div ssa_484 = fadd! ssa_460, ssa_472 vec1 32 div ssa_485 = fadd! ssa_461, ssa_473 vec1 32 div ssa_486 = fadd! ssa_462, ssa_474 vec1 32 div ssa_487 = fadd! ssa_463, ssa_475 vec1 32 div ssa_488 = fadd! ssa_464, ssa_476 vec1 32 div ssa_489 = fadd! ssa_465, ssa_477 vec1 32 div ssa_490 = fadd! ssa_466, ssa_478 vec1 32 div ssa_491 = fadd! ssa_467, ssa_479 vec1 32 div ssa_492 = fadd! ssa_468, ssa_480 vec1 32 div ssa_493 = fadd! ssa_469, ssa_481 vec1 32 div ssa_494 = fadd! ssa_470, ssa_482 vec1 32 div ssa_495 = fmul! ssa_483, ssa_80 vec1 32 div ssa_496 = ffma! ssa_81, ssa_486, ssa_495 vec1 32 div ssa_497 = ffma! ssa_82, ssa_489, ssa_496 vec1 32 div ssa_498 = fadd! ssa_497, ssa_492 vec1 32 div ssa_499 = fmul! ssa_484, ssa_80 vec1 32 div ssa_500 = ffma! ssa_81, ssa_487, ssa_499 vec1 32 div ssa_501 = ffma! ssa_82, ssa_490, ssa_500 vec1 32 div ssa_502 = fadd! ssa_501, ssa_493 vec1 32 div ssa_503 = fmul! ssa_485, ssa_80 vec1 32 div ssa_504 = ffma! ssa_81, ssa_488, ssa_503 vec1 32 div ssa_505 = ffma! ssa_82, ssa_491, ssa_504 vec1 32 div ssa_506 = fadd! ssa_505, ssa_494 vec1 32 con ssa_507 = fmax! ssa_83.y, ssa_83.z vec1 32 con ssa_508 = f2u32 ssa_83.x vec1 32 con ssa_509 = umin ssa_508, ssa_9 vec1 32 con ssa_510 = ieq32 ssa_509, ssa_0 vec1 32 con ssa_511 = flt32! ssa_11, ssa_507 vec1 32 con ssa_512 = ior! ssa_510, ssa_511 vec1 32 con ssa_513 = inot! 
ssa_512 /* succs: block_1 block_16 */ if ssa_513 { block block_1: /* preds: block_0 */ vec1 32 con ssa_514 = load_const (0x00000030 = 0.000000) /* succs: block_2 */ loop { block block_2: /* preds: block_1 block_14 */ vec1 32 con ssa_515 = phi block_1: ssa_0, block_14: ssa_646 vec1 32 con ssa_516 = phi block_1: ssa_0, block_14: ssa_654 vec1 32 con ssa_517 = phi block_1: ssa_8, block_14: ssa_655 vec1 32 con ssa_518 = phi block_1: ssa_10, block_14: ssa_656 vec1 32 con ssa_519 = phi block_1: ssa_514, block_14: ssa_657 vec4 32 con ssa_520 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_516) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_521 = unpack_half_2x16_split_x! ssa_520.x vec1 32 con ssa_522 = unpack_half_2x16_split_y! ssa_520.x vec1 32 con ssa_523 = unpack_half_2x16_split_x! ssa_520.y vec1 32 con ssa_524 = unpack_half_2x16_split_y! ssa_520.y vec1 32 con ssa_525 = unpack_half_2x16_split_x ssa_520.z vec1 32 con ssa_526 = unpack_half_2x16_split_y ssa_520.z vec1 32 con ssa_527 = unpack_half_2x16_split_x ssa_520.w vec1 32 con ssa_528 = unpack_half_2x16_split_y ssa_520.w vec4 32 con ssa_529 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_517) (access=80, align_mul=4, align_offset=0) vec4 32 con ssa_530 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_518) (access=80, align_mul=4, align_offset=0) vec4 32 con ssa_531 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_519) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_532 = fneg! ssa_529.x vec1 32 div ssa_533 = fadd! ssa_80, ssa_532 vec1 32 con ssa_534 = fneg! ssa_529.y vec1 32 div ssa_535 = fadd! ssa_81, ssa_534 vec1 32 con ssa_536 = fneg! ssa_529.z vec1 32 div ssa_537 = fadd! ssa_82, ssa_536 vec1 32 con ssa_538 = fadd! ssa_530.x, ssa_532 vec1 32 con ssa_539 = fadd! ssa_530.y, ssa_534 vec1 32 con ssa_540 = fadd! ssa_530.z, ssa_536 vec1 32 div ssa_541 = fmul! ssa_533, ssa_538 vec1 32 div ssa_542 = ffma! ssa_535, ssa_539, ssa_541 vec1 32 div ssa_543 = ffma! ssa_537, ssa_540, ssa_542 vec1 32 con ssa_544 = fmul! ssa_538, ssa_538 vec1 32 con ssa_545 = ffma! ssa_539, ssa_539, ssa_544 vec1 32 con ssa_546 = ffma! ssa_540, ssa_540, ssa_545 vec1 32 con ssa_547 = frcp! ssa_546 vec1 32 div ssa_548 = fmul! ssa_543, ssa_547 vec1 32 div ssa_549 = fmul! ssa_548, ssa_538 vec1 32 div ssa_550 = fmul! ssa_548, ssa_539 vec1 32 div ssa_551 = fmul! ssa_548, ssa_540 vec1 32 div ssa_552 = fneg ssa_80 vec1 32 div ssa_553 = fadd ssa_529.x, ssa_552 vec1 32 div ssa_554 = fadd ssa_553, ssa_549 vec1 32 div ssa_555 = fneg ssa_81 vec1 32 div ssa_556 = fadd ssa_529.y, ssa_555 vec1 32 div ssa_557 = fadd ssa_556, ssa_550 vec1 32 div ssa_558 = fneg ssa_82 vec1 32 div ssa_559 = fadd ssa_529.z, ssa_558 vec1 32 div ssa_560 = fadd ssa_559, ssa_551 vec1 32 div ssa_561 = fmul ssa_560, ssa_560 vec1 32 div ssa_562 = ffma ssa_557, ssa_557, ssa_561 vec1 32 div ssa_563 = ffma ssa_554, ssa_554, ssa_562 vec1 32 con ssa_564 = fmul ssa_529.w, ssa_529.w vec1 32 div ssa_565 = ffma ssa_527, ssa_82, ssa_528 vec1 32 div ssa_566 = ffma ssa_526, ssa_81, ssa_565 vec1 32 div ssa_567 = ffma ssa_525, ssa_80, ssa_566 vec1 32 con ssa_568 = ieq32 ssa_531.w, ssa_0 vec1 32 div ssa_569 = fge32! ssa_564, ssa_563 /* succs: block_3 block_7 */ if ssa_568 { block block_3: /* preds: block_2 */ vec1 32 div ssa_570 = ffma ssa_523, ssa_82, ssa_524 vec1 32 div ssa_571 = ffma ssa_522, ssa_81, ssa_570 vec1 32 div ssa_572 = ffma ssa_521, ssa_80, ssa_571 vec1 32 div ssa_573 = flt32! ssa_0, ssa_572 vec1 32 div ssa_574 = iand ssa_569, ssa_573 vec1 32 div ssa_575 = flt32! 
ssa_0, ssa_567 vec1 32 div ssa_576 = iand ssa_574, ssa_575 /* succs: block_4 block_5 */ if ssa_576 { block block_4: /* preds: block_3 */ vec1 32 div ssa_577 = fmul! ssa_80, ssa_521 vec1 32 div ssa_578 = ffma! ssa_81, ssa_522, ssa_577 vec1 32 div ssa_579 = ffma! ssa_82, ssa_523, ssa_578 vec1 32 div ssa_580 = fadd! ssa_524, ssa_579 vec1 32 div ssa_581 = fmul! ssa_521, ssa_580 vec1 32 div ssa_582 = fmul! ssa_522, ssa_580 vec1 32 div ssa_583 = fmul! ssa_523, ssa_580 vec1 32 div ssa_584 = fmul! ssa_581, ssa_581 vec1 32 div ssa_585 = ffma! ssa_582, ssa_582, ssa_584 vec1 32 div ssa_586 = ffma! ssa_583, ssa_583, ssa_585 vec1 32 div ssa_587 = fmul! ssa_586, ssa_5 vec1 32 div ssa_588 = fsat! ssa_587 vec1 32 div ssa_589 = fneg! ssa_498 vec1 32 div ssa_590 = fadd! ssa_531.x, ssa_589 vec1 32 div ssa_591 = fneg! ssa_502 vec1 32 div ssa_592 = fadd! ssa_531.y, ssa_591 vec1 32 div ssa_593 = fneg! ssa_506 vec1 32 div ssa_594 = fadd! ssa_531.z, ssa_593 vec1 32 div ssa_595 = fmul! ssa_588, ssa_590 vec1 32 div ssa_596 = fmul! ssa_588, ssa_592 vec1 32 div ssa_597 = fmul! ssa_588, ssa_594 vec1 32 div ssa_598 = fadd! ssa_595, ssa_498 vec1 32 div ssa_599 = fadd! ssa_596, ssa_502 vec1 32 div ssa_600 = fadd! ssa_597, ssa_506 break /* succs: block_15 */ } else { block block_5: /* preds: block_3 */ /* succs: block_6 */ } block block_6: /* preds: block_5 */ /* succs: block_11 */ } else { block block_7: /* preds: block_2 */ vec1 32 div ssa_601 = fge32! ssa_567, ssa_0 vec1 32 div ssa_602 = iand ssa_569, ssa_601 /* succs: block_8 block_9 */ if ssa_602 { block block_8: /* preds: block_7 */ vec1 32 div ssa_603 = fadd! ssa_549, ssa_529.x vec1 32 div ssa_604 = fadd! ssa_550, ssa_529.y vec1 32 div ssa_605 = fadd! ssa_551, ssa_529.z vec1 32 div ssa_606 = fmul! ssa_80, ssa_521 vec1 32 div ssa_607 = ffma! ssa_81, ssa_522, ssa_606 vec1 32 div ssa_608 = ffma! ssa_82, ssa_523, ssa_607 vec1 32 div ssa_609 = fadd! ssa_524, ssa_608 vec1 32 div ssa_610 = fneg! ssa_609 vec1 32 div ssa_611 = fmul! ssa_610, ssa_521 vec1 32 div ssa_612 = fmul! ssa_610, ssa_522 vec1 32 div ssa_613 = fmul! ssa_610, ssa_523 vec1 32 div ssa_614 = fadd! ssa_80, ssa_611 vec1 32 div ssa_615 = fadd! ssa_81, ssa_612 vec1 32 div ssa_616 = fadd! ssa_82, ssa_613 vec1 32 div ssa_617 = fneg! ssa_603 vec1 32 div ssa_618 = fadd! ssa_614, ssa_617 vec1 32 div ssa_619 = fneg! ssa_604 vec1 32 div ssa_620 = fadd! ssa_615, ssa_619 vec1 32 div ssa_621 = fneg! ssa_605 vec1 32 div ssa_622 = fadd! ssa_616, ssa_621 vec1 32 div ssa_623 = fmul! ssa_618, ssa_618 vec1 32 div ssa_624 = ffma! ssa_620, ssa_620, ssa_623 vec1 32 div ssa_625 = ffma! ssa_622, ssa_622, ssa_624 vec1 32 div ssa_626 = frsq! ssa_625 vec1 32 div ssa_627 = fmul! ssa_626, ssa_529.w vec1 32 div ssa_628 = fmul! ssa_627, ssa_618 vec1 32 div ssa_629 = fmul! ssa_627, ssa_620 vec1 32 div ssa_630 = fmul! ssa_627, ssa_622 vec1 32 div ssa_631 = fadd! ssa_628, ssa_603 vec1 32 div ssa_632 = fadd! ssa_629, ssa_604 vec1 32 div ssa_633 = fadd! ssa_630, ssa_605 vec1 32 div ssa_634 = fmul! ssa_631, ssa_483 vec1 32 div ssa_635 = ffma! ssa_632, ssa_486, ssa_634 vec1 32 div ssa_636 = ffma! ssa_633, ssa_489, ssa_635 vec1 32 div ssa_637 = fadd! ssa_636, ssa_492 vec1 32 div ssa_638 = fmul! ssa_631, ssa_484 vec1 32 div ssa_639 = ffma! ssa_632, ssa_487, ssa_638 vec1 32 div ssa_640 = ffma! ssa_633, ssa_490, ssa_639 vec1 32 div ssa_641 = fadd! ssa_640, ssa_493 vec1 32 div ssa_642 = fmul! ssa_631, ssa_485 vec1 32 div ssa_643 = ffma! ssa_632, ssa_488, ssa_642 vec1 32 div ssa_644 = ffma! ssa_633, ssa_491, ssa_643 vec1 32 div ssa_645 = fadd! 
ssa_644, ssa_494 break /* succs: block_15 */ } else { block block_9: /* preds: block_7 */ /* succs: block_10 */ } block block_10: /* preds: block_9 */ /* succs: block_11 */ } block block_11: /* preds: block_6 block_10 */ vec1 32 con ssa_646 = iadd ssa_515, ssa_4 vec1 32 con ssa_647 = uge32 ssa_646, ssa_509 /* succs: block_12 block_13 */ if ssa_647 { block block_12: /* preds: block_11 */ break /* succs: block_15 */ } else { block block_13: /* preds: block_11 */ /* succs: block_14 */ } block block_14: /* preds: block_13 */ vec1 32 con ssa_648 = ishl ssa_515, ssa_7 vec1 32 con ssa_649 = iadd ssa_648, ssa_76 vec1 32 con ssa_650 = ior ssa_649, ssa_4 vec1 32 con ssa_651 = ior ssa_649, ssa_7 vec1 32 con ssa_652 = ior ssa_649, ssa_6 vec1 32 con ssa_653 = ishl ssa_515, ssa_19 vec1 32 con ssa_654 = iadd ssa_653, ssa_74 vec1 32 con ssa_655 = ishl ssa_650, ssa_76 vec1 32 con ssa_656 = ishl ssa_651, ssa_76 vec1 32 con ssa_657 = ishl ssa_652, ssa_76 /* succs: block_2 */ } block block_15: /* preds: block_4 block_8 block_12 */ vec1 32 div ssa_658 = phi block_4: ssa_598, block_8: ssa_637, block_12: ssa_498 vec1 32 div ssa_659 = phi block_4: ssa_599, block_8: ssa_641, block_12: ssa_502 vec1 32 div ssa_660 = phi block_4: ssa_600, block_8: ssa_645, block_12: ssa_506 /* succs: block_17 */ } else { block block_16: /* preds: block_0 */ /* succs: block_17 */ } block block_17: /* preds: block_15 block_16 */ vec1 32 div ssa_661 = phi block_16: ssa_498, block_15: ssa_658 vec1 32 div ssa_662 = phi block_16: ssa_502, block_15: ssa_659 vec1 32 div ssa_663 = phi block_16: ssa_506, block_15: ssa_660 vec1 32 div ssa_664 = fmul! ssa_661, ssa_57.x vec1 32 div ssa_665 = ffma! ssa_662, ssa_57.y, ssa_664 vec1 32 div ssa_666 = ffma! ssa_663, ssa_57.z, ssa_665 vec1 32 div ssa_667 = fadd! ssa_666, ssa_71 vec1 32 div ssa_668 = fmul! ssa_661, ssa_58.x vec1 32 div ssa_669 = ffma! ssa_662, ssa_58.y, ssa_668 vec1 32 div ssa_670 = ffma! ssa_663, ssa_58.z, ssa_669 vec1 32 div ssa_671 = fadd! ssa_670, ssa_72 vec1 32 div ssa_672 = fmul! ssa_661, ssa_59.x vec1 32 div ssa_673 = ffma! ssa_662, ssa_59.y, ssa_672 vec1 32 div ssa_674 = ffma! ssa_663, ssa_59.z, ssa_673 vec1 32 div ssa_675 = fadd! ssa_674, ssa_73 vec1 32 con ssa_676 = load_const (0x000001a0 = 0.000000) vec4 32 con ssa_677 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_676) (access=80, align_mul=1073741824, align_offset=416) vec1 32 div ssa_678 = fmul ssa_677.x, ssa_667 vec1 32 div ssa_679 = ffma ssa_671, ssa_677.y, ssa_678 vec1 32 div ssa_680 = ffma ssa_675, ssa_677.z, ssa_679 vec1 32 div ssa_681 = fadd ssa_680, ssa_677.w vec1 32 con ssa_682 = load_const (0x00000250 = 0.000000) vec4 32 con ssa_683 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_682) (access=80, align_mul=1073741824, align_offset=592) vec1 32 div ssa_684 = fadd ssa_683.x, ssa_667 vec1 32 div ssa_685 = fadd ssa_683.y, ssa_671 vec1 32 div ssa_686 = fadd ssa_683.z, ssa_675 vec1 32 con ssa_687 = load_const (0x000001c0 = 0.000000) vec16 32 con ssa_688 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_687) (access=80, align_mul=1073741824, align_offset=448) vec1 32 div ssa_689 = fmul! ssa_688.a, ssa_667 vec1 32 div ssa_690 = ffma! ssa_671, ssa_688.b, ssa_689 vec1 32 div ssa_691 = ffma! ssa_675, ssa_688.c, ssa_690 vec1 32 div ssa_692 = fadd! ssa_691, ssa_688.d vec1 32 div ssa_693 = fmul! ssa_688.e, ssa_667 vec1 32 div ssa_694 = ffma! ssa_671, ssa_688.f, ssa_693 vec1 32 div ssa_695 = ffma! ssa_675, ssa_688.g, ssa_694 vec1 32 div ssa_696 = fadd! ssa_695, ssa_688.h vec1 32 div ssa_697 = fmul! 
ssa_688.i, ssa_667 vec1 32 div ssa_698 = ffma! ssa_671, ssa_688.j, ssa_697 vec1 32 div ssa_699 = ffma! ssa_675, ssa_688.k, ssa_698 vec1 32 div ssa_700 = fadd! ssa_699, ssa_688.l vec1 32 div ssa_701 = fmul! ssa_688.m, ssa_667 vec1 32 div ssa_702 = ffma! ssa_671, ssa_688.n, ssa_701 vec1 32 div ssa_703 = ffma! ssa_675, ssa_688.o, ssa_702 vec1 32 div ssa_704 = fadd! ssa_703, ssa_688.p vec1 32 div ssa_705 = iadd ssa_101, ssa_48.z vec1 32 div ssa_706 = iand ssa_705, ssa_103 vec1 32 div ssa_707 = intrinsic load_ssbo (ssa_24, ssa_706) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_708 = iadd ssa_706, ssa_76 vec1 32 div ssa_709 = intrinsic load_ssbo (ssa_24, ssa_708) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_710 = iadd ssa_706, ssa_9 vec1 32 div ssa_711 = intrinsic load_ssbo (ssa_24, ssa_710) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_712 = iadd ssa_706, ssa_41 vec1 32 div ssa_713 = intrinsic load_ssbo (ssa_24, ssa_712) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_714 = iadd ssa_706, ssa_8 vec1 32 div ssa_715 = intrinsic load_ssbo (ssa_24, ssa_714) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_716 = iadd ssa_114, ssa_706 vec1 32 div ssa_717 = intrinsic load_ssbo (ssa_24, ssa_716) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_718 = iadd ssa_13, ssa_706 vec1 32 div ssa_719 = intrinsic load_ssbo (ssa_24, ssa_718) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_720 = iadd ssa_119, ssa_706 vec1 32 div ssa_721 = intrinsic load_ssbo (ssa_24, ssa_720) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_722 = iadd ssa_706, ssa_10 vec1 32 div ssa_723 = intrinsic load_ssbo (ssa_24, ssa_722) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_724 = iadd ssa_124, ssa_706 vec1 32 div ssa_725 = intrinsic load_ssbo (ssa_24, ssa_724) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_726 = iadd ssa_127, ssa_706 vec1 32 div ssa_727 = intrinsic load_ssbo (ssa_24, ssa_726) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_728 = iadd ssa_130, ssa_706 vec1 32 div ssa_729 = intrinsic load_ssbo (ssa_24, ssa_728) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_730 = iadd ssa_133, ssa_48.z vec1 32 div ssa_731 = iand ssa_730, ssa_103 vec1 32 div ssa_732 = intrinsic load_ssbo (ssa_24, ssa_731) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_733 = iadd ssa_731, ssa_76 vec1 32 div ssa_734 = intrinsic load_ssbo (ssa_24, ssa_733) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_735 = iadd ssa_731, ssa_9 vec1 32 div ssa_736 = intrinsic load_ssbo (ssa_24, ssa_735) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_737 = iadd ssa_731, ssa_41 vec1 32 div ssa_738 = intrinsic load_ssbo (ssa_24, ssa_737) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_739 = iadd ssa_731, ssa_8 vec1 32 div ssa_740 = intrinsic load_ssbo (ssa_24, ssa_739) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_741 = iadd ssa_114, ssa_731 vec1 32 div ssa_742 = intrinsic load_ssbo (ssa_24, ssa_741) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_743 = iadd ssa_13, ssa_731 vec1 32 div ssa_744 = intrinsic load_ssbo (ssa_24, ssa_743) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_745 = iadd ssa_119, ssa_731 vec1 32 div ssa_746 = intrinsic load_ssbo (ssa_24, ssa_745) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_747 = iadd ssa_731, ssa_10 vec1 32 div ssa_748 = intrinsic load_ssbo (ssa_24, ssa_747) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_749 = iadd ssa_124, ssa_731 vec1 32 
div ssa_750 = intrinsic load_ssbo (ssa_24, ssa_749) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_751 = iadd ssa_127, ssa_731 vec1 32 div ssa_752 = intrinsic load_ssbo (ssa_24, ssa_751) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_753 = iadd ssa_130, ssa_731 vec1 32 div ssa_754 = intrinsic load_ssbo (ssa_24, ssa_753) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_755 = fmul ssa_707, ssa_93 vec1 32 div ssa_756 = fmul ssa_715, ssa_93 vec1 32 div ssa_757 = fmul ssa_723, ssa_93 vec1 32 div ssa_758 = fmul ssa_709, ssa_93 vec1 32 div ssa_759 = fmul ssa_717, ssa_93 vec1 32 div ssa_760 = fmul ssa_725, ssa_93 vec1 32 div ssa_761 = fmul ssa_711, ssa_93 vec1 32 div ssa_762 = fmul ssa_719, ssa_93 vec1 32 div ssa_763 = fmul ssa_727, ssa_93 vec1 32 div ssa_764 = fmul ssa_713, ssa_93 vec1 32 div ssa_765 = fmul ssa_721, ssa_93 vec1 32 div ssa_766 = fmul ssa_729, ssa_93 vec1 32 div ssa_767 = ffma ssa_732, ssa_97, ssa_755 vec1 32 div ssa_768 = ffma ssa_740, ssa_97, ssa_756 vec1 32 div ssa_769 = ffma ssa_748, ssa_97, ssa_757 vec1 32 div ssa_770 = ffma ssa_734, ssa_97, ssa_758 vec1 32 div ssa_771 = ffma ssa_742, ssa_97, ssa_759 vec1 32 div ssa_772 = ffma ssa_750, ssa_97, ssa_760 vec1 32 div ssa_773 = ffma ssa_736, ssa_97, ssa_761 vec1 32 div ssa_774 = ffma ssa_744, ssa_97, ssa_762 vec1 32 div ssa_775 = ffma ssa_752, ssa_97, ssa_763 vec1 32 div ssa_776 = ffma ssa_738, ssa_97, ssa_764 vec1 32 div ssa_777 = ffma ssa_746, ssa_97, ssa_765 vec1 32 div ssa_778 = ffma ssa_754, ssa_97, ssa_766 vec1 32 div ssa_779 = iadd ssa_195, ssa_48.z vec1 32 div ssa_780 = iand ssa_779, ssa_103 vec1 32 div ssa_781 = intrinsic load_ssbo (ssa_24, ssa_780) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_782 = iadd ssa_780, ssa_76 vec1 32 div ssa_783 = intrinsic load_ssbo (ssa_24, ssa_782) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_784 = iadd ssa_780, ssa_9 vec1 32 div ssa_785 = intrinsic load_ssbo (ssa_24, ssa_784) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_786 = iadd ssa_780, ssa_41 vec1 32 div ssa_787 = intrinsic load_ssbo (ssa_24, ssa_786) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_788 = iadd ssa_780, ssa_8 vec1 32 div ssa_789 = intrinsic load_ssbo (ssa_24, ssa_788) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_790 = iadd ssa_114, ssa_780 vec1 32 div ssa_791 = intrinsic load_ssbo (ssa_24, ssa_790) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_792 = iadd ssa_13, ssa_780 vec1 32 div ssa_793 = intrinsic load_ssbo (ssa_24, ssa_792) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_794 = iadd ssa_119, ssa_780 vec1 32 div ssa_795 = intrinsic load_ssbo (ssa_24, ssa_794) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_796 = iadd ssa_780, ssa_10 vec1 32 div ssa_797 = intrinsic load_ssbo (ssa_24, ssa_796) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_798 = iadd ssa_124, ssa_780 vec1 32 div ssa_799 = intrinsic load_ssbo (ssa_24, ssa_798) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_800 = iadd ssa_127, ssa_780 vec1 32 div ssa_801 = intrinsic load_ssbo (ssa_24, ssa_800) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_802 = iadd ssa_130, ssa_780 vec1 32 div ssa_803 = intrinsic load_ssbo (ssa_24, ssa_802) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_804 = iadd ssa_221, ssa_48.z vec1 32 div ssa_805 = iand ssa_804, ssa_103 vec1 32 div ssa_806 = intrinsic load_ssbo (ssa_24, ssa_805) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_807 = iadd ssa_805, ssa_76 vec1 32 div ssa_808 = 
intrinsic load_ssbo (ssa_24, ssa_807) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_809 = iadd ssa_805, ssa_9 vec1 32 div ssa_810 = intrinsic load_ssbo (ssa_24, ssa_809) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_811 = iadd ssa_805, ssa_41 vec1 32 div ssa_812 = intrinsic load_ssbo (ssa_24, ssa_811) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_813 = iadd ssa_805, ssa_8 vec1 32 div ssa_814 = intrinsic load_ssbo (ssa_24, ssa_813) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_815 = iadd ssa_114, ssa_805 vec1 32 div ssa_816 = intrinsic load_ssbo (ssa_24, ssa_815) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_817 = iadd ssa_13, ssa_805 vec1 32 div ssa_818 = intrinsic load_ssbo (ssa_24, ssa_817) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_819 = iadd ssa_119, ssa_805 vec1 32 div ssa_820 = intrinsic load_ssbo (ssa_24, ssa_819) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_821 = iadd ssa_805, ssa_10 vec1 32 div ssa_822 = intrinsic load_ssbo (ssa_24, ssa_821) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_823 = iadd ssa_124, ssa_805 vec1 32 div ssa_824 = intrinsic load_ssbo (ssa_24, ssa_823) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_825 = iadd ssa_127, ssa_805 vec1 32 div ssa_826 = intrinsic load_ssbo (ssa_24, ssa_825) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_827 = iadd ssa_130, ssa_805 vec1 32 div ssa_828 = intrinsic load_ssbo (ssa_24, ssa_827) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_829 = ffma ssa_781, ssa_94, ssa_767 vec1 32 div ssa_830 = ffma ssa_789, ssa_94, ssa_768 vec1 32 div ssa_831 = ffma ssa_797, ssa_94, ssa_769 vec1 32 div ssa_832 = ffma ssa_783, ssa_94, ssa_770 vec1 32 div ssa_833 = ffma ssa_791, ssa_94, ssa_771 vec1 32 div ssa_834 = ffma ssa_799, ssa_94, ssa_772 vec1 32 div ssa_835 = ffma ssa_785, ssa_94, ssa_773 vec1 32 div ssa_836 = ffma ssa_793, ssa_94, ssa_774 vec1 32 div ssa_837 = ffma ssa_801, ssa_94, ssa_775 vec1 32 div ssa_838 = ffma ssa_787, ssa_94, ssa_776 vec1 32 div ssa_839 = ffma ssa_795, ssa_94, ssa_777 vec1 32 div ssa_840 = ffma ssa_803, ssa_94, ssa_778 vec1 32 div ssa_841 = ffma ssa_806, ssa_98, ssa_829 vec1 32 div ssa_842 = ffma ssa_814, ssa_98, ssa_830 vec1 32 div ssa_843 = ffma ssa_822, ssa_98, ssa_831 vec1 32 div ssa_844 = ffma ssa_808, ssa_98, ssa_832 vec1 32 div ssa_845 = ffma ssa_816, ssa_98, ssa_833 vec1 32 div ssa_846 = ffma ssa_824, ssa_98, ssa_834 vec1 32 div ssa_847 = ffma ssa_810, ssa_98, ssa_835 vec1 32 div ssa_848 = ffma ssa_818, ssa_98, ssa_836 vec1 32 div ssa_849 = ffma ssa_826, ssa_98, ssa_837 vec1 32 div ssa_850 = ffma ssa_812, ssa_98, ssa_838 vec1 32 div ssa_851 = ffma ssa_820, ssa_98, ssa_839 vec1 32 div ssa_852 = ffma ssa_828, ssa_98, ssa_840 vec1 32 div ssa_853 = iadd ssa_295, ssa_48.z vec1 32 div ssa_854 = iand ssa_853, ssa_103 vec1 32 div ssa_855 = intrinsic load_ssbo (ssa_24, ssa_854) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_856 = iadd ssa_854, ssa_76 vec1 32 div ssa_857 = intrinsic load_ssbo (ssa_24, ssa_856) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_858 = iadd ssa_854, ssa_9 vec1 32 div ssa_859 = intrinsic load_ssbo (ssa_24, ssa_858) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_860 = iadd ssa_854, ssa_41 vec1 32 div ssa_861 = intrinsic load_ssbo (ssa_24, ssa_860) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_862 = iadd ssa_854, ssa_8 vec1 32 div ssa_863 = intrinsic load_ssbo (ssa_24, ssa_862) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_864 
= iadd ssa_114, ssa_854 vec1 32 div ssa_865 = intrinsic load_ssbo (ssa_24, ssa_864) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_866 = iadd ssa_13, ssa_854 vec1 32 div ssa_867 = intrinsic load_ssbo (ssa_24, ssa_866) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_868 = iadd ssa_119, ssa_854 vec1 32 div ssa_869 = intrinsic load_ssbo (ssa_24, ssa_868) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_870 = iadd ssa_854, ssa_10 vec1 32 div ssa_871 = intrinsic load_ssbo (ssa_24, ssa_870) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_872 = iadd ssa_124, ssa_854 vec1 32 div ssa_873 = intrinsic load_ssbo (ssa_24, ssa_872) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_874 = iadd ssa_127, ssa_854 vec1 32 div ssa_875 = intrinsic load_ssbo (ssa_24, ssa_874) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_876 = iadd ssa_130, ssa_854 vec1 32 div ssa_877 = intrinsic load_ssbo (ssa_24, ssa_876) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_878 = iadd ssa_321, ssa_48.z vec1 32 div ssa_879 = iand ssa_878, ssa_103 vec1 32 div ssa_880 = intrinsic load_ssbo (ssa_24, ssa_879) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_881 = iadd ssa_879, ssa_76 vec1 32 div ssa_882 = intrinsic load_ssbo (ssa_24, ssa_881) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_883 = iadd ssa_879, ssa_9 vec1 32 div ssa_884 = intrinsic load_ssbo (ssa_24, ssa_883) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_885 = iadd ssa_879, ssa_41 vec1 32 div ssa_886 = intrinsic load_ssbo (ssa_24, ssa_885) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_887 = iadd ssa_879, ssa_8 vec1 32 div ssa_888 = intrinsic load_ssbo (ssa_24, ssa_887) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_889 = iadd ssa_114, ssa_879 vec1 32 div ssa_890 = intrinsic load_ssbo (ssa_24, ssa_889) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_891 = iadd ssa_13, ssa_879 vec1 32 div ssa_892 = intrinsic load_ssbo (ssa_24, ssa_891) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_893 = iadd ssa_119, ssa_879 vec1 32 div ssa_894 = intrinsic load_ssbo (ssa_24, ssa_893) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_895 = iadd ssa_879, ssa_10 vec1 32 div ssa_896 = intrinsic load_ssbo (ssa_24, ssa_895) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_897 = iadd ssa_124, ssa_879 vec1 32 div ssa_898 = intrinsic load_ssbo (ssa_24, ssa_897) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_899 = iadd ssa_127, ssa_879 vec1 32 div ssa_900 = intrinsic load_ssbo (ssa_24, ssa_899) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_901 = iadd ssa_130, ssa_879 vec1 32 div ssa_902 = intrinsic load_ssbo (ssa_24, ssa_901) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_903 = ffma ssa_855, ssa_95, ssa_841 vec1 32 div ssa_904 = ffma ssa_863, ssa_95, ssa_842 vec1 32 div ssa_905 = ffma ssa_871, ssa_95, ssa_843 vec1 32 div ssa_906 = ffma ssa_857, ssa_95, ssa_844 vec1 32 div ssa_907 = ffma ssa_865, ssa_95, ssa_845 vec1 32 div ssa_908 = ffma ssa_873, ssa_95, ssa_846 vec1 32 div ssa_909 = ffma ssa_859, ssa_95, ssa_847 vec1 32 div ssa_910 = ffma ssa_867, ssa_95, ssa_848 vec1 32 div ssa_911 = ffma ssa_875, ssa_95, ssa_849 vec1 32 div ssa_912 = ffma ssa_861, ssa_95, ssa_850 vec1 32 div ssa_913 = ffma ssa_869, ssa_95, ssa_851 vec1 32 div ssa_914 = ffma ssa_877, ssa_95, ssa_852 vec1 32 div ssa_915 = ffma ssa_880, ssa_99, ssa_903 vec1 32 div ssa_916 = ffma ssa_888, ssa_99, ssa_904 vec1 32 div ssa_917 = ffma ssa_896, ssa_99, ssa_905 vec1 
32 div ssa_918 = ffma ssa_882, ssa_99, ssa_906 vec1 32 div ssa_919 = ffma ssa_890, ssa_99, ssa_907 vec1 32 div ssa_920 = ffma ssa_898, ssa_99, ssa_908 vec1 32 div ssa_921 = ffma ssa_884, ssa_99, ssa_909 vec1 32 div ssa_922 = ffma ssa_892, ssa_99, ssa_910 vec1 32 div ssa_923 = ffma ssa_900, ssa_99, ssa_911 vec1 32 div ssa_924 = ffma ssa_886, ssa_99, ssa_912 vec1 32 div ssa_925 = ffma ssa_894, ssa_99, ssa_913 vec1 32 div ssa_926 = ffma ssa_902, ssa_99, ssa_914 vec1 32 div ssa_927 = iadd ssa_395, ssa_48.z vec1 32 div ssa_928 = iand ssa_927, ssa_103 vec1 32 div ssa_929 = intrinsic load_ssbo (ssa_24, ssa_928) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_930 = iadd ssa_928, ssa_76 vec1 32 div ssa_931 = intrinsic load_ssbo (ssa_24, ssa_930) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_932 = iadd ssa_928, ssa_9 vec1 32 div ssa_933 = intrinsic load_ssbo (ssa_24, ssa_932) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_934 = iadd ssa_928, ssa_41 vec1 32 div ssa_935 = intrinsic load_ssbo (ssa_24, ssa_934) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_936 = iadd ssa_928, ssa_8 vec1 32 div ssa_937 = intrinsic load_ssbo (ssa_24, ssa_936) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_938 = iadd ssa_114, ssa_928 vec1 32 div ssa_939 = intrinsic load_ssbo (ssa_24, ssa_938) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_940 = iadd ssa_13, ssa_928 vec1 32 div ssa_941 = intrinsic load_ssbo (ssa_24, ssa_940) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_942 = iadd ssa_119, ssa_928 vec1 32 div ssa_943 = intrinsic load_ssbo (ssa_24, ssa_942) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_944 = iadd ssa_928, ssa_10 vec1 32 div ssa_945 = intrinsic load_ssbo (ssa_24, ssa_944) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_946 = iadd ssa_124, ssa_928 vec1 32 div ssa_947 = intrinsic load_ssbo (ssa_24, ssa_946) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_948 = iadd ssa_127, ssa_928 vec1 32 div ssa_949 = intrinsic load_ssbo (ssa_24, ssa_948) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_950 = iadd ssa_130, ssa_928 vec1 32 div ssa_951 = intrinsic load_ssbo (ssa_24, ssa_950) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_952 = iadd ssa_421, ssa_48.z vec1 32 div ssa_953 = iand ssa_952, ssa_103 vec1 32 div ssa_954 = intrinsic load_ssbo (ssa_24, ssa_953) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_955 = iadd ssa_953, ssa_76 vec1 32 div ssa_956 = intrinsic load_ssbo (ssa_24, ssa_955) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_957 = iadd ssa_953, ssa_9 vec1 32 div ssa_958 = intrinsic load_ssbo (ssa_24, ssa_957) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_959 = iadd ssa_953, ssa_41 vec1 32 div ssa_960 = intrinsic load_ssbo (ssa_24, ssa_959) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_961 = iadd ssa_953, ssa_8 vec1 32 div ssa_962 = intrinsic load_ssbo (ssa_24, ssa_961) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_963 = iadd ssa_114, ssa_953 vec1 32 div ssa_964 = intrinsic load_ssbo (ssa_24, ssa_963) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_965 = iadd ssa_13, ssa_953 vec1 32 div ssa_966 = intrinsic load_ssbo (ssa_24, ssa_965) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_967 = iadd ssa_119, ssa_953 vec1 32 div ssa_968 = intrinsic load_ssbo (ssa_24, ssa_967) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_969 = iadd ssa_953, ssa_10 vec1 32 div ssa_970 = intrinsic load_ssbo (ssa_24, ssa_969) (access=80, 
align_mul=4, align_offset=0) vec1 32 div ssa_971 = iadd ssa_124, ssa_953 vec1 32 div ssa_972 = intrinsic load_ssbo (ssa_24, ssa_971) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_973 = iadd ssa_127, ssa_953 vec1 32 div ssa_974 = intrinsic load_ssbo (ssa_24, ssa_973) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_975 = iadd ssa_130, ssa_953 vec1 32 div ssa_976 = intrinsic load_ssbo (ssa_24, ssa_975) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_977 = ffma ssa_929, ssa_96, ssa_915 vec1 32 div ssa_978 = ffma ssa_937, ssa_96, ssa_916 vec1 32 div ssa_979 = ffma ssa_945, ssa_96, ssa_917 vec1 32 div ssa_980 = ffma ssa_931, ssa_96, ssa_918 vec1 32 div ssa_981 = ffma ssa_939, ssa_96, ssa_919 vec1 32 div ssa_982 = ffma ssa_947, ssa_96, ssa_920 vec1 32 div ssa_983 = ffma ssa_933, ssa_96, ssa_921 vec1 32 div ssa_984 = ffma ssa_941, ssa_96, ssa_922 vec1 32 div ssa_985 = ffma ssa_949, ssa_96, ssa_923 vec1 32 div ssa_986 = ffma ssa_935, ssa_96, ssa_924 vec1 32 div ssa_987 = ffma ssa_943, ssa_96, ssa_925 vec1 32 div ssa_988 = ffma ssa_951, ssa_96, ssa_926 vec1 32 div ssa_989 = ffma ssa_954, ssa_100, ssa_977 vec1 32 div ssa_990 = ffma ssa_962, ssa_100, ssa_978 vec1 32 div ssa_991 = ffma ssa_970, ssa_100, ssa_979 vec1 32 div ssa_992 = ffma ssa_956, ssa_100, ssa_980 vec1 32 div ssa_993 = ffma ssa_964, ssa_100, ssa_981 vec1 32 div ssa_994 = ffma ssa_972, ssa_100, ssa_982 vec1 32 div ssa_995 = ffma ssa_958, ssa_100, ssa_983 vec1 32 div ssa_996 = ffma ssa_966, ssa_100, ssa_984 vec1 32 div ssa_997 = ffma ssa_974, ssa_100, ssa_985 vec1 32 div ssa_998 = ffma ssa_960, ssa_100, ssa_986 vec1 32 div ssa_999 = ffma ssa_968, ssa_100, ssa_987 vec1 32 div ssa_1000 = ffma ssa_976, ssa_100, ssa_988 vec1 32 div ssa_1001 = fmul ssa_989, ssa_80 vec1 32 div ssa_1002 = ffma ssa_81, ssa_992, ssa_1001 vec1 32 div ssa_1003 = ffma ssa_82, ssa_995, ssa_1002 vec1 32 div ssa_1004 = fadd ssa_1003, ssa_998 vec1 32 div ssa_1005 = fmul ssa_990, ssa_80 vec1 32 div ssa_1006 = ffma ssa_81, ssa_993, ssa_1005 vec1 32 div ssa_1007 = ffma ssa_82, ssa_996, ssa_1006 vec1 32 div ssa_1008 = fadd ssa_1007, ssa_999 vec1 32 div ssa_1009 = fmul ssa_991, ssa_80 vec1 32 div ssa_1010 = ffma ssa_81, ssa_994, ssa_1009 vec1 32 div ssa_1011 = ffma ssa_82, ssa_997, ssa_1010 vec1 32 div ssa_1012 = fadd ssa_1011, ssa_1000 /* succs: block_18 block_33 */ if ssa_513 { block block_18: /* preds: block_17 */ vec1 32 con ssa_1013 = load_const (0x00000030 = 0.000000) /* succs: block_19 */ loop { block block_19: /* preds: block_18 block_31 */ vec1 32 con ssa_1014 = phi block_18: ssa_0, block_31: ssa_1128 vec1 32 con ssa_1015 = phi block_18: ssa_0, block_31: ssa_1136 vec1 32 con ssa_1016 = phi block_18: ssa_8, block_31: ssa_1137 vec1 32 con ssa_1017 = phi block_18: ssa_10, block_31: ssa_1138 vec1 32 con ssa_1018 = phi block_18: ssa_1013, block_31: ssa_1139 vec4 32 con ssa_1019 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_1015) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_1020 = unpack_half_2x16_split_x ssa_1019.x vec1 32 con ssa_1021 = unpack_half_2x16_split_y ssa_1019.x vec1 32 con ssa_1022 = unpack_half_2x16_split_x ssa_1019.y vec1 32 con ssa_1023 = unpack_half_2x16_split_y ssa_1019.y vec1 32 con ssa_1024 = unpack_half_2x16_split_x ssa_1019.z vec1 32 con ssa_1025 = unpack_half_2x16_split_y ssa_1019.z vec1 32 con ssa_1026 = unpack_half_2x16_split_x ssa_1019.w vec1 32 con ssa_1027 = unpack_half_2x16_split_y ssa_1019.w vec4 32 con ssa_1028 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_1016) (access=80, 
align_mul=4, align_offset=0) vec4 32 con ssa_1029 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_1017) (access=80, align_mul=4, align_offset=0) vec4 32 con ssa_1030 = intrinsic load_ssbo_uniform_block_intel (ssa_29, ssa_1018) (access=80, align_mul=4, align_offset=0) vec1 32 con ssa_1031 = fneg ssa_1028.x vec1 32 div ssa_1032 = fadd ssa_80, ssa_1031 vec1 32 con ssa_1033 = fneg ssa_1028.y vec1 32 div ssa_1034 = fadd ssa_81, ssa_1033 vec1 32 con ssa_1035 = fneg ssa_1028.z vec1 32 div ssa_1036 = fadd ssa_82, ssa_1035 vec1 32 con ssa_1037 = fadd ssa_1029.x, ssa_1031 vec1 32 con ssa_1038 = fadd ssa_1029.y, ssa_1033 vec1 32 con ssa_1039 = fadd ssa_1029.z, ssa_1035 vec1 32 div ssa_1040 = fmul ssa_1036, ssa_1039 vec1 32 div ssa_1041 = ffma ssa_1034, ssa_1038, ssa_1040 vec1 32 div ssa_1042 = ffma ssa_1032, ssa_1037, ssa_1041 vec1 32 con ssa_1043 = fmul ssa_1039, ssa_1039 vec1 32 con ssa_1044 = ffma ssa_1038, ssa_1038, ssa_1043 vec1 32 con ssa_1045 = ffma ssa_1037, ssa_1037, ssa_1044 vec1 32 con ssa_1046 = frcp ssa_1045 vec1 32 div ssa_1047 = fmul ssa_1042, ssa_1046 vec1 32 div ssa_1048 = fneg ssa_80 vec1 32 div ssa_1049 = fadd ssa_1028.x, ssa_1048 vec1 32 div ssa_1050 = ffma ssa_1047, ssa_1037, ssa_1049 vec1 32 div ssa_1051 = fneg ssa_81 vec1 32 div ssa_1052 = fadd ssa_1028.y, ssa_1051 vec1 32 div ssa_1053 = ffma ssa_1047, ssa_1038, ssa_1052 vec1 32 div ssa_1054 = fneg ssa_82 vec1 32 div ssa_1055 = fadd ssa_1028.z, ssa_1054 vec1 32 div ssa_1056 = ffma ssa_1047, ssa_1039, ssa_1055 vec1 32 div ssa_1057 = fmul ssa_1056, ssa_1056 vec1 32 div ssa_1058 = ffma ssa_1053, ssa_1053, ssa_1057 vec1 32 div ssa_1059 = ffma ssa_1050, ssa_1050, ssa_1058 vec1 32 con ssa_1060 = fmul ssa_1028.w, ssa_1028.w vec1 32 div ssa_1061 = ffma ssa_1026, ssa_82, ssa_1027 vec1 32 div ssa_1062 = ffma ssa_1025, ssa_81, ssa_1061 vec1 32 div ssa_1063 = ffma ssa_1024, ssa_80, ssa_1062 vec1 32 con ssa_1064 = ieq32 ssa_1030.w, ssa_0 vec1 32 div ssa_1065 = fge32! ssa_1060, ssa_1059 /* succs: block_20 block_24 */ if ssa_1064 { block block_20: /* preds: block_19 */ vec1 32 div ssa_1066 = ffma ssa_1022, ssa_82, ssa_1023 vec1 32 div ssa_1067 = ffma ssa_1021, ssa_81, ssa_1066 vec1 32 div ssa_1068 = ffma ssa_1020, ssa_80, ssa_1067 vec1 32 div ssa_1069 = flt32! ssa_0, ssa_1068 vec1 32 div ssa_1070 = iand ssa_1065, ssa_1069 vec1 32 div ssa_1071 = flt32! ssa_0, ssa_1063 vec1 32 div ssa_1072 = iand ssa_1070, ssa_1071 /* succs: block_21 block_22 */ if ssa_1072 { block block_21: /* preds: block_20 */ vec1 32 div ssa_1073 = fmul ssa_1020, ssa_1068 vec1 32 div ssa_1074 = fmul ssa_1021, ssa_1068 vec1 32 div ssa_1075 = fmul ssa_1022, ssa_1068 vec1 32 div ssa_1076 = fmul ssa_1075, ssa_1075 vec1 32 div ssa_1077 = ffma ssa_1074, ssa_1074, ssa_1076 vec1 32 div ssa_1078 = ffma ssa_1073, ssa_1073, ssa_1077 vec1 32 div ssa_1079 = fmul ssa_1078, ssa_5 vec1 32 div ssa_1080 = fsat! ssa_1079 vec1 32 div ssa_1081 = fneg ssa_1004 vec1 32 div ssa_1082 = fadd ssa_1030.x, ssa_1081 vec1 32 div ssa_1083 = fneg ssa_1008 vec1 32 div ssa_1084 = fadd ssa_1030.y, ssa_1083 vec1 32 div ssa_1085 = fneg ssa_1012 vec1 32 div ssa_1086 = fadd ssa_1030.z, ssa_1085 vec1 32 div ssa_1087 = ffma ssa_1080, ssa_1082, ssa_1004 vec1 32 div ssa_1088 = ffma ssa_1080, ssa_1084, ssa_1008 vec1 32 div ssa_1089 = ffma ssa_1080, ssa_1086, ssa_1012 break /* succs: block_32 */ } else { block block_22: /* preds: block_20 */ /* succs: block_23 */ } block block_23: /* preds: block_22 */ /* succs: block_28 */ } else { block block_24: /* preds: block_19 */ vec1 32 div ssa_1090 = fge32! 
ssa_1063, ssa_0 vec1 32 div ssa_1091 = iand ssa_1065, ssa_1090 /* succs: block_25 block_26 */ if ssa_1091 { block block_25: /* preds: block_24 */ vec1 32 div ssa_1092 = ffma ssa_1047, ssa_1037, ssa_1028.x vec1 32 div ssa_1093 = ffma ssa_1047, ssa_1038, ssa_1028.y vec1 32 div ssa_1094 = ffma ssa_1047, ssa_1039, ssa_1028.z vec1 32 con ssa_1095 = fneg ssa_1023 vec1 32 div ssa_1096 = ffma ssa_1054, ssa_1022, ssa_1095 vec1 32 div ssa_1097 = ffma ssa_1051, ssa_1021, ssa_1096 vec1 32 div ssa_1098 = ffma ssa_1048, ssa_1020, ssa_1097 vec1 32 div ssa_1099 = ffma ssa_1098, ssa_1020, ssa_80 vec1 32 div ssa_1100 = ffma ssa_1098, ssa_1021, ssa_81 vec1 32 div ssa_1101 = ffma ssa_1098, ssa_1022, ssa_82 vec1 32 div ssa_1102 = fneg ssa_1092 vec1 32 div ssa_1103 = fadd ssa_1099, ssa_1102 vec1 32 div ssa_1104 = fneg ssa_1093 vec1 32 div ssa_1105 = fadd ssa_1100, ssa_1104 vec1 32 div ssa_1106 = fneg ssa_1094 vec1 32 div ssa_1107 = fadd ssa_1101, ssa_1106 vec1 32 div ssa_1108 = fmul ssa_1107, ssa_1107 vec1 32 div ssa_1109 = ffma ssa_1105, ssa_1105, ssa_1108 vec1 32 div ssa_1110 = ffma ssa_1103, ssa_1103, ssa_1109 vec1 32 div ssa_1111 = frsq ssa_1110 vec1 32 div ssa_1112 = fmul ssa_1111, ssa_1028.w vec1 32 div ssa_1113 = ffma ssa_1112, ssa_1103, ssa_1092 vec1 32 div ssa_1114 = ffma ssa_1112, ssa_1105, ssa_1093 vec1 32 div ssa_1115 = ffma ssa_1112, ssa_1107, ssa_1094 vec1 32 div ssa_1116 = fmul ssa_1113, ssa_989 vec1 32 div ssa_1117 = ffma ssa_1114, ssa_992, ssa_1116 vec1 32 div ssa_1118 = ffma ssa_1115, ssa_995, ssa_1117 vec1 32 div ssa_1119 = fadd ssa_1118, ssa_998 vec1 32 div ssa_1120 = fmul ssa_1113, ssa_990 vec1 32 div ssa_1121 = ffma ssa_1114, ssa_993, ssa_1120 vec1 32 div ssa_1122 = ffma ssa_1115, ssa_996, ssa_1121 vec1 32 div ssa_1123 = fadd ssa_1122, ssa_999 vec1 32 div ssa_1124 = fmul ssa_1113, ssa_991 vec1 32 div ssa_1125 = ffma ssa_1114, ssa_994, ssa_1124 vec1 32 div ssa_1126 = ffma ssa_1115, ssa_997, ssa_1125 vec1 32 div ssa_1127 = fadd ssa_1126, ssa_1000 break /* succs: block_32 */ } else { block block_26: /* preds: block_24 */ /* succs: block_27 */ } block block_27: /* preds: block_26 */ /* succs: block_28 */ } block block_28: /* preds: block_23 block_27 */ vec1 32 con ssa_1128 = iadd ssa_1014, ssa_4 vec1 32 con ssa_1129 = uge32 ssa_1128, ssa_509 /* succs: block_29 block_30 */ if ssa_1129 { block block_29: /* preds: block_28 */ break /* succs: block_32 */ } else { block block_30: /* preds: block_28 */ /* succs: block_31 */ } block block_31: /* preds: block_30 */ vec1 32 con ssa_1130 = ishl ssa_1014, ssa_7 vec1 32 con ssa_1131 = iadd ssa_1130, ssa_76 vec1 32 con ssa_1132 = ior ssa_1131, ssa_4 vec1 32 con ssa_1133 = ior ssa_1131, ssa_7 vec1 32 con ssa_1134 = ior ssa_1131, ssa_6 vec1 32 con ssa_1135 = ishl ssa_1014, ssa_19 vec1 32 con ssa_1136 = iadd ssa_1135, ssa_74 vec1 32 con ssa_1137 = ishl ssa_1132, ssa_76 vec1 32 con ssa_1138 = ishl ssa_1133, ssa_76 vec1 32 con ssa_1139 = ishl ssa_1134, ssa_76 /* succs: block_19 */ } block block_32: /* preds: block_21 block_25 block_29 */ vec1 32 div ssa_1140 = phi block_21: ssa_1087, block_25: ssa_1119, block_29: ssa_1004 vec1 32 div ssa_1141 = phi block_21: ssa_1088, block_25: ssa_1123, block_29: ssa_1008 vec1 32 div ssa_1142 = phi block_21: ssa_1089, block_25: ssa_1127, block_29: ssa_1012 /* succs: block_34 */ } else { block block_33: /* preds: block_17 */ /* succs: block_34 */ } block block_34: /* preds: block_32 block_33 */ vec1 32 div ssa_1143 = phi block_33: ssa_1004, block_32: ssa_1140 vec1 32 div ssa_1144 = phi block_33: ssa_1008, block_32: ssa_1141 
vec1 32 div ssa_1145 = phi block_33: ssa_1012, block_32: ssa_1142 vec8 32 con ssa_1146 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_10) (access=80, align_mul=1073741824, align_offset=32) vec4 32 con ssa_1147 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_74) (access=80, align_mul=1073741824, align_offset=64) vec1 32 con ssa_1148 = iadd ssa_1146.d, ssa_62 vec1 32 con ssa_1149 = iadd ssa_1146.h, ssa_64 vec1 32 con ssa_1150 = iadd ssa_1147.w, ssa_66 vec1 32 con ssa_1151 = i2f32 ssa_1148 vec1 32 con ssa_1152 = i2f32 ssa_1149 vec1 32 con ssa_1153 = i2f32 ssa_1150 vec1 32 div ssa_1154 = fmul ssa_1146.a, ssa_1143 vec1 32 div ssa_1155 = ffma ssa_1146.b, ssa_1144, ssa_1154 vec1 32 div ssa_1156 = ffma ssa_1146.c, ssa_1145, ssa_1155 vec1 32 div ssa_1157 = ffma ssa_1151, ssa_3, ssa_1156 vec1 32 div ssa_1158 = fmul ssa_1146.e, ssa_1143 vec1 32 div ssa_1159 = ffma ssa_1146.f, ssa_1144, ssa_1158 vec1 32 div ssa_1160 = ffma ssa_1146.g, ssa_1145, ssa_1159 vec1 32 div ssa_1161 = ffma ssa_1152, ssa_3, ssa_1160 vec1 32 div ssa_1162 = fmul ssa_1147.x, ssa_1143 vec1 32 div ssa_1163 = ffma ssa_1147.y, ssa_1144, ssa_1162 vec1 32 div ssa_1164 = ffma ssa_1147.z, ssa_1145, ssa_1163 vec1 32 div ssa_1165 = ffma ssa_1153, ssa_3, ssa_1164 vec1 32 con ssa_1166 = load_const (0x00000100 = 0.000000) vec16 32 con ssa_1167 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_1166) (access=80, align_mul=1073741824, align_offset=256) vec1 32 div ssa_1168 = fmul ssa_1167.a, ssa_1157 vec1 32 div ssa_1169 = ffma ssa_1161, ssa_1167.b, ssa_1168 vec1 32 div ssa_1170 = ffma ssa_1165, ssa_1167.c, ssa_1169 vec1 32 div ssa_1171 = fadd ssa_1170, ssa_1167.d vec1 32 div ssa_1172 = fmul ssa_1167.e, ssa_1157 vec1 32 div ssa_1173 = ffma ssa_1161, ssa_1167.f, ssa_1172 vec1 32 div ssa_1174 = ffma ssa_1165, ssa_1167.g, ssa_1173 vec1 32 div ssa_1175 = fadd ssa_1174, ssa_1167.h vec1 32 div ssa_1176 = fmul ssa_1167.i, ssa_1157 vec1 32 div ssa_1177 = ffma ssa_1161, ssa_1167.j, ssa_1176 vec1 32 div ssa_1178 = ffma ssa_1165, ssa_1167.k, ssa_1177 vec1 32 div ssa_1179 = fadd ssa_1178, ssa_1167.l vec1 32 div ssa_1180 = fmul ssa_1167.m, ssa_1157 vec1 32 div ssa_1181 = ffma ssa_1161, ssa_1167.n, ssa_1180 vec1 32 div ssa_1182 = ffma ssa_1165, ssa_1167.o, ssa_1181 vec1 32 div ssa_1183 = fadd ssa_1182, ssa_1167.p vec1 32 div ssa_1184 = fmul ssa_483, ssa_57.x vec1 32 div ssa_1185 = ffma ssa_484, ssa_57.y, ssa_1184 vec1 32 div ssa_1186 = ffma ssa_485, ssa_57.z, ssa_1185 vec1 32 div ssa_1187 = fmul ssa_483, ssa_58.x vec1 32 div ssa_1188 = ffma ssa_484, ssa_58.y, ssa_1187 vec1 32 div ssa_1189 = ffma ssa_485, ssa_58.z, ssa_1188 vec1 32 div ssa_1190 = fmul ssa_483, ssa_59.x vec1 32 div ssa_1191 = ffma ssa_484, ssa_59.y, ssa_1190 vec1 32 div ssa_1192 = ffma ssa_485, ssa_59.z, ssa_1191 vec1 32 div ssa_1193 = fmul ssa_486, ssa_57.x vec1 32 div ssa_1194 = ffma ssa_487, ssa_57.y, ssa_1193 vec1 32 div ssa_1195 = ffma ssa_488, ssa_57.z, ssa_1194 vec1 32 div ssa_1196 = fmul ssa_486, ssa_58.x vec1 32 div ssa_1197 = ffma ssa_487, ssa_58.y, ssa_1196 vec1 32 div ssa_1198 = ffma ssa_488, ssa_58.z, ssa_1197 vec1 32 div ssa_1199 = fmul ssa_486, ssa_59.x vec1 32 div ssa_1200 = ffma ssa_487, ssa_59.y, ssa_1199 vec1 32 div ssa_1201 = ffma ssa_488, ssa_59.z, ssa_1200 vec1 32 div ssa_1202 = fmul ssa_489, ssa_57.x vec1 32 div ssa_1203 = ffma ssa_490, ssa_57.y, ssa_1202 vec1 32 div ssa_1204 = ffma ssa_491, ssa_57.z, ssa_1203 vec1 32 div ssa_1205 = fmul ssa_489, ssa_58.x vec1 32 div ssa_1206 = ffma ssa_490, ssa_58.y, ssa_1205 vec1 32 div ssa_1207 = ffma ssa_491, 
ssa_58.z, ssa_1206 vec1 32 div ssa_1208 = fmul ssa_489, ssa_59.x vec1 32 div ssa_1209 = ffma ssa_490, ssa_59.y, ssa_1208 vec1 32 div ssa_1210 = ffma ssa_491, ssa_59.z, ssa_1209 vec1 32 div ssa_1211 = ffma ssa_50.x, ssa_2, ssa_1 vec1 32 div ssa_1212 = ffma ssa_50.y, ssa_2, ssa_1 vec1 32 div ssa_1213 = ffma ssa_50.z, ssa_2, ssa_1 vec1 32 div ssa_1214 = ffma ssa_49.x, ssa_2, ssa_1 vec1 32 div ssa_1215 = ffma ssa_49.y, ssa_2, ssa_1 vec1 32 div ssa_1216 = ffma ssa_49.z, ssa_2, ssa_1 vec1 32 div ssa_1217 = ffma ssa_49.w, ssa_2, ssa_1 vec1 32 div ssa_1218 = fneg ssa_1213 vec1 32 div ssa_1219 = fmul ssa_1218, ssa_1215 vec1 32 div ssa_1220 = ffma ssa_1212, ssa_1216, ssa_1219 vec1 32 div ssa_1221 = fneg ssa_1211 vec1 32 div ssa_1222 = fmul ssa_1221, ssa_1216 vec1 32 div ssa_1223 = ffma ssa_1213, ssa_1214, ssa_1222 vec1 32 div ssa_1224 = fneg ssa_1212 vec1 32 div ssa_1225 = fmul ssa_1224, ssa_1214 vec1 32 div ssa_1226 = ffma ssa_1211, ssa_1215, ssa_1225 vec1 32 div ssa_1227 = fmul ssa_1220, ssa_1217 vec1 32 div ssa_1228 = fmul ssa_1223, ssa_1217 vec1 32 div ssa_1229 = fmul ssa_1226, ssa_1217 vec1 32 div ssa_1230 = fmul ssa_1186, ssa_1214 vec1 32 div ssa_1231 = ffma ssa_1215, ssa_1195, ssa_1230 vec1 32 div ssa_1232 = ffma ssa_1216, ssa_1204, ssa_1231 vec1 32 div ssa_1233 = fmul ssa_1189, ssa_1214 vec1 32 div ssa_1234 = ffma ssa_1215, ssa_1198, ssa_1233 vec1 32 div ssa_1235 = ffma ssa_1216, ssa_1207, ssa_1234 vec1 32 div ssa_1236 = fmul ssa_1192, ssa_1214 vec1 32 div ssa_1237 = ffma ssa_1215, ssa_1201, ssa_1236 vec1 32 div ssa_1238 = ffma ssa_1216, ssa_1210, ssa_1237 vec1 32 div ssa_1239 = fmul ssa_1186, ssa_1227 vec1 32 div ssa_1240 = ffma ssa_1228, ssa_1195, ssa_1239 vec1 32 div ssa_1241 = ffma ssa_1229, ssa_1204, ssa_1240 vec1 32 div ssa_1242 = fmul ssa_1189, ssa_1227 vec1 32 div ssa_1243 = ffma ssa_1228, ssa_1198, ssa_1242 vec1 32 div ssa_1244 = ffma ssa_1229, ssa_1207, ssa_1243 vec1 32 div ssa_1245 = fmul ssa_1192, ssa_1227 vec1 32 div ssa_1246 = ffma ssa_1228, ssa_1201, ssa_1245 vec1 32 div ssa_1247 = ffma ssa_1229, ssa_1210, ssa_1246 vec1 32 div ssa_1248 = fmul ssa_1186, ssa_1211 vec1 32 div ssa_1249 = ffma ssa_1212, ssa_1195, ssa_1248 vec1 32 div ssa_1250 = ffma ssa_1213, ssa_1204, ssa_1249 vec1 32 div ssa_1251 = fmul ssa_1189, ssa_1211 vec1 32 div ssa_1252 = ffma ssa_1212, ssa_1198, ssa_1251 vec1 32 div ssa_1253 = ffma ssa_1213, ssa_1207, ssa_1252 vec1 32 div ssa_1254 = fmul ssa_1192, ssa_1211 vec1 32 div ssa_1255 = ffma ssa_1212, ssa_1201, ssa_1254 vec1 32 div ssa_1256 = ffma ssa_1213, ssa_1210, ssa_1255 vec1 32 div ssa_1257 = fmul ssa_1256, ssa_1256 vec1 32 div ssa_1258 = ffma ssa_1253, ssa_1253, ssa_1257 vec1 32 div ssa_1259 = ffma ssa_1250, ssa_1250, ssa_1258 vec1 32 div ssa_1260 = frsq ssa_1259 vec1 32 div ssa_1261 = fmul ssa_1260, ssa_1250 vec1 32 div ssa_1262 = fmul ssa_1260, ssa_1253 vec1 32 div ssa_1263 = fmul ssa_1260, ssa_1256 vec1 32 div ssa_1264 = fmul ssa_1247, ssa_1247 vec1 32 div ssa_1265 = ffma ssa_1244, ssa_1244, ssa_1264 vec1 32 div ssa_1266 = ffma ssa_1241, ssa_1241, ssa_1265 vec1 32 div ssa_1267 = frsq ssa_1266 vec1 32 div ssa_1268 = fmul ssa_1267, ssa_1241 vec1 32 div ssa_1269 = fmul ssa_1267, ssa_1244 vec1 32 div ssa_1270 = fmul ssa_1267, ssa_1247 vec1 32 div ssa_1271 = fmul ssa_1238, ssa_1238 vec1 32 div ssa_1272 = ffma ssa_1235, ssa_1235, ssa_1271 vec1 32 div ssa_1273 = ffma ssa_1232, ssa_1232, ssa_1272 vec1 32 div ssa_1274 = frsq ssa_1273 vec1 32 div ssa_1275 = fmul ssa_1274, ssa_1232 vec1 32 div ssa_1276 = fmul ssa_1274, ssa_1235 vec1 32 div ssa_1277 = fmul ssa_1274, 
ssa_1238
vec1 32 con ssa_1278 = load_const (0x00000320 = 0.000000)
vec8 32 con ssa_1279 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_1278) (access=80, align_mul=1073741824, align_offset=800)
vec1 32 con ssa_1280 = fneg ssa_1279.e
vec1 32 div ssa_1281 = ffma ssa_1280, ssa_704, ssa_692
vec1 32 con ssa_1282 = fneg ssa_1279.f
vec1 32 div ssa_1283 = ffma ssa_1282, ssa_704, ssa_696
vec1 32 div ssa_1284 = ffma ssa_1279.c, ssa_686, ssa_1279.d
vec1 32 div ssa_1285 = ffma ssa_1279.b, ssa_685, ssa_1284
vec1 32 div ssa_1286 = ffma ssa_1279.a, ssa_684, ssa_1285
vec4 32 div ssa_1287 = vec4 ssa_692, ssa_696, ssa_700, ssa_704
intrinsic store_output (ssa_1287, ssa_0) (base=0, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_POS slots=1 /*67108992*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Position */
intrinsic store_output (ssa_1286, ssa_0) (base=17, wrmask=x /*1*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_CLIP_DIST0 slots=1 /*145*/, xfb() /*0*/, xfb2() /*0*/)
vec4 32 div ssa_1288 = vec4 ssa_51.x, ssa_51.y, ssa_1261, ssa_1262
intrinsic store_output (ssa_1288, ssa_0) (base=33, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR1 slots=1 /*161*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD */
vec4 32 div ssa_1289 = vec4 ssa_1263, ssa_1268, ssa_1269, ssa_1270
intrinsic store_output (ssa_1289, ssa_0) (base=34, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR2 slots=1 /*162*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_1 */
vec3 32 div ssa_1290 = vec3 ssa_1275, ssa_1276, ssa_1277
intrinsic store_output (ssa_1290, ssa_0) (base=35, wrmask=xyz /*7*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR3 slots=1 /*163*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_2 */
vec2 32 div ssa_1291 = vec2 ssa_684, ssa_685
intrinsic store_output (ssa_1291, ssa_0) (base=36, wrmask=xy /*3*/, component=2, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR4 slots=1 /*164*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_3 */
vec4 32 div ssa_1292 = vec4 ssa_686, ssa_681, ssa_1281, ssa_1283
intrinsic store_output (ssa_1292, ssa_0) (base=37, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR5 slots=1 /*165*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_4 */
vec4 32 div ssa_1293 = vec4 ssa_700, ssa_704, ssa_1171, ssa_1175
intrinsic store_output (ssa_1293, ssa_0) (base=38, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR6 slots=1 /*166*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_5 */
vec3 32 div ssa_1294 = vec3 ssa_1179, ssa_1183, ssa_47
intrinsic store_output (ssa_1294, ssa_0) (base=39, wrmask=xyz /*7*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR7 slots=1 /*167*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_6 */
/* succs: block_35 */
block block_35:
}

NIR (final form) for vertex shader:
shader: MESA_SHADER_VERTEX
source_sha1: {0x44ea7faa, 0x2435d792, 0xdd0f3938, 0x25cede04, 0x94926cc7}
stage: 0
next_stage: 0
num_ssbos: 2
inputs_read: 15-22,25-28,30
outputs_written: 0,17,24,33-39
subgroup_size: 2
clip_distance_array_size: 1
divergence_analysis_run: true
bit_sizes_float: 0x20
bit_sizes_int: 0x21
separate_shader: true
inputs: 0
outputs: 0
uniforms: 256
decl_var push_const INTERP_MODE_NONE RootConstants registers
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @0 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly BindlessCBV[] @1 (~0, 0, 2)
decl_var shader_in INTERP_MODE_NONE vec3 POSITION (VERT_ATTRIB_GENERIC0.xyz, 15, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES (VERT_ATTRIB_GENERIC1.xyzw, 16, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT (VERT_ATTRIB_GENERIC2.xyzw, 17, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 BLENDINDICES_1 (VERT_ATTRIB_GENERIC3.xyzw, 18, 0)
decl_var shader_in INTERP_MODE_NONE vec4 BLENDWEIGHT_1 (VERT_ATTRIB_GENERIC4.xyzw, 19, 0)
decl_var shader_in INTERP_MODE_NONE vec2 TEXCOORD (VERT_ATTRIB_GENERIC5.xy, 20, 0)
decl_var shader_in INTERP_MODE_NONE vec3 NORMAL (VERT_ATTRIB_GENERIC6.xyz, 21, 0)
decl_var shader_in INTERP_MODE_NONE vec4 TANGENT (VERT_ATTRIB_GENERIC7.xyzw, 22, 0)
decl_var shader_in INTERP_MODE_NONE vec4[3] INSTANCE_TRANSFORM (VERT_ATTRIB_GENERIC10.xyzw, 25, 0)
decl_var shader_in INTERP_MODE_NONE uvec4 INSTANCE_SKINNING_DATA (VERT_ATTRIB_GENERIC13.xyzw, 28, 0)
decl_var shader_in INTERP_MODE_NONE float LIGHT_BLOCKER_INTENSITY (VERT_ATTRIB_GENERIC15.x, 30, 0)
decl_var invariant shader_out INTERP_MODE_NONE vec4 SV_Position (VARYING_SLOT_POS.xyzw, 0, 0)
decl_var shader_out INTERP_MODE_NONE float[1] @2 (VARYING_SLOT_CLIP_DIST0.x, 17, 0) compact
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD@3 (VARYING_SLOT_VAR1.xyzw, 33, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_1 (VARYING_SLOT_VAR2.xyzw, 34, 0)
decl_var shader_out INTERP_MODE_NONE vec3 TEXCOORD_2 (VARYING_SLOT_VAR3.xyz, 35, 0)
decl_var shader_out INTERP_MODE_NONE vec2 TEXCOORD_3 (VARYING_SLOT_VAR4.zw, 36, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_4 (VARYING_SLOT_VAR5.xyzw, 37, 0)
decl_var shader_out INTERP_MODE_NONE vec4 TEXCOORD_5 (VARYING_SLOT_VAR6.xyzw, 38, 0)
decl_var shader_out INTERP_MODE_NONE vec3 TEXCOORD_6 (VARYING_SLOT_VAR7.xyz, 39, 0)
decl_function main (0 params)

impl main {
decl_reg vec1 32 div r8
decl_reg vec1 32 div r9
decl_reg vec1 32 div r10
decl_reg vec1 32 con r11
decl_reg vec1 32 con r12
decl_reg vec1 32 con r13
decl_reg vec1 32 con r14
decl_reg vec1 32 con r15
decl_reg vec1 32 con r16
decl_reg vec1 32 div r17
decl_reg vec1 32 div r18
decl_reg vec1 32 div r19
decl_reg vec1 32 con r20
decl_reg vec1 32 con r21
decl_reg vec1 32 con r22
decl_reg vec1 32 con r23
decl_reg vec1 32 con r24
decl_reg vec1 32 con r25
block block_0:
/* preds: */
vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000)
vec1 32 con ssa_1 = load_const (0xbf800000 = -1.000000)
vec1 32 con ssa_2 = load_const (0x40000000 = 2.000000)
vec1 32 con ssa_3 = load_const (0x37000000 = 0.000008)
vec1 32 con ssa_4 = load_const (0x00000001 = 0.000000)
vec1 32 con ssa_5 = load_const (0x4479ffff = 999.999939)
vec1 32 con ssa_6 = load_const (0x00000003 = 0.000000)
vec1 32 con ssa_7 = load_const (0x00000002 = 0.000000)
vec1 32 con ssa_8 = load_const (0x00000010 = 0.000000)
vec1 32 con ssa_9 = load_const (0x00000008 = 0.000000)
vec1 32 con ssa_10 = load_const (0x00000020 = 0.000000)
vec1 32 con ssa_11 = load_const (0x3f000000 = 0.500000)
vec1 32 con ssa_12 = load_const (0x3727c5ac = 0.000010)
vec1 32 con ssa_13 = load_const (0x00000018 = 0.000000)
vec1 32 con ssa_14 = intrinsic load_uniform (ssa_13) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_15 = iadd ssa_14, ssa_7
vec1 32 con ssa_16 = load_const (0x000f423f = 0.000000)
vec1 32 con ssa_17 = umin ssa_15, ssa_16
vec1 32 con ssa_18 = intrinsic load_uniform (ssa_0) (base=252, range=4, dest_type=uint /*4*/)
vec1 32 con ssa_19 = load_const (0x00000006 = 0.000000)
vec1 32 con ssa_20 = ishl ssa_17, ssa_19
vec1 32 con ssa_21 = load_const (0x00000080 = 0.000000)
vec1 32 con ssa_22 = iadd3 ssa_21, ssa_20, ssa_18
vec1 32 con ssa_23 = load_const (0xdeaddeed = -6264355898823540736.000000)
vec1 32 con ssa_24 = intrinsic resource_intel (ssa_23, ssa_22, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_25 = intrinsic load_uniform (ssa_8) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_26 = umin ssa_25, ssa_16
vec1 32 con ssa_27 = ishl ssa_26, ssa_19
vec1 32 con ssa_28 = iadd3 ssa_21, ssa_27, ssa_18
vec1 32 con ssa_29 = intrinsic resource_intel (ssa_23, ssa_28, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_30 = intrinsic load_uniform (ssa_9) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_31 = umin ssa_30, ssa_16
vec1 32 con ssa_32 = ishl ssa_31, ssa_19
vec1 32 con ssa_33 = iadd3 ssa_21, ssa_32, ssa_18
vec1 32 con ssa_34 = intrinsic resource_intel (ssa_23, ssa_33, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_35 = intrinsic load_uniform (ssa_0) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_36 = iadd ssa_35, ssa_4
vec1 32 con ssa_37 = umin ssa_36, ssa_16
vec1 32 con ssa_38 = ishl ssa_37, ssa_19
vec1 32 con ssa_39 = iadd3 ssa_21, ssa_38, ssa_18
vec1 32 con ssa_40 = intrinsic resource_intel (ssa_23, ssa_39, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_41 = load_const (0x0000000c = 0.000000)
vec1 32 con ssa_42 = intrinsic load_uniform (ssa_41) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_43 = umin ssa_42, ssa_16
vec1 32 con ssa_44 = ishl ssa_43, ssa_19
vec1 32 con ssa_45 = iadd3 ssa_21, ssa_44, ssa_18
vec1 32 con ssa_46 = intrinsic resource_intel (ssa_23, ssa_45, ssa_23) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 div ssa_47 = intrinsic load_input (ssa_0) (base=12, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC15 slots=1 /*158*/)
vec3 32 div ssa_48 = intrinsic load_input (ssa_0) (base=11, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC13 slots=1 /*156*/)
vec4 32 div ssa_49 = intrinsic load_input (ssa_0) (base=7, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC7 slots=1 /*150*/)
vec3 32 div ssa_50 = intrinsic load_input (ssa_0) (base=6, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC6 slots=1 /*149*/)
vec2 32 div ssa_51 = intrinsic load_input (ssa_0) (base=5, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC5 slots=1 /*148*/)
vec4 32 div ssa_52 = intrinsic load_input (ssa_0) (base=4, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC4 slots=1 /*147*/)
vec4 32 div ssa_53 = intrinsic load_input (ssa_0) (base=3, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC3 slots=1 /*146*/)
vec4 32 div ssa_54 = intrinsic load_input (ssa_0) (base=2, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC2 slots=1 /*145*/)
vec4 32 div ssa_55 = intrinsic load_input (ssa_0) (base=1, component=0, dest_type=uint32 /*36*/, io location=VERT_ATTRIB_GENERIC1 slots=1 /*144*/)
vec3 32 div ssa_56 = intrinsic load_input (ssa_0) (base=0, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC0 slots=1 /*143*/)
vec4 32 div ssa_57 = intrinsic load_input (ssa_0) (base=8, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC10 slots=1 /*153*/)
vec4 32 div ssa_58 = intrinsic load_input (ssa_0) (base=9, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC11 slots=1 /*154*/)
vec4 32 div ssa_59 = intrinsic load_input (ssa_0) (base=10, component=0, dest_type=float32 /*160*/, io location=VERT_ATTRIB_GENERIC12 slots=1 /*155*/)
vec1 32 con ssa_60 = load_const (0x00000260 = 0.000000)
vec4 32 con ssa_61 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_60) (access=80, align_mul=1073741824, align_offset=608)
vec1 32 con ssa_62 = ineg! ssa_61.x
vec1 32 div ssa_63 = iadd! ssa_57.w, ssa_62
vec1 32 con ssa_64 = ineg! ssa_61.y
vec1 32 div ssa_65 = iadd! ssa_58.w, ssa_64
vec1 32 con ssa_66 = ineg! ssa_61.z
vec1 32 div ssa_67 = iadd! ssa_59.w, ssa_66
vec1 32 div ssa_68 = i2f32! ssa_63
vec1 32 div ssa_69 = i2f32! ssa_65
vec1 32 div ssa_70 = i2f32! ssa_67
vec1 32 div ssa_71 = fmul! ssa_68, ssa_3
vec1 32 div ssa_72 = fmul! ssa_69, ssa_3
vec1 32 div ssa_73 = fmul! ssa_70, ssa_3
vec1 32 con ssa_74 = load_const (0x00000040 = 0.000000)
vec8 32 con ssa_75 = intrinsic load_ssbo_uniform_block_intel (ssa_34, ssa_74) (access=80, align_mul=1073741824, align_offset=64)
vec1 32 con ssa_76 = load_const (0x00000004 = 0.000000)
vec1 32 div ssa_77 = fmul! ssa_75.a, ssa_56.x
vec1 32 div ssa_78 = fmul! ssa_75.b, ssa_56.y
vec1 32 div ssa_79 = fmul! ssa_75.c, ssa_56.z
vec1 32 div ssa_80 = fadd! ssa_77, ssa_75.e
vec1 32 div ssa_81 = fadd! ssa_78, ssa_75.f
vec1 32 div ssa_82 = fadd! ssa_79, ssa_75.g
vec4 32 con ssa_83 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_8) (access=80, align_mul=1073741824, align_offset=16)
vec1 32 div ssa_84 = fadd! ssa_54.y, ssa_54.x
vec1 32 div ssa_85 = fadd! ssa_54.z, ssa_84
vec1 32 div ssa_86 = fadd! ssa_54.w, ssa_85
vec1 32 div ssa_87 = fadd! ssa_52.y, ssa_52.x
vec1 32 div ssa_88 = fadd! ssa_52.z, ssa_87
vec1 32 div ssa_89 = fadd! ssa_52.w, ssa_88
vec1 32 div ssa_90 = fadd! ssa_89, ssa_86
vec1 32 div ssa_91 = fmax! ssa_12, ssa_90
vec1 32 div ssa_92 = frcp! ssa_91
vec1 32 div ssa_93 = fmul! ssa_54.x, ssa_92
vec1 32 div ssa_94 = fmul! ssa_54.y, ssa_92
vec1 32 div ssa_95 = fmul! ssa_54.z, ssa_92
vec1 32 div ssa_96 = fmul! ssa_54.w, ssa_92
vec1 32 div ssa_97 = fmul! ssa_52.x, ssa_92
vec1 32 div ssa_98 = fmul! ssa_52.y, ssa_92
vec1 32 div ssa_99 = fmul! ssa_52.z, ssa_92
vec1 32 div ssa_100 = fmul! ssa_52.w, ssa_92
vec1 32 div ssa_101 = imul ssa_55.x, ssa_48.y
vec1 32 div ssa_102 = iadd ssa_101, ssa_48.x
vec1 32 con ssa_103 = load_const (0xfffffffc = -nan)
vec1 32 div ssa_104 = iand ssa_102, ssa_103
vec1 32 div ssa_105 = intrinsic load_ssbo (ssa_24, ssa_104) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_106 = iadd ssa_104, ssa_76
vec1 32 div ssa_107 = intrinsic load_ssbo (ssa_24, ssa_106) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_108 = iadd ssa_104, ssa_9
vec1 32 div ssa_109 = intrinsic load_ssbo (ssa_24, ssa_108) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_110 = iadd ssa_104, ssa_41
vec1 32 div ssa_111 = intrinsic load_ssbo (ssa_24, ssa_110) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_112 = iadd ssa_104, ssa_8
vec1 32 div ssa_113 = intrinsic load_ssbo (ssa_24, ssa_112) (access=80, align_mul=4, align_offset=0)
vec1 32 con ssa_114 = load_const (0x00000014 = 0.000000)
vec1 32 div ssa_115 = iadd ssa_114, ssa_104
vec1 32 div ssa_116 = intrinsic load_ssbo (ssa_24, ssa_115) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_117 = iadd ssa_13, ssa_104
vec1 32 div ssa_118 = intrinsic load_ssbo (ssa_24, ssa_117) (access=80, align_mul=4, align_offset=0)
vec1 32 con ssa_119 = load_const (0x0000001c = 0.000000)
vec1 32 div ssa_120 = iadd ssa_119, ssa_104
vec1 32 div ssa_121 = intrinsic load_ssbo (ssa_24, ssa_120) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_122 = iadd ssa_104, ssa_10
vec1 32 div ssa_123 = intrinsic load_ssbo (ssa_24, ssa_122) (access=80, align_mul=4, align_offset=0)
vec1 32 con ssa_124 = load_const (0x00000024 = 0.000000)
vec1 32 div ssa_125 = iadd ssa_124, ssa_104
vec1 32 div ssa_126 = intrinsic load_ssbo (ssa_24, ssa_125) (access=80, align_mul=4, align_offset=0)
vec1 32 con ssa_127 = load_const (0x00000028 = 0.000000)
vec1 32 div ssa_128 = iadd ssa_127, ssa_104
vec1 32 div ssa_129 = intrinsic load_ssbo (ssa_24, ssa_128) (access=80, align_mul=4, align_offset=0)
vec1 32 con ssa_130 = load_const (0x0000002c = 0.000000)
vec1 32 div ssa_131 = iadd ssa_130, ssa_104
vec1 32 div ssa_132 = intrinsic load_ssbo (ssa_24, ssa_131) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_133 = imul ssa_53.x, ssa_48.y
vec1 32 div ssa_134 = iadd ssa_133, ssa_48.x
vec1 32 div ssa_135 = iand ssa_134, ssa_103
vec1 32 div ssa_136 = intrinsic load_ssbo (ssa_24, ssa_135) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_137 = iadd ssa_135, ssa_76
vec1 32 div ssa_138 = intrinsic load_ssbo (ssa_24, ssa_137) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_139 = iadd ssa_135, ssa_9
vec1 32 div ssa_140 = intrinsic load_ssbo (ssa_24, ssa_139) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_141 = iadd ssa_135, ssa_41
vec1 32 div ssa_142 = intrinsic load_ssbo (ssa_24, ssa_141) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_143 = iadd ssa_135, ssa_8
vec1 32 div ssa_144 = intrinsic load_ssbo (ssa_24, ssa_143) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_145 = iadd ssa_114, ssa_135
vec1 32 div ssa_146 = intrinsic load_ssbo (ssa_24, ssa_145) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_147 = iadd ssa_13, ssa_135
vec1 32 div ssa_148 = intrinsic load_ssbo (ssa_24, ssa_147) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_149 = iadd ssa_119, ssa_135
vec1 32 div ssa_150 = intrinsic load_ssbo (ssa_24, ssa_149) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_151 = iadd ssa_135, ssa_10
vec1 32 div ssa_152 = intrinsic load_ssbo (ssa_24, ssa_151) (access=80, align_mul=4,
align_offset=0) vec1 32 div ssa_153 = iadd ssa_124, ssa_135 vec1 32 div ssa_154 = intrinsic load_ssbo (ssa_24, ssa_153) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_155 = iadd ssa_127, ssa_135 vec1 32 div ssa_156 = intrinsic load_ssbo (ssa_24, ssa_155) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_157 = iadd ssa_130, ssa_135 vec1 32 div ssa_158 = intrinsic load_ssbo (ssa_24, ssa_157) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_159 = fmul! ssa_105, ssa_93 vec1 32 div ssa_160 = fmul! ssa_113, ssa_93 vec1 32 div ssa_161 = fmul! ssa_123, ssa_93 vec1 32 div ssa_162 = fmul! ssa_107, ssa_93 vec1 32 div ssa_163 = fmul! ssa_116, ssa_93 vec1 32 div ssa_164 = fmul! ssa_126, ssa_93 vec1 32 div ssa_165 = fmul! ssa_109, ssa_93 vec1 32 div ssa_166 = fmul! ssa_118, ssa_93 vec1 32 div ssa_167 = fmul! ssa_129, ssa_93 vec1 32 div ssa_168 = fmul! ssa_111, ssa_93 vec1 32 div ssa_169 = fmul! ssa_121, ssa_93 vec1 32 div ssa_170 = fmul! ssa_132, ssa_93 vec1 32 div ssa_171 = fmul! ssa_136, ssa_97 vec1 32 div ssa_172 = fmul! ssa_144, ssa_97 vec1 32 div ssa_173 = fmul! ssa_152, ssa_97 vec1 32 div ssa_174 = fmul! ssa_138, ssa_97 vec1 32 div ssa_175 = fmul! ssa_146, ssa_97 vec1 32 div ssa_176 = fmul! ssa_154, ssa_97 vec1 32 div ssa_177 = fmul! ssa_140, ssa_97 vec1 32 div ssa_178 = fmul! ssa_148, ssa_97 vec1 32 div ssa_179 = fmul! ssa_156, ssa_97 vec1 32 div ssa_180 = fmul! ssa_142, ssa_97 vec1 32 div ssa_181 = fmul! ssa_150, ssa_97 vec1 32 div ssa_182 = fmul! ssa_158, ssa_97 vec1 32 div ssa_183 = fadd! ssa_171, ssa_159 vec1 32 div ssa_184 = fadd! ssa_172, ssa_160 vec1 32 div ssa_185 = fadd! ssa_173, ssa_161 vec1 32 div ssa_186 = fadd! ssa_174, ssa_162 vec1 32 div ssa_187 = fadd! ssa_175, ssa_163 vec1 32 div ssa_188 = fadd! ssa_176, ssa_164 vec1 32 div ssa_189 = fadd! ssa_177, ssa_165 vec1 32 div ssa_190 = fadd! ssa_178, ssa_166 vec1 32 div ssa_191 = fadd! ssa_179, ssa_167 vec1 32 div ssa_192 = fadd! ssa_180, ssa_168 vec1 32 div ssa_193 = fadd! ssa_181, ssa_169 vec1 32 div ssa_194 = fadd! 
ssa_182, ssa_170 vec1 32 div ssa_195 = imul ssa_55.y, ssa_48.y vec1 32 div ssa_196 = iadd ssa_195, ssa_48.x vec1 32 div ssa_197 = iand ssa_196, ssa_103 vec1 32 div ssa_198 = intrinsic load_ssbo (ssa_24, ssa_197) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_199 = iadd ssa_197, ssa_76 vec1 32 div ssa_200 = intrinsic load_ssbo (ssa_24, ssa_199) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_201 = iadd ssa_197, ssa_9 vec1 32 div ssa_202 = intrinsic load_ssbo (ssa_24, ssa_201) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_203 = iadd ssa_197, ssa_41 vec1 32 div ssa_204 = intrinsic load_ssbo (ssa_24, ssa_203) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_205 = iadd ssa_197, ssa_8 vec1 32 div ssa_206 = intrinsic load_ssbo (ssa_24, ssa_205) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_207 = iadd ssa_114, ssa_197 vec1 32 div ssa_208 = intrinsic load_ssbo (ssa_24, ssa_207) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_209 = iadd ssa_13, ssa_197 vec1 32 div ssa_210 = intrinsic load_ssbo (ssa_24, ssa_209) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_211 = iadd ssa_119, ssa_197 vec1 32 div ssa_212 = intrinsic load_ssbo (ssa_24, ssa_211) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_213 = iadd ssa_197, ssa_10 vec1 32 div ssa_214 = intrinsic load_ssbo (ssa_24, ssa_213) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_215 = iadd ssa_124, ssa_197 vec1 32 div ssa_216 = intrinsic load_ssbo (ssa_24, ssa_215) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_217 = iadd ssa_127, ssa_197 vec1 32 div ssa_218 = intrinsic load_ssbo (ssa_24, ssa_217) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_219 = iadd ssa_130, ssa_197 vec1 32 div ssa_220 = intrinsic load_ssbo (ssa_24, ssa_219) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_221 = imul ssa_53.y, ssa_48.y vec1 32 div ssa_222 = iadd ssa_221, ssa_48.x vec1 32 div ssa_223 = iand ssa_222, ssa_103 vec1 32 div ssa_224 = intrinsic load_ssbo (ssa_24, ssa_223) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_225 = iadd ssa_223, ssa_76 vec1 32 div ssa_226 = intrinsic load_ssbo (ssa_24, ssa_225) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_227 = iadd ssa_223, ssa_9 vec1 32 div ssa_228 = intrinsic load_ssbo (ssa_24, ssa_227) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_229 = iadd ssa_223, ssa_41 vec1 32 div ssa_230 = intrinsic load_ssbo (ssa_24, ssa_229) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_231 = iadd ssa_223, ssa_8 vec1 32 div ssa_232 = intrinsic load_ssbo (ssa_24, ssa_231) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_233 = iadd ssa_114, ssa_223 vec1 32 div ssa_234 = intrinsic load_ssbo (ssa_24, ssa_233) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_235 = iadd ssa_13, ssa_223 vec1 32 div ssa_236 = intrinsic load_ssbo (ssa_24, ssa_235) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_237 = iadd ssa_119, ssa_223 vec1 32 div ssa_238 = intrinsic load_ssbo (ssa_24, ssa_237) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_239 = iadd ssa_223, ssa_10 vec1 32 div ssa_240 = intrinsic load_ssbo (ssa_24, ssa_239) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_241 = iadd ssa_124, ssa_223 vec1 32 div ssa_242 = intrinsic load_ssbo (ssa_24, ssa_241) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_243 = iadd ssa_127, ssa_223 vec1 32 div ssa_244 = intrinsic load_ssbo (ssa_24, ssa_243) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_245 = iadd ssa_130, 
ssa_223 vec1 32 div ssa_246 = intrinsic load_ssbo (ssa_24, ssa_245) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_247 = fmul! ssa_198, ssa_94 vec1 32 div ssa_248 = fmul! ssa_206, ssa_94 vec1 32 div ssa_249 = fmul! ssa_214, ssa_94 vec1 32 div ssa_250 = fmul! ssa_200, ssa_94 vec1 32 div ssa_251 = fmul! ssa_208, ssa_94 vec1 32 div ssa_252 = fmul! ssa_216, ssa_94 vec1 32 div ssa_253 = fmul! ssa_202, ssa_94 vec1 32 div ssa_254 = fmul! ssa_210, ssa_94 vec1 32 div ssa_255 = fmul! ssa_218, ssa_94 vec1 32 div ssa_256 = fmul! ssa_204, ssa_94 vec1 32 div ssa_257 = fmul! ssa_212, ssa_94 vec1 32 div ssa_258 = fmul! ssa_220, ssa_94 vec1 32 div ssa_259 = fadd! ssa_183, ssa_247 vec1 32 div ssa_260 = fadd! ssa_184, ssa_248 vec1 32 div ssa_261 = fadd! ssa_185, ssa_249 vec1 32 div ssa_262 = fadd! ssa_186, ssa_250 vec1 32 div ssa_263 = fadd! ssa_187, ssa_251 vec1 32 div ssa_264 = fadd! ssa_188, ssa_252 vec1 32 div ssa_265 = fadd! ssa_189, ssa_253 vec1 32 div ssa_266 = fadd! ssa_190, ssa_254 vec1 32 div ssa_267 = fadd! ssa_191, ssa_255 vec1 32 div ssa_268 = fadd! ssa_192, ssa_256 vec1 32 div ssa_269 = fadd! ssa_193, ssa_257 vec1 32 div ssa_270 = fadd! ssa_194, ssa_258 vec1 32 div ssa_271 = fmul! ssa_224, ssa_98 vec1 32 div ssa_272 = fmul! ssa_232, ssa_98 vec1 32 div ssa_273 = fmul! ssa_240, ssa_98 vec1 32 div ssa_274 = fmul! ssa_226, ssa_98 vec1 32 div ssa_275 = fmul! ssa_234, ssa_98 vec1 32 div ssa_276 = fmul! ssa_242, ssa_98 vec1 32 div ssa_277 = fmul! ssa_228, ssa_98 vec1 32 div ssa_278 = fmul! ssa_236, ssa_98 vec1 32 div ssa_279 = fmul! ssa_244, ssa_98 vec1 32 div ssa_280 = fmul! ssa_230, ssa_98 vec1 32 div ssa_281 = fmul! ssa_238, ssa_98 vec1 32 div ssa_282 = fmul! ssa_246, ssa_98 vec1 32 div ssa_283 = fadd! ssa_259, ssa_271 vec1 32 div ssa_284 = fadd! ssa_260, ssa_272 vec1 32 div ssa_285 = fadd! ssa_261, ssa_273 vec1 32 div ssa_286 = fadd! ssa_262, ssa_274 vec1 32 div ssa_287 = fadd! ssa_263, ssa_275 vec1 32 div ssa_288 = fadd! ssa_264, ssa_276 vec1 32 div ssa_289 = fadd! ssa_265, ssa_277 vec1 32 div ssa_290 = fadd! ssa_266, ssa_278 vec1 32 div ssa_291 = fadd! ssa_267, ssa_279 vec1 32 div ssa_292 = fadd! ssa_268, ssa_280 vec1 32 div ssa_293 = fadd! ssa_269, ssa_281 vec1 32 div ssa_294 = fadd! 
ssa_270, ssa_282 vec1 32 div ssa_295 = imul ssa_55.z, ssa_48.y vec1 32 div ssa_296 = iadd ssa_295, ssa_48.x vec1 32 div ssa_297 = iand ssa_296, ssa_103 vec1 32 div ssa_298 = intrinsic load_ssbo (ssa_24, ssa_297) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_299 = iadd ssa_297, ssa_76 vec1 32 div ssa_300 = intrinsic load_ssbo (ssa_24, ssa_299) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_301 = iadd ssa_297, ssa_9 vec1 32 div ssa_302 = intrinsic load_ssbo (ssa_24, ssa_301) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_303 = iadd ssa_297, ssa_41 vec1 32 div ssa_304 = intrinsic load_ssbo (ssa_24, ssa_303) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_305 = iadd ssa_297, ssa_8 vec1 32 div ssa_306 = intrinsic load_ssbo (ssa_24, ssa_305) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_307 = iadd ssa_114, ssa_297 vec1 32 div ssa_308 = intrinsic load_ssbo (ssa_24, ssa_307) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_309 = iadd ssa_13, ssa_297 vec1 32 div ssa_310 = intrinsic load_ssbo (ssa_24, ssa_309) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_311 = iadd ssa_119, ssa_297 vec1 32 div ssa_312 = intrinsic load_ssbo (ssa_24, ssa_311) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_313 = iadd ssa_297, ssa_10 vec1 32 div ssa_314 = intrinsic load_ssbo (ssa_24, ssa_313) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_315 = iadd ssa_124, ssa_297 vec1 32 div ssa_316 = intrinsic load_ssbo (ssa_24, ssa_315) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_317 = iadd ssa_127, ssa_297 vec1 32 div ssa_318 = intrinsic load_ssbo (ssa_24, ssa_317) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_319 = iadd ssa_130, ssa_297 vec1 32 div ssa_320 = intrinsic load_ssbo (ssa_24, ssa_319) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_321 = imul ssa_53.z, ssa_48.y vec1 32 div ssa_322 = iadd ssa_321, ssa_48.x vec1 32 div ssa_323 = iand ssa_322, ssa_103 vec1 32 div ssa_324 = intrinsic load_ssbo (ssa_24, ssa_323) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_325 = iadd ssa_323, ssa_76 vec1 32 div ssa_326 = intrinsic load_ssbo (ssa_24, ssa_325) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_327 = iadd ssa_323, ssa_9 vec1 32 div ssa_328 = intrinsic load_ssbo (ssa_24, ssa_327) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_329 = iadd ssa_323, ssa_41 vec1 32 div ssa_330 = intrinsic load_ssbo (ssa_24, ssa_329) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_331 = iadd ssa_323, ssa_8 vec1 32 div ssa_332 = intrinsic load_ssbo (ssa_24, ssa_331) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_333 = iadd ssa_114, ssa_323 vec1 32 div ssa_334 = intrinsic load_ssbo (ssa_24, ssa_333) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_335 = iadd ssa_13, ssa_323 vec1 32 div ssa_336 = intrinsic load_ssbo (ssa_24, ssa_335) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_337 = iadd ssa_119, ssa_323 vec1 32 div ssa_338 = intrinsic load_ssbo (ssa_24, ssa_337) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_339 = iadd ssa_323, ssa_10 vec1 32 div ssa_340 = intrinsic load_ssbo (ssa_24, ssa_339) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_341 = iadd ssa_124, ssa_323 vec1 32 div ssa_342 = intrinsic load_ssbo (ssa_24, ssa_341) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_343 = iadd ssa_127, ssa_323 vec1 32 div ssa_344 = intrinsic load_ssbo (ssa_24, ssa_343) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_345 = iadd ssa_130, 
ssa_323 vec1 32 div ssa_346 = intrinsic load_ssbo (ssa_24, ssa_345) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_347 = fmul! ssa_298, ssa_95 vec1 32 div ssa_348 = fmul! ssa_306, ssa_95 vec1 32 div ssa_349 = fmul! ssa_314, ssa_95 vec1 32 div ssa_350 = fmul! ssa_300, ssa_95 vec1 32 div ssa_351 = fmul! ssa_308, ssa_95 vec1 32 div ssa_352 = fmul! ssa_316, ssa_95 vec1 32 div ssa_353 = fmul! ssa_302, ssa_95 vec1 32 div ssa_354 = fmul! ssa_310, ssa_95 vec1 32 div ssa_355 = fmul! ssa_318, ssa_95 vec1 32 div ssa_356 = fmul! ssa_304, ssa_95 vec1 32 div ssa_357 = fmul! ssa_312, ssa_95 vec1 32 div ssa_358 = fmul! ssa_320, ssa_95 vec1 32 div ssa_359 = fadd! ssa_283, ssa_347 vec1 32 div ssa_360 = fadd! ssa_284, ssa_348 vec1 32 div ssa_361 = fadd! ssa_285, ssa_349 vec1 32 div ssa_362 = fadd! ssa_286, ssa_350 vec1 32 div ssa_363 = fadd! ssa_287, ssa_351 vec1 32 div ssa_364 = fadd! ssa_288, ssa_352 vec1 32 div ssa_365 = fadd! ssa_289, ssa_353 vec1 32 div ssa_366 = fadd! ssa_290, ssa_354 vec1 32 div ssa_367 = fadd! ssa_291, ssa_355 vec1 32 div ssa_368 = fadd! ssa_292, ssa_356 vec1 32 div ssa_369 = fadd! ssa_293, ssa_357 vec1 32 div ssa_370 = fadd! ssa_294, ssa_358 vec1 32 div ssa_371 = fmul! ssa_324, ssa_99 vec1 32 div ssa_372 = fmul! ssa_332, ssa_99 vec1 32 div ssa_373 = fmul! ssa_340, ssa_99 vec1 32 div ssa_374 = fmul! ssa_326, ssa_99 vec1 32 div ssa_375 = fmul! ssa_334, ssa_99 vec1 32 div ssa_376 = fmul! ssa_342, ssa_99 vec1 32 div ssa_377 = fmul! ssa_328, ssa_99 vec1 32 div ssa_378 = fmul! ssa_336, ssa_99 vec1 32 div ssa_379 = fmul! ssa_344, ssa_99 vec1 32 div ssa_380 = fmul! ssa_330, ssa_99 vec1 32 div ssa_381 = fmul! ssa_338, ssa_99 vec1 32 div ssa_382 = fmul! ssa_346, ssa_99 vec1 32 div ssa_383 = fadd! ssa_359, ssa_371 vec1 32 div ssa_384 = fadd! ssa_360, ssa_372 vec1 32 div ssa_385 = fadd! ssa_361, ssa_373 vec1 32 div ssa_386 = fadd! ssa_362, ssa_374 vec1 32 div ssa_387 = fadd! ssa_363, ssa_375 vec1 32 div ssa_388 = fadd! ssa_364, ssa_376 vec1 32 div ssa_389 = fadd! ssa_365, ssa_377 vec1 32 div ssa_390 = fadd! ssa_366, ssa_378 vec1 32 div ssa_391 = fadd! ssa_367, ssa_379 vec1 32 div ssa_392 = fadd! ssa_368, ssa_380 vec1 32 div ssa_393 = fadd! ssa_369, ssa_381 vec1 32 div ssa_394 = fadd! 
ssa_370, ssa_382 vec1 32 div ssa_395 = imul ssa_55.w, ssa_48.y vec1 32 div ssa_396 = iadd ssa_395, ssa_48.x vec1 32 div ssa_397 = iand ssa_396, ssa_103 vec1 32 div ssa_398 = intrinsic load_ssbo (ssa_24, ssa_397) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_399 = iadd ssa_397, ssa_76 vec1 32 div ssa_400 = intrinsic load_ssbo (ssa_24, ssa_399) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_401 = iadd ssa_397, ssa_9 vec1 32 div ssa_402 = intrinsic load_ssbo (ssa_24, ssa_401) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_403 = iadd ssa_397, ssa_41 vec1 32 div ssa_404 = intrinsic load_ssbo (ssa_24, ssa_403) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_405 = iadd ssa_397, ssa_8 vec1 32 div ssa_406 = intrinsic load_ssbo (ssa_24, ssa_405) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_407 = iadd ssa_114, ssa_397 vec1 32 div ssa_408 = intrinsic load_ssbo (ssa_24, ssa_407) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_409 = iadd ssa_13, ssa_397 vec1 32 div ssa_410 = intrinsic load_ssbo (ssa_24, ssa_409) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_411 = iadd ssa_119, ssa_397 vec1 32 div ssa_412 = intrinsic load_ssbo (ssa_24, ssa_411) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_413 = iadd ssa_397, ssa_10 vec1 32 div ssa_414 = intrinsic load_ssbo (ssa_24, ssa_413) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_415 = iadd ssa_124, ssa_397 vec1 32 div ssa_416 = intrinsic load_ssbo (ssa_24, ssa_415) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_417 = iadd ssa_127, ssa_397 vec1 32 div ssa_418 = intrinsic load_ssbo (ssa_24, ssa_417) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_419 = iadd ssa_130, ssa_397 vec1 32 div ssa_420 = intrinsic load_ssbo (ssa_24, ssa_419) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_421 = imul ssa_53.w, ssa_48.y vec1 32 div ssa_422 = iadd ssa_421, ssa_48.x vec1 32 div ssa_423 = iand ssa_422, ssa_103 vec1 32 div ssa_424 = intrinsic load_ssbo (ssa_24, ssa_423) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_425 = iadd ssa_423, ssa_76 vec1 32 div ssa_426 = intrinsic load_ssbo (ssa_24, ssa_425) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_427 = iadd ssa_423, ssa_9 vec1 32 div ssa_428 = intrinsic load_ssbo (ssa_24, ssa_427) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_429 = iadd ssa_423, ssa_41 vec1 32 div ssa_430 = intrinsic load_ssbo (ssa_24, ssa_429) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_431 = iadd ssa_423, ssa_8 vec1 32 div ssa_432 = intrinsic load_ssbo (ssa_24, ssa_431) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_433 = iadd ssa_114, ssa_423 vec1 32 div ssa_434 = intrinsic load_ssbo (ssa_24, ssa_433) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_435 = iadd ssa_13, ssa_423 vec1 32 div ssa_436 = intrinsic load_ssbo (ssa_24, ssa_435) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_437 = iadd ssa_119, ssa_423 vec1 32 div ssa_438 = intrinsic load_ssbo (ssa_24, ssa_437) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_439 = iadd ssa_423, ssa_10 vec1 32 div ssa_440 = intrinsic load_ssbo (ssa_24, ssa_439) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_441 = iadd ssa_124, ssa_423 vec1 32 div ssa_442 = intrinsic load_ssbo (ssa_24, ssa_441) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_443 = iadd ssa_127, ssa_423 vec1 32 div ssa_444 = intrinsic load_ssbo (ssa_24, ssa_443) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_445 = iadd ssa_130, 
ssa_423 vec1 32 div ssa_446 = intrinsic load_ssbo (ssa_24, ssa_445) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_447 = fmul! ssa_398, ssa_96 vec1 32 div ssa_448 = fmul! ssa_406, ssa_96 vec1 32 div ssa_449 = fmul! ssa_414, ssa_96 vec1 32 div ssa_450 = fmul! ssa_400, ssa_96 vec1 32 div ssa_451 = fmul! ssa_408, ssa_96 vec1 32 div ssa_452 = fmul! ssa_416, ssa_96 vec1 32 div ssa_453 = fmul! ssa_402, ssa_96 vec1 32 div ssa_454 = fmul! ssa_410, ssa_96 vec1 32 div ssa_455 = fmul! ssa_418, ssa_96 vec1 32 div ssa_456 = fmul! ssa_404, ssa_96 vec1 32 div ssa_457 = fmul! ssa_412, ssa_96 vec1 32 div ssa_458 = fmul! ssa_420, ssa_96 vec1 32 div ssa_459 = fadd! ssa_383, ssa_447 vec1 32 div ssa_460 = fadd! ssa_384, ssa_448 vec1 32 div ssa_461 = fadd! ssa_385, ssa_449 vec1 32 div ssa_462 = fadd! ssa_386, ssa_450 vec1 32 div ssa_463 = fadd! ssa_387, ssa_451 vec1 32 div ssa_464 = fadd! ssa_388, ssa_452 vec1 32 div ssa_465 = fadd! ssa_389, ssa_453 vec1 32 div ssa_466 = fadd! ssa_390, ssa_454 vec1 32 div ssa_467 = fadd! ssa_391, ssa_455 vec1 32 div ssa_468 = fadd! ssa_392, ssa_456 vec1 32 div ssa_469 = fadd! ssa_393, ssa_457 vec1 32 div ssa_470 = fadd! ssa_394, ssa_458 vec1 32 div ssa_471 = fmul! ssa_424, ssa_100 vec1 32 div ssa_472 = fmul! ssa_432, ssa_100 vec1 32 div ssa_473 = fmul! ssa_440, ssa_100 vec1 32 div ssa_474 = fmul! ssa_426, ssa_100 vec1 32 div ssa_475 = fmul! ssa_434, ssa_100 vec1 32 div ssa_476 = fmul! ssa_442, ssa_100 vec1 32 div ssa_477 = fmul! ssa_428, ssa_100 vec1 32 div ssa_478 = fmul! ssa_436, ssa_100 vec1 32 div ssa_479 = fmul! ssa_444, ssa_100 vec1 32 div ssa_480 = fmul! ssa_430, ssa_100 vec1 32 div ssa_481 = fmul! ssa_438, ssa_100 vec1 32 div ssa_482 = fmul! ssa_446, ssa_100 vec1 32 div ssa_483 = fadd! ssa_459, ssa_471 vec1 32 div ssa_484 = fadd! ssa_460, ssa_472 vec1 32 div ssa_485 = fadd! ssa_461, ssa_473 vec1 32 div ssa_486 = fadd! ssa_462, ssa_474 vec1 32 div ssa_487 = fadd! ssa_463, ssa_475 vec1 32 div ssa_488 = fadd! ssa_464, ssa_476 vec1 32 div ssa_489 = fadd! ssa_465, ssa_477 vec1 32 div ssa_490 = fadd! ssa_466, ssa_478 vec1 32 div ssa_491 = fadd! ssa_467, ssa_479 vec1 32 div ssa_492 = fadd! ssa_468, ssa_480 vec1 32 div ssa_493 = fadd! ssa_469, ssa_481 vec1 32 div ssa_494 = fadd! ssa_470, ssa_482 vec1 32 div ssa_495 = fmul! ssa_483, ssa_80 vec1 32 div ssa_496 = ffma! ssa_81, ssa_486, ssa_495 vec1 32 div ssa_497 = ffma! ssa_82, ssa_489, ssa_496 div r8 = fadd! ssa_497, ssa_492 vec1 32 div ssa_499 = fmul! ssa_484, ssa_80 vec1 32 div ssa_500 = ffma! ssa_81, ssa_487, ssa_499 vec1 32 div ssa_501 = ffma! ssa_82, ssa_490, ssa_500 div r9 = fadd! ssa_501, ssa_493 vec1 32 div ssa_503 = fmul! ssa_485, ssa_80 vec1 32 div ssa_504 = ffma! ssa_81, ssa_488, ssa_503 vec1 32 div ssa_505 = ffma! ssa_82, ssa_491, ssa_504 div r10 = fadd! ssa_505, ssa_494 vec1 32 con ssa_507 = fmax! ssa_83.y, ssa_83.z vec1 32 con ssa_508 = f2u32 ssa_83.x vec1 32 con ssa_509 = umin ssa_508, ssa_9 vec1 32 con ssa_510 = ieq32 ssa_509, ssa_0 vec1 32 con ssa_511 = flt32! ssa_11, ssa_507 vec1 32 con ssa_512 = ior! ssa_510, ssa_511 vec1 32 con ssa_513 = inot! 
ssa_512
/* succs: block_1 block_16 */
if ssa_513 {
  block block_1:
  /* preds: block_0 */
  vec1 32 con ssa_514 = load_const (0x00000030 = 0.000000)
  con r15 = mov ssa_514
  con r14 = mov ssa_10
  con r13 = mov ssa_8
  con r12 = mov ssa_0
  con r11 = mov ssa_0
  /* succs: block_2 */
  loop {
    block block_2:
    /* preds: block_1 block_14 */
    vec4 32 con ssa_520 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r12) (access=80, align_mul=4, align_offset=0)
    vec1 32 con ssa_521 = unpack_half_2x16_split_x! ssa_520.x
    vec1 32 con ssa_522 = unpack_half_2x16_split_y! ssa_520.x
    vec1 32 con ssa_523 = unpack_half_2x16_split_x! ssa_520.y
    vec1 32 con ssa_524 = unpack_half_2x16_split_y! ssa_520.y
    vec1 32 con ssa_525 = unpack_half_2x16_split_x ssa_520.z
    vec1 32 con ssa_526 = unpack_half_2x16_split_y ssa_520.z
    vec1 32 con ssa_527 = unpack_half_2x16_split_x ssa_520.w
    vec1 32 con ssa_528 = unpack_half_2x16_split_y ssa_520.w
    vec4 32 con ssa_529 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r13) (access=80, align_mul=4, align_offset=0)
    vec4 32 con ssa_530 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r14) (access=80, align_mul=4, align_offset=0)
    vec4 32 con ssa_531 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r15) (access=80, align_mul=4, align_offset=0)
    vec1 32 con ssa_532 = fneg! ssa_529.x
    vec1 32 div ssa_533 = fadd! ssa_80, ssa_532
    vec1 32 con ssa_534 = fneg! ssa_529.y
    vec1 32 div ssa_535 = fadd! ssa_81, ssa_534
    vec1 32 con ssa_536 = fneg! ssa_529.z
    vec1 32 div ssa_537 = fadd! ssa_82, ssa_536
    vec1 32 con ssa_538 = fadd! ssa_530.x, ssa_532
    vec1 32 con ssa_539 = fadd! ssa_530.y, ssa_534
    vec1 32 con ssa_540 = fadd! ssa_530.z, ssa_536
    vec1 32 div ssa_541 = fmul! ssa_533, ssa_538
    vec1 32 div ssa_542 = ffma! ssa_535, ssa_539, ssa_541
    vec1 32 div ssa_543 = ffma! ssa_537, ssa_540, ssa_542
    vec1 32 con ssa_544 = fmul! ssa_538, ssa_538
    vec1 32 con ssa_545 = ffma! ssa_539, ssa_539, ssa_544
    vec1 32 con ssa_546 = ffma! ssa_540, ssa_540, ssa_545
    vec1 32 con ssa_547 = frcp! ssa_546
    vec1 32 div ssa_548 = fmul! ssa_543, ssa_547
    vec1 32 div ssa_549 = fmul! ssa_548, ssa_538
    vec1 32 div ssa_550 = fmul! ssa_548, ssa_539
    vec1 32 div ssa_551 = fmul! ssa_548, ssa_540
    vec1 32 div ssa_552 = fneg ssa_80
    vec1 32 div ssa_553 = fadd ssa_529.x, ssa_552
    vec1 32 div ssa_554 = fadd ssa_553, ssa_549
    vec1 32 div ssa_555 = fneg ssa_81
    vec1 32 div ssa_556 = fadd ssa_529.y, ssa_555
    vec1 32 div ssa_557 = fadd ssa_556, ssa_550
    vec1 32 div ssa_558 = fneg ssa_82
    vec1 32 div ssa_559 = fadd ssa_529.z, ssa_558
    vec1 32 div ssa_560 = fadd ssa_559, ssa_551
    vec1 32 div ssa_561 = fmul ssa_560, ssa_560
    vec1 32 div ssa_562 = ffma ssa_557, ssa_557, ssa_561
    vec1 32 div ssa_563 = ffma ssa_554, ssa_554, ssa_562
    vec1 32 con ssa_564 = fmul ssa_529.w, ssa_529.w
    vec1 32 div ssa_565 = ffma ssa_527, ssa_82, ssa_528
    vec1 32 div ssa_566 = ffma ssa_526, ssa_81, ssa_565
    vec1 32 div ssa_567 = ffma ssa_525, ssa_80, ssa_566
    vec1 32 con ssa_568 = ieq32 ssa_531.w, ssa_0
    vec1 32 div ssa_569 = fge32! ssa_564, ssa_563
    /* succs: block_3 block_7 */
    if ssa_568 {
      block block_3:
      /* preds: block_2 */
      vec1 32 div ssa_570 = ffma ssa_523, ssa_82, ssa_524
      vec1 32 div ssa_571 = ffma ssa_522, ssa_81, ssa_570
      vec1 32 div ssa_572 = ffma ssa_521, ssa_80, ssa_571
      vec1 32 div ssa_573 = flt32! ssa_0, ssa_572
      vec1 32 div ssa_574 = iand ssa_569, ssa_573
      vec1 32 div ssa_575 = flt32! ssa_0, ssa_567
      vec1 32 div ssa_576 = iand ssa_574, ssa_575
      /* succs: block_4 block_5 */
      if ssa_576 {
        block block_4:
        /* preds: block_3 */
        vec1 32 div ssa_577 = fmul! ssa_80, ssa_521
        vec1 32 div ssa_578 = ffma! ssa_81, ssa_522, ssa_577
        vec1 32 div ssa_579 = ffma! ssa_82, ssa_523, ssa_578
        vec1 32 div ssa_580 = fadd! ssa_524, ssa_579
        vec1 32 div ssa_581 = fmul! ssa_521, ssa_580
        vec1 32 div ssa_582 = fmul! ssa_522, ssa_580
        vec1 32 div ssa_583 = fmul! ssa_523, ssa_580
        vec1 32 div ssa_584 = fmul! ssa_581, ssa_581
        vec1 32 div ssa_585 = ffma! ssa_582, ssa_582, ssa_584
        vec1 32 div ssa_586 = ffma! ssa_583, ssa_583, ssa_585
        vec1 32 div ssa_587 = fmul! ssa_586, ssa_5
        vec1 32 div ssa_588 = fsat! ssa_587
        vec1 32 div ssa_589 = fneg! r8
        vec1 32 div ssa_590 = fadd! ssa_531.x, ssa_589
        vec1 32 div ssa_591 = fneg! r9
        vec1 32 div ssa_592 = fadd! ssa_531.y, ssa_591
        vec1 32 div ssa_593 = fneg! r10
        vec1 32 div ssa_594 = fadd! ssa_531.z, ssa_593
        vec1 32 div ssa_595 = fmul! ssa_588, ssa_590
        vec1 32 div ssa_596 = fmul! ssa_588, ssa_592
        vec1 32 div ssa_597 = fmul! ssa_588, ssa_594
        div r8 = fadd! ssa_595, r8
        div r9 = fadd! ssa_596, r9
        div r10 = fadd! ssa_597, r10
        break
        /* succs: block_15 */
      } else {
        block block_5:
        /* preds: block_3 */
        /* succs: block_6 */
      }
      block block_6:
      /* preds: block_5 */
      /* succs: block_11 */
    } else {
      block block_7:
      /* preds: block_2 */
      vec1 32 div ssa_601 = fge32! ssa_567, ssa_0
      vec1 32 div ssa_602 = iand ssa_569, ssa_601
      /* succs: block_8 block_9 */
      if ssa_602 {
        block block_8:
        /* preds: block_7 */
        vec1 32 div ssa_603 = fadd! ssa_549, ssa_529.x
        vec1 32 div ssa_604 = fadd! ssa_550, ssa_529.y
        vec1 32 div ssa_605 = fadd! ssa_551, ssa_529.z
        vec1 32 div ssa_606 = fmul! ssa_80, ssa_521
        vec1 32 div ssa_607 = ffma! ssa_81, ssa_522, ssa_606
        vec1 32 div ssa_608 = ffma! ssa_82, ssa_523, ssa_607
        vec1 32 div ssa_609 = fadd! ssa_524, ssa_608
        vec1 32 div ssa_610 = fneg! ssa_609
        vec1 32 div ssa_611 = fmul! ssa_610, ssa_521
        vec1 32 div ssa_612 = fmul! ssa_610, ssa_522
        vec1 32 div ssa_613 = fmul! ssa_610, ssa_523
        vec1 32 div ssa_614 = fadd! ssa_80, ssa_611
        vec1 32 div ssa_615 = fadd! ssa_81, ssa_612
        vec1 32 div ssa_616 = fadd! ssa_82, ssa_613
        vec1 32 div ssa_617 = fneg! ssa_603
        vec1 32 div ssa_618 = fadd! ssa_614, ssa_617
        vec1 32 div ssa_619 = fneg! ssa_604
        vec1 32 div ssa_620 = fadd! ssa_615, ssa_619
        vec1 32 div ssa_621 = fneg! ssa_605
        vec1 32 div ssa_622 = fadd! ssa_616, ssa_621
        vec1 32 div ssa_623 = fmul! ssa_618, ssa_618
        vec1 32 div ssa_624 = ffma! ssa_620, ssa_620, ssa_623
        vec1 32 div ssa_625 = ffma! ssa_622, ssa_622, ssa_624
        vec1 32 div ssa_626 = frsq! ssa_625
        vec1 32 div ssa_627 = fmul! ssa_626, ssa_529.w
        vec1 32 div ssa_628 = fmul! ssa_627, ssa_618
        vec1 32 div ssa_629 = fmul! ssa_627, ssa_620
        vec1 32 div ssa_630 = fmul! ssa_627, ssa_622
        vec1 32 div ssa_631 = fadd! ssa_628, ssa_603
        vec1 32 div ssa_632 = fadd! ssa_629, ssa_604
        vec1 32 div ssa_633 = fadd! ssa_630, ssa_605
        vec1 32 div ssa_634 = fmul! ssa_631, ssa_483
        vec1 32 div ssa_635 = ffma! ssa_632, ssa_486, ssa_634
        vec1 32 div ssa_636 = ffma! ssa_633, ssa_489, ssa_635
        div r8 = fadd! ssa_636, ssa_492
        vec1 32 div ssa_638 = fmul! ssa_631, ssa_484
        vec1 32 div ssa_639 = ffma! ssa_632, ssa_487, ssa_638
        vec1 32 div ssa_640 = ffma! ssa_633, ssa_490, ssa_639
        div r9 = fadd! ssa_640, ssa_493
        vec1 32 div ssa_642 = fmul! ssa_631, ssa_485
        vec1 32 div ssa_643 = ffma! ssa_632, ssa_488, ssa_642
        vec1 32 div ssa_644 = ffma! ssa_633, ssa_491, ssa_643
        div r10 = fadd! ssa_644, ssa_494
        break
        /* succs: block_15 */
      } else {
        block block_9:
        /* preds: block_7 */
        /* succs: block_10 */
      }
      block block_10:
      /* preds: block_9 */
      /* succs: block_11 */
    }
    block block_11:
    /* preds: block_6 block_10 */
    con r16 = iadd r11, ssa_4
    vec1 32 con ssa_647 = uge32 r16, ssa_509
    /* succs: block_12 block_13 */
    if ssa_647 {
      block block_12:
      /* preds: block_11 */
      break
      /* succs: block_15 */
    } else {
      block block_13:
      /* preds: block_11 */
      /* succs: block_14 */
    }
    block block_14:
    /* preds: block_13 */
    vec1 32 con ssa_648 = ishl r11, ssa_7
    vec1 32 con ssa_649 = iadd ssa_648, ssa_76
    vec1 32 con ssa_650 = ior ssa_649, ssa_4
    vec1 32 con ssa_651 = ior ssa_649, ssa_7
    vec1 32 con ssa_652 = ior ssa_649, ssa_6
    vec1 32 con ssa_653 = ishl r11, ssa_19
    con r12 = iadd ssa_653, ssa_74
    con r13 = ishl ssa_650, ssa_76
    con r14 = ishl ssa_651, ssa_76
    con r15 = ishl ssa_652, ssa_76
    con r11 = mov r16
    /* succs: block_2 */
  }
  block block_15:
  /* preds: block_4 block_8 block_12 */
  /* succs: block_17 */
} else {
  block block_16:
  /* preds: block_0 */
  /* succs: block_17 */
}
block block_17:
/* preds: block_15 block_16 */
vec1 32 div ssa_664 = fmul! r8, ssa_57.x
vec1 32 div ssa_665 = ffma! r9, ssa_57.y, ssa_664
vec1 32 div ssa_666 = ffma! r10, ssa_57.z, ssa_665
vec1 32 div ssa_667 = fadd! ssa_666, ssa_71
vec1 32 div ssa_668 = fmul! r8, ssa_58.x
vec1 32 div ssa_669 = ffma! r9, ssa_58.y, ssa_668
vec1 32 div ssa_670 = ffma! r10, ssa_58.z, ssa_669
vec1 32 div ssa_671 = fadd! ssa_670, ssa_72
vec1 32 div ssa_672 = fmul! r8, ssa_59.x
vec1 32 div ssa_673 = ffma! r9, ssa_59.y, ssa_672
vec1 32 div ssa_674 = ffma! r10, ssa_59.z, ssa_673
vec1 32 div ssa_675 = fadd! ssa_674, ssa_73
vec1 32 con ssa_676 = load_const (0x000001a0 = 0.000000)
vec4 32 con ssa_677 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_676) (access=80, align_mul=1073741824, align_offset=416)
vec1 32 div ssa_678 = fmul ssa_677.x, ssa_667
vec1 32 div ssa_679 = ffma ssa_671, ssa_677.y, ssa_678
vec1 32 div ssa_680 = ffma ssa_675, ssa_677.z, ssa_679
vec1 32 div ssa_681 = fadd ssa_680, ssa_677.w
vec1 32 con ssa_682 = load_const (0x00000250 = 0.000000)
vec4 32 con ssa_683 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_682) (access=80, align_mul=1073741824, align_offset=592)
vec1 32 div ssa_684 = fadd ssa_683.x, ssa_667
vec1 32 div ssa_685 = fadd ssa_683.y, ssa_671
vec1 32 div ssa_686 = fadd ssa_683.z, ssa_675
vec1 32 con ssa_687 = load_const (0x000001c0 = 0.000000)
vec16 32 con ssa_688 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_687) (access=80, align_mul=1073741824, align_offset=448)
vec1 32 div ssa_689 = fmul! ssa_688.a, ssa_667
vec1 32 div ssa_690 = ffma! ssa_671, ssa_688.b, ssa_689
vec1 32 div ssa_691 = ffma! ssa_675, ssa_688.c, ssa_690
vec1 32 div ssa_692 = fadd! ssa_691, ssa_688.d
vec1 32 div ssa_693 = fmul! ssa_688.e, ssa_667
vec1 32 div ssa_694 = ffma! ssa_671, ssa_688.f, ssa_693
vec1 32 div ssa_695 = ffma! ssa_675, ssa_688.g, ssa_694
vec1 32 div ssa_696 = fadd! ssa_695, ssa_688.h
vec1 32 div ssa_697 = fmul! ssa_688.i, ssa_667
vec1 32 div ssa_698 = ffma! ssa_671, ssa_688.j, ssa_697
vec1 32 div ssa_699 = ffma! ssa_675, ssa_688.k, ssa_698
vec1 32 div ssa_700 = fadd! ssa_699, ssa_688.l
vec1 32 div ssa_701 = fmul! ssa_688.m, ssa_667
vec1 32 div ssa_702 = ffma! ssa_671, ssa_688.n, ssa_701
vec1 32 div ssa_703 = ffma! ssa_675, ssa_688.o, ssa_702
vec1 32 div ssa_704 = fadd!
ssa_703, ssa_688.p vec1 32 div ssa_705 = iadd ssa_101, ssa_48.z vec1 32 div ssa_706 = iand ssa_705, ssa_103 vec1 32 div ssa_707 = intrinsic load_ssbo (ssa_24, ssa_706) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_708 = iadd ssa_706, ssa_76 vec1 32 div ssa_709 = intrinsic load_ssbo (ssa_24, ssa_708) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_710 = iadd ssa_706, ssa_9 vec1 32 div ssa_711 = intrinsic load_ssbo (ssa_24, ssa_710) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_712 = iadd ssa_706, ssa_41 vec1 32 div ssa_713 = intrinsic load_ssbo (ssa_24, ssa_712) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_714 = iadd ssa_706, ssa_8 vec1 32 div ssa_715 = intrinsic load_ssbo (ssa_24, ssa_714) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_716 = iadd ssa_114, ssa_706 vec1 32 div ssa_717 = intrinsic load_ssbo (ssa_24, ssa_716) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_718 = iadd ssa_13, ssa_706 vec1 32 div ssa_719 = intrinsic load_ssbo (ssa_24, ssa_718) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_720 = iadd ssa_119, ssa_706 vec1 32 div ssa_721 = intrinsic load_ssbo (ssa_24, ssa_720) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_722 = iadd ssa_706, ssa_10 vec1 32 div ssa_723 = intrinsic load_ssbo (ssa_24, ssa_722) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_724 = iadd ssa_124, ssa_706 vec1 32 div ssa_725 = intrinsic load_ssbo (ssa_24, ssa_724) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_726 = iadd ssa_127, ssa_706 vec1 32 div ssa_727 = intrinsic load_ssbo (ssa_24, ssa_726) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_728 = iadd ssa_130, ssa_706 vec1 32 div ssa_729 = intrinsic load_ssbo (ssa_24, ssa_728) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_730 = iadd ssa_133, ssa_48.z vec1 32 div ssa_731 = iand ssa_730, ssa_103 vec1 32 div ssa_732 = intrinsic load_ssbo (ssa_24, ssa_731) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_733 = iadd ssa_731, ssa_76 vec1 32 div ssa_734 = intrinsic load_ssbo (ssa_24, ssa_733) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_735 = iadd ssa_731, ssa_9 vec1 32 div ssa_736 = intrinsic load_ssbo (ssa_24, ssa_735) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_737 = iadd ssa_731, ssa_41 vec1 32 div ssa_738 = intrinsic load_ssbo (ssa_24, ssa_737) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_739 = iadd ssa_731, ssa_8 vec1 32 div ssa_740 = intrinsic load_ssbo (ssa_24, ssa_739) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_741 = iadd ssa_114, ssa_731 vec1 32 div ssa_742 = intrinsic load_ssbo (ssa_24, ssa_741) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_743 = iadd ssa_13, ssa_731 vec1 32 div ssa_744 = intrinsic load_ssbo (ssa_24, ssa_743) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_745 = iadd ssa_119, ssa_731 vec1 32 div ssa_746 = intrinsic load_ssbo (ssa_24, ssa_745) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_747 = iadd ssa_731, ssa_10 vec1 32 div ssa_748 = intrinsic load_ssbo (ssa_24, ssa_747) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_749 = iadd ssa_124, ssa_731 vec1 32 div ssa_750 = intrinsic load_ssbo (ssa_24, ssa_749) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_751 = iadd ssa_127, ssa_731 vec1 32 div ssa_752 = intrinsic load_ssbo (ssa_24, ssa_751) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_753 = iadd ssa_130, ssa_731 vec1 32 div ssa_754 = intrinsic load_ssbo (ssa_24, ssa_753) (access=80, 
align_mul=4, align_offset=0) vec1 32 div ssa_755 = fmul ssa_707, ssa_93 vec1 32 div ssa_756 = fmul ssa_715, ssa_93 vec1 32 div ssa_757 = fmul ssa_723, ssa_93 vec1 32 div ssa_758 = fmul ssa_709, ssa_93 vec1 32 div ssa_759 = fmul ssa_717, ssa_93 vec1 32 div ssa_760 = fmul ssa_725, ssa_93 vec1 32 div ssa_761 = fmul ssa_711, ssa_93 vec1 32 div ssa_762 = fmul ssa_719, ssa_93 vec1 32 div ssa_763 = fmul ssa_727, ssa_93 vec1 32 div ssa_764 = fmul ssa_713, ssa_93 vec1 32 div ssa_765 = fmul ssa_721, ssa_93 vec1 32 div ssa_766 = fmul ssa_729, ssa_93 vec1 32 div ssa_767 = ffma ssa_732, ssa_97, ssa_755 vec1 32 div ssa_768 = ffma ssa_740, ssa_97, ssa_756 vec1 32 div ssa_769 = ffma ssa_748, ssa_97, ssa_757 vec1 32 div ssa_770 = ffma ssa_734, ssa_97, ssa_758 vec1 32 div ssa_771 = ffma ssa_742, ssa_97, ssa_759 vec1 32 div ssa_772 = ffma ssa_750, ssa_97, ssa_760 vec1 32 div ssa_773 = ffma ssa_736, ssa_97, ssa_761 vec1 32 div ssa_774 = ffma ssa_744, ssa_97, ssa_762 vec1 32 div ssa_775 = ffma ssa_752, ssa_97, ssa_763 vec1 32 div ssa_776 = ffma ssa_738, ssa_97, ssa_764 vec1 32 div ssa_777 = ffma ssa_746, ssa_97, ssa_765 vec1 32 div ssa_778 = ffma ssa_754, ssa_97, ssa_766 vec1 32 div ssa_779 = iadd ssa_195, ssa_48.z vec1 32 div ssa_780 = iand ssa_779, ssa_103 vec1 32 div ssa_781 = intrinsic load_ssbo (ssa_24, ssa_780) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_782 = iadd ssa_780, ssa_76 vec1 32 div ssa_783 = intrinsic load_ssbo (ssa_24, ssa_782) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_784 = iadd ssa_780, ssa_9 vec1 32 div ssa_785 = intrinsic load_ssbo (ssa_24, ssa_784) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_786 = iadd ssa_780, ssa_41 vec1 32 div ssa_787 = intrinsic load_ssbo (ssa_24, ssa_786) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_788 = iadd ssa_780, ssa_8 vec1 32 div ssa_789 = intrinsic load_ssbo (ssa_24, ssa_788) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_790 = iadd ssa_114, ssa_780 vec1 32 div ssa_791 = intrinsic load_ssbo (ssa_24, ssa_790) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_792 = iadd ssa_13, ssa_780 vec1 32 div ssa_793 = intrinsic load_ssbo (ssa_24, ssa_792) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_794 = iadd ssa_119, ssa_780 vec1 32 div ssa_795 = intrinsic load_ssbo (ssa_24, ssa_794) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_796 = iadd ssa_780, ssa_10 vec1 32 div ssa_797 = intrinsic load_ssbo (ssa_24, ssa_796) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_798 = iadd ssa_124, ssa_780 vec1 32 div ssa_799 = intrinsic load_ssbo (ssa_24, ssa_798) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_800 = iadd ssa_127, ssa_780 vec1 32 div ssa_801 = intrinsic load_ssbo (ssa_24, ssa_800) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_802 = iadd ssa_130, ssa_780 vec1 32 div ssa_803 = intrinsic load_ssbo (ssa_24, ssa_802) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_804 = iadd ssa_221, ssa_48.z vec1 32 div ssa_805 = iand ssa_804, ssa_103 vec1 32 div ssa_806 = intrinsic load_ssbo (ssa_24, ssa_805) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_807 = iadd ssa_805, ssa_76 vec1 32 div ssa_808 = intrinsic load_ssbo (ssa_24, ssa_807) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_809 = iadd ssa_805, ssa_9 vec1 32 div ssa_810 = intrinsic load_ssbo (ssa_24, ssa_809) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_811 = iadd ssa_805, ssa_41 vec1 32 div ssa_812 = intrinsic load_ssbo (ssa_24, ssa_811) (access=80, align_mul=4, 
align_offset=0) vec1 32 div ssa_813 = iadd ssa_805, ssa_8 vec1 32 div ssa_814 = intrinsic load_ssbo (ssa_24, ssa_813) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_815 = iadd ssa_114, ssa_805 vec1 32 div ssa_816 = intrinsic load_ssbo (ssa_24, ssa_815) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_817 = iadd ssa_13, ssa_805 vec1 32 div ssa_818 = intrinsic load_ssbo (ssa_24, ssa_817) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_819 = iadd ssa_119, ssa_805 vec1 32 div ssa_820 = intrinsic load_ssbo (ssa_24, ssa_819) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_821 = iadd ssa_805, ssa_10 vec1 32 div ssa_822 = intrinsic load_ssbo (ssa_24, ssa_821) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_823 = iadd ssa_124, ssa_805 vec1 32 div ssa_824 = intrinsic load_ssbo (ssa_24, ssa_823) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_825 = iadd ssa_127, ssa_805 vec1 32 div ssa_826 = intrinsic load_ssbo (ssa_24, ssa_825) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_827 = iadd ssa_130, ssa_805 vec1 32 div ssa_828 = intrinsic load_ssbo (ssa_24, ssa_827) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_829 = ffma ssa_781, ssa_94, ssa_767 vec1 32 div ssa_830 = ffma ssa_789, ssa_94, ssa_768 vec1 32 div ssa_831 = ffma ssa_797, ssa_94, ssa_769 vec1 32 div ssa_832 = ffma ssa_783, ssa_94, ssa_770 vec1 32 div ssa_833 = ffma ssa_791, ssa_94, ssa_771 vec1 32 div ssa_834 = ffma ssa_799, ssa_94, ssa_772 vec1 32 div ssa_835 = ffma ssa_785, ssa_94, ssa_773 vec1 32 div ssa_836 = ffma ssa_793, ssa_94, ssa_774 vec1 32 div ssa_837 = ffma ssa_801, ssa_94, ssa_775 vec1 32 div ssa_838 = ffma ssa_787, ssa_94, ssa_776 vec1 32 div ssa_839 = ffma ssa_795, ssa_94, ssa_777 vec1 32 div ssa_840 = ffma ssa_803, ssa_94, ssa_778 vec1 32 div ssa_841 = ffma ssa_806, ssa_98, ssa_829 vec1 32 div ssa_842 = ffma ssa_814, ssa_98, ssa_830 vec1 32 div ssa_843 = ffma ssa_822, ssa_98, ssa_831 vec1 32 div ssa_844 = ffma ssa_808, ssa_98, ssa_832 vec1 32 div ssa_845 = ffma ssa_816, ssa_98, ssa_833 vec1 32 div ssa_846 = ffma ssa_824, ssa_98, ssa_834 vec1 32 div ssa_847 = ffma ssa_810, ssa_98, ssa_835 vec1 32 div ssa_848 = ffma ssa_818, ssa_98, ssa_836 vec1 32 div ssa_849 = ffma ssa_826, ssa_98, ssa_837 vec1 32 div ssa_850 = ffma ssa_812, ssa_98, ssa_838 vec1 32 div ssa_851 = ffma ssa_820, ssa_98, ssa_839 vec1 32 div ssa_852 = ffma ssa_828, ssa_98, ssa_840 vec1 32 div ssa_853 = iadd ssa_295, ssa_48.z vec1 32 div ssa_854 = iand ssa_853, ssa_103 vec1 32 div ssa_855 = intrinsic load_ssbo (ssa_24, ssa_854) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_856 = iadd ssa_854, ssa_76 vec1 32 div ssa_857 = intrinsic load_ssbo (ssa_24, ssa_856) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_858 = iadd ssa_854, ssa_9 vec1 32 div ssa_859 = intrinsic load_ssbo (ssa_24, ssa_858) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_860 = iadd ssa_854, ssa_41 vec1 32 div ssa_861 = intrinsic load_ssbo (ssa_24, ssa_860) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_862 = iadd ssa_854, ssa_8 vec1 32 div ssa_863 = intrinsic load_ssbo (ssa_24, ssa_862) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_864 = iadd ssa_114, ssa_854 vec1 32 div ssa_865 = intrinsic load_ssbo (ssa_24, ssa_864) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_866 = iadd ssa_13, ssa_854 vec1 32 div ssa_867 = intrinsic load_ssbo (ssa_24, ssa_866) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_868 = iadd ssa_119, ssa_854 vec1 32 div ssa_869 = intrinsic 
load_ssbo (ssa_24, ssa_868) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_870 = iadd ssa_854, ssa_10 vec1 32 div ssa_871 = intrinsic load_ssbo (ssa_24, ssa_870) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_872 = iadd ssa_124, ssa_854 vec1 32 div ssa_873 = intrinsic load_ssbo (ssa_24, ssa_872) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_874 = iadd ssa_127, ssa_854 vec1 32 div ssa_875 = intrinsic load_ssbo (ssa_24, ssa_874) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_876 = iadd ssa_130, ssa_854 vec1 32 div ssa_877 = intrinsic load_ssbo (ssa_24, ssa_876) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_878 = iadd ssa_321, ssa_48.z vec1 32 div ssa_879 = iand ssa_878, ssa_103 vec1 32 div ssa_880 = intrinsic load_ssbo (ssa_24, ssa_879) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_881 = iadd ssa_879, ssa_76 vec1 32 div ssa_882 = intrinsic load_ssbo (ssa_24, ssa_881) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_883 = iadd ssa_879, ssa_9 vec1 32 div ssa_884 = intrinsic load_ssbo (ssa_24, ssa_883) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_885 = iadd ssa_879, ssa_41 vec1 32 div ssa_886 = intrinsic load_ssbo (ssa_24, ssa_885) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_887 = iadd ssa_879, ssa_8 vec1 32 div ssa_888 = intrinsic load_ssbo (ssa_24, ssa_887) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_889 = iadd ssa_114, ssa_879 vec1 32 div ssa_890 = intrinsic load_ssbo (ssa_24, ssa_889) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_891 = iadd ssa_13, ssa_879 vec1 32 div ssa_892 = intrinsic load_ssbo (ssa_24, ssa_891) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_893 = iadd ssa_119, ssa_879 vec1 32 div ssa_894 = intrinsic load_ssbo (ssa_24, ssa_893) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_895 = iadd ssa_879, ssa_10 vec1 32 div ssa_896 = intrinsic load_ssbo (ssa_24, ssa_895) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_897 = iadd ssa_124, ssa_879 vec1 32 div ssa_898 = intrinsic load_ssbo (ssa_24, ssa_897) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_899 = iadd ssa_127, ssa_879 vec1 32 div ssa_900 = intrinsic load_ssbo (ssa_24, ssa_899) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_901 = iadd ssa_130, ssa_879 vec1 32 div ssa_902 = intrinsic load_ssbo (ssa_24, ssa_901) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_903 = ffma ssa_855, ssa_95, ssa_841 vec1 32 div ssa_904 = ffma ssa_863, ssa_95, ssa_842 vec1 32 div ssa_905 = ffma ssa_871, ssa_95, ssa_843 vec1 32 div ssa_906 = ffma ssa_857, ssa_95, ssa_844 vec1 32 div ssa_907 = ffma ssa_865, ssa_95, ssa_845 vec1 32 div ssa_908 = ffma ssa_873, ssa_95, ssa_846 vec1 32 div ssa_909 = ffma ssa_859, ssa_95, ssa_847 vec1 32 div ssa_910 = ffma ssa_867, ssa_95, ssa_848 vec1 32 div ssa_911 = ffma ssa_875, ssa_95, ssa_849 vec1 32 div ssa_912 = ffma ssa_861, ssa_95, ssa_850 vec1 32 div ssa_913 = ffma ssa_869, ssa_95, ssa_851 vec1 32 div ssa_914 = ffma ssa_877, ssa_95, ssa_852 vec1 32 div ssa_915 = ffma ssa_880, ssa_99, ssa_903 vec1 32 div ssa_916 = ffma ssa_888, ssa_99, ssa_904 vec1 32 div ssa_917 = ffma ssa_896, ssa_99, ssa_905 vec1 32 div ssa_918 = ffma ssa_882, ssa_99, ssa_906 vec1 32 div ssa_919 = ffma ssa_890, ssa_99, ssa_907 vec1 32 div ssa_920 = ffma ssa_898, ssa_99, ssa_908 vec1 32 div ssa_921 = ffma ssa_884, ssa_99, ssa_909 vec1 32 div ssa_922 = ffma ssa_892, ssa_99, ssa_910 vec1 32 div ssa_923 = ffma ssa_900, ssa_99, ssa_911 vec1 32 div ssa_924 = ffma ssa_886, 
ssa_99, ssa_912 vec1 32 div ssa_925 = ffma ssa_894, ssa_99, ssa_913 vec1 32 div ssa_926 = ffma ssa_902, ssa_99, ssa_914 vec1 32 div ssa_927 = iadd ssa_395, ssa_48.z vec1 32 div ssa_928 = iand ssa_927, ssa_103 vec1 32 div ssa_929 = intrinsic load_ssbo (ssa_24, ssa_928) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_930 = iadd ssa_928, ssa_76 vec1 32 div ssa_931 = intrinsic load_ssbo (ssa_24, ssa_930) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_932 = iadd ssa_928, ssa_9 vec1 32 div ssa_933 = intrinsic load_ssbo (ssa_24, ssa_932) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_934 = iadd ssa_928, ssa_41 vec1 32 div ssa_935 = intrinsic load_ssbo (ssa_24, ssa_934) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_936 = iadd ssa_928, ssa_8 vec1 32 div ssa_937 = intrinsic load_ssbo (ssa_24, ssa_936) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_938 = iadd ssa_114, ssa_928 vec1 32 div ssa_939 = intrinsic load_ssbo (ssa_24, ssa_938) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_940 = iadd ssa_13, ssa_928 vec1 32 div ssa_941 = intrinsic load_ssbo (ssa_24, ssa_940) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_942 = iadd ssa_119, ssa_928 vec1 32 div ssa_943 = intrinsic load_ssbo (ssa_24, ssa_942) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_944 = iadd ssa_928, ssa_10 vec1 32 div ssa_945 = intrinsic load_ssbo (ssa_24, ssa_944) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_946 = iadd ssa_124, ssa_928 vec1 32 div ssa_947 = intrinsic load_ssbo (ssa_24, ssa_946) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_948 = iadd ssa_127, ssa_928 vec1 32 div ssa_949 = intrinsic load_ssbo (ssa_24, ssa_948) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_950 = iadd ssa_130, ssa_928 vec1 32 div ssa_951 = intrinsic load_ssbo (ssa_24, ssa_950) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_952 = iadd ssa_421, ssa_48.z vec1 32 div ssa_953 = iand ssa_952, ssa_103 vec1 32 div ssa_954 = intrinsic load_ssbo (ssa_24, ssa_953) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_955 = iadd ssa_953, ssa_76 vec1 32 div ssa_956 = intrinsic load_ssbo (ssa_24, ssa_955) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_957 = iadd ssa_953, ssa_9 vec1 32 div ssa_958 = intrinsic load_ssbo (ssa_24, ssa_957) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_959 = iadd ssa_953, ssa_41 vec1 32 div ssa_960 = intrinsic load_ssbo (ssa_24, ssa_959) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_961 = iadd ssa_953, ssa_8 vec1 32 div ssa_962 = intrinsic load_ssbo (ssa_24, ssa_961) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_963 = iadd ssa_114, ssa_953 vec1 32 div ssa_964 = intrinsic load_ssbo (ssa_24, ssa_963) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_965 = iadd ssa_13, ssa_953 vec1 32 div ssa_966 = intrinsic load_ssbo (ssa_24, ssa_965) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_967 = iadd ssa_119, ssa_953 vec1 32 div ssa_968 = intrinsic load_ssbo (ssa_24, ssa_967) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_969 = iadd ssa_953, ssa_10 vec1 32 div ssa_970 = intrinsic load_ssbo (ssa_24, ssa_969) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_971 = iadd ssa_124, ssa_953 vec1 32 div ssa_972 = intrinsic load_ssbo (ssa_24, ssa_971) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_973 = iadd ssa_127, ssa_953 vec1 32 div ssa_974 = intrinsic load_ssbo (ssa_24, ssa_973) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_975 = 
iadd ssa_130, ssa_953
vec1 32 div ssa_976 = intrinsic load_ssbo (ssa_24, ssa_975) (access=80, align_mul=4, align_offset=0)
vec1 32 div ssa_977 = ffma ssa_929, ssa_96, ssa_915
vec1 32 div ssa_978 = ffma ssa_937, ssa_96, ssa_916
vec1 32 div ssa_979 = ffma ssa_945, ssa_96, ssa_917
vec1 32 div ssa_980 = ffma ssa_931, ssa_96, ssa_918
vec1 32 div ssa_981 = ffma ssa_939, ssa_96, ssa_919
vec1 32 div ssa_982 = ffma ssa_947, ssa_96, ssa_920
vec1 32 div ssa_983 = ffma ssa_933, ssa_96, ssa_921
vec1 32 div ssa_984 = ffma ssa_941, ssa_96, ssa_922
vec1 32 div ssa_985 = ffma ssa_949, ssa_96, ssa_923
vec1 32 div ssa_986 = ffma ssa_935, ssa_96, ssa_924
vec1 32 div ssa_987 = ffma ssa_943, ssa_96, ssa_925
vec1 32 div ssa_988 = ffma ssa_951, ssa_96, ssa_926
vec1 32 div ssa_989 = ffma ssa_954, ssa_100, ssa_977
vec1 32 div ssa_990 = ffma ssa_962, ssa_100, ssa_978
vec1 32 div ssa_991 = ffma ssa_970, ssa_100, ssa_979
vec1 32 div ssa_992 = ffma ssa_956, ssa_100, ssa_980
vec1 32 div ssa_993 = ffma ssa_964, ssa_100, ssa_981
vec1 32 div ssa_994 = ffma ssa_972, ssa_100, ssa_982
vec1 32 div ssa_995 = ffma ssa_958, ssa_100, ssa_983
vec1 32 div ssa_996 = ffma ssa_966, ssa_100, ssa_984
vec1 32 div ssa_997 = ffma ssa_974, ssa_100, ssa_985
vec1 32 div ssa_998 = ffma ssa_960, ssa_100, ssa_986
vec1 32 div ssa_999 = ffma ssa_968, ssa_100, ssa_987
vec1 32 div ssa_1000 = ffma ssa_976, ssa_100, ssa_988
vec1 32 div ssa_1001 = fmul ssa_989, ssa_80
vec1 32 div ssa_1002 = ffma ssa_81, ssa_992, ssa_1001
vec1 32 div ssa_1003 = ffma ssa_82, ssa_995, ssa_1002
div r17 = fadd ssa_1003, ssa_998
vec1 32 div ssa_1005 = fmul ssa_990, ssa_80
vec1 32 div ssa_1006 = ffma ssa_81, ssa_993, ssa_1005
vec1 32 div ssa_1007 = ffma ssa_82, ssa_996, ssa_1006
div r18 = fadd ssa_1007, ssa_999
vec1 32 div ssa_1009 = fmul ssa_991, ssa_80
vec1 32 div ssa_1010 = ffma ssa_81, ssa_994, ssa_1009
vec1 32 div ssa_1011 = ffma ssa_82, ssa_997, ssa_1010
div r19 = fadd ssa_1011, ssa_1000
/* succs: block_18 block_33 */
if ssa_513 {
  block block_18:
  /* preds: block_17 */
  vec1 32 con ssa_1013 = load_const (0x00000030 = 0.000000)
  con r24 = mov ssa_1013
  con r23 = mov ssa_10
  con r22 = mov ssa_8
  con r21 = mov ssa_0
  con r20 = mov ssa_0
  /* succs: block_19 */
  loop {
    block block_19:
    /* preds: block_18 block_31 */
    vec4 32 con ssa_1019 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r21) (access=80, align_mul=4, align_offset=0)
    vec1 32 con ssa_1020 = unpack_half_2x16_split_x ssa_1019.x
    vec1 32 con ssa_1021 = unpack_half_2x16_split_y ssa_1019.x
    vec1 32 con ssa_1022 = unpack_half_2x16_split_x ssa_1019.y
    vec1 32 con ssa_1023 = unpack_half_2x16_split_y ssa_1019.y
    vec1 32 con ssa_1024 = unpack_half_2x16_split_x ssa_1019.z
    vec1 32 con ssa_1025 = unpack_half_2x16_split_y ssa_1019.z
    vec1 32 con ssa_1026 = unpack_half_2x16_split_x ssa_1019.w
    vec1 32 con ssa_1027 = unpack_half_2x16_split_y ssa_1019.w
    vec4 32 con ssa_1028 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r22) (access=80, align_mul=4, align_offset=0)
    vec4 32 con ssa_1029 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r23) (access=80, align_mul=4, align_offset=0)
    vec4 32 con ssa_1030 = intrinsic load_ssbo_uniform_block_intel (ssa_29, r24) (access=80, align_mul=4, align_offset=0)
    vec1 32 con ssa_1031 = fneg ssa_1028.x
    vec1 32 div ssa_1032 = fadd ssa_80, ssa_1031
    vec1 32 con ssa_1033 = fneg ssa_1028.y
    vec1 32 div ssa_1034 = fadd ssa_81, ssa_1033
    vec1 32 con ssa_1035 = fneg ssa_1028.z
    vec1 32 div ssa_1036 = fadd ssa_82, ssa_1035
    vec1 32 con ssa_1037 = fadd ssa_1029.x, ssa_1031
    vec1 32 con ssa_1038 = fadd ssa_1029.y, ssa_1033
    vec1 32 con ssa_1039 = fadd ssa_1029.z, ssa_1035
    vec1 32 div ssa_1040 = fmul ssa_1036, ssa_1039
    vec1 32 div ssa_1041 = ffma ssa_1034, ssa_1038, ssa_1040
    vec1 32 div ssa_1042 = ffma ssa_1032, ssa_1037, ssa_1041
    vec1 32 con ssa_1043 = fmul ssa_1039, ssa_1039
    vec1 32 con ssa_1044 = ffma ssa_1038, ssa_1038, ssa_1043
    vec1 32 con ssa_1045 = ffma ssa_1037, ssa_1037, ssa_1044
    vec1 32 con ssa_1046 = frcp ssa_1045
    vec1 32 div ssa_1047 = fmul ssa_1042, ssa_1046
    vec1 32 div ssa_1048 = fneg ssa_80
    vec1 32 div ssa_1049 = fadd ssa_1028.x, ssa_1048
    vec1 32 div ssa_1050 = ffma ssa_1047, ssa_1037, ssa_1049
    vec1 32 div ssa_1051 = fneg ssa_81
    vec1 32 div ssa_1052 = fadd ssa_1028.y, ssa_1051
    vec1 32 div ssa_1053 = ffma ssa_1047, ssa_1038, ssa_1052
    vec1 32 div ssa_1054 = fneg ssa_82
    vec1 32 div ssa_1055 = fadd ssa_1028.z, ssa_1054
    vec1 32 div ssa_1056 = ffma ssa_1047, ssa_1039, ssa_1055
    vec1 32 div ssa_1057 = fmul ssa_1056, ssa_1056
    vec1 32 div ssa_1058 = ffma ssa_1053, ssa_1053, ssa_1057
    vec1 32 div ssa_1059 = ffma ssa_1050, ssa_1050, ssa_1058
    vec1 32 con ssa_1060 = fmul ssa_1028.w, ssa_1028.w
    vec1 32 div ssa_1061 = ffma ssa_1026, ssa_82, ssa_1027
    vec1 32 div ssa_1062 = ffma ssa_1025, ssa_81, ssa_1061
    vec1 32 div ssa_1063 = ffma ssa_1024, ssa_80, ssa_1062
    vec1 32 con ssa_1064 = ieq32 ssa_1030.w, ssa_0
    vec1 32 div ssa_1065 = fge32! ssa_1060, ssa_1059
    /* succs: block_20 block_24 */
    if ssa_1064 {
      block block_20:
      /* preds: block_19 */
      vec1 32 div ssa_1066 = ffma ssa_1022, ssa_82, ssa_1023
      vec1 32 div ssa_1067 = ffma ssa_1021, ssa_81, ssa_1066
      vec1 32 div ssa_1068 = ffma ssa_1020, ssa_80, ssa_1067
      vec1 32 div ssa_1069 = flt32! ssa_0, ssa_1068
      vec1 32 div ssa_1070 = iand ssa_1065, ssa_1069
      vec1 32 div ssa_1071 = flt32! ssa_0, ssa_1063
      vec1 32 div ssa_1072 = iand ssa_1070, ssa_1071
      /* succs: block_21 block_22 */
      if ssa_1072 {
        block block_21:
        /* preds: block_20 */
        vec1 32 div ssa_1073 = fmul ssa_1020, ssa_1068
        vec1 32 div ssa_1074 = fmul ssa_1021, ssa_1068
        vec1 32 div ssa_1075 = fmul ssa_1022, ssa_1068
        vec1 32 div ssa_1076 = fmul ssa_1075, ssa_1075
        vec1 32 div ssa_1077 = ffma ssa_1074, ssa_1074, ssa_1076
        vec1 32 div ssa_1078 = ffma ssa_1073, ssa_1073, ssa_1077
        vec1 32 div ssa_1079 = fmul ssa_1078, ssa_5
        vec1 32 div ssa_1080 = fsat! ssa_1079
        vec1 32 div ssa_1081 = fneg r17
        vec1 32 div ssa_1082 = fadd ssa_1030.x, ssa_1081
        vec1 32 div ssa_1083 = fneg r18
        vec1 32 div ssa_1084 = fadd ssa_1030.y, ssa_1083
        vec1 32 div ssa_1085 = fneg r19
        vec1 32 div ssa_1086 = fadd ssa_1030.z, ssa_1085
        div r17 = ffma ssa_1080, ssa_1082, r17
        div r18 = ffma ssa_1080, ssa_1084, r18
        div r19 = ffma ssa_1080, ssa_1086, r19
        break
        /* succs: block_32 */
      } else {
        block block_22:
        /* preds: block_20 */
        /* succs: block_23 */
      }
      block block_23:
      /* preds: block_22 */
      /* succs: block_28 */
    } else {
      block block_24:
      /* preds: block_19 */
      vec1 32 div ssa_1090 = fge32! ssa_1063, ssa_0
      vec1 32 div ssa_1091 = iand ssa_1065, ssa_1090
      /* succs: block_25 block_26 */
      if ssa_1091 {
        block block_25:
        /* preds: block_24 */
        vec1 32 div ssa_1092 = ffma ssa_1047, ssa_1037, ssa_1028.x
        vec1 32 div ssa_1093 = ffma ssa_1047, ssa_1038, ssa_1028.y
        vec1 32 div ssa_1094 = ffma ssa_1047, ssa_1039, ssa_1028.z
        vec1 32 con ssa_1095 = fneg ssa_1023
        vec1 32 div ssa_1096 = ffma ssa_1054, ssa_1022, ssa_1095
        vec1 32 div ssa_1097 = ffma ssa_1051, ssa_1021, ssa_1096
        vec1 32 div ssa_1098 = ffma ssa_1048, ssa_1020, ssa_1097
        vec1 32 div ssa_1099 = ffma ssa_1098, ssa_1020, ssa_80
        vec1 32 div ssa_1100 = ffma ssa_1098, ssa_1021, ssa_81
        vec1 32 div ssa_1101 = ffma ssa_1098, ssa_1022, ssa_82
        vec1 32 div ssa_1102 = fneg ssa_1092
        vec1 32 div ssa_1103 = fadd ssa_1099, ssa_1102
        vec1 32 div ssa_1104 = fneg ssa_1093
        vec1 32 div ssa_1105 = fadd ssa_1100, ssa_1104
        vec1 32 div ssa_1106 = fneg ssa_1094
        vec1 32 div ssa_1107 = fadd ssa_1101, ssa_1106
        vec1 32 div ssa_1108 = fmul ssa_1107, ssa_1107
        vec1 32 div ssa_1109 = ffma ssa_1105, ssa_1105, ssa_1108
        vec1 32 div ssa_1110 = ffma ssa_1103, ssa_1103, ssa_1109
        vec1 32 div ssa_1111 = frsq ssa_1110
        vec1 32 div ssa_1112 = fmul ssa_1111, ssa_1028.w
        vec1 32 div ssa_1113 = ffma ssa_1112, ssa_1103, ssa_1092
        vec1 32 div ssa_1114 = ffma ssa_1112, ssa_1105, ssa_1093
        vec1 32 div ssa_1115 = ffma ssa_1112, ssa_1107, ssa_1094
        vec1 32 div ssa_1116 = fmul ssa_1113, ssa_989
        vec1 32 div ssa_1117 = ffma ssa_1114, ssa_992, ssa_1116
        vec1 32 div ssa_1118 = ffma ssa_1115, ssa_995, ssa_1117
        div r17 = fadd ssa_1118, ssa_998
        vec1 32 div ssa_1120 = fmul ssa_1113, ssa_990
        vec1 32 div ssa_1121 = ffma ssa_1114, ssa_993, ssa_1120
        vec1 32 div ssa_1122 = ffma ssa_1115, ssa_996, ssa_1121
        div r18 = fadd ssa_1122, ssa_999
        vec1 32 div ssa_1124 = fmul ssa_1113, ssa_991
        vec1 32 div ssa_1125 = ffma ssa_1114, ssa_994, ssa_1124
        vec1 32 div ssa_1126 = ffma ssa_1115, ssa_997, ssa_1125
        div r19 = fadd ssa_1126, ssa_1000
        break
        /* succs: block_32 */
      } else {
        block block_26:
        /* preds: block_24 */
        /* succs: block_27 */
      }
      block block_27:
      /* preds: block_26 */
      /* succs: block_28 */
    }
    block block_28:
    /* preds: block_23 block_27 */
    con r25 = iadd r20, ssa_4
    vec1 32 con ssa_1129 = uge32 r25, ssa_509
    /* succs: block_29 block_30 */
    if ssa_1129 {
      block block_29:
      /* preds: block_28 */
      break
      /* succs: block_32 */
    } else {
      block block_30:
      /* preds: block_28 */
      /* succs: block_31 */
    }
    block block_31:
    /* preds: block_30 */
    vec1 32 con ssa_1130 = ishl r20, ssa_7
    vec1 32 con ssa_1131 = iadd ssa_1130, ssa_76
    vec1 32 con ssa_1132 = ior ssa_1131, ssa_4
    vec1 32 con ssa_1133 = ior ssa_1131, ssa_7
    vec1 32 con ssa_1134 = ior ssa_1131, ssa_6
    vec1 32 con ssa_1135 = ishl r20, ssa_19
    con r21 = iadd ssa_1135, ssa_74
    con r22 = ishl ssa_1132, ssa_76
    con r23 = ishl ssa_1133, ssa_76
    con r24 = ishl ssa_1134, ssa_76
    con r20 = mov r25
    /* succs: block_19 */
  }
  block block_32:
  /* preds: block_21 block_25 block_29 */
  /* succs: block_34 */
} else {
  block block_33:
  /* preds: block_17 */
  /* succs: block_34 */
}
block block_34:
/* preds: block_32 block_33 */
vec8 32 con ssa_1146 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_10) (access=80, align_mul=1073741824, align_offset=32)
vec4 32 con ssa_1147 = intrinsic load_ssbo_uniform_block_intel (ssa_46, ssa_74) (access=80, align_mul=1073741824, align_offset=64)
vec1 32 con ssa_1148 = iadd ssa_1146.d, ssa_62
vec1 32 con ssa_1149 = iadd ssa_1146.h, ssa_64
vec1 32 con ssa_1150 = iadd ssa_1147.w, ssa_66
vec1 32 con ssa_1151 = i2f32 ssa_1148
vec1 32 con ssa_1152 = i2f32 ssa_1149
vec1 32 con ssa_1153
= i2f32 ssa_1150 vec1 32 div ssa_1154 = fmul ssa_1146.a, r17 vec1 32 div ssa_1155 = ffma ssa_1146.b, r18, ssa_1154 vec1 32 div ssa_1156 = ffma ssa_1146.c, r19, ssa_1155 vec1 32 div ssa_1157 = ffma ssa_1151, ssa_3, ssa_1156 vec1 32 div ssa_1158 = fmul ssa_1146.e, r17 vec1 32 div ssa_1159 = ffma ssa_1146.f, r18, ssa_1158 vec1 32 div ssa_1160 = ffma ssa_1146.g, r19, ssa_1159 vec1 32 div ssa_1161 = ffma ssa_1152, ssa_3, ssa_1160 vec1 32 div ssa_1162 = fmul ssa_1147.x, r17 vec1 32 div ssa_1163 = ffma ssa_1147.y, r18, ssa_1162 vec1 32 div ssa_1164 = ffma ssa_1147.z, r19, ssa_1163 vec1 32 div ssa_1165 = ffma ssa_1153, ssa_3, ssa_1164 vec1 32 con ssa_1166 = load_const (0x00000100 = 0.000000) vec16 32 con ssa_1167 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_1166) (access=80, align_mul=1073741824, align_offset=256) vec1 32 div ssa_1168 = fmul ssa_1167.a, ssa_1157 vec1 32 div ssa_1169 = ffma ssa_1161, ssa_1167.b, ssa_1168 vec1 32 div ssa_1170 = ffma ssa_1165, ssa_1167.c, ssa_1169 vec1 32 div ssa_1171 = fadd ssa_1170, ssa_1167.d vec1 32 div ssa_1172 = fmul ssa_1167.e, ssa_1157 vec1 32 div ssa_1173 = ffma ssa_1161, ssa_1167.f, ssa_1172 vec1 32 div ssa_1174 = ffma ssa_1165, ssa_1167.g, ssa_1173 vec1 32 div ssa_1175 = fadd ssa_1174, ssa_1167.h vec1 32 div ssa_1176 = fmul ssa_1167.i, ssa_1157 vec1 32 div ssa_1177 = ffma ssa_1161, ssa_1167.j, ssa_1176 vec1 32 div ssa_1178 = ffma ssa_1165, ssa_1167.k, ssa_1177 vec1 32 div ssa_1179 = fadd ssa_1178, ssa_1167.l vec1 32 div ssa_1180 = fmul ssa_1167.m, ssa_1157 vec1 32 div ssa_1181 = ffma ssa_1161, ssa_1167.n, ssa_1180 vec1 32 div ssa_1182 = ffma ssa_1165, ssa_1167.o, ssa_1181 vec1 32 div ssa_1183 = fadd ssa_1182, ssa_1167.p vec1 32 div ssa_1184 = fmul ssa_483, ssa_57.x vec1 32 div ssa_1185 = ffma ssa_484, ssa_57.y, ssa_1184 vec1 32 div ssa_1186 = ffma ssa_485, ssa_57.z, ssa_1185 vec1 32 div ssa_1187 = fmul ssa_483, ssa_58.x vec1 32 div ssa_1188 = ffma ssa_484, ssa_58.y, ssa_1187 vec1 32 div ssa_1189 = ffma ssa_485, ssa_58.z, ssa_1188 vec1 32 div ssa_1190 = fmul ssa_483, ssa_59.x vec1 32 div ssa_1191 = ffma ssa_484, ssa_59.y, ssa_1190 vec1 32 div ssa_1192 = ffma ssa_485, ssa_59.z, ssa_1191 vec1 32 div ssa_1193 = fmul ssa_486, ssa_57.x vec1 32 div ssa_1194 = ffma ssa_487, ssa_57.y, ssa_1193 vec1 32 div ssa_1195 = ffma ssa_488, ssa_57.z, ssa_1194 vec1 32 div ssa_1196 = fmul ssa_486, ssa_58.x vec1 32 div ssa_1197 = ffma ssa_487, ssa_58.y, ssa_1196 vec1 32 div ssa_1198 = ffma ssa_488, ssa_58.z, ssa_1197 vec1 32 div ssa_1199 = fmul ssa_486, ssa_59.x vec1 32 div ssa_1200 = ffma ssa_487, ssa_59.y, ssa_1199 vec1 32 div ssa_1201 = ffma ssa_488, ssa_59.z, ssa_1200 vec1 32 div ssa_1202 = fmul ssa_489, ssa_57.x vec1 32 div ssa_1203 = ffma ssa_490, ssa_57.y, ssa_1202 vec1 32 div ssa_1204 = ffma ssa_491, ssa_57.z, ssa_1203 vec1 32 div ssa_1205 = fmul ssa_489, ssa_58.x vec1 32 div ssa_1206 = ffma ssa_490, ssa_58.y, ssa_1205 vec1 32 div ssa_1207 = ffma ssa_491, ssa_58.z, ssa_1206 vec1 32 div ssa_1208 = fmul ssa_489, ssa_59.x vec1 32 div ssa_1209 = ffma ssa_490, ssa_59.y, ssa_1208 vec1 32 div ssa_1210 = ffma ssa_491, ssa_59.z, ssa_1209 vec1 32 div ssa_1211 = ffma ssa_50.x, ssa_2, ssa_1 vec1 32 div ssa_1212 = ffma ssa_50.y, ssa_2, ssa_1 vec1 32 div ssa_1213 = ffma ssa_50.z, ssa_2, ssa_1 vec1 32 div ssa_1214 = ffma ssa_49.x, ssa_2, ssa_1 vec1 32 div ssa_1215 = ffma ssa_49.y, ssa_2, ssa_1 vec1 32 div ssa_1216 = ffma ssa_49.z, ssa_2, ssa_1 vec1 32 div ssa_1217 = ffma ssa_49.w, ssa_2, ssa_1 vec1 32 div ssa_1218 = fneg ssa_1213 vec1 32 div ssa_1219 = fmul ssa_1218, ssa_1215 
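The vec16 load at offset 0x100 (ssa_1167) followed by the fmul/ffma/ffma/fadd runs for ssa_1168..ssa_1183 looks like a 4x4 matrix-times-position transform with an implicit w of 1.0, one dot product per output component. A sketch of the same arithmetic under that reading; the names are illustrative only:

    /* Row-major 4x4 transform matching the ssa_1168..ssa_1183 pattern:
     * fmul, ffma, ffma, then fadd of the translation column. */
    static void mat4_transform(const float m[16], const float v[3], float out[4])
    {
        for (int row = 0; row < 4; row++) {
            const float *r = &m[4 * row];
            out[row] = r[0] * v[0] + r[1] * v[1] + r[2] * v[2] + r[3];
        }
    }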
vec1 32 div ssa_1220 = ffma ssa_1212, ssa_1216, ssa_1219 vec1 32 div ssa_1221 = fneg ssa_1211 vec1 32 div ssa_1222 = fmul ssa_1221, ssa_1216 vec1 32 div ssa_1223 = ffma ssa_1213, ssa_1214, ssa_1222 vec1 32 div ssa_1224 = fneg ssa_1212 vec1 32 div ssa_1225 = fmul ssa_1224, ssa_1214 vec1 32 div ssa_1226 = ffma ssa_1211, ssa_1215, ssa_1225 vec1 32 div ssa_1227 = fmul ssa_1220, ssa_1217 vec1 32 div ssa_1228 = fmul ssa_1223, ssa_1217 vec1 32 div ssa_1229 = fmul ssa_1226, ssa_1217 vec1 32 div ssa_1230 = fmul ssa_1186, ssa_1214 vec1 32 div ssa_1231 = ffma ssa_1215, ssa_1195, ssa_1230 vec1 32 div ssa_1232 = ffma ssa_1216, ssa_1204, ssa_1231 vec1 32 div ssa_1233 = fmul ssa_1189, ssa_1214 vec1 32 div ssa_1234 = ffma ssa_1215, ssa_1198, ssa_1233 vec1 32 div ssa_1235 = ffma ssa_1216, ssa_1207, ssa_1234 vec1 32 div ssa_1236 = fmul ssa_1192, ssa_1214 vec1 32 div ssa_1237 = ffma ssa_1215, ssa_1201, ssa_1236 vec1 32 div ssa_1238 = ffma ssa_1216, ssa_1210, ssa_1237 vec1 32 div ssa_1239 = fmul ssa_1186, ssa_1227 vec1 32 div ssa_1240 = ffma ssa_1228, ssa_1195, ssa_1239 vec1 32 div ssa_1241 = ffma ssa_1229, ssa_1204, ssa_1240 vec1 32 div ssa_1242 = fmul ssa_1189, ssa_1227 vec1 32 div ssa_1243 = ffma ssa_1228, ssa_1198, ssa_1242 vec1 32 div ssa_1244 = ffma ssa_1229, ssa_1207, ssa_1243 vec1 32 div ssa_1245 = fmul ssa_1192, ssa_1227 vec1 32 div ssa_1246 = ffma ssa_1228, ssa_1201, ssa_1245 vec1 32 div ssa_1247 = ffma ssa_1229, ssa_1210, ssa_1246 vec1 32 div ssa_1248 = fmul ssa_1186, ssa_1211 vec1 32 div ssa_1249 = ffma ssa_1212, ssa_1195, ssa_1248 vec1 32 div ssa_1250 = ffma ssa_1213, ssa_1204, ssa_1249 vec1 32 div ssa_1251 = fmul ssa_1189, ssa_1211 vec1 32 div ssa_1252 = ffma ssa_1212, ssa_1198, ssa_1251 vec1 32 div ssa_1253 = ffma ssa_1213, ssa_1207, ssa_1252 vec1 32 div ssa_1254 = fmul ssa_1192, ssa_1211 vec1 32 div ssa_1255 = ffma ssa_1212, ssa_1201, ssa_1254 vec1 32 div ssa_1256 = ffma ssa_1213, ssa_1210, ssa_1255 vec1 32 div ssa_1257 = fmul ssa_1256, ssa_1256 vec1 32 div ssa_1258 = ffma ssa_1253, ssa_1253, ssa_1257 vec1 32 div ssa_1259 = ffma ssa_1250, ssa_1250, ssa_1258 vec1 32 div ssa_1260 = frsq ssa_1259 vec1 32 div ssa_1261 = fmul ssa_1260, ssa_1250 vec1 32 div ssa_1262 = fmul ssa_1260, ssa_1253 vec1 32 div ssa_1263 = fmul ssa_1260, ssa_1256 vec1 32 div ssa_1264 = fmul ssa_1247, ssa_1247 vec1 32 div ssa_1265 = ffma ssa_1244, ssa_1244, ssa_1264 vec1 32 div ssa_1266 = ffma ssa_1241, ssa_1241, ssa_1265 vec1 32 div ssa_1267 = frsq ssa_1266 vec1 32 div ssa_1268 = fmul ssa_1267, ssa_1241 vec1 32 div ssa_1269 = fmul ssa_1267, ssa_1244 vec1 32 div ssa_1270 = fmul ssa_1267, ssa_1247 vec1 32 div ssa_1271 = fmul ssa_1238, ssa_1238 vec1 32 div ssa_1272 = ffma ssa_1235, ssa_1235, ssa_1271 vec1 32 div ssa_1273 = ffma ssa_1232, ssa_1232, ssa_1272 vec1 32 div ssa_1274 = frsq ssa_1273 vec1 32 div ssa_1275 = fmul ssa_1274, ssa_1232 vec1 32 div ssa_1276 = fmul ssa_1274, ssa_1235 vec1 32 div ssa_1277 = fmul ssa_1274, ssa_1238 vec1 32 con ssa_1278 = load_const (0x00000320 = 0.000000) vec8 32 con ssa_1279 = intrinsic load_ssbo_uniform_block_intel (ssa_40, ssa_1278) (access=80, align_mul=1073741824, align_offset=800) vec1 32 con ssa_1280 = fneg ssa_1279.e vec1 32 div ssa_1281 = ffma ssa_1280, ssa_704, ssa_692 vec1 32 con ssa_1282 = fneg ssa_1279.f vec1 32 div ssa_1283 = ffma ssa_1282, ssa_704, ssa_696 vec1 32 div ssa_1284 = ffma ssa_1279.c, ssa_686, ssa_1279.d vec1 32 div ssa_1285 = ffma ssa_1279.b, ssa_685, ssa_1284 vec1 32 div ssa_1286 = ffma ssa_1279.a, ssa_684, ssa_1285 vec4 32 div ssa_1287 = vec4 ssa_692, ssa_696, 
ssa_700, ssa_704
intrinsic store_output (ssa_1287, ssa_0) (base=0, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_POS slots=1 /*67108992*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Position */
intrinsic store_output (ssa_1286, ssa_0) (base=17, wrmask=x /*1*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_CLIP_DIST0 slots=1 /*145*/, xfb() /*0*/, xfb2() /*0*/)
vec4 32 div ssa_1288 = vec4 ssa_51.x, ssa_51.y, ssa_1261, ssa_1262
intrinsic store_output (ssa_1288, ssa_0) (base=33, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR1 slots=1 /*161*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD */
vec4 32 div ssa_1289 = vec4 ssa_1263, ssa_1268, ssa_1269, ssa_1270
intrinsic store_output (ssa_1289, ssa_0) (base=34, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR2 slots=1 /*162*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_1 */
vec3 32 div ssa_1290 = vec3 ssa_1275, ssa_1276, ssa_1277
intrinsic store_output (ssa_1290, ssa_0) (base=35, wrmask=xyz /*7*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR3 slots=1 /*163*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_2 */
vec2 32 div ssa_1291 = vec2 ssa_684, ssa_685
intrinsic store_output (ssa_1291, ssa_0) (base=36, wrmask=xy /*3*/, component=2, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR4 slots=1 /*164*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_3 */
vec4 32 div ssa_1292 = vec4 ssa_686, ssa_681, ssa_1281, ssa_1283
intrinsic store_output (ssa_1292, ssa_0) (base=37, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR5 slots=1 /*165*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_4 */
vec4 32 div ssa_1293 = vec4 ssa_700, ssa_704, ssa_1171, ssa_1175
intrinsic store_output (ssa_1293, ssa_0) (base=38, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR6 slots=1 /*166*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_5 */
vec3 32 div ssa_1294 = vec3 ssa_1179, ssa_1183, ssa_47
intrinsic store_output (ssa_1294, ssa_0) (base=39, wrmask=xyz /*7*/, component=0, src_type=float32 /*160*/, io location=VARYING_SLOT_VAR7 slots=1 /*167*/, xfb() /*0*/, xfb2() /*0*/) /* TEXCOORD_6 */
/* succs: block_35 */
block block_35:
}
VS Output VUE map (12 slots, SSO)
  [0] VARYING_SLOT_PSIZ
  [1] VARYING_SLOT_POS
  [2] VARYING_SLOT_CLIP_DIST0
  [3] VARYING_SLOT_CLIP_DIST1
  [4] BRW_VARYING_SLOT_PAD
  [5] VARYING_SLOT_VAR1
  [6] VARYING_SLOT_VAR2
  [7] VARYING_SLOT_VAR3
  [8] VARYING_SLOT_VAR4
  [9] VARYING_SLOT_VAR5
  [10] VARYING_SLOT_VAR6
  [11] VARYING_SLOT_VAR7
../../SOURCE/master/src/intel/compiler/brw_eu_compact.c:2355:50: runtime error: left shift of negative value -328
Native code for unnamed vertex shader (null) (sha1 3e9b2e3a1a46071464d3a26a3e9ecd80d10a60f5)
SIMD8 shader: 1974 instructions. 2 loops. 33225 cycles. 0:0 spills:fills, 216 sends, scheduled with mode non-lifo. Promoted 3 constants.
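The UBSan report above comes from the EU instruction compactor (whose result is the "Compacted ..." line below): left-shifting a negative signed value is undefined behaviour in C. A common defined-behaviour pattern when packing a signed field into an instruction word is to mask and shift in an unsigned type; whether upstream fixed it exactly this way is an assumption, not taken from the Mesa source:

    #include <stdint.h>

    /* Hypothetical fix shape: shift as unsigned, masked to the field width,
     * instead of shifting a negative int (here the flagged value was -328). */
    static uint32_t pack_signed_field(int32_t value, unsigned shift, unsigned bits)
    {
        uint32_t mask = bits >= 32 ? 0xffffffffu : ((1u << bits) - 1u);
        return ((uint32_t)value & mask) << shift;
    }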
Compacted 31584 to 26608 bytes (16%) START B0 (3490 cycles) add(8) g32<1>F g19<1,1,0>F g18<1,1,0>F { align1 1Q compacted }; add(8) g85<1>F g27<1,1,0>F g26<1,1,0>F { align1 1Q compacted }; mul(8) g37<1>D g14<8,8,1>D g55<16,8,2>UW { align1 1Q }; mul(8) g102<1>D g14<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g115<1>D g22<8,8,1>D g55<16,8,2>UW { align1 1Q }; mul(8) g64<1>D g22<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g98<1>D g15<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g99<1>D g23<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g100<1>D g16<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g101<1>D g24<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; mul(8) g60<1>D g17<8,8,1>D g55.1<16,8,2>UW { align1 1Q }; sel.l(8) g62<1>UD g2.2<0,1,0>UD 0x000f423fUD { align1 1Q }; add(8) g33<1>F g20<1,1,0>F g32<1,1,0>F { align1 1Q F@2 compacted }; add(8) g116<1>F g28<1,1,0>F g85<1,1,0>F { align1 1Q F@2 compacted }; add(8) g37.1<2>UW g37.1<16,8,2>UW g102<16,8,2>UW { align1 1Q I@7 }; add(8) g115.1<2>UW g115.1<16,8,2>UW g64<16,8,2>UW { align1 1Q I@7 }; mul(8) g85<1>D g23<8,8,1>D g55<16,8,2>UW { align1 1Q F@1 }; add(8) g64<1>D g2.6<0,1,0>D 2D { align1 1Q compacted }; add(8) g117<1>F g21<1,1,0>F g33<1,1,0>F { align1 1Q F@2 compacted }; add(8) g89<1>F g29<1,1,0>F g116<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g116<1>D g16<8,8,1>D g55<16,8,2>UW { align1 1Q F@1 }; add(8) g71<1>D g37<1,1,0>D g54<1,1,0>D { align1 1Q I@5 compacted }; add(8) g111<1>D g115<1,1,0>D g54<1,1,0>D { align1 1Q I@5 compacted }; add(8) g85.1<2>UW g85.1<16,8,2>UW g99<16,8,2>UW { align1 1Q I@5 }; add(8) g120<1>F g89<1,1,0>F g117<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g89<1>D g24<8,8,1>D g55<16,8,2>UW { align1 1Q F@1 }; mul(8) g117<1>D g15<8,8,1>D g55<16,8,2>UW { align1 1Q F@1 }; add(8) g116.1<2>UW g116.1<16,8,2>UW g100<16,8,2>UW { align1 1Q I@6 }; and(8) g7<1>UD g71<8,8,1>UD 0xfffffffcUD { align1 1Q I@6 }; and(8) g91<1>UD g111<8,8,1>UD 0xfffffffcUD { align1 1Q I@6 }; add(8) g110<1>D g85<1,1,0>D g54<1,1,0>D { align1 1Q I@6 compacted }; sel.ge(8) g97<1>F g120<8,8,1>F 0x3727c5acF /* 1e-05F */ { align1 1Q F@1 }; mul(8) g120<1>D g17<8,8,1>D g55<16,8,2>UW { align1 1Q F@1 }; add(8) g89.1<2>UW g89.1<16,8,2>UW g101<16,8,2>UW { align1 1Q I@7 }; add(8) g117.1<2>UW g117.1<16,8,2>UW g98<16,8,2>UW { align1 1Q I@7 }; sel.l(8) g98<1>UD g64<8,8,1>UD 0x000f423fUD { align1 1Q }; add(8) g15<1>D g7<1,1,0>D 16D { align1 1Q I@7 compacted }; add(8) g23<1>D g7<1,1,0>D 32D { align1 1Q compacted }; add(8) g8<1>D g7<1,1,0>D 4D { align1 1Q compacted }; add(8) g16<1>D g7<1,1,0>D 20D { align1 1Q compacted }; add(8) g24<1>D g7<1,1,0>D 36D { align1 1Q compacted }; add(8) g13<1>D g7<1,1,0>D 8D { align1 1Q compacted }; add(8) g14<1>D g7<1,1,0>D 12D { align1 1Q compacted }; add(8) g64<1>D g116<1,1,0>D g54<1,1,0>D { align1 1Q compacted }; add(8) g105<1>D g91<1,1,0>D 16D { align1 1Q compacted }; add(8) g76<1>D g91<1,1,0>D 32D { align1 1Q compacted }; add(8) g82<1>D g91<1,1,0>D 20D { align1 1Q compacted }; add(8) g77<1>D g91<1,1,0>D 36D { align1 1Q compacted }; add(8) g103<1>D g91<1,1,0>D 8D { align1 1Q compacted }; add(8) g109<1>D g91<1,1,0>D 24D { align1 1Q compacted }; add(8) g78<1>D g91<1,1,0>D 40D { align1 1Q compacted }; add(8) g94<1>D g91<1,1,0>D 12D { align1 1Q compacted }; add(8) g75<1>D g91<1,1,0>D 28D { align1 1Q compacted }; add(8) g59<1>D g91<1,1,0>D 44D { align1 1Q compacted }; and(8) g93<1>UD g110<8,8,1>UD 0xfffffffcUD { align1 1Q }; math inv(8) g88<1>F g97<8,8,1>F null<8,8,1>F { align1 1Q @1 $0 }; add(8) g120.1<2>UW g120.1<16,8,2>UW g60<16,8,2>UW { align1 1Q }; 
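The mul(8) D x UW pairs with matching add(8) gN.1<2>UW instructions above are the compiler's lowering of 32x32-bit integer multiplies: the EU integer multiplier only takes a 16-bit second source, so the product is built from two 32x16 partials and the high partial is folded into the upper halfword. A scalar C equivalent, as I read the listing:

    #include <stdint.h>

    /* lo = a * b[15:0], hi = a * b[31:16], result = lo + (hi << 16) mod 2^32 */
    static uint32_t mul_32x32_via_16(uint32_t a, uint32_t b)
    {
        uint32_t lo = a * (b & 0xffffu);   /* mul gD, a, b<16,8,2>UW   */
        uint32_t hi = a * (b >> 16);       /* mul gT, a, b.1<16,8,2>UW */
        return lo + (hi << 16);            /* add gD.1<2>UW, gD.1, gT  */
    }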
add(8) g83<1>D g117<1,1,0>D g54<1,1,0>D { align1 1Q compacted }; shl(8) g99<1>D g98<8,8,1>D 0x00000006UD { align1 1Q }; and(8) g67<1>UD g64<8,8,1>UD 0xfffffffcUD { align1 1Q }; add(8) g97<1>D g91<1,1,0>D 4D { align1 1Q $0.src compacted }; add(8) g4<1>D g93<1,1,0>D 32D { align1 1Q I@6 compacted }; mul(8) g32<1>F g18<1,1,0>F g88<1,1,0>F { align1 1Q $0.dst compacted }; mul(8) g33<1>F g26<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; mul(8) g79<1>F g19<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; mul(8) g87<1>F g27<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; and(8) g73<1>UD g83<8,8,1>UD 0xfffffffcUD { align1 1Q I@5 }; add(8) g26<1>D g7<1,1,0>D 40D { align1 1Q F@3 compacted }; add3(8) g111<1>D g9.7<0,1,0>D g99<8,8,1>D 128W { align1 1Q I@6 }; add(8) g27<1>D g7<1,1,0>D 44D { align1 1Q F@1 compacted }; add(8) g83<1>D g120<1,1,0>D g54<1,1,0>D { align1 1Q I@7 compacted }; add(8) g57<1>D g73<1,1,0>D 16D { align1 1Q I@5 compacted }; add(8) g101<1>D g73<1,1,0>D 32D { align1 1Q compacted }; add(8) g81<1>D g73<1,1,0>D 4D { align1 1Q compacted }; add(8) g60<1>D g73<1,1,0>D 36D { align1 1Q compacted }; add(8) g65<1>D g73<1,1,0>D 12D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g70UD g23UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g66UD g15UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g122UD g76UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g118UD g105UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g95UD g7UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g112UD g91UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g108UD g24UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g104UD g16UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g102UD g8UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso 
surface_state_index 0 { align1 1Q @1 $9 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g123UD g77UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g74UD g82UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g113UD g97UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g84UD g73UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g72UD g26UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g99UD g13UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g124UD g78UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g63UD g109UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g114UD g103UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g125UD g59UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g121UD g75UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g80UD g94UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g110UD g27UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS 
dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; and(8) g98<1>UD g83<8,8,1>UD 0xfffffffcUD { align1 1Q I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g71UD g57UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g106UD g101UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g96UD g81UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; mul(8) g97<1>D g25<8,8,1>D g55<16,8,2>UW { align1 1Q $12.src }; add(8) g26<1>D g67<1,1,0>D 16D { align1 1Q $14.src compacted }; mul(8) g59<1>F g20<1,1,0>F g88<1,1,0>F { align1 1Q $3.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g57UD g65UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; mul(8) g65<1>F g28<1,1,0>F g88<1,1,0>F { align1 1Q $10.src compacted }; mul(8) g3<1>F g70<1,1,0>F g32<1,1,0>F { align1 1Q @6 $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g70UD g60UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; mul(8) g127<1>F g66<1,1,0>F g32<1,1,0>F { align1 1Q $2.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g60UD g14UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; mul(8) g22<1>F g122<1,1,0>F g33<1,1,0>F { align1 1Q @7 $3.dst compacted }; mul(8) g66<1>D g25<8,8,1>D g55.1<16,8,2>UW { align1 1Q F@2 }; mul(8) g18<1>F g118<1,1,0>F g33<1,1,0>F { align1 1Q $4.dst compacted }; mul(8) g126<1>F g95<1,1,0>F g32<1,1,0>F { align1 1Q $5.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g122UD g4UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; mul(8) g17<1>F g112<1,1,0>F g33<1,1,0>F { align1 1Q $6.dst compacted }; mul(8) g6<1>F g108<1,1,0>F g32<1,1,0>F { align1 1Q $7.dst compacted }; add(8) g118<1>D g93<1,1,0>D 16D { align1 1Q F@4 compacted }; mul(8) g5<1>F g104<1,1,0>F g32<1,1,0>F { align1 1Q $8.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; add(8) g95<1>D g73<1,1,0>D 20D { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { 
align1 WE_all 1N $13.src }; mul(8) g4<1>F g102<1,1,0>F g32<1,1,0>F { align1 1Q $9.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; mul(8) g25<1>F g123<1,1,0>F g33<1,1,0>F { align1 1Q I@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g112UD g93UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g24<1>F g74<1,1,0>F g33<1,1,0>F { align1 1Q $11.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mul(8) g23<1>F g113<1,1,0>F g33<1,1,0>F { align1 1Q $12.dst compacted }; add(8) g90<1>F g22<1,1,0>F g3<1,1,0>F { align1 1Q F@7 compacted }; add(8) g104<1>D g73<1,1,0>D 44D { align1 1Q F@6 compacted }; add(8) g97.1<2>UW g97.1<16,8,2>UW g66<16,8,2>UW { align1 1Q I@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mul(8) g13<1>F g72<1,1,0>F g32<1,1,0>F { align1 1Q $14.dst compacted }; add(8) g107<1>F g18<1,1,0>F g127<1,1,0>F { align1 1Q compacted }; add(8) g102<1>D g73<1,1,0>D 24D { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.dst }; mul(8) g55<1>F g124<1,1,0>F g33<1,1,0>F { align1 1Q I@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g27<1>F g63<1,1,0>F g33<1,1,0>F { align1 1Q $1.dst compacted }; add(8) g82<1>F g17<1,1,0>F g126<1,1,0>F { align1 1Q $11.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; add(8) g74<1>D g93<1,1,0>D 20D { align1 1Q F@7 compacted }; add(8) g113<1>D g93<1,1,0>D 4D { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g61UD g118UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; add(8) g22<1>D g89<1,1,0>D g54<1,1,0>D { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g100UD g95UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mul(8) g16<1>F g110<1,1,0>F g32<1,1,0>F { align1 1Q $6.dst compacted }; add(8) g66<1>D g73<1,1,0>D 40D { align1 1Q compacted }; add(8) g119<1>F g25<1,1,0>F g6<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g127<1>F g71<1,1,0>F g79<1,1,0>F { align1 1Q $7.dst compacted }; mul(8) g3<1>F g106<1,1,0>F g79<1,1,0>F { align1 1Q $8.dst compacted }; add(8) g92<1>F g24<1,1,0>F g5<1,1,0>F { align1 1Q compacted }; add(8) g109<1>F g23<1,1,0>F g4<1,1,0>F { align1 1Q $1.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add(8) g17<1>D g7<1,1,0>D 24D { align1 1Q F@7 compacted }; mul(8) g126<1>F g84<1,1,0>F g79<1,1,0>F { align1 1Q $13.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g72UD g104UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; add(8) g63<1>D g97<1,1,0>D g54<1,1,0>D { align1 1Q A@7 compacted }; sync 
nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; add(8) g77<1>F g55<1,1,0>F g13<1,1,0>F { align1 1Q F@7 compacted }; add(8) g110<1>D g98<1,1,0>D 8D { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g118UD g74UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g113UD g113UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; add(8) g5<1>D g93<1,1,0>D 36D { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; and(8) g95<1>UD g22<8,8,1>UD 0xfffffffcUD { align1 1Q I@6 }; mul(8) g4<1>F g96<1,1,0>F g79<1,1,0>F { align1 1Q $9.dst compacted }; add(8) g84<1>D g73<1,1,0>D 8D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g108UD g66UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; add(8) g18<1>F g107<1,1,0>F g127<1,1,0>F { align1 1Q F@7 compacted }; add(8) g55<1>D g67<1,1,0>D 24D { align1 1Q F@3 compacted }; add(8) g19<1>F g90<1,1,0>F g3<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g68UD g17UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; add(8) g22<1>D g7<1,1,0>D 28D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; and(8) g101<1>UD g63<8,8,1>UD 0xfffffffcUD { align1 1Q I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g66UD g102UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; add(8) g127<1>D g93<1,1,0>D 28D { align1 1Q F@2 compacted }; add(8) g90<1>D g67<1,1,0>D 32D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g123UD g5UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add(8) g17<1>F g82<1,1,0>F g126<1,1,0>F { align1 1Q F@5 compacted }; add(8) g64<1>D g95<1,1,0>D 32D { align1 1Q I@7 compacted }; add(8) g91<1>D g95<1,1,0>D 20D { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.dst }; mul(8) g7<1>F g99<1,1,0>F g32<1,1,0>F { align1 1Q I@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g86UD g84UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, 
src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; add(8) g102<1>D g95<1,1,0>D 16D { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g3UD g55UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g106UD g22UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g99UD g67UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g63UD g127UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g14UD g90UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; add(8) g22<1>F g109<1,1,0>F g4<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; add(8) g109<1>D g67<1,1,0>D 36D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g54UD g102UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; add(8) g102<1>D g95<1,1,0>D 24D { align1 1Q $14.src compacted }; mul(8) g6<1>F g70<1,1,0>F g79<1,1,0>F { align1 1Q $11.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mul(8) g105<1>F g122<1,1,0>F g87<1,1,0>F { align1 1Q $13.dst compacted }; add(8) g122<1>D g93<1,1,0>D 24D { align1 1Q F@1 compacted }; add(8) g24<1>F g119<1,1,0>F g6<1,1,0>F { align1 1Q F@2 compacted }; add(8) g6<1>D g93<1,1,0>D 40D { align1 1Q F@1 compacted }; add(8) g69<1>F g19<1,1,0>F g105<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mul(8) g103<1>F g112<1,1,0>F g87<1,1,0>F { align1 1Q $14.dst compacted }; mul(8) g105<1>F g125<1,1,0>F g33<1,1,0>F { align1 1Q $3.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g74UD g122UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) 
g112UD g26UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g124UD g6UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sel.l(8) g122<1>UD g2.3<0,1,0>UD 0x000f423fUD { align1 1Q $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mul(8) g26<1>F g114<1,1,0>F g33<1,1,0>F { align1 1Q $2.dst compacted }; add(8) g114<1>D g93<1,1,0>D 8D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; add(8) g78<1>F g17<1,1,0>F g103<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g17UD g109UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; mul(8) g103<1>F g80<1,1,0>F g33<1,1,0>F { align1 1Q $5.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g109UD g64UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; add(8) g80<1>D g93<1,1,0>D 12D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mul(8) g94<1>F g61<1,1,0>F g87<1,1,0>F { align1 1Q $15.dst compacted }; shl(8) g61<1>D g62<8,8,1>D 0x00000006UD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g75<1>F g26<1,1,0>F g7<1,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g114UD g114UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g5<1>F g100<1,1,0>F g79<1,1,0>F { align1 1Q $0.dst compacted }; add(8) g100<1>D g73<1,1,0>D 28D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; add(8) g81<1>F g18<1,1,0>F g94<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g94<1>F g121<1,1,0>F g33<1,1,0>F { align1 1Q $4.dst compacted }; add(8) g23<1>F g92<1,1,0>F g5<1,1,0>F { align1 1Q F@3 compacted }; add(8) g92<1>D g67<1,1,0>D 40D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g104UD g100UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; add(8) g100<1>D g98<1,1,0>D 32D { align1 1Q $5.src compacted }; mul(8) g107<1>F g118<1,1,0>F g87<1,1,0>F { align1 1Q $2.dst compacted }; add(8) g118<1>D g2<0,1,0>D 1D { align1 1Q F@1 compacted }; mul(8) g82<1>F g113<1,1,0>F g87<1,1,0>F { align1 1Q $3.dst compacted }; add(8) g113<1>D g98<1,1,0>D 20D { align1 1Q F@1 compacted }; sync 
nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g18UD g92UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; mul(8) g13<1>F g108<1,1,0>F g79<1,1,0>F { align1 1Q $4.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mul(8) g8<1>F g68<1,1,0>F g32<1,1,0>F { align1 1Q $5.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g64UD g113UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mul(8) g90<1>F g123<1,1,0>F g87<1,1,0>F { align1 1Q $7.dst compacted }; shl(8) g123<1>D g122<8,8,1>D 0x00000006UD { align1 1Q A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add(8) g76<1>F g27<1,1,0>F g8<1,1,0>F { align1 1Q F@2 compacted }; add(8) g27<1>D g67<1,1,0>D 20D { align1 1Q F@1 compacted }; add(8) g8<1>D g67<1,1,0>D 4D { align1 1Q F@1 compacted }; mul(8) g7<1>F g86<1,1,0>F g79<1,1,0>F { align1 1Q $8.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mul(8) g15<1>F g106<1,1,0>F g32<1,1,0>F { align1 1Q $10.dst compacted }; mul(8) g83<1>F g99<1,1,0>F g59<1,1,0>F { align1 1Q $11.dst compacted }; add(8) g99<1>D g98<1,1,0>D 16D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g126UD g27UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; mul(8) g96<1>F g14<1,1,0>F g59<1,1,0>F { align1 1Q $13.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g27<1>F g77<1,1,0>F g13<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g68UD g8UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; mul(8) g14<1>F g60<1,1,0>F g32<1,1,0>F { align1 1Q $12.dst compacted }; add(8) g25<1>F g75<1,1,0>F g7<1,1,0>F { align1 1Q F@6 compacted }; add(8) g77<1>D g95<1,1,0>D 8D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mul(8) g8<1>F g66<1,1,0>F g79<1,1,0>F { align1 1Q $6.dst compacted }; add(8) g60<1>D g95<1,1,0>D 36D { align1 1Q F@3 compacted }; add(8) g7<1>D g93<1,1,0>D 44D { align1 1Q F@2 compacted }; add(8) g70<1>F g78<1,1,0>F g83<1,1,0>F { align1 1Q F@6 compacted }; add(8) g66<1>D g101<1,1,0>D 32D { align1 1Q F@2 compacted }; add(8) g62<1>F g69<1,1,0>F g96<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g93UD g99UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; add(8) g83<1>F g22<1,1,0>F g82<1,1,0>F { align1 1Q compacted }; add(8) g78<1>F g103<1,1,0>F g14<1,1,0>F { align1 1Q F@6 compacted }; 
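Each SSBO access in this listing is issued as a bindless (ex_bso) UGM load, with the surface-state offset installed in a0.1 by the preceding or(1). The recurring sel.l / shl / add3 sequences appear to compute that offset as base + min(index, 999999) * 64 + 128; the constants here (0x000f423f clamp, 64-byte surface-state stride, 128-byte bias) are read off the listing, not taken from the driver source:

    #include <stdint.h>

    /* Sketch of the offset arithmetic in the sel.l/shl/add3 triples. */
    static uint32_t surface_state_offset(uint32_t base, uint32_t index)
    {
        uint32_t clamped = index < 999999u ? index : 999999u;   /* sel.l */
        return base + (clamped << 6) + 128;                     /* shl + add3 */
    }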
add(8) g69<1>F g105<1,1,0>F g16<1,1,0>F { align1 1Q compacted }; add(8) g96<1>F g24<1,1,0>F g90<1,1,0>F { align1 1Q compacted }; add(8) g99<1>D g95<1,1,0>D 28D { align1 1Q $10.src compacted }; add(8) g26<1>F g76<1,1,0>F g8<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g103UD g91UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; mul(8) g14<1>F g54<1,1,0>F g65<1,1,0>F { align1 1Q $14.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g75UD g60UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; add(8) g16<1>D g67<1,1,0>D 8D { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g125UD g7UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; add(8) g76<1>D g95<1,1,0>D 4D { align1 1Q F@2 compacted }; add(8) g60<1>D g101<1,1,0>D 16D { align1 1Q $12.src compacted }; add(8) g7<1>D g101<1,1,0>D 36D { align1 1Q $13.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g90UD g99UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g106UD g16UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mul(8) g16<1>F g72<1,1,0>F g79<1,1,0>F { align1 1Q $1.dst compacted }; add(8) g72<1>D g98<1,1,0>D 4D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g76UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N $1.src }; send(8) g6UD g60UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mul(8) g76<1>F g63<1,1,0>F g87<1,1,0>F { align1 1Q $12.dst compacted }; add(8) g63<1>D g101<1,1,0>D 4D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; add(8) g55<1>F g69<1,1,0>F g16<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g92<1>F g74<1,1,0>F g87<1,1,0>F { align1 1Q $15.dst compacted }; sel.l(8) g74<1>UD g118<8,8,1>UD 0x000f423fUD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mul(8) 
g84<1>F g112<1,1,0>F g59<1,1,0>F { align1 1Q $0.dst compacted }; add(8) g118<1>D g98<1,1,0>D 44D { align1 1Q compacted }; mul(8) g119<1>F g124<1,1,0>F g87<1,1,0>F { align1 1Q $1.dst compacted }; add(8) g124<1>D g101<1,1,0>D 24D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mul(8) g91<1>F g17<1,1,0>F g59<1,1,0>F { align1 1Q $2.dst compacted }; add(8) g17<1>D g67<1,1,0>D 12D { align1 1Q F@1 compacted }; add(8) g112<1>F g81<1,1,0>F g84<1,1,0>F { align1 1Q F@3 compacted }; add(8) g81<1>F g94<1,1,0>F g15<1,1,0>F { align1 1Q compacted }; add(8) g84<1>F g23<1,1,0>F g107<1,1,0>F { align1 1Q compacted }; mul(8) g15<1>F g109<1,1,0>F g65<1,1,0>F { align1 1Q $3.dst compacted }; add(8) g73<1>F g27<1,1,0>F g119<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g23UD g95UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; add(8) g107<1>D g67<1,1,0>D 28D { align1 1Q F@3 compacted }; mul(8) g109<1>F g114<1,1,0>F g87<1,1,0>F { align1 1Q $4.dst compacted }; add(8) g119<1>D g67<1,1,0>D 44D { align1 1Q F@2 compacted }; add(8) g114<1>D g98<1,1,0>D 24D { align1 1Q F@1 compacted }; add(8) g105<1>F g112<1,1,0>F g14<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.dst }; mul(8) g67<1>F g3<1,1,0>F g59<1,1,0>F { align1 1Q I@2 compacted }; add(8) g112<1>D g98<1,1,0>D 12D { align1 1Q F@2 compacted }; mul(8) g14<1>F g57<1,1,0>F g79<1,1,0>F { align1 1Q $10.dst compacted }; add(8) g82<1>F g62<1,1,0>F g15<1,1,0>F { align1 1Q F@6 compacted }; mul(8) g57<1>F g29<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g4UD g107UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g62UD g80UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; mul(8) g15<1>F g104<1,1,0>F g79<1,1,0>F { align1 1Q $5.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g119UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g60UD g114UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; add3(8) g80<1>D g9.7<0,1,0>D g61<8,8,1>D 128W { align1 1Q $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; add(8) g104<1>D g95<1,1,0>D 40D { align1 1Q F@1 compacted }; add(8) g28<1>F g78<1,1,0>F g14<1,1,0>F { align1 1Q F@4 compacted }; add(8) g61<1>D g98<1,1,0>D 36D { align1 1Q compacted }; mov(8) g14<1>UD 0x00000040UD { align1 
WE_all 1Q F@1 }; add(8) g54<1>F g81<1,1,0>F g15<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g78UD g104UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g108UD g61UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; mul(8) g71<1>F g126<1,1,0>F g59<1,1,0>F { align1 1Q $8.dst compacted }; add(8) g126<1>F g96<1,1,0>F g91<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g96UD g72UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; add(8) g91<1>F g26<1,1,0>F g92<1,1,0>F { align1 1Q compacted }; mul(8) g86<1>F g68<1,1,0>F g59<1,1,0>F { align1 1Q $9.dst compacted }; add3(8) g26<1>D g9.7<0,1,0>D g123<8,8,1>D 128W { align1 1Q F@2 }; add(8) g68<1>D g95<1,1,0>D 44D { align1 1Q F@1 compacted }; add(8) g123<1>D g101<1,1,0>D 20D { align1 1Q compacted }; add(8) g122<1>F g84<1,1,0>F g71<1,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g84UD g98UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; add(8) g71<1>F g25<1,1,0>F g109<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g25UD g77UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; add(8) g3<1>F g91<1,1,0>F g67<1,1,0>F { align1 1Q F@4 compacted }; add(8) g121<1>F g83<1,1,0>F g86<1,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g91UD g112UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; add(8) g67<1>F g54<1,1,0>F g76<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g86<1>F g21<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g81UD g68UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g21UD g66UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; mul(8) g19<1>F g75<1,1,0>F g65<1,1,0>F { align1 1Q $12.dst compacted }; 
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mul(8) g77<1>F g125<1,1,0>F g87<1,1,0>F { align1 1Q $13.dst compacted }; mov(8) g125<1>UD 0x00000260UD { align1 WE_all 1Q F@1 }; add(8) g92<1>F g126<1,1,0>F g19<1,1,0>F { align1 1Q F@2 compacted }; add(8) g126<1>D g101<1,1,0>D 28D { align1 1Q F@1 compacted }; mov(8) g19<1>UD 0x00000010UD { align1 WE_all 1Q F@1 }; mul(8) g16<1>F g24<1,1,0>F g65<1,1,0>F { align1 1Q $0.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g7UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g104<1>F g6<1,1,0>F g57<1,1,0>F { align1 1Q $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add(8) g107<1>F g121<1,1,0>F g16<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g121UD g101UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; mul(8) g13<1>F g23<1,1,0>F g65<1,1,0>F { align1 1Q $2.dst compacted }; add(8) g94<1>F g70<1,1,0>F g13<1,1,0>F { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g70UD g100UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g80<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g13UD g14UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; add(8) g80<1>D g98<1,1,0>D 40D { align1 1Q $10.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g14UD g123UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mul(8) g99<1>F g4<1,1,0>F g59<1,1,0>F { align1 1Q $3.dst compacted }; mul(8) g75<1>F g62<1,1,0>F g87<1,1,0>F { align1 1Q $4.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mul(8) g100<1>F g20<1,1,0>F g59<1,1,0>F { align1 1Q $5.dst compacted }; add(8) g62<1>D g98<1,1,0>D 28D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g72UD g80UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; add(8) g7<1>F g67<1,1,0>F g99<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g23<1>F g78<1,1,0>F g65<1,1,0>F { align1 1Q $7.dst compacted }; mul(8) g88<1>F g84<1,1,0>F g86<1,1,0>F { align1 1Q $10.dst 
compacted }; add(8) g69<1>F g94<1,1,0>F g88<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g94<1>F g93<1,1,0>F g86<1,1,0>F { align1 1Q $10.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mul(8) g68<1>F g21<1,1,0>F g57<1,1,0>F { align1 1Q $14.dst compacted }; mul(8) g93<1>F g106<1,1,0>F g59<1,1,0>F { align1 1Q $15.dst compacted }; mul(8) g21<1>F g25<1,1,0>F g65<1,1,0>F { align1 1Q $11.dst compacted }; mul(8) g25<1>F g90<1,1,0>F g65<1,1,0>F { align1 1Q $14.dst compacted }; add(8) g83<1>F g105<1,1,0>F g94<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; add(8) g127<1>F g71<1,1,0>F g93<1,1,0>F { align1 1Q F@4 compacted }; add(8) g71<1>D g95<1,1,0>D 12D { align1 1Q F@1 compacted }; add(8) g93<1>F g28<1,1,0>F g75<1,1,0>F { align1 1Q compacted }; add(8) g78<1>F g7<1,1,0>F g25<1,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add(8) g119<1>F g127<1,1,0>F g21<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g28UD g71UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g71UD g110UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g110UD g118UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mul(8) g66<1>F g121<1,1,0>F g57<1,1,0>F { align1 1Q $0.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; add(8) g121<1>D g101<1,1,0>D 8D { align1 1Q F@1 compacted }; add(8) g88<1>F g69<1,1,0>F g66<1,1,0>F { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g66UD g62UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g123UD g121UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; mul(8) g105<1>F g70<1,1,0>F g86<1,1,0>F { align1 1Q $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g70UD g17UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; mul(8) g15<1>F g13<0,1,0>F g10<1,1,0>F { align1 1Q compacted }; mul(8) g16<1>F g13.1<0,1,0>F g11<1,1,0>F { align1 1Q compacted }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N $10.src }; mul(8) g17<1>F g103<1,1,0>F g65<1,1,0>F { align1 1Q $11.dst compacted }; add(8) g10<1>D g101<1,1,0>D 40D { align1 1Q F@3 compacted }; add(8) g11<1>D g101<1,1,0>D 44D { align1 1Q F@2 compacted }; add(8) g103<1>F g83<1,1,0>F g104<1,1,0>F { align1 1Q compacted }; add(8) g84<1>F g82<1,1,0>F g105<1,1,0>F { align1 1Q F@5 compacted }; mul(8) g82<1>F g96<1,1,0>F g86<1,1,0>F { align1 1Q $9.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g105UD g102UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; add(8) g27<1>F g15<1,1,0>F g13.4<0,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mul(8) g102<1>F g18<1,1,0>F g59<1,1,0>F { align1 1Q $6.dst compacted }; add(8) g109<1>F g122<1,1,0>F g17<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g26<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g18UD g19UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g29UD g10UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g122UD g63UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g17UD g124UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g54UD g11UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g19UD g126UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; add(8) g94<1>F g84<1,1,0>F g68<1,1,0>F { align1 1Q F@5 compacted }; shl(8) g63<1>D g74<8,8,1>D 0x00000006UD { align1 1Q $14.src }; add(8) g96<1>F g107<1,1,0>F g82<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mul(8) g126<1>F g103<1,1,0>F g27<1,1,0>F { align1 1Q F@5 compacted }; add(8) g5<1>F g73<1,1,0>F g102<1,1,0>F { align1 1Q F@5 compacted }; mul(8) g107<1>F g64<1,1,0>F g86<1,1,0>F { align1 1Q $7.dst compacted }; add(8) g102<1>F g55<1,1,0>F g77<1,1,0>F { align1 1Q compacted }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N $13.src }; mul(8) g10<1>F g94<1,1,0>F g27<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; add3(8) g121<1>D g9.7<0,1,0>D g63<8,8,1>D 128W { align1 1Q I@1 }; mul(8) g63<1>F g88<1,1,0>F g27<1,1,0>F { align1 1Q I@1 compacted }; add(8) g76<1>F g5<1,1,0>F g23<1,1,0>F { align1 1Q F@5 compacted }; add(8) g73<1>F g109<1,1,0>F g107<1,1,0>F { align1 1Q F@5 compacted }; mul(8) g109<1>F g108<1,1,0>F g86<1,1,0>F { align1 1Q $8.dst compacted }; add(8) g8<1>F g102<1,1,0>F g100<1,1,0>F { align1 1Q F@6 compacted }; mul(8) g108<1>F g24<1,1,0>F g57<1,1,0>F { align1 1Q $15.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g124UD g125UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; add(8) g95<1>F g92<1,1,0>F g109<1,1,0>F { align1 1Q A@3 compacted }; add(8) g107<1>F g95<1,1,0>F g108<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g24<1>F g28<1,1,0>F g65<1,1,0>F { align1 1Q $5.dst compacted }; mul(8) g28<1>F g81<1,1,0>F g65<1,1,0>F { align1 1Q $13.dst compacted }; mul(8) g92<1>F g71<1,1,0>F g86<1,1,0>F { align1 1Q $6.dst compacted }; add(8) g81<1>F g8<1,1,0>F g28<1,1,0>F { align1 1Q F@2 compacted }; add(8) g28<1>F g16<1,1,0>F g13.5<0,1,0>F { align1 1Q compacted }; add(8) g102<1>F g119<1,1,0>F g92<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g119<1>F g60<1,1,0>F g86<1,1,0>F { align1 1Q $6.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mad(8) g11<1>F g10<8,8,1>F g107<8,8,1>F g28<1,1,1>F { align1 1Q F@3 }; mul(8) g64<1>F g70<1,1,0>F g59<1,1,0>F { align1 1Q $10.dst compacted }; mul(8) g70<1>F g14<1,1,0>F g57<1,1,0>F { align1 1Q $3.dst compacted }; add(8) g6<1>F g93<1,1,0>F g64<1,1,0>F { align1 1Q F@2 compacted }; add(8) g82<1>F g73<1,1,0>F g70<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g22<1>F g105<1,1,0>F g65<1,1,0>F { align1 1Q $11.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; mov(8) g14<1>UD g18<0,1,0>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g113<1>F g29<1,1,0>F g57<1,1,0>F { align1 1Q $13.dst compacted }; mul(8) g106<1>F g122<1,1,0>F g57<1,1,0>F { align1 1Q $14.dst compacted }; add(8) g122<1>D g101<1,1,0>D 12D { align1 1Q F@1 compacted }; add(8) g77<1>F g6<1,1,0>F g24<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mul(8) g112<1>F g17<1,1,0>F g57<1,1,0>F { align1 1Q $15.dst compacted }; mad(8) g7<1>F g126<8,8,1>F g82<8,8,1>F g28<1,1,1>F { align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mul(8) g61<1>F g54<1,1,0>F g57<1,1,0>F { align1 1Q $0.dst compacted }; add(8) g75<1>F g3<1,1,0>F g22<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mul(8) g62<1>F g19<1,1,0>F g57<1,1,0>F { align1 1Q $1.dst compacted }; mul(8) g17<1>F g13.2<0,1,0>F g12<1,1,0>F { align1 1Q compacted }; add(8) g105<1>F g96<1,1,0>F g106<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g125UD g122UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss 
) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; add(8) g64<1>F g75<1,1,0>F g119<1,1,0>F { align1 1Q F@4 compacted }; mul(8) g75<1>F g72<1,1,0>F g86<1,1,0>F { align1 1Q $4.dst compacted }; sel.l(8) g119<1>UD g14<1,1,0>UD 0x00000008UD { align1 1Q A@2 compacted }; mul(8) g72<1>F g123<1,1,0>F g57<1,1,0>F { align1 1Q $9.dst compacted }; add(8) g29<1>F g17<1,1,0>F g13.6<0,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g122<1>F g63<8,8,1>F g105<8,8,1>F g28<1,1,1>F { align1 1Q F@5 }; sel.ge(8) g13<1>F g18.1<0,1,0>F g18.2<0,1,0>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; add(8) g3<1>D g53<1,1,0>D -g124.2<0,1,0>D { align1 1Q F@7 compacted }; add(8) g126<1>D g45<1,1,0>D -g124<0,1,0>D { align1 1Q compacted }; add(8) g127<1>D g49<1,1,0>D -g124.1<0,1,0>D { align1 1Q compacted }; add(8) g109<1>F g64<1,1,0>F g112<1,1,0>F { align1 1Q F@6 compacted }; add(8) g98<1>F g76<1,1,0>F g75<1,1,0>F { align1 1Q F@6 compacted }; cmp.z.f0.0(8) g15<1>D g119<1,1,0>D 0D { align1 1Q I@4 compacted }; sel.l(8) g112<1>UD g2.4<0,1,0>UD 0x000f423fUD { align1 1Q F@2 }; add(8) g90<1>F g102<1,1,0>F g72<1,1,0>F { align1 1Q F@6 compacted }; mul(8) g76<1>F g91<1,1,0>F g86<1,1,0>F { align1 1Q $12.dst compacted }; cmp.g.f0.0(8) g16<1>F g13<8,8,1>F 0x3f000000F /* 0.5F */ { align1 1Q F@5 }; mov(8) g6<1>F g3<1,1,0>D { align1 1Q I@5 compacted }; mov(8) g4<1>F g126<1,1,0>D { align1 1Q I@4 compacted }; mov(8) g5<1>F g127<1,1,0>D { align1 1Q I@3 compacted }; mad(8) g8<1>F g7<8,8,1>F g109<8,8,1>F g29<1,1,1>F { align1 1Q F@7 }; add(8) g92<1>F g98<1,1,0>F g113<1,1,0>F { align1 1Q F@7 compacted }; shl(8) g113<1>D g112<8,8,1>D 0x00000006UD { align1 1Q A@1 }; mad(8) g123<1>F g122<8,8,1>F g90<8,8,1>F g29<1,1,1>F { align1 1Q F@7 }; add(8) g99<1>F g77<1,1,0>F g76<1,1,0>F { align1 1Q F@7 compacted }; or.nz.f0.0(8) g17<1>UD g15<8,8,1>UD g16<8,8,1>UD { align1 1Q A@3 }; mul(8) g77<1>F g66<1,1,0>F g86<1,1,0>F { align1 1Q $8.dst compacted }; mul(8) g7<1>F g4<1,1,0>F 0x37000000F /* 7.62939e-06F */ { align1 1Q F@7 compacted }; mad(8) g12<1>F g11<8,8,1>F g92<8,8,1>F g29<1,1,1>F { align1 1Q F@5 }; add(8) g100<1>F g78<1,1,0>F g77<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g78<1>F g110<1,1,0>F g86<1,1,0>F { align1 1Q $7.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; add(8) g118<1>F g100<1,1,0>F g62<1,1,0>F { align1 1Q F@2 compacted }; add(8) g101<1>F g81<1,1,0>F g78<1,1,0>F { align1 1Q A@2 compacted }; add(8) g49<1>F g8<1,1,0>F g118<1,1,0>F { align1 1Q A@2 compacted }; mul(8) g8<1>F g5<1,1,0>F 0x37000000F /* 7.62939e-06F */ { align1 1Q compacted }; add(8) g74<1>F g101<1,1,0>F g61<1,1,0>F { align1 1Q F@3 compacted }; add(8) g53<1>F g12<1,1,0>F g74<1,1,0>F { align1 1Q A@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g114<1>F g125<1,1,0>F g57<1,1,0>F { align1 1Q $3.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g80<1>F g99<1,1,0>F g114<1,1,0>F { align1 1Q F@1 compacted }; add3(8) g114<1>D g9.7<0,1,0>D g113<8,8,1>D 128W { align1 1Q A@1 }; mul(8) g9<1>F g6<1,1,0>F 0x37000000F /* 7.62939e-06F */ { align1 1Q I@1 compacted }; add(8) g45<1>F g123<1,1,0>F g80<1,1,0>F { align1 1Q A@2 compacted }; (-f0.0) if(8) JIP: LABEL0 UIP: LABEL0 { align1 1Q }; END B0 ->B1 ->B12 START B1 <-B0 (10 cycles) mov(8) g77<1>UD 0x00000030UD { align1 1Q }; mov(8) g76<1>UD 0x00000020UD { align1 1Q }; mov(8) g75<1>UD 0x00000010UD { align1 1Q }; mov(8) g55<1>UD 0x00000000UD { align1 1Q }; mov(8) g54<1>UD 0x00000000UD { align1 1Q }; 
END B1 ->B2 START B3 <-B2 <-B11 (7260 cycles) LABEL7: sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; END B2 ->B3 ->B12 sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; fbl(1) g3<1>UD mask0<0,1,0>UD { align1 WE_all 1N F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; shl(1) a0<1>UD g3<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g4<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; shl(1) a0<1>UD g3<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g20<1>UD g[a0 224]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mov(1) g67<1>UD f0<0,1,0>UD { align1 WE_all 1N F@7 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g4<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g19UD g20UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $6 }; mov(1) f0<1>UD g67<0,1,0>UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.dst }; mov(8) g69<1>UD g19<0,1,0>UD { align1 1Q A@7 }; mov(8) g83<1>UD g19.1<0,1,0>UD { align1 1Q I@7 }; mov(8) g84<1>UD g19.2<0,1,0>UD { align1 1Q I@7 }; mov(8) g18<1>UD g19.3<0,1,0>UD { align1 1Q F@3 }; mov(8) g21<1>F g19<0,1,0>HF { align1 1Q F@2 }; mov(8) g23<1>F g19.2<0,1,0>HF { align1 1Q F@5 }; mov(8) g25<1>F g19.4<0,1,0>HF { align1 1Q }; mov(8) g81<1>F g19.6<0,1,0>HF { align1 1Q }; shl(1) a0<1>UD g3<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g73<1>UD g[a0 352]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g104<2>UW g69.1<16,8,2>UW { align1 1Q I@5 }; mov(8) g68<2>UW g83.1<16,8,2>UW { align1 1Q I@5 }; mov(8) g106<2>UW g84.1<16,8,2>UW { align1 1Q I@5 }; mov(8) g70<2>UW g18.1<16,8,2>UW { align1 1Q I@5 }; mov(8) g22<1>F g104<16,8,2>HF { align1 1Q A@4 }; mov(8) g24<1>F g68<16,8,2>HF { align1 1Q A@3 }; mov(8) g78<1>F g106<16,8,2>HF { align1 1Q I@2 }; mov(8) g69<1>F g70<16,8,2>HF { align1 1Q I@1 }; mov(1) g93<1>UD f0<0,1,0>UD { align1 WE_all 1N compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g4<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g91UD g73UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $7 }; mov(1) f0<1>UD g93<0,1,0>UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.dst }; mov(8) g83<1>UD g91<0,1,0>UD { align1 1Q }; mov(8) g84<1>UD g91.1<0,1,0>UD { align1 1Q }; mov(8) g96<1>UD g91.2<0,1,0>UD { align1 1Q }; mov(8) g71<1>UD g91.3<0,1,0>UD { align1 1Q }; shl(1) a0<1>UD g3<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g67<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g73<1>UD f0<0,1,0>UD { align1 WE_all 1N $7.src compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g4<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g93UD g67UD nullUD 0x2210c500 
a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $5 }; mov(1) f0<1>UD g73<0,1,0>UD { align1 WE_all 1N I@2 }; mad(8) g5<1>F g69<8,8,1>F g29<8,8,1>F g81<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; add(8) g100<1>F g93<0,1,0>F -g91<0,1,0>F { align1 1Q compacted }; add(8) g64<1>F g27<1,1,0>F -g91<0,1,0>F { align1 1Q compacted }; add(8) g101<1>F g93.1<0,1,0>F -g91.1<0,1,0>F { align1 1Q }; add(8) g98<1>F g28<1,1,0>F -g91.1<0,1,0>F { align1 1Q compacted }; add(8) g60<1>F g93.2<0,1,0>F -g91.2<0,1,0>F { align1 1Q }; add(8) g99<1>F g29<1,1,0>F -g91.2<0,1,0>F { align1 1Q compacted }; add(8) g125<1>F g91.2<0,1,0>F -g29<1,1,0>F { align1 1Q compacted }; add(8) g122<1>F g91.1<0,1,0>F -g28<1,1,0>F { align1 1Q compacted }; shl(1) a0<1>UD g3<0,1,0>UD 0x00000002UD { align1 WE_all 1N $8.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g102<1>UD g[a0 416]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; add(8) g61<1>F g91<0,1,0>F -g27<1,1,0>F { align1 1Q compacted }; mul(8) g106<1>F g100<1,1,0>F g100<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g66<1>F g64<1,1,0>F g100<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g6<1>F g5<8,8,1>F g28<8,8,1>F g78<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g4<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g95UD g102UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $8 }; mul(8) g4<1>F g91.3<0,1,0>F g91.3<0,1,0>F { align1 1Q }; mad(8) g70<1>F g106<8,8,1>F g101<8,8,1>F g101<1,1,1>F { align1 1Q F@4 }; mad(8) g104<1>F g66<8,8,1>F g101<8,8,1>F g98<1,1,1>F { align1 1Q F@4 }; mad(8) g10<1>F g6<8,8,1>F g27<8,8,1>F g25<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g108<1>F g70<8,8,1>F g60<8,8,1>F g60<1,1,1>F { align1 1Q F@3 }; mad(8) g68<1>F g104<8,8,1>F g60<8,8,1>F g99<1,1,1>F { align1 1Q F@3 }; math inv(8) g72<1>F g108<8,8,1>F null<8,8,1>F { align1 1Q @2 $9 }; mul(8) g110<1>F g68<1,1,0>F g72<1,1,0>F { align1 1Q @1 $9.dst compacted }; mul(8) g62<1>F g110<1,1,0>F g60<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g113<1>F g110<1,1,0>F g101<1,1,0>F { align1 1Q compacted }; mul(8) g112<1>F g110<1,1,0>F g100<1,1,0>F { align1 1Q compacted }; add(8) g126<1>F g125<1,1,0>F g62<1,1,0>F { align1 1Q F@3 compacted }; add(8) g123<1>F g122<1,1,0>F g113<1,1,0>F { align1 1Q F@3 compacted }; add(8) g63<1>F g61<1,1,0>F g112<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g127<1>F g126<1,1,0>F g126<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g2<1>F g127<8,8,1>F g123<8,8,1>F g123<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mad(8) g3<1>F g2<8,8,1>F g63<8,8,1>F g63<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; mov(8) g99<1>UD g95.2<0,1,0>UD { align1 1Q }; mov(8) g98<1>UD g95.1<0,1,0>UD { align1 1Q }; cmp.z.f0.0(8) g11<1>D g95.3<0,1,0>D 0D { align1 1Q compacted }; mov(8) g64<1>UD g95<0,1,0>UD { align1 1Q }; cmp.ge.f0.0(8) g12<1>F g4<1,1,0>F g3<1,1,0>F { align1 1Q F@1 compacted }; mov.nz.f0.0(8) null<1>D g11<8,8,1>D { align1 1Q I@2 }; (+f0.0) if(8) JIP: LABEL2 UIP: LABEL1 { align1 1Q }; END B3 ->B4 ->B7 START B4 <-B3 (950 cycles) mad(8) g13<1>F g24<8,8,1>F g29<8,8,1>F g23<1,1,1>F { align1 
1Q }; mad(8) g14<1>F g13<8,8,1>F g28<8,8,1>F g22<1,1,1>F { align1 1Q F@1 }; mad(8) g15<1>F g14<8,8,1>F g27<8,8,1>F g21<1,1,1>F { align1 1Q F@1 }; cmp.g.f0.0(8) g16<1>F g15<8,8,1>F 0x0F /* 0F */ { align1 1Q F@1 }; cmp.g.f0.0(8) g19<1>F g10<8,8,1>F 0x0F /* 0F */ { align1 1Q }; and(8) g18<1>UD g12<1,1,0>UD g16<1,1,0>UD { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g18<8,8,1>UD g19<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL3 UIP: LABEL3 { align1 1Q }; END B4 ->B5 ->B6 START B5 <-B4 (1380 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g20<1>F g27<1,1,0>F g21<1,1,0>F { align1 1Q compacted }; add(8) g64<1>F g64<1,1,0>F -g45<1,1,0>F { align1 1Q I@6 compacted }; add(8) g98<1>F g98<1,1,0>F -g49<1,1,0>F { align1 1Q I@7 compacted }; add(8) g99<1>F g99<1,1,0>F -g53<1,1,0>F { align1 1Q I@7 compacted }; mad(8) g25<1>F g20<8,8,1>F g22<8,8,1>F g28<1,1,1>F { align1 1Q F@4 }; mad(8) g78<1>F g25<8,8,1>F g23<8,8,1>F g29<1,1,1>F { align1 1Q F@1 }; add(8) g81<1>F g24<1,1,0>F g78<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g69<1>F g21<1,1,0>F g81<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g91<1>F g22<1,1,0>F g81<1,1,0>F { align1 1Q compacted }; mul(8) g73<1>F g23<1,1,0>F g81<1,1,0>F { align1 1Q compacted }; mul(8) g93<1>F g69<1,1,0>F g69<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g67<1>F g93<8,8,1>F g91<8,8,1>F g91<1,1,1>F { align1 1Q F@1 }; mad(8) g95<1>F g67<8,8,1>F g73<8,8,1>F g73<1,1,1>F { align1 1Q A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mul.sat(8) g102<1>F g95<8,8,1>F 0x4479ffffF /* 1000F */ { align1 1Q F@1 }; mul(8) g60<1>F g102<1,1,0>F g99<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g101<1>F g102<1,1,0>F g98<1,1,0>F { align1 1Q compacted }; mul(8) g100<1>F g102<1,1,0>F g64<1,1,0>F { align1 1Q compacted }; add(8) g53<1>F g60<1,1,0>F g53<1,1,0>F { align1 1Q F@3 compacted }; add(8) g49<1>F g101<1,1,0>F g49<1,1,0>F { align1 1Q F@3 compacted }; add(8) g45<1>F g100<1,1,0>F g45<1,1,0>F { align1 1Q F@3 compacted }; break(8) JIP: LABEL3 UIP: LABEL4 { align1 1Q }; END B5 ->B2 ->B12 ->B6 START B6 <-B5 <-B4 (160 cycles) LABEL3: endif(8) JIP: LABEL5 { align1 1Q }; LABEL5: else(8) JIP: LABEL1 UIP: LABEL1 { align1 1Q }; END B6 ->B7 ->B10 START B7 <-B3 <-B6 (400 cycles) LABEL2: cmp.ge.f0.0(8) g66<1>F g10<1,1,0>F 0x0F /* 0F */ { align1 1Q F@1 compacted }; and.nz.f0.0(8) null<1>UD g12<8,8,1>UD g66<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL6 UIP: LABEL6 { align1 1Q }; END B7 ->B8 ->B9 START B8 <-B7 (2340 cycles) add(8) g104<1>F g112<1,1,0>F g83<1,1,0>F { align1 1Q compacted }; add(8) g68<1>F g113<1,1,0>F g84<1,1,0>F { align1 1Q compacted }; add(8) g106<1>F g62<1,1,0>F g96<1,1,0>F { align1 1Q compacted }; mul(8) g70<1>F g27<1,1,0>F g21<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g108<1>F g70<8,8,1>F g22<8,8,1>F g28<1,1,1>F { align1 1Q F@1 }; mad(8) g72<1>F g108<8,8,1>F g23<8,8,1>F g29<1,1,1>F { align1 1Q F@1 }; add(8) g110<1>F g24<1,1,0>F g72<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g112<1>F -g110<1,1,0>F g21<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g113<1>F -g110<1,1,0>F g22<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g62<1>F -g110<1,1,0>F g23<1,1,0>F { align1 1Q compacted }; add(8) g61<1>F g27<1,1,0>F g112<1,1,0>F { align1 1Q F@3 compacted }; add(8) g63<1>F g28<1,1,0>F g113<1,1,0>F { align1 1Q F@3 compacted }; add(8) g122<1>F g29<1,1,0>F g62<1,1,0>F { align1 1Q F@3 compacted }; add(8) g123<1>F g61<1,1,0>F -g104<1,1,0>F { align1 1Q F@3 
compacted }; add(8) g125<1>F g63<1,1,0>F -g68<1,1,0>F { align1 1Q F@3 compacted }; add(8) g126<1>F g122<1,1,0>F -g106<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g127<1>F g123<1,1,0>F g123<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g2<1>F g127<8,8,1>F g125<8,8,1>F g125<1,1,1>F { align1 1Q F@1 }; mad(8) g3<1>F g2<8,8,1>F g126<8,8,1>F g126<1,1,1>F { align1 1Q F@1 }; math rsq(8) g4<1>F g3<8,8,1>F null<8,8,1>F { align1 1Q @1 $4 }; mul(8) g5<1>F g4<1,1,0>F g71<1,1,0>F { align1 1Q $4.dst compacted }; mul(8) g6<1>F g5<1,1,0>F g123<1,1,0>F { align1 1Q F@1 compacted }; mul(8) g10<1>F g5<1,1,0>F g125<1,1,0>F { align1 1Q compacted }; mul(8) g11<1>F g5<1,1,0>F g126<1,1,0>F { align1 1Q I@4 compacted }; add(8) g12<1>F g6<1,1,0>F g104<1,1,0>F { align1 1Q A@2 compacted }; add(8) g13<1>F g10<1,1,0>F g68<1,1,0>F { align1 1Q F@3 compacted }; add(8) g14<1>F g11<1,1,0>F g106<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g15<1>F g12<1,1,0>F g88<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g19<1>F g12<1,1,0>F g103<1,1,0>F { align1 1Q I@6 compacted }; mul(8) g22<1>F g12<1,1,0>F g94<1,1,0>F { align1 1Q compacted }; mad(8) g16<1>F g15<8,8,1>F g105<8,8,1>F g13<1,1,1>F { align1 1Q A@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mad(8) g20<1>F g19<8,8,1>F g82<8,8,1>F g13<1,1,1>F { align1 1Q F@3 }; mad(8) g23<1>F g22<8,8,1>F g107<8,8,1>F g13<1,1,1>F { align1 1Q F@3 }; mad(8) g18<1>F g16<8,8,1>F g90<8,8,1>F g14<1,1,1>F { align1 1Q A@3 }; mad(8) g21<1>F g20<8,8,1>F g109<8,8,1>F g14<1,1,1>F { align1 1Q F@3 }; mad(8) g24<1>F g23<8,8,1>F g92<8,8,1>F g14<1,1,1>F { align1 1Q F@3 }; add(8) g45<1>F g18<1,1,0>F g80<1,1,0>F { align1 1Q F@3 compacted }; add(8) g49<1>F g21<1,1,0>F g118<1,1,0>F { align1 1Q F@3 compacted }; add(8) g53<1>F g24<1,1,0>F g74<1,1,0>F { align1 1Q F@3 compacted }; break(8) JIP: LABEL6 UIP: LABEL4 { align1 1Q }; END B8 ->B2 ->B12 ->B9 START B9 <-B8 <-B7 (80 cycles) LABEL6: endif(8) JIP: LABEL1 { align1 1Q }; END B9 ->B10 START B10 <-B9 <-B6 (440 cycles) LABEL1: endif(8) JIP: LABEL4 { align1 1Q }; add(8) g78<1>D g54<1,1,0>D 1D { align1 1Q compacted }; cmp.ge.f0.0(8) null<1>UD g78<8,8,1>UD g119<8,8,1>UD { align1 1Q I@1 }; (+f0.0) break(8) JIP: LABEL4 UIP: LABEL4 { align1 1Q }; END B10 ->B2 ->B12 ->B11 ->B11 START B11 <-B10 (500 cycles) shl(8) g25<1>D g54<8,8,1>D 0x00000002UD { align1 1Q }; shl(8) g96<1>D g54<8,8,1>D 0x00000006UD { align1 1Q }; mov(8) g54<1>UD g78<8,8,1>UD { align1 1Q }; add(8) g81<1>D g25<1,1,0>D 4D { align1 1Q I@3 compacted }; add(8) g55<1>D g96<1,1,0>D 64D { align1 1Q I@3 compacted }; or(8) g69<1>UD g81<1,1,0>UD 0x00000001UD { align1 1Q A@2 compacted }; or(8) g83<1>UD g81<1,1,0>UD 0x00000002UD { align1 1Q compacted }; or(8) g84<1>UD g81<1,1,0>UD 0x00000003UD { align1 1Q compacted }; shl(8) g75<1>D g69<8,8,1>D 0x00000004UD { align1 1Q I@3 }; shl(8) g76<1>D g83<8,8,1>D 0x00000004UD { align1 1Q I@3 }; shl(8) g77<1>D g84<8,8,1>D 0x00000004UD { align1 1Q I@3 }; LABEL4: while(8) JIP: LABEL7 { align1 1Q }; END B11 ->B3 START B12 <-B2 <-B5 <-B8 <-B0 <-B10 (2903 cycles) LABEL0: endif(8) JIP: LABEL8 { align1 1Q }; LABEL8: mov.nz.f0.0(8) null<1>D g17<8,8,1>D { align1 1Q I@4 }; add(8) g2<1>D g37<1,1,0>D g56<1,1,0>D { align1 1Q A@3 compacted }; add(8) g18<1>D g115<1,1,0>D g56<1,1,0>D { align1 1Q A@3 compacted }; add(8) g68<1>D g117<1,1,0>D g56<1,1,0>D { align1 1Q compacted }; add(8) g123<1>D g85<1,1,0>D g56<1,1,0>D { align1 1Q F@1 compacted }; add(8) g99<1>D g89<1,1,0>D g56<1,1,0>D { align1 1Q F@3 compacted }; and(8) g13<1>UD g2<8,8,1>UD 0xfffffffcUD { align1 1Q A@4 }; and(8) g104<1>UD 
g18<8,8,1>UD 0xfffffffcUD { align1 1Q I@5 }; and(8) g68<1>UD g68<8,8,1>UD 0xfffffffcUD { align1 1Q I@5 }; and(8) g106<1>UD g123<8,8,1>UD 0xfffffffcUD { align1 1Q I@5 }; add(8) g21<1>D g13<1,1,0>D 16D { align1 1Q A@2 compacted }; add(8) g25<1>D g13<1,1,0>D 32D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g3UD g13UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; add(8) g16<1>D g13<1,1,0>D 4D { align1 1Q F@6 compacted }; add(8) g22<1>D g13<1,1,0>D 20D { align1 1Q F@4 compacted }; add(8) g37<1>D g13<1,1,0>D 36D { align1 1Q compacted }; add(8) g17<1>D g13<1,1,0>D 8D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; add(8) g76<1>D g104<1,1,0>D 16D { align1 1Q compacted }; add(8) g83<1>D g104<1,1,0>D 32D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g19UD g104UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; add(8) g115<1>D g104<1,1,0>D 4D { align1 1Q $11.src compacted }; add(8) g77<1>D g104<1,1,0>D 20D { align1 1Q compacted }; add(8) g84<1>D g104<1,1,0>D 36D { align1 1Q compacted }; add(8) g117<1>D g104<1,1,0>D 8D { align1 1Q compacted }; add(8) g96<1>D g104<1,1,0>D 40D { align1 1Q compacted }; add(8) g71<1>D g104<1,1,0>D 44D { align1 1Q compacted }; add(8) g95<1>D g68<1,1,0>D 16D { align1 1Q F@7 compacted }; add(8) g108<1>D g68<1,1,0>D 32D { align1 1Q $9.src compacted }; add(8) g91<1>D g68<1,1,0>D 4D { align1 1Q F@7 compacted }; add(8) g98<1>D g68<1,1,0>D 20D { align1 1Q F@5 compacted }; add(8) g72<1>D g68<1,1,0>D 36D { align1 1Q compacted }; add(8) g101<1>D g68<1,1,0>D 24D { align1 1Q F@2 compacted }; add(8) g110<1>D g68<1,1,0>D 40D { align1 1Q F@7 compacted }; add(8) g112<1>D g68<1,1,0>D 44D { align1 1Q F@6 compacted }; add(8) g80<1>D g106<1,1,0>D 16D { align1 1Q F@1 compacted }; add(8) g126<1>D g106<1,1,0>D 32D { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g125UD g106UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; add(8) g127<1>D g106<1,1,0>D 36D { align1 1Q F@4 compacted }; add(8) g2<1>D g106<1,1,0>D 40D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g10UD g21UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g14UD g25UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N 
F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g4UD g16UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g11UD g22UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g15UD g37UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g5UD g17UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g23UD g76UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g54UD g83UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g115UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g77UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g55UD g84UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g113UD g95UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g118UD g108UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g62UD g98UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS 
dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g74UD g72UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g61UD g101UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N $13.src }; send(8) g21UD g117UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g63UD g110UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g122UD g112UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; add(8) g83<1>D g116<1,1,0>D g56<1,1,0>D { align1 1Q $4.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g115UD g96UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g108UD g68UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g72UD g91UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g117UD g71UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; and(8) g70<1>UD g83<8,8,1>UD 0xfffffffcUD { align1 1Q I@1 }; mul(8) g85<1>F g3<1,1,0>F g32<1,1,0>F { align1 1Q $10.dst compacted }; add(8) g3<1>D g106<1,1,0>D 44D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; mad(8) g73<1>F g85<8,8,1>F g33<8,8,1>F g19<1,1,1>F { align1 1Q $11.dst }; mul(8) g75<1>F g10<1,1,0>F g32<1,1,0>F { align1 1Q $13.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mul(8) g76<1>F g14<1,1,0>F g32<1,1,0>F { align1 1Q $14.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 
1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g14UD g3UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g77<1>F g4<1,1,0>F g32<1,1,0>F { align1 1Q $15.dst compacted }; mul(8) g78<1>F g11<1,1,0>F g32<1,1,0>F { align1 1Q $0.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g4UD g80UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; mul(8) g81<1>F g15<1,1,0>F g32<1,1,0>F { align1 1Q $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g11UD g126UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; mul(8) g69<1>F g5<1,1,0>F g32<1,1,0>F { align1 1Q $2.dst compacted }; mad(8) g93<1>F g75<8,8,1>F g33<8,8,1>F g23<1,1,1>F { align1 1Q @6 $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g67<1>F g76<8,8,1>F g33<8,8,1>F g54<1,1,1>F { align1 1Q @6 $4.dst }; add(8) g75<1>D g104<1,1,0>D 12D { align1 1Q F@2 compacted }; add(8) g23<1>D g13<1,1,0>D 24D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g95<1>F g77<8,8,1>F g33<8,8,1>F g20<1,1,1>F { align1 1Q @6 $5.dst }; add(8) g54<1>D g13<1,1,0>D 40D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g102<1>F g78<8,8,1>F g33<8,8,1>F g24<1,1,1>F { align1 1Q @6 $6.dst }; mad(8) g64<1>F g81<8,8,1>F g33<8,8,1>F g55<1,1,1>F { align1 1Q @6 $7.dst }; add(8) g20<1>D g13<1,1,0>D 12D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; mad(8) g98<1>F g69<8,8,1>F g33<8,8,1>F g21<1,1,1>F { align1 1Q @6 $13.dst }; add(8) g78<1>D g104<1,1,0>D 24D { align1 1Q F@3 compacted }; add(8) g24<1>D g13<1,1,0>D 28D { align1 1Q F@3 compacted }; add(8) g81<1>D g97<1,1,0>D g56<1,1,0>D { align1 1Q F@2 compacted }; add(8) g55<1>D g13<1,1,0>D 44D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g22UD g75UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; mad(8) g18<1>F g93<8,8,1>F g79<8,8,1>F g113<1,1,1>F { align1 1Q @6 $8.dst }; add(8) g69<1>D g104<1,1,0>D 28D { align1 1Q F@2 compacted }; add(8) g21<1>D g120<1,1,0>D g56<1,1,0>D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g12UD g23UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; mad(8) g19<1>F g67<8,8,1>F g79<8,8,1>F g118<1,1,1>F { align1 1Q @6 $9.dst }; mad(8) g15<1>F g73<8,8,1>F g79<8,8,1>F g108<1,1,1>F { align1 1Q $1.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 
1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g16UD g54UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g6UD g20UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; add(8) g93<1>D g68<1,1,0>D 12D { align1 1Q F@3 compacted }; add(8) g113<1>D g106<1,1,0>D 4D { align1 1Q F@3 compacted }; add(8) g104<1>D g68<1,1,0>D 28D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g25UD g78UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g13UD g24UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g23<1>F g102<8,8,1>F g79<8,8,1>F g62<1,1,1>F { align1 1Q @6 $10.dst }; mul(8) g67<1>F g45<1,1,0>F g46<1,1,0>F { align1 1Q compacted }; add(8) g118<1>D g106<1,1,0>D 20D { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g17UD g55UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; add(8) g73<1>D g68<1,1,0>D 8D { align1 1Q F@3 compacted }; and(8) g108<1>UD g99<8,8,1>UD 0xfffffffcUD { align1 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; mad(8) g20<1>F g95<8,8,1>F g79<8,8,1>F g72<1,1,1>F { align1 1Q @7 $2.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g37UD g69UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mad(8) g24<1>F g64<8,8,1>F g79<8,8,1>F g74<1,1,1>F { align1 1Q @7 $11.dst }; add(8) g62<1>D g106<1,1,0>D 8D { align1 1Q F@4 compacted }; mad(8) g85<1>F g15<8,8,1>F g87<8,8,1>F g125<1,1,1>F { align1 1Q @5 $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N $15.src }; send(8) g112UD g93UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g126UD 
g113UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g80UD g104UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; and(8) g72<1>UD g21<8,8,1>UD 0xfffffffcUD { align1 1Q A@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g5UD g118UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; add(8) g15<1>D g70<1,1,0>D 12D { align1 1Q F@1 compacted }; add(8) g125<1>D g106<1,1,0>D 28D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g110UD g73UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; add(8) g69<1>D g108<1,1,0>D 16D { align1 1Q I@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; add(8) g113<1>D g72<1,1,0>D 32D { align1 1Q I@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g10UD g125UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g104UD g69UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; mad(8) g116<1>F g18<8,8,1>F g87<8,8,1>F g4<1,1,1>F { align1 1Q @7 $5.dst }; add(8) g18<1>D g70<1,1,0>D 24D { align1 1Q F@1 compacted }; add(8) g4<1>D g70<1,1,0>D 4D { align1 1Q F@1 compacted }; mad(8) g89<1>F g19<8,8,1>F g87<8,8,1>F g11<1,1,1>F { align1 1Q @7 $6.dst }; add(8) g19<1>D g70<1,1,0>D 28D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g93UD g18UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; mul(8) g83<1>F g12<1,1,0>F g32<1,1,0>F { align1 1Q $8.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g12UD g127UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g84<1>F g16<1,1,0>F g32<1,1,0>F { align1 1Q $9.dst 
compacted }; add(8) g16<1>D g70<1,1,0>D 16D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mul(8) g96<1>F g6<1,1,0>F g32<1,1,0>F { align1 1Q $10.dst compacted }; mad(8) g99<1>F g83<8,8,1>F g33<8,8,1>F g25<1,1,1>F { align1 1Q @3 $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mul(8) g71<1>F g13<1,1,0>F g32<1,1,0>F { align1 1Q $12.dst compacted }; add(8) g83<1>D g108<1,1,0>D 20D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g13UD g2UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; mad(8) g100<1>F g84<8,8,1>F g33<8,8,1>F g115<1,1,1>F { align1 1Q @4 $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g91UD g16UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g2UD g62UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mad(8) g101<1>F g96<8,8,1>F g33<8,8,1>F g22<1,1,1>F { align1 1Q @4 $7.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g84UD g70UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; add(8) g115<1>D g70<1,1,0>D 32D { align1 1Q F@2 compacted }; add(8) g62<1>D g72<1,1,0>D 36D { align1 1Q $10.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g96UD g4UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g22UD g72UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; mad(8) g60<1>F g71<8,8,1>F g33<8,8,1>F g37<1,1,1>F { align1 1Q @3 $14.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g68UD g83UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g54<1>F g99<8,8,1>F g79<8,8,1>F g61<1,1,1>F { align1 1Q @5 $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mad(8) g55<1>F g100<8,8,1>F g79<8,8,1>F g63<1,1,1>F { align1 1Q @4 $14.dst }; mad(8) g120<1>F g20<8,8,1>F g87<8,8,1>F g126<1,1,1>F { align1 1Q $0.dst }; add(8) 
g71<1>D g108<1,1,0>D 32D { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; add(8) g99<1>D g72<1,1,0>D 16D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g95UD g115UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; mad(8) g56<1>F g101<8,8,1>F g79<8,8,1>F g112<1,1,1>F { align1 1Q @5 $15.dst }; mad(8) g97<1>F g23<8,8,1>F g87<8,8,1>F g5<1,1,1>F { align1 1Q $2.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g100UD g108UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; add(8) g63<1>D g106<1,1,0>D 24D { align1 1Q F@4 compacted }; mad(8) g25<1>F g98<8,8,1>F g79<8,8,1>F g110<1,1,1>F { align1 1Q $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mad(8) g115<1>F g60<8,8,1>F g79<8,8,1>F g80<1,1,1>F { align1 1Q @7 $1.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g110UD g71UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; add(8) g80<1>D g72<1,1,0>D 44D { align1 1Q F@1 compacted }; mul(8) g71<1>F g45<1,1,0>F g42<1,1,0>F { align1 1Q $1.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g6UD g63UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; mad(8) g75<1>F g24<8,8,1>F g87<8,8,1>F g12<1,1,1>F { align1 1Q $7.dst }; add(8) g12<1>D g70<1,1,0>D 8D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mad(8) g78<1>F g55<8,8,1>F g87<8,8,1>F g13<1,1,1>F { align1 1Q @7 $8.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g55UD g62UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mad(8) g118<1>F g116<8,8,1>F g59<8,8,1>F g91<1,1,1>F { align1 1Q $9.dst }; add(8) g116<1>D g70<1,1,0>D 40D { align1 1Q F@1 compacted }; mul(8) g91<1>F g17<1,1,0>F g32<1,1,0>F { align1 1Q $13.dst compacted }; mad(8) g76<1>F g25<8,8,1>F g87<8,8,1>F g2<1,1,1>F { align1 1Q @7 $10.dst }; add(8) g17<1>D g70<1,1,0>D 20D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g32UD g99UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; mad(8) g61<1>F g85<8,8,1>F g59<8,8,1>F g84<1,1,1>F { align1 1Q $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mad(8) g63<1>F g120<8,8,1>F g59<8,8,1>F g96<1,1,1>F { align1 1Q $12.dst }; add(8) g85<1>D g70<1,1,0>D 36D { align1 1Q F@2 compacted }; sync 
nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; add(8) g84<1>D g108<1,1,0>D 24D { align1 1Q F@2 compacted }; add(8) g120<1>D g108<1,1,0>D 4D { align1 1Q F@1 compacted }; add(8) g96<1>D g108<1,1,0>D 28D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g64UD g116UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; mad(8) g66<1>F g91<8,8,1>F g33<8,8,1>F g117<1,1,1>F { align1 1Q @4 $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g73UD g17UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; mad(8) g91<1>F g71<8,8,1>F g43<8,8,1>F g49<1,1,1>F { align1 1Q F@7 }; mad(8) g74<1>F g89<8,8,1>F g59<8,8,1>F g95<1,1,1>F { align1 1Q $15.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g102UD g85UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g71UD g15UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; add(8) g89<1>D g70<1,1,0>D 44D { align1 1Q F@1 compacted }; mad(8) g95<1>F g67<8,8,1>F g47<8,8,1>F g49<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g101UD g120UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; mad(8) g5<1>F g61<8,8,1>F g65<8,8,1>F g100<1,1,1>F { align1 1Q @6 $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g70UD g96UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g67UD g19UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; mad(8) g117<1>F g66<8,8,1>F g79<8,8,1>F g122<1,1,1>F { align1 1Q @5 $15.dst }; add(8) g61<1>D g106<1,1,0>D 12D { align1 1Q F@2 compacted }; mad(8) g11<1>F g74<8,8,1>F g65<8,8,1>F g110<1,1,1>F { align1 1Q @4 $1.dst }; mov(8) g66<1>UD 0x000001a0UD { align1 WE_all 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g98UD g89UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; mad(8) g77<1>F g54<8,8,1>F 
g87<8,8,1>F g6<1,1,1>F { align1 1Q $2.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g106UD g84UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; and(8) g110<1>UD g81<8,8,1>UD 0xfffffffcUD { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g54UD g113UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; mad(8) g6<1>F g118<8,8,1>F g65<8,8,1>F g104<1,1,1>F { align1 1Q $5.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mad(8) g81<1>F g115<8,8,1>F g87<8,8,1>F g10<1,1,1>F { align1 1Q I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g3UD g61UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g69<1>F g117<8,8,1>F g87<8,8,1>F g14<1,1,1>F { align1 1Q @5 $4.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g115UD g80UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; add(8) g61<1>D g72<1,1,0>D 40D { align1 1Q $15.src compacted }; add(8) g10<1>D g110<1,1,0>D 32D { align1 1Q A@2 compacted }; add(8) g118<1>D g110<1,1,0>D 4D { align1 1Q F@3 compacted }; add(8) g74<1>D g110<1,1,0>D 8D { align1 1Q F@5 compacted }; mad(8) g126<1>F g77<8,8,1>F g59<8,8,1>F g93<1,1,1>F { align1 1Q @4 $6.dst }; add(8) g93<1>D g108<1,1,0>D 40D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g83UD g118UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g84UD g74UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g113UD g93UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mad(8) g99<1>F g6<8,8,1>F g86<8,8,1>F g32<1,1,1>F { align1 1Q $4.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mad(8) g127<1>F g78<8,8,1>F g59<8,8,1>F g64<1,1,1>F { align1 1Q $5.dst }; mad(8) g64<1>F g5<8,8,1>F 
g86<8,8,1>F g22<1,1,1>F { align1 1Q @7 $13.dst }; mad(8) g122<1>F g97<8,8,1>F g59<8,8,1>F g73<1,1,1>F { align1 1Q $6.dst }; add(8) g97<1>D g108<1,1,0>D 8D { align1 1Q F@1 compacted }; add(8) g73<1>D g108<1,1,0>D 36D { align1 1Q F@1 compacted }; mad(8) g123<1>F g75<8,8,1>F g59<8,8,1>F g102<1,1,1>F { align1 1Q $7.dst }; add(8) g102<1>D g72<1,1,0>D 4D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g60UD g97UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g112UD g73UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; mad(8) g13<1>F g122<8,8,1>F g65<8,8,1>F g68<1,1,1>F { align1 1Q @2 $14.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g73UD g10UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $6 }; add(8) g122<1>D g110<1,1,0>D 12D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g23UD g102UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; mad(8) g102<1>F g95<8,8,1>F g48<8,8,1>F g53<1,1,1>F { align1 1Q $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g4<1>F g69<8,8,1>F g59<8,8,1>F g98<1,1,1>F { align1 1Q $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g69UD g110UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; mul(8) g98<1>F g45<1,1,0>F g50<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g16<1>F g126<8,8,1>F g65<8,8,1>F g106<1,1,1>F { align1 1Q @7 $13.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g126<1>D g110<1,1,0>D 24D { align1 1Q F@1 compacted }; add(8) g106<1>D g72<1,1,0>D 24D { align1 1Q F@1 compacted }; mad(8) g100<1>F g11<8,8,1>F g86<8,8,1>F g54<1,1,1>F { align1 1Q $14.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g96UD g122UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; add(8) g11<1>D g110<1,1,0>D 36D { align1 1Q F@1 compacted }; mad(8) g79<1>F g56<8,8,1>F g87<8,8,1>F g3<1,1,1>F { align1 1Q $15.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g56UD g61UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, 
src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g87UD g12UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; mad(8) g3<1>F g81<8,8,1>F g59<8,8,1>F g67<1,1,1>F { align1 1Q $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mad(8) g12<1>F g63<8,8,1>F g65<8,8,1>F g101<1,1,1>F { align1 1Q $9.dst }; add(8) g67<1>D g108<1,1,0>D 44D { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g37UD g106UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; add(8) g101<1>D g72<1,1,0>D 20D { align1 1Q F@1 compacted }; mad(8) g2<1>F g79<8,8,1>F g59<8,8,1>F g71<1,1,1>F { align1 1Q @3 $8.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g71UD g126UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; mad(8) g19<1>F g3<8,8,1>F g65<8,8,1>F g70<1,1,1>F { align1 1Q @3 $10.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g62UD g67UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mad(8) g17<1>F g127<8,8,1>F g65<8,8,1>F g113<1,1,1>F { align1 1Q $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g67UD g11UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g33UD g101UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $0 }; add(8) g127<1>D g110<1,1,0>D 28D { align1 1Q F@1 compacted }; mad(8) g14<1>F g123<8,8,1>F g65<8,8,1>F g112<1,1,1>F { align1 1Q $5.dst }; add(8) g123<1>D g110<1,1,0>D 16D { align1 1Q F@1 compacted }; mov(8) g112<1>UD 0x000001c0UD { align1 WE_all 1Q F@1 }; mad(8) g79<1>F g100<8,8,1>F g57<8,8,1>F g73<1,1,1>F { align1 1Q @7 $6.dst }; mad(8) g73<1>F g91<8,8,1>F g44<8,8,1>F g53<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mad(8) g101<1>F g12<8,8,1>F g86<8,8,1>F g23<1,1,1>F { align1 1Q @7 $7.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g91UD g127UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 
bss ) ex_bso surface_state_index 0 { align1 1Q @1 $1 }; add(8) g12<1>D g110<1,1,0>D 40D { align1 1Q F@1 compacted }; mad(8) g77<1>F g64<8,8,1>F g57<8,8,1>F g69<1,1,1>F { align1 1Q $8.dst }; add(8) g64<1>D g72<1,1,0>D 8D { align1 1Q F@1 compacted }; mad(8) g68<1>F g14<8,8,1>F g86<8,8,1>F g55<1,1,1>F { align1 1Q @5 $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mul(8) g116<1>F g79<1,1,0>F g27<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add(8) g93<1>F g73<1,1,0>F g7<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g95UD g12UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g64UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mad(8) g125<1>F g76<8,8,1>F g59<8,8,1>F g87<1,1,1>F { align1 1Q $11.dst }; add(8) g64<1>F g102<1,1,0>F g8<1,1,0>F { align1 1Q $3.src compacted }; mul(8) g54<1>F g77<1,1,0>F g27<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g75UD g112UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $4 }; add(8) g59<1>D g108<1,1,0>D 12D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g87UD g123UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $5 }; mad(8) g70<1>F g16<8,8,1>F g86<8,8,1>F g37<1,1,1>F { align1 1Q $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g112<1>D g72<1,1,0>D 28D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; mad(8) g108<1>F g17<8,8,1>F g86<8,8,1>F g56<1,1,1>F { align1 1Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g15<1>F g125<8,8,1>F g65<8,8,1>F g60<1,1,1>F { align1 1Q @5 $4.dst }; add(8) g125<1>D g110<1,1,0>D 20D { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g60UD g66UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $6 }; mad(8) g20<1>F g4<8,8,1>F g65<8,8,1>F g62<1,1,1>F { align1 1Q $14.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g66UD g59UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $7 }; mad(8) g69<1>F g68<8,8,1>F g57<8,8,1>F g67<1,1,1>F { align1 1Q @7 $15.dst }; mad(8) g104<1>F 
g13<8,8,1>F g86<8,8,1>F g33<1,1,1>F { align1 1Q $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mad(8) g59<1>F g101<8,8,1>F g57<8,8,1>F g83<1,1,1>F { align1 1Q $1.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g45UD g112UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $8 }; add(8) g13<1>D g110<1,1,0>D 44D { align1 1Q F@2 compacted }; mad(8) g33<1>F g70<8,8,1>F g57<8,8,1>F g71<1,1,1>F { align1 1Q @7 $13.dst }; mad(8) g62<1>F g20<8,8,1>F g86<8,8,1>F g115<1,1,1>F { align1 1Q @5 $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mad(8) g89<1>F g116<8,8,1>F g69<8,8,1>F g28<1,1,1>F { align1 1Q F@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g102UD g13UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $9 }; mad(8) g55<1>F g54<8,8,1>F g59<8,8,1>F g28<1,1,1>F { align1 1Q F@4 }; mad(8) g37<1>F g108<8,8,1>F g57<8,8,1>F g95<1,1,1>F { align1 1Q @7 $2.dst }; mov(8) g108<1>UD 0x00000250UD { align1 WE_all 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mad(8) g106<1>F g15<8,8,1>F g86<8,8,1>F g24<1,1,1>F { align1 1Q @7 $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mul(8) g113<1>F g75<0,1,0>F g93<1,1,0>F { align1 1Q compacted }; mul(8) g80<1>F g75.4<0,1,0>F g93<1,1,0>F { align1 1Q $0.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mul(8) g63<1>F g76<0,1,0>F g93<1,1,0>F { align1 1Q compacted }; mad(8) g78<1>F g99<8,8,1>F g57<8,8,1>F g87<1,1,1>F { align1 1Q $5.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g70UD g108UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $10 }; mad(8) g99<1>F g98<8,8,1>F g51<8,8,1>F g49<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g120<1>F g89<8,8,1>F g37<8,8,1>F g29<1,1,1>F { align1 1Q F@7 }; mad(8) g32<1>F g106<8,8,1>F g57<8,8,1>F g84<1,1,1>F { align1 1Q @7 $2.dst }; add(8) g98<1>D g72<1,1,0>D 12D { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mad(8) g118<1>F g80<8,8,1>F g75.5<0,1,0>F g64<1,1,1>F { align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mad(8) g122<1>F g63<8,8,1>F g76.1<0,1,0>F g64<1,1,1>F { align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mad(8) g18<1>F g2<8,8,1>F g65<8,8,1>F g66<1,1,1>F { align1 1Q $7.dst }; mul(8) g115<1>F g78<1,1,0>F g27<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g65UD g125UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; mad(8) g112<1>F g19<8,8,1>F g86<8,8,1>F g45<1,1,1>F { align1 1Q $8.dst }; mul(8) g125<1>F g76.4<0,1,0>F g93<1,1,0>F { align1 1Q $11.src compacted }; mad(8) g100<1>F g99<8,8,1>F g52<8,8,1>F g53<1,1,1>F { align1 1Q F@7 }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g25UD g98UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; mad(8) g56<1>F g55<8,8,1>F g32<8,8,1>F g29<1,1,1>F { align1 1Q F@7 }; mad(8) g53<1>F g62<8,8,1>F g57<8,8,1>F g102<1,1,1>F { align1 1Q $9.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mad(8) g126<1>F g125<8,8,1>F g76.5<0,1,0>F g64<1,1,1>F { align1 1Q F@4 }; mad(8) g62<1>F g113<8,8,1>F g75.1<0,1,0>F g64<1,1,1>F { align1 1Q }; mad(8) g49<1>F g112<8,8,1>F g57<8,8,1>F g91<1,1,1>F { align1 1Q @7 $1.dst }; add(8) g101<1>F g100<1,1,0>F g9<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; mad(8) g61<1>F g62<8,8,1>F g75.2<0,1,0>F g101<1,1,1>F { align1 1Q F@1 }; mad(8) g74<1>F g118<8,8,1>F g75.6<0,1,0>F g101<1,1,1>F { align1 1Q $2.src }; mad(8) g123<1>F g122<8,8,1>F g76.2<0,1,0>F g101<1,1,1>F { align1 1Q $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mad(8) g127<1>F g126<8,8,1>F g76.6<0,1,0>F g101<1,1,1>F { align1 1Q F@7 }; add(8) g6<1>F g61<1,1,0>F g75.3<0,1,0>F { align1 1Q F@4 compacted }; add(8) g7<1>F g74<1,1,0>F g75.7<0,1,0>F { align1 1Q F@4 compacted }; add(8) g8<1>F g123<1,1,0>F g76.3<0,1,0>F { align1 1Q F@4 compacted }; add(8) g9<1>F g127<1,1,0>F g76.7<0,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; add(8) g16<1>F g70<0,1,0>F g93<1,1,0>F { align1 1Q compacted }; add(8) g17<1>F g70.1<0,1,0>F g64<1,1,0>F { align1 1Q compacted }; mad(8) g81<1>F g104<8,8,1>F g57<8,8,1>F g65<1,1,1>F { align1 1Q $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.dst }; mul(8) g104<1>F g60<0,1,0>F g93<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; mad(8) g72<1>F g18<8,8,1>F g86<8,8,1>F g25<1,1,1>F { align1 1Q I@1 }; add(8) g18<1>F g70.2<0,1,0>F g101<1,1,0>F { align1 1Q compacted }; mad(8) g68<1>F g104<8,8,1>F g60.1<0,1,0>F g64<1,1,1>F { align1 1Q F@3 }; mad(8) g117<1>F g115<8,8,1>F g81<8,8,1>F g28<1,1,1>F { align1 1Q F@5 }; mad(8) g45<1>F g72<8,8,1>F g57<8,8,1>F g96<1,1,1>F { align1 1Q @4 $9.dst }; add(8) g96<1>F g120<1,1,0>F g53<1,1,0>F { align1 1Q compacted }; mad(8) g106<1>F g68<8,8,1>F g60.2<0,1,0>F g101<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mad(8) g85<1>F g117<8,8,1>F g33<8,8,1>F g29<1,1,1>F { align1 1Q F@4 }; add(8) g83<1>F g56<1,1,0>F g45<1,1,0>F { align1 1Q F@4 compacted }; add(8) g19<1>F g106<1,1,0>F g60.3<0,1,0>F { align1 1Q F@3 compacted }; add(8) g84<1>F g85<1,1,0>F g49<1,1,0>F { align1 1Q F@3 compacted }; (-f0.0) if(8) JIP: LABEL9 UIP: LABEL9 { align1 1Q }; END B12 ->B13 ->B24 START B13 <-B12 (10 cycles) mov(8) g95<1>UD 0x00000030UD { align1 1Q }; mov(8) g67<1>UD 0x00000020UD { align1 1Q }; mov(8) g93<1>UD 0x00000010UD { align1 1Q }; mov(8) g73<1>UD 0x00000000UD { align1 1Q }; mov(8) g91<1>UD 0x00000000UD { align1 1Q }; END B13 ->B14 START B14 <-B13 <-B17 <-B20 <-B22 (0 cycles) LABEL16: fbl(1) g57<1>UD mask0<0,1,0>UD { align1 WE_all 1N F@7 compacted }; END B14 ->B15 ->B24 shl(1) a0<1>UD g57<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g71<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g57<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; 
mov(1) g76<1>UD g[a0 288]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g56<1>UD f0<0,1,0>UD { align1 WE_all 1N F@3 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g71<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g75UD g76UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $13 }; mov(1) f0<1>UD g56<0,1,0>UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.dst }; mov(8) g100<1>UD g75<0,1,0>UD { align1 1Q F@1 }; mov(8) g101<1>UD g75.1<0,1,0>UD { align1 1Q F@5 }; mov(8) g60<1>UD g75.2<0,1,0>UD { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mov(8) g97<1>UD g75.3<0,1,0>UD { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; mov(8) g102<1>F g75<0,1,0>HF { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mov(8) g98<1>F g75.2<0,1,0>HF { align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; shl(1) a0<1>UD g57<0,1,0>UD 0x00000002UD { align1 WE_all 1N F@3 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g106<1>UD g[a0 416]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; mov(8) g108<2>UW g100.1<16,8,2>UW { align1 1Q }; mov(8) g100<1>F g75.4<0,1,0>HF { align1 1Q I@1 }; mov(8) g72<2>UW g101.1<16,8,2>UW { align1 1Q A@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@5 }; mov(8) g110<2>UW g60.1<16,8,2>UW { align1 1Q }; mov(8) g111<2>UW g97.1<16,8,2>UW { align1 1Q A@5 }; mov(8) g60<1>F g75.6<0,1,0>HF { align1 1Q I@2 }; mov(8) g64<1>F g108<16,8,2>HF { align1 1Q A@4 }; mov(8) g99<1>F g72<16,8,2>HF { align1 1Q A@3 }; mov(8) g101<1>F g110<16,8,2>HF { align1 1Q I@2 }; mov(8) g66<1>F g111<16,8,2>HF { align1 1Q A@1 }; mov(1) g55<1>UD f0<0,1,0>UD { align1 WE_all 1N compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g71<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g68UD g106UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $15 }; mov(1) f0<1>UD g55<0,1,0>UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.dst }; mov(8) g86<1>UD g68<0,1,0>UD { align1 1Q }; mov(8) g87<1>UD g68.1<0,1,0>UD { align1 1Q $0.src }; mov(8) g65<1>UD g68.2<0,1,0>UD { align1 1Q }; mov(8) g104<1>UD g68.3<0,1,0>UD { align1 1Q }; shl(1) a0<1>UD g57<0,1,0>UD 0x00000002UD { align1 WE_all 1N F@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g108<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g54<1>UD f0<0,1,0>UD { align1 WE_all 1N compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g71<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g70UD g108UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 
WE_all 1N @1 $10 }; mov(1) f0<1>UD g54<0,1,0>UD { align1 WE_all 1N I@2 }; mad(8) g75<1>F g66<8,8,1>F g29<8,8,1>F g60<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; add(8) g80<1>F g70.2<0,1,0>F -g68.2<0,1,0>F { align1 1Q }; add(8) g113<1>F g29<1,1,0>F -g68.2<0,1,0>F { align1 1Q compacted }; add(8) g61<1>F g70.1<0,1,0>F -g68.1<0,1,0>F { align1 1Q }; add(8) g112<1>F g28<1,1,0>F -g68.1<0,1,0>F { align1 1Q compacted }; add(8) g62<1>F g70<0,1,0>F -g68<0,1,0>F { align1 1Q compacted }; add(8) g111<1>F g27<1,1,0>F -g68<0,1,0>F { align1 1Q compacted }; add(8) g117<1>F g68.2<0,1,0>F -g29<1,1,0>F { align1 1Q compacted }; add(8) g56<1>F g68.1<0,1,0>F -g28<1,1,0>F { align1 1Q compacted }; shl(1) a0<1>UD g57<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g110<1>UD g[a0 480]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; add(8) g54<1>F g68<0,1,0>F -g27<1,1,0>F { align1 1Q compacted }; mul(8) g97<1>F g68.3<0,1,0>F g68.3<0,1,0>F { align1 1Q }; mul(8) g122<1>F g80<1,1,0>F g80<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g118<1>F g113<1,1,0>F g80<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mad(8) g76<1>F g75<8,8,1>F g28<8,8,1>F g101<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g71<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g72UD g110UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $14 }; mad(8) g123<1>F g122<8,8,1>F g61<8,8,1>F g61<1,1,1>F { align1 1Q F@3 }; mad(8) g74<1>F g118<8,8,1>F g61<8,8,1>F g112<1,1,1>F { align1 1Q F@3 }; mad(8) g100<1>F g76<8,8,1>F g27<8,8,1>F g100<1,1,1>F { align1 1Q F@3 }; mad(8) g125<1>F g123<8,8,1>F g62<8,8,1>F g62<1,1,1>F { align1 1Q F@3 }; mad(8) g63<1>F g74<8,8,1>F g62<8,8,1>F g111<1,1,1>F { align1 1Q F@3 }; math inv(8) g126<1>F g125<8,8,1>F null<8,8,1>F { align1 1Q @2 $1 }; mul(8) g127<1>F g63<1,1,0>F g126<1,1,0>F { align1 1Q @1 $1.dst compacted }; mad(8) g85<1>F g117<8,8,1>F g80<8,8,1>F g127<1,1,1>F { align1 1Q F@1 }; mad(8) g115<1>F g56<8,8,1>F g61<8,8,1>F g127<1,1,1>F { align1 1Q }; mad(8) g55<1>F g54<8,8,1>F g62<8,8,1>F g127<1,1,1>F { align1 1Q }; mul(8) g116<1>F g85<1,1,0>F g85<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g89<1>F g116<8,8,1>F g115<8,8,1>F g115<1,1,1>F { align1 1Q F@1 }; mad(8) g120<1>F g89<8,8,1>F g55<8,8,1>F g55<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.dst }; mov(8) g125<1>UD g72.2<0,1,0>UD { align1 1Q $1.src }; mov(8) g123<1>UD g72.1<0,1,0>UD { align1 1Q F@7 }; mov(8) g122<1>UD g72<0,1,0>UD { align1 1Q }; cmp.z.f0.0(8) g101<1>D g72.3<0,1,0>D 0D { align1 1Q compacted }; cmp.ge.f0.0(8) g60<1>F g97<1,1,0>F g120<1,1,0>F { align1 1Q F@1 compacted }; mov.nz.f0.0(8) null<1>D g101<8,8,1>D { align1 1Q I@1 }; (+f0.0) if(8) JIP: LABEL11 UIP: LABEL10 { align1 1Q }; END B15 ->B16 ->B19 START B16 <-B15 (940 cycles) mad(8) g66<1>F g99<8,8,1>F g29<8,8,1>F g98<1,1,1>F { align1 1Q }; mad(8) g68<1>F g66<8,8,1>F g28<8,8,1>F g64<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mad(8) g106<1>F g68<8,8,1>F g27<8,8,1>F g102<1,1,1>F { align1 1Q F@1 }; cmp.g.f0.0(8) g70<1>F g106<8,8,1>F 0x0F /* 0F */ { align1 1Q F@1 }; cmp.g.f0.0(8) g72<1>F g100<8,8,1>F 0x0F /* 0F 
*/ { align1 1Q I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; and(8) g108<1>UD g60<1,1,0>UD g70<1,1,0>UD { align1 1Q compacted }; and.nz.f0.0(8) null<1>UD g108<8,8,1>UD g72<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL12 UIP: LABEL12 { align1 1Q }; END B16 ->B17 ->B18 START B17 <-B16 (760 cycles) mul(8) g112<1>F g98<1,1,0>F g106<1,1,0>F { align1 1Q compacted }; mul(8) g111<1>F g64<1,1,0>F g106<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mul(8) g110<1>F g102<1,1,0>F g106<1,1,0>F { align1 1Q compacted }; add(8) g122<1>F g122<1,1,0>F -g83<1,1,0>F { align1 1Q I@7 compacted }; add(8) g123<1>F g123<1,1,0>F -g84<1,1,0>F { align1 1Q I@7 compacted }; add(8) g125<1>F g125<1,1,0>F -g96<1,1,0>F { align1 1Q I@7 compacted }; mul(8) g113<1>F g112<1,1,0>F g112<1,1,0>F { align1 1Q F@6 compacted }; mad(8) g118<1>F g113<8,8,1>F g111<8,8,1>F g111<1,1,1>F { align1 1Q F@1 }; mad(8) g74<1>F g118<8,8,1>F g110<8,8,1>F g110<1,1,1>F { align1 1Q F@1 }; mul.sat(8) g63<1>F g74<8,8,1>F 0x4479ffffF /* 1000F */ { align1 1Q F@1 }; mad(8) g83<1>F g83<8,8,1>F g122<8,8,1>F g63<1,1,1>F { align1 1Q F@1 }; mad(8) g84<1>F g84<8,8,1>F g123<8,8,1>F g63<1,1,1>F { align1 1Q F@7 }; mad(8) g96<1>F g96<8,8,1>F g125<8,8,1>F g63<1,1,1>F { align1 1Q F@7 }; break(8) JIP: LABEL12 UIP: LABEL13 { align1 1Q }; END B17 ->B14 ->B24 ->B18 START B18 <-B17 <-B16 (160 cycles) LABEL12: endif(8) JIP: LABEL14 { align1 1Q }; LABEL14: else(8) JIP: LABEL10 UIP: LABEL10 { align1 1Q }; END B18 ->B19 ->B22 START B19 <-B15 <-B18 (400 cycles) LABEL11: cmp.ge.f0.0(8) g126<1>F g100<1,1,0>F 0x0F /* 0F */ { align1 1Q F@1 compacted }; and.nz.f0.0(8) null<1>UD g60<8,8,1>UD g126<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL15 UIP: LABEL15 { align1 1Q }; END B19 ->B20 ->B21 START B20 <-B19 (1940 cycles) mad(8) g55<1>F g65<8,8,1>F g80<8,8,1>F g127<1,1,1>F { align1 1Q }; mad(8) g54<1>F g87<8,8,1>F g61<8,8,1>F g127<1,1,1>F { align1 1Q }; mad(8) g56<1>F -g99<8,8,1>F g98<8,8,1>F -g29<1,1,1>F { align1 1Q F@7 }; mad(8) g127<1>F g86<8,8,1>F g62<8,8,1>F g127<1,1,1>F { align1 1Q }; mad(8) g115<1>F g56<8,8,1>F g64<8,8,1>F -g28<1,1,1>F { align1 1Q F@2 }; mad(8) g117<1>F g115<8,8,1>F g102<8,8,1>F -g27<1,1,1>F { align1 1Q F@1 }; mad(8) g89<1>F g29<8,8,1>F g98<8,8,1>F g117<1,1,1>F { align1 1Q F@1 }; mad(8) g116<1>F g28<8,8,1>F g64<8,8,1>F g117<1,1,1>F { align1 1Q }; mad(8) g85<1>F g27<8,8,1>F g102<8,8,1>F g117<1,1,1>F { align1 1Q }; add(8) g75<1>F g89<1,1,0>F -g55<1,1,0>F { align1 1Q F@3 compacted }; add(8) g97<1>F g116<1,1,0>F -g54<1,1,0>F { align1 1Q F@3 compacted }; add(8) g120<1>F g85<1,1,0>F -g127<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g76<1>F g75<1,1,0>F g75<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g86<1>F g76<8,8,1>F g97<8,8,1>F g97<1,1,1>F { align1 1Q F@1 }; mad(8) g87<1>F g86<8,8,1>F g120<8,8,1>F g120<1,1,1>F { align1 1Q F@1 }; math rsq(8) g65<1>F g87<8,8,1>F null<8,8,1>F { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; mul(8) g57<1>F g65<1,1,0>F g104<1,1,0>F { align1 1Q $0.dst compacted }; mad(8) g71<1>F g127<8,8,1>F g120<8,8,1>F g57<1,1,1>F { align1 1Q F@1 }; mad(8) g102<1>F g54<8,8,1>F g97<8,8,1>F g57<1,1,1>F { align1 1Q }; mad(8) g64<1>F g55<8,8,1>F g75<8,8,1>F g57<1,1,1>F { align1 1Q }; mul(8) g98<1>F g71<1,1,0>F g77<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g101<1>F g71<1,1,0>F g78<1,1,0>F { align1 1Q I@4 compacted }; mul(8) g104<1>F g71<1,1,0>F g79<1,1,0>F { align1 1Q compacted }; mad(8) g99<1>F g98<8,8,1>F 
g59<8,8,1>F g102<1,1,1>F { align1 1Q F@3 }; mad(8) g60<1>F g101<8,8,1>F g81<8,8,1>F g102<1,1,1>F { align1 1Q A@2 }; mad(8) g68<1>F g104<8,8,1>F g69<8,8,1>F g102<1,1,1>F { align1 1Q F@3 }; mad(8) g100<1>F g99<8,8,1>F g32<8,8,1>F g64<1,1,1>F { align1 1Q F@3 }; mad(8) g66<1>F g60<8,8,1>F g33<8,8,1>F g64<1,1,1>F { align1 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mad(8) g106<1>F g68<8,8,1>F g37<8,8,1>F g64<1,1,1>F { align1 1Q F@3 }; add(8) g83<1>F g100<1,1,0>F g45<1,1,0>F { align1 1Q F@3 compacted }; add(8) g84<1>F g66<1,1,0>F g49<1,1,0>F { align1 1Q F@3 compacted }; add(8) g96<1>F g106<1,1,0>F g53<1,1,0>F { align1 1Q F@3 compacted }; break(8) JIP: LABEL15 UIP: LABEL13 { align1 1Q }; END B20 ->B14 ->B24 ->B21 START B21 <-B20 <-B19 (80 cycles) LABEL15: endif(8) JIP: LABEL10 { align1 1Q }; END B21 ->B22 START B22 <-B21 <-B18 (440 cycles) LABEL10: endif(8) JIP: LABEL13 { align1 1Q }; add(8) g102<1>D g91<1,1,0>D 1D { align1 1Q F@3 compacted }; cmp.ge.f0.0(8) null<1>UD g102<8,8,1>UD g119<8,8,1>UD { align1 1Q I@1 }; (+f0.0) break(8) JIP: LABEL13 UIP: LABEL13 { align1 1Q }; END B22 ->B14 ->B24 ->B23 ->B23 START B23 <-B22 (540 cycles) shl(8) g70<1>D g91<8,8,1>D 0x00000002UD { align1 1Q I@7 }; shl(8) g112<1>D g91<8,8,1>D 0x00000006UD { align1 1Q F@7 }; mov(8) g91<1>UD g102<8,8,1>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; add(8) g108<1>D g70<1,1,0>D 4D { align1 1Q compacted }; add(8) g73<1>D g112<1,1,0>D 64D { align1 1Q I@3 compacted }; or(8) g72<1>UD g108<1,1,0>UD 0x00000001UD { align1 1Q I@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; or(8) g110<1>UD g108<1,1,0>UD 0x00000002UD { align1 1Q compacted }; or(8) g111<1>UD g108<1,1,0>UD 0x00000003UD { align1 1Q F@6 compacted }; shl(8) g93<1>D g72<8,8,1>D 0x00000004UD { align1 1Q I@3 }; shl(8) g67<1>D g110<8,8,1>D 0x00000004UD { align1 1Q I@3 }; shl(8) g95<1>D g111<8,8,1>D 0x00000004UD { align1 1Q I@3 }; LABEL13: while(8) JIP: LABEL16 { align1 1Q }; END B23 ->B15 START B24 <-B14 <-B17 <-B20 <-B12 <-B22 (682 cycles) LABEL9: endif(8) JIP: LABEL17 { align1 1Q }; LABEL17: mov(8) g114<1>UD 0x00000020UD { align1 WE_all 1Q }; mov(8) g61<1>UD 0x00000040UD { align1 WE_all 1Q F@7 }; mov(1) g95<1>D 922746880D { align1 WE_all 1N I@7 }; mul(8) g77<1>F g88<1,1,0>F g50<1,1,0>F { align1 1Q compacted }; mul(8) g119<1>F g88<1,1,0>F g46<1,1,0>F { align1 1Q I@6 compacted }; mul(8) g86<1>F g105<1,1,0>F g50<1,1,0>F { align1 1Q compacted }; mul(8) g59<1>F g105<1,1,0>F g42<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; mul(8) g73<1>F g90<1,1,0>F g46<1,1,0>F { align1 1Q compacted }; mul(8) g57<1>F g90<1,1,0>F g42<1,1,0>F { align1 1Q compacted }; mov(8) g2<1>UD 0x3c003c00UD { align1 1Q }; mov(8) g3<1>UD 0x00000000UD { align1 1Q }; mov(8) g4<1>UD 0x00000000UD { align1 1Q }; mov(8) g5<1>UD 0x00000000UD { align1 1Q }; mul(8) g88<1>F g88<1,1,0>F g42<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g26<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g113UD g114UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; or(1) a0.1<1>UD g26<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g62UD g61UD nullUD 0x2210c500 a0.1<0>UD ugm 
MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $3 }; mad(8) g78<1>F g77<8,8,1>F g51<8,8,1>F g103<1,1,1>F { align1 1Q F@7 }; mad(8) g75<1>F g119<8,8,1>F g47<8,8,1>F g103<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mad(8) g87<1>F g86<8,8,1>F g51<8,8,1>F g82<1,1,1>F { align1 1Q F@7 }; mad(8) g81<1>F g59<8,8,1>F g43<8,8,1>F g82<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g93<1>F g73<8,8,1>F g47<8,8,1>F g109<1,1,1>F { align1 1Q }; mad(8) g71<1>F g57<8,8,1>F g43<8,8,1>F g109<1,1,1>F { align1 1Q F@7 }; mad(8) g103<1>F g88<8,8,1>F g43<8,8,1>F g103<1,1,1>F { align1 1Q F@7 }; mad(8) g79<1>F g78<8,8,1>F g52<8,8,1>F g94<1,1,1>F { align1 1Q F@7 }; mad(8) g76<1>F g75<8,8,1>F g48<8,8,1>F g94<1,1,1>F { align1 1Q F@7 }; mad(8) g65<1>F g87<8,8,1>F g52<8,8,1>F g107<1,1,1>F { align1 1Q F@7 }; mad(8) g69<1>F g81<8,8,1>F g44<8,8,1>F g107<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g67<1>F g93<8,8,1>F g48<8,8,1>F g92<1,1,1>F { align1 1Q }; mad(8) g91<1>F g71<8,8,1>F g44<8,8,1>F g92<1,1,1>F { align1 1Q A@7 }; mad(8) g94<1>F g103<8,8,1>F g44<8,8,1>F g94<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; mul(8) g125<1>F g113<0,1,0>F g83<1,1,0>F { align1 1Q compacted }; mul(8) g27<1>F g113.4<0,1,0>F g83<1,1,0>F { align1 1Q compacted }; add(8) g80<1>D g113.3<0,1,0>D -g124<0,1,0>D { align1 1Q compacted }; add(8) g118<1>D g113.7<0,1,0>D -g124.1<0,1,0>D { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; mul(8) g33<1>F g62<0,1,0>F g83<1,1,0>F { align1 1Q compacted }; add(8) g74<1>D g62.3<0,1,0>D -g124.2<0,1,0>D { align1 1Q }; mul(8) g83<1>F g105<1,1,0>F g46<1,1,0>F { align1 1Q compacted }; mad(8) g126<1>F g125<8,8,1>F g84<8,8,1>F g113.1<0,1,0>F { align1 1Q F@4 }; mad(8) g28<1>F g27<8,8,1>F g84<8,8,1>F g113.5<0,1,0>F { align1 1Q F@4 }; mov(8) g63<1>F g80<1,1,0>D { align1 1Q I@3 compacted }; mov(8) g122<1>F g118<1,1,0>D { align1 1Q I@2 compacted }; mad(8) g37<1>F g33<8,8,1>F g84<8,8,1>F g62.1<0,1,0>F { align1 1Q F@6 }; mov(8) g123<1>F g74<1,1,0>D { align1 1Q I@1 compacted }; mad(8) g84<1>F g83<8,8,1>F g47<8,8,1>F g82<1,1,1>F { align1 1Q F@7 }; mad(8) g127<1>F g126<8,8,1>F g96<8,8,1>F g113.2<0,1,0>F { align1 1Q F@7 }; mad(8) g29<1>F g28<8,8,1>F g96<8,8,1>F g113.6<0,1,0>F { align1 1Q F@7 }; mad(8) g45<1>F g37<8,8,1>F g96<8,8,1>F g62.2<0,1,0>F { align1 1Q F@5 }; mad(8) g96<1>F g84<8,8,1>F g48<8,8,1>F g107<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; mad(8) g26<1>F g127<8,8,1>F g95.0<0,1,0>F g63<1,1,1>F { align1 1Q }; mad(8) g32<1>F g29<8,8,1>F g95.0<0,1,0>F g122<1,1,1>F { align1 1Q F@4 }; mad(8) g49<1>F g45<8,8,1>F g95.0<0,1,0>F g123<1,1,1>F { align1 1Q F@4 }; mov(1) g95.1<1>D 1065353216D { align1 WE_all 1N F@1 }; mov(1) g95.2<1>D 1073741824D { align1 WE_all 1N I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mad(8) g100<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g36<1,1,1>F { align1 1Q }; mad(8) g60<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g39<1,1,1>F { align1 1Q }; mad(8) g98<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g34<1,1,1>F { align1 1Q $12.src }; mad(8) g99<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g35<1,1,1>F { align1 1Q }; mad(8) g66<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g40<1,1,1>F { align1 1Q }; mad(8) g101<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g38<1,1,1>F { align1 1Q }; mad(8) g104<1>F -g95.1<0,1,0>F g95.2<0,1,0>F g41<1,1,1>F { align1 1Q }; 
mul(8) g95<1>F g90<1,1,0>F g50<1,1,0>F { align1 1Q compacted }; mul(8) g68<1>F -g100<1,1,0>F g60<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g50<1>F g79<1,1,0>F g98<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g41<1>F g94<1,1,0>F g98<1,1,0>F { align1 1Q compacted }; mul(8) g45<1>F g76<1,1,0>F g98<1,1,0>F { align1 1Q compacted }; mul(8) g70<1>F -g98<1,1,0>F g66<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g102<1>F g95<8,8,1>F g51<8,8,1>F g109<1,1,1>F { align1 1Q F@6 }; mul(8) g72<1>F -g99<1,1,0>F g101<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g63<1>F g79<1,1,0>F g101<1,1,0>F { align1 1Q compacted }; mul(8) g80<1>F g76<1,1,0>F g101<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mul(8) g114<1>F g94<1,1,0>F g101<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mad(8) g106<1>F g68<8,8,1>F g66<8,8,1>F g99<1,1,1>F { align1 1Q F@7 }; mad(8) g51<1>F g50<8,8,1>F g65<8,8,1>F g99<1,1,1>F { align1 1Q F@7 }; mad(8) g42<1>F g41<8,8,1>F g69<8,8,1>F g99<1,1,1>F { align1 1Q F@7 }; mad(8) g46<1>F g45<8,8,1>F g96<8,8,1>F g99<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g108<1>F g70<8,8,1>F g101<8,8,1>F g100<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g110<1>F g72<8,8,1>F g60<8,8,1>F g98<1,1,1>F { align1 1Q }; mad(8) g122<1>F g63<8,8,1>F g65<8,8,1>F g60<1,1,1>F { align1 1Q F@7 }; mad(8) g64<1>F g102<8,8,1>F g52<8,8,1>F g92<1,1,1>F { align1 1Q }; mad(8) g118<1>F g80<8,8,1>F g96<8,8,1>F g60<1,1,1>F { align1 1Q F@7 }; mad(8) g62<1>F g114<8,8,1>F g69<8,8,1>F g60<1,1,1>F { align1 1Q A@3 }; mul(8) g111<1>F g106<1,1,0>F g104<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g43<1>F g42<8,8,1>F g91<8,8,1>F g100<1,1,1>F { align1 1Q F@7 }; mad(8) g47<1>F g46<8,8,1>F g67<8,8,1>F g100<1,1,1>F { align1 1Q F@7 }; mul(8) g112<1>F g108<1,1,0>F g104<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g113<1>F g110<1,1,0>F g104<1,1,0>F { align1 1Q A@4 compacted }; mad(8) g88<1>F g51<8,8,1>F g64<8,8,1>F g100<1,1,1>F { align1 1Q F@7 }; mad(8) g123<1>F g122<8,8,1>F g64<8,8,1>F g66<1,1,1>F { align1 1Q F@7 }; mul(8) g29<1>F g79<1,1,0>F g111<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g127<1>F g76<1,1,0>F g111<1,1,0>F { align1 1Q compacted }; mul(8) g124<1>F g94<1,1,0>F g111<1,1,0>F { align1 1Q I@3 compacted }; mad(8) g74<1>F g118<8,8,1>F g67<8,8,1>F g66<1,1,1>F { align1 1Q }; mov(8) g51<1>UD 0x00000100UD { align1 WE_all 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g61<1>F g62<8,8,1>F g91<8,8,1>F g66<1,1,1>F { align1 1Q }; mov(8) g79<1>UD 0x00000320UD { align1 WE_all 1Q F@5 }; mul(8) g103<1>F g88<1,1,0>F g88<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g33<1>F g29<8,8,1>F g65<8,8,1>F g112<1,1,1>F { align1 1Q F@6 }; mad(8) g27<1>F g127<8,8,1>F g96<8,8,1>F g112<1,1,1>F { align1 1Q F@6 }; mul(8) g119<1>F g123<1,1,0>F g123<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g125<1>F g124<8,8,1>F g69<8,8,1>F g112<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g50UD g51UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g121<0,1,0>UD 0x00000000UD { align1 
WE_all 1N }; send(1) g78UD g79UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $5 }; mad(8) g94<1>F g103<8,8,1>F g47<8,8,1>F g47<1,1,1>F { align1 1Q F@5 }; mad(8) g40<1>F g33<8,8,1>F g64<8,8,1>F g113<1,1,1>F { align1 1Q F@5 }; mad(8) g75<1>F g119<8,8,1>F g74<8,8,1>F g74<1,1,1>F { align1 1Q F@4 }; mad(8) g28<1>F g27<8,8,1>F g67<8,8,1>F g113<1,1,1>F { align1 1Q F@6 }; mad(8) g126<1>F g125<8,8,1>F g91<8,8,1>F g113<1,1,1>F { align1 1Q F@5 }; mad(8) g105<1>F g94<8,8,1>F g43<8,8,1>F g43<1,1,1>F { align1 1Q F@5 }; mul(8) g107<1>F g40<1,1,0>F g40<1,1,0>F { align1 1Q F@5 compacted }; mad(8) g76<1>F g75<8,8,1>F g61<8,8,1>F g61<1,1,1>F { align1 1Q F@5 }; math rsq(8) g82<1>F g105<8,8,1>F null<8,8,1>F { align1 1Q @3 $6 }; mad(8) g90<1>F g107<8,8,1>F g28<8,8,1>F g28<1,1,1>F { align1 1Q F@2 }; math rsq(8) g77<1>F g76<8,8,1>F null<8,8,1>F { align1 1Q @2 $7 }; mul(8) g34<1>F g82<1,1,0>F g43<1,1,0>F { align1 1Q $6.dst compacted }; mul(8) g35<1>F g82<1,1,0>F g47<1,1,0>F { align1 1Q compacted }; mul(8) g36<1>F g82<1,1,0>F g88<1,1,0>F { align1 1Q compacted }; mad(8) g109<1>F g90<8,8,1>F g126<8,8,1>F g126<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g10<1>F g77<1,1,0>F g61<1,1,0>F { align1 1Q $7.dst compacted }; mul(8) g11<1>F g77<1,1,0>F g74<1,1,0>F { align1 1Q $15.src compacted }; mul(8) g12<1>F g77<1,1,0>F g123<1,1,0>F { align1 1Q $2.src compacted }; math rsq(8) g92<1>F g109<8,8,1>F null<8,8,1>F { align1 1Q @4 $8 }; mul(8) g37<1>F g92<1,1,0>F g126<1,1,0>F { align1 1Q $8.dst compacted }; mul(8) g38<1>F g92<1,1,0>F g28<1,1,0>F { align1 1Q compacted }; mul(8) g39<1>F g92<1,1,0>F g40<1,1,0>F { align1 1Q compacted }; mov(8) g126<1>UD g1<8,8,1>UD { align1 WE_all 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mul(8) g52<1>F g50<0,1,0>F g26<1,1,0>F { align1 1Q compacted }; mul(8) g55<1>F g50.4<0,1,0>F g26<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mul(8) g117<1>F g51<0,1,0>F g26<1,1,0>F { align1 1Q compacted }; mul(8) g89<1>F g51.4<0,1,0>F g26<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; mad(8) g59<1>F g78.3<0,1,0>F g18<8,8,1>F g78.2<0,1,0>F { align1 1Q }; mad(8) g20<1>F g6<8,8,1>F g9<8,8,1>F -g78.4<0,1,0>F { align1 1Q }; mad(8) g21<1>F g7<8,8,1>F g9<8,8,1>F -g78.5<0,1,0>F { align1 1Q }; mad(8) g53<1>F g52<8,8,1>F g50.1<0,1,0>F g32<1,1,1>F { align1 1Q F@7 }; mad(8) g56<1>F g55<8,8,1>F g50.5<0,1,0>F g32<1,1,1>F { align1 1Q F@7 }; mad(8) g85<1>F g117<8,8,1>F g51.1<0,1,0>F g32<1,1,1>F { align1 1Q F@7 }; mad(8) g120<1>F g89<8,8,1>F g51.5<0,1,0>F g32<1,1,1>F { align1 1Q F@7 }; mad(8) g81<1>F g59<8,8,1>F g17<8,8,1>F g78.1<0,1,0>F { align1 1Q F@7 }; mad(8) g54<1>F g53<8,8,1>F g50.2<0,1,0>F g49<1,1,1>F { align1 1Q F@5 }; mad(8) g115<1>F g56<8,8,1>F g50.6<0,1,0>F g49<1,1,1>F { align1 1Q F@5 }; mad(8) g116<1>F g85<8,8,1>F g51.2<0,1,0>F g49<1,1,1>F { align1 1Q F@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mad(8) g97<1>F g120<8,8,1>F g51.6<0,1,0>F g49<1,1,1>F { align1 1Q F@5 }; mad(8) g111<1>F g81<8,8,1>F g16<8,8,1>F g78.0<0,1,0>F { align1 1Q F@5 }; add(8) g24<1>F g54<1,1,0>F g50.3<0,1,0>F { align1 1Q F@5 compacted }; add(8) g25<1>F g115<1,1,0>F g50.7<0,1,0>F { align1 1Q F@5 compacted }; add(8) g122<1>F g116<1,1,0>F g51.3<0,1,0>F { align1 1Q F@5 compacted }; add(8) g123<1>F g97<1,1,0>F g51.7<0,1,0>F { align1 1Q F@5 compacted }; 
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 };
send(8) nullUD g1UD g2UD 0x02080007 0x00000200 urb MsgDesc: offset 0 SIMD8 write mlen 1 ex_mlen 8 rlen 0 { align1 1Q $9 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 };
send(8) nullUD g1UD g111UD 0x02080027 0x00000100 urb MsgDesc: offset 2 SIMD8 write mlen 1 ex_mlen 4 rlen 0 { align1 1Q $10 };
mov(8) g32<1>F g30<1,1,0>F { align1 1Q compacted };
mov(8) g33<1>F g31<1,1,0>F { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
send(8) nullUD g1UD g32UD 0x02080057 0x00000200 urb MsgDesc: offset 5 SIMD8 write mlen 1 ex_mlen 8 rlen 0 { align1 1Q $11 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src };
send(8) nullUD g1UD g10UD 0x02080077 0x00000200 urb MsgDesc: offset 7 SIMD8 write mlen 1 ex_mlen 8 rlen 0 { align1 1Q $12 };
mov(8) g22<1>F g8<1,1,0>F { align1 1Q $9.src compacted };
mov(8) g23<1>F g9<1,1,0>F { align1 1Q $9.src compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
send(8) nullUD g1UD g18UD 0x02080097 0x00000200 urb MsgDesc: offset 9 SIMD8 write mlen 1 ex_mlen 8 rlen 0 { align1 1Q $13 };
mov(8) g124<1>F g58<1,1,0>F { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 };
send(8) nullUD g126UD g122UD 0x020800b7 0x00000100 urb MsgDesc: offset 11 SIMD8 write mlen 1 ex_mlen 4 rlen 0 { align1 1Q A@1 EOT };
END B24

NIR (SSA form) for fragment shader:
shader: MESA_SHADER_FRAGMENT
source_sha1: {0x54969a6b, 0xf3bf0d03, 0x19f918a7, 0x34f0be6e, 0x8aacdd2a}
stage: 4
next_stage: 0
num_ssbos: 5
inputs_read: 33-39
outputs_written: 4-7
system_values_read: 0x00000000'00000000'02480000
subgroup_size: 2
uses_wide_subgroup_intrinsics: true
uses_texture_gather: true
divergence_analysis_run: true
bit_sizes_float: 0x20
bit_sizes_int: 0x21
separate_shader: true
uses_discard: true
uses_demote: true
needs_quad_helper_invocations: true
needs_all_helper_invocations: true
origin_upper_left: true
inputs: 0
outputs: 0
uniforms: 256
decl_var push_const INTERP_MODE_NONE RootConstants registers
decl_var uniform INTERP_MODE_NONE restrict texture2D[] @0 (~0, 0, 2)
decl_var uniform INTERP_MODE_NONE restrict texture2DArray[] @1 (~0, 0, 2)
decl_var uniform INTERP_MODE_NONE restrict texture3D[] @2 (~0, 0, 2)
decl_var uniform INTERP_MODE_NONE restrict utexture2D[] @3 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @4 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @5 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @6 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @7 (~0, 0, 2)
decl_var ssbo INTERP_MODE_NONE restrict readonly BindlessCBV[] @8 (~0, 0, 2)
decl_var uniform INTERP_MODE_NONE restrict sampler[] @9 (~0, 0, 0)
decl_var shader_out INTERP_MODE_NONE vec4 SV_Target (FRAG_RESULT_DATA0.xyzw, 8, 0)
decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_1 (FRAG_RESULT_DATA1.xyzw, 10, 0)
decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_2 (FRAG_RESULT_DATA2.xyzw, 12, 0)
decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_3 (FRAG_RESULT_DATA3.xyzw, 14, 0)
decl_var INTERP_MODE_NONE float TEXCOORD_6
decl_var INTERP_MODE_NONE float TEXCOORD_6@10
decl_var INTERP_MODE_NONE float TEXCOORD_6@11
decl_var INTERP_MODE_NONE float TEXCOORD_5
decl_var INTERP_MODE_NONE float TEXCOORD_5@12
decl_var INTERP_MODE_NONE float TEXCOORD_5@13
decl_var INTERP_MODE_NONE float TEXCOORD_5@14
decl_var INTERP_MODE_NONE float TEXCOORD_4
decl_var INTERP_MODE_NONE float TEXCOORD_4@15
decl_var INTERP_MODE_NONE float TEXCOORD_4@16
decl_var INTERP_MODE_NONE float TEXCOORD_4@17
decl_var INTERP_MODE_NONE float TEXCOORD_3
decl_var INTERP_MODE_NONE float TEXCOORD_3@18
decl_var INTERP_MODE_NONE float TEXCOORD_2
decl_var INTERP_MODE_NONE float TEXCOORD_2@19
decl_var INTERP_MODE_NONE float TEXCOORD_2@20
decl_var INTERP_MODE_NONE float TEXCOORD_1
decl_var INTERP_MODE_NONE float TEXCOORD_1@21
decl_var INTERP_MODE_NONE float TEXCOORD_1@22
decl_var INTERP_MODE_NONE float TEXCOORD_1@23
decl_var INTERP_MODE_NONE float TEXCOORD
decl_var INTERP_MODE_NONE float TEXCOORD@24
decl_var INTERP_MODE_NONE float TEXCOORD@25
decl_var INTERP_MODE_NONE float TEXCOORD@26
decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD@27 (VARYING_SLOT_VAR1.xyzw, 33, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_1@28 (VARYING_SLOT_VAR2.xyzw, 34, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec3 TEXCOORD_2@29 (VARYING_SLOT_VAR3.xyz, 35, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec2 TEXCOORD_3@30 (VARYING_SLOT_VAR4.zw, 36, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_4@31 (VARYING_SLOT_VAR5.xyzw, 37, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_5@32 (VARYING_SLOT_VAR6.xyzw, 38, 0)
decl_var shader_in INTERP_MODE_SMOOTH vec3 TEXCOORD_6@33 (VARYING_SLOT_VAR7.xyz, 39, 0)
decl_function main (0 params)

impl main {
block block_0:
/* preds: */
vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000)
vec1 32 con ssa_1 = load_const (0x3f800000 = 1.000000)
vec1 32 con ssa_2 = load_const (0xffffffff = -nan)
vec1 32 con ssa_3 = load_const (0x3eaaaaab = 0.333333)
vec1 32 con ssa_4 = load_const (0x447a0000 = 1000.000000)
vec1 32 con ssa_5 = load_const (0xbf000000 = -0.500000)
vec1 32 con ssa_6 = load_const (0x3f000000 = 0.500000)
vec1 32 con ssa_7 = load_const (0x3b808081 = 0.003922)
vec1 32 con ssa_8 = load_const (0x427c0000 = 63.000000)
vec1 32 con ssa_9 = load_const (0x40000000 = 2.000000)
vec1 32 con ssa_10 = load_const (0xbf800000 = -1.000000)
vec1 32 con ssa_11 = load_const (0x3e800000 = 0.250000)
vec1 32 con ssa_12 = load_const (0x3dcccccd = 0.100000)
vec1 32 con ssa_13 = load_const (0x3a83126f = 0.001000)
vec1 32 con ssa_14 = load_const (0x3c23d70a = 0.010000)
vec1 32 con ssa_15 = load_const (0x00000004 = 0.000000)
vec1 32 con ssa_16 = load_const (0x00000001 = 0.000000)
vec1 32 con ssa_17 = load_const (0x00000002 = 0.000000)
vec1 32 con ssa_18 = load_const (0x41200000 = 10.000000)
vec1 32 con ssa_19 = load_const (0x37a7c5ac = 0.000020)
vec1 32 con ssa_20 = load_const (0xffffff7e = -nan)
vec1 32 con ssa_21 = load_const (0x00000082 = 0.000000)
vec1 32 con ssa_22 = load_const (0x00000010 = 0.000000)
vec1 32 con ssa_23 = load_const (0x0000000a = 0.000000)
vec1 32 con ssa_24 = load_const (0x0000000f = 0.000000)
vec1 32 con ssa_25 = load_const (0x0000000e = 0.000000)
vec1 32 con ssa_26 = load_const (0x00000007 = 0.000000)
vec1 32 con ssa_27 = load_const (0x00000006 = 0.000000)
vec1 32 con ssa_28 = load_const (0x00000005 = 0.000000)
vec1 32 con ssa_29 = load_const (0x00000003 = 0.000000)
vec1 32 con ssa_30 = load_const (0xffffffff = -nan)
vec1 32 con ssa_31 = load_const (0x0000001f = 0.000000)
vec1 32 con ssa_32 = load_const (0x00000014 = 0.000000)
vec1 32 con ssa_33 = load_const (0x000003ff = 0.000000)
vec1 32 con ssa_34 = load_const (0x0000003f = 0.000000)
vec1 32 con ssa_35 = load_const (0x00000048 = 0.000000)
vec1 32 con ssa_36 = intrinsic load_uniform (ssa_35) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_37 = iadd ssa_36, ssa_29
vec1 32 con ssa_38 = load_const (0x000f423f = 0.000000)
vec1 32 con ssa_39 = umin ssa_37, ssa_38
vec1 32 con ssa_40 = intrinsic load_uniform (ssa_0) (base=252, range=4, dest_type=uint /*4*/)
vec1 32 con ssa_41 = ishl ssa_39, ssa_27
vec1 32 con ssa_42 = load_const (0x00000080 = 0.000000)
vec1 32 con ssa_43 = iadd3 ssa_42, ssa_41, ssa_40
vec1 32 con ssa_44 = load_const (0xdeaddeed = -6264355898823540736.000000)
vec1 32 con ssa_45 = intrinsic resource_intel (ssa_44, ssa_43, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_46 = iadd ssa_36, ssa_17
vec1 32 con ssa_47 = umin ssa_46, ssa_38
vec1 32 con ssa_48 = ishl ssa_47, ssa_27
vec1 32 con ssa_49 = iadd3 ssa_42, ssa_48, ssa_40
vec1 32 con ssa_50 = intrinsic resource_intel (ssa_44, ssa_49, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_51 = load_const (0x00000054 = 0.000000)
vec1 32 con ssa_52 = intrinsic load_uniform (ssa_51) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_53 = iadd ssa_52, ssa_22
vec1 32 con ssa_54 = iadd ssa_52, ssa_25
vec1 32 con ssa_55 = iadd ssa_52, ssa_26
vec1 32 con ssa_56 = iadd ssa_52, ssa_27
vec1 32 con ssa_57 = iadd ssa_52, ssa_28
vec1 32 con ssa_58 = iadd ssa_52, ssa_15
vec1 32 con ssa_59 = iadd ssa_52, ssa_16
vec1 32 con ssa_60 = load_const (0x00000050 = 0.000000)
vec1 32 con ssa_61 = intrinsic load_uniform (ssa_60) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_62 = iadd ssa_61, ssa_22
vec1 32 con ssa_63 = iadd ssa_36, ssa_16
vec1 32 con ssa_64 = load_const (0x00000064 = 0.000000)
vec1 32 con ssa_65 = intrinsic load_uniform (ssa_64) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_66 = iadd ssa_65, ssa_29
vec1 32 con ssa_67 = iadd ssa_65, ssa_17
vec1 32 con ssa_68 = iadd ssa_65, ssa_16
vec1 32 con ssa_69 = load_const (0x00000060 = 0.000000)
vec1 32 con ssa_70 = intrinsic load_uniform (ssa_69) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_71 = load_const (0x00000044 = 0.000000)
vec1 32 con ssa_72 = intrinsic load_uniform (ssa_71) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_73 = iadd ssa_72, ssa_28
vec1 32 con ssa_74 = umin ssa_73, ssa_38
vec1 32 con ssa_75 = ishl ssa_74, ssa_27
vec1 32 con ssa_76 = iadd3 ssa_42, ssa_75, ssa_40
vec1 32 con ssa_77 = intrinsic resource_intel (ssa_44, ssa_76, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_78 = load_const (0x00000038 = 0.000000)
vec1 32 con ssa_79 = intrinsic load_uniform (ssa_78) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_80 = umin ssa_79, ssa_38
vec1 32 con ssa_81 = ishl ssa_80, ssa_27
vec1 32 con ssa_82 = iadd3 ssa_42, ssa_81, ssa_40
vec1 32 con ssa_83 = intrinsic resource_intel (ssa_44, ssa_82, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_84 = iadd ssa_72, ssa_15
vec1 32 con ssa_85 = umin ssa_84, ssa_38
vec1 32 con ssa_86 = ishl ssa_85, ssa_27
vec1 32 con ssa_87 = iadd3 ssa_42, ssa_86, ssa_40
vec1 32 con ssa_88 = intrinsic resource_intel (ssa_44, ssa_87, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_89 = load_const (0x00000034 = 0.000000)
vec1 32 con ssa_90 = intrinsic load_uniform (ssa_89) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_91 = iadd ssa_90, ssa_16
vec1 32 con ssa_92 = umin ssa_91, ssa_38
vec1 32 con ssa_93 = ishl ssa_92, ssa_27
vec1 32 con ssa_94 = iadd3 ssa_42, ssa_93, ssa_40
vec1 32 con ssa_95 = umin ssa_90, ssa_38
vec1 32 con ssa_96 = ishl ssa_95, ssa_27
vec1 32 con ssa_97 = iadd3 ssa_42, ssa_96, ssa_40
vec1 32 con ssa_98 = intrinsic resource_intel (ssa_44, ssa_97, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_99 = load_const (0x00000040 = 0.000000)
vec1 32 con ssa_100 = intrinsic load_uniform (ssa_99) (base=0, range=108, dest_type=invalid /*256*/)
vec1 32 con ssa_101 = umin ssa_100, ssa_38
vec1 32 con ssa_102 = ishl ssa_101, ssa_27
vec1 32 con ssa_103 = iadd3 ssa_42, ssa_102, ssa_40
vec1 32 con ssa_104 = intrinsic resource_intel (ssa_44, ssa_103, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1)
vec1 32 con ssa_105 = intrinsic load_front_face () ()
vec2 32 div ssa_106 = intrinsic load_barycentric_pixel () (interp_mode=1)
vec3 32 con ssa_107 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=0, io location=39 slots=1 /*167*/)
vec1 32 div ssa_108 = ffma ssa_106.y, ssa_107.y, ssa_107.x
vec1 32 div ssa_109 = ffma ssa_106.x, ssa_107.z, ssa_108
vec3 32 con ssa_110 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=1, io location=39 slots=1 /*167*/)
vec1 32 div ssa_111 = ffma ssa_106.y, ssa_110.y, ssa_110.x
vec1 32 div ssa_112 = ffma ssa_106.x, ssa_110.z, ssa_111
vec3 32 con ssa_113 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=2, io location=39 slots=1 /*167*/)
vec1 32 div ssa_114 = ffma ssa_106.y, ssa_113.y, ssa_113.x
vec3 32 con ssa_115 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=0, io location=38 slots=1 /*166*/)
vec1 32 div ssa_116 = fneg ssa_106.y
vec1 32 con ssa_117 = fneg ssa_115.x
vec1 32 div ssa_118 = ffma ssa_116, ssa_115.y, ssa_117
vec1 32 div ssa_119 = fneg ssa_106.x
vec1 32 div ssa_120 = ffma ssa_119, ssa_115.z, ssa_118
vec3 32 con ssa_121 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=1, io location=38 slots=1 /*166*/)
vec1 32 div ssa_122 = ffma ssa_106.y, ssa_121.y, ssa_121.x
vec1 32 div ssa_123 = ffma ssa_106.x, ssa_121.z, ssa_122
vec3 32 con ssa_124 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=2, io location=38 slots=1 /*166*/)
vec1 32 div ssa_125 = ffma ssa_106.y, ssa_124.y, ssa_124.x
vec1 32 div ssa_126 = ffma ssa_106.x, ssa_124.z, ssa_125
vec3 32 con ssa_127 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=3, io location=38 slots=1 /*166*/)
vec1 32 div ssa_128 = ffma ssa_106.y, ssa_127.y, ssa_127.x
vec1 32 div ssa_129 = ffma ssa_106.x, ssa_127.z, ssa_128
vec3 32 con ssa_130 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=0, io location=37 slots=1 /*165*/)
vec1 32 div ssa_131 = ffma ssa_106.y, ssa_130.y, ssa_130.x
vec1 32 div ssa_132 = ffma ssa_106.x, ssa_130.z, ssa_131
vec3 32 con ssa_133 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=1, io location=37 slots=1 /*165*/)
vec1 32 div ssa_134 = ffma ssa_106.y, ssa_133.y, ssa_133.x
vec1 32 div ssa_135 = ffma ssa_106.x, ssa_133.z, ssa_134
vec3 32 con ssa_136 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=2, io location=37 slots=1 /*165*/)
vec1 32 con ssa_137 = fneg ssa_136.x
vec1 32 div ssa_138 = ffma ssa_116, ssa_136.y, ssa_137
vec1 32 div ssa_139 = ffma ssa_119, ssa_136.z, ssa_138
vec3 32 con ssa_140 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=3, io location=37
slots=1 /*165*/) vec1 32 con ssa_141 = fneg ssa_140.x vec1 32 div ssa_142 = ffma ssa_116, ssa_140.y, ssa_141 vec1 32 div ssa_143 = ffma ssa_119, ssa_140.z, ssa_142 vec3 32 con ssa_144 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=36, component=2, io location=36 slots=1 /*164*/) vec1 32 div ssa_145 = ffma ssa_106.y, ssa_144.y, ssa_144.x vec1 32 div ssa_146 = ffma ssa_106.x, ssa_144.z, ssa_145 vec3 32 con ssa_147 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=36, component=3, io location=36 slots=1 /*164*/) vec1 32 div ssa_148 = ffma ssa_106.y, ssa_147.y, ssa_147.x vec1 32 div ssa_149 = ffma ssa_106.x, ssa_147.z, ssa_148 vec3 32 con ssa_150 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=0, io location=35 slots=1 /*163*/) vec1 32 div ssa_151 = ffma ssa_106.y, ssa_150.y, ssa_150.x vec1 32 div ssa_152 = ffma ssa_106.x, ssa_150.z, ssa_151 vec3 32 con ssa_153 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=1, io location=35 slots=1 /*163*/) vec1 32 div ssa_154 = ffma ssa_106.y, ssa_153.y, ssa_153.x vec1 32 div ssa_155 = ffma ssa_106.x, ssa_153.z, ssa_154 vec3 32 con ssa_156 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=2, io location=35 slots=1 /*163*/) vec1 32 div ssa_157 = ffma ssa_106.y, ssa_156.y, ssa_156.x vec1 32 div ssa_158 = ffma ssa_106.x, ssa_156.z, ssa_157 vec3 32 con ssa_159 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=0, io location=34 slots=1 /*162*/) vec1 32 div ssa_160 = ffma ssa_106.y, ssa_159.y, ssa_159.x vec1 32 div ssa_161 = ffma ssa_106.x, ssa_159.z, ssa_160 vec3 32 con ssa_162 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=1, io location=34 slots=1 /*162*/) vec1 32 div ssa_163 = ffma ssa_106.y, ssa_162.y, ssa_162.x vec1 32 div ssa_164 = ffma ssa_106.x, ssa_162.z, ssa_163 vec3 32 con ssa_165 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=2, io location=34 slots=1 /*162*/) vec1 32 div ssa_166 = ffma ssa_106.y, ssa_165.y, ssa_165.x vec1 32 div ssa_167 = ffma ssa_106.x, ssa_165.z, ssa_166 vec3 32 con ssa_168 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=3, io location=34 slots=1 /*162*/) vec1 32 div ssa_169 = ffma ssa_106.y, ssa_168.y, ssa_168.x vec1 32 div ssa_170 = ffma ssa_106.x, ssa_168.z, ssa_169 vec3 32 con ssa_171 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=0, io location=33 slots=1 /*161*/) vec1 32 div ssa_172 = ffma ssa_106.y, ssa_171.y, ssa_171.x vec1 32 div ssa_173 = ffma ssa_106.x, ssa_171.z, ssa_172 vec3 32 con ssa_174 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=1, io location=33 slots=1 /*161*/) vec1 32 div ssa_175 = ffma ssa_106.y, ssa_174.y, ssa_174.x vec1 32 div ssa_176 = ffma ssa_106.x, ssa_174.z, ssa_175 vec3 32 con ssa_177 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=2, io location=33 slots=1 /*161*/) vec1 32 div ssa_178 = ffma ssa_106.y, ssa_177.y, ssa_177.x vec1 32 div ssa_179 = ffma ssa_106.x, ssa_177.z, ssa_178 vec3 32 con ssa_180 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=3, io location=33 slots=1 /*161*/) vec1 32 div ssa_181 = ffma ssa_106.y, ssa_180.y, ssa_180.x vec1 32 div ssa_182 = ffma ssa_106.x, ssa_180.z, ssa_181 vec4 32 div ssa_183 = intrinsic load_frag_coord () () vec2 32 div ssa_184 = intrinsic load_sample_pos_or_center () () vec1 32 div ssa_185 = fadd ssa_183.x, ssa_184.x vec1 32 div ssa_186 = fadd ssa_183.y, ssa_184.y vec1 32 div ssa_187 = f2u32 ssa_185 vec1 32 div ssa_188 = f2u32 
ssa_186 vec4 32 con ssa_189 = intrinsic load_ssbo_uniform_block_intel (ssa_104, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 div ssa_190 = iand ssa_187, ssa_34 vec1 32 div ssa_191 = iand ssa_188, ssa_34 vec1 32 con ssa_192 = load_const (0x000001c0 = 0.000000) vec4 32 con ssa_193 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_192) (access=80, align_mul=1073741824, align_offset=448) vec1 32 con ssa_194 = iand ssa_193.y, ssa_34 vec3 32 div ssa_195 = vec3 ssa_190, ssa_191, ssa_194 vec1 32 con ssa_196 = umin ssa_54, ssa_38 vec1 32 con ssa_197 = ishl ssa_196, ssa_27 vec1 32 con ssa_198 = iadd3 ssa_42, ssa_197, ssa_40 vec1 32 con ssa_199 = intrinsic resource_intel (ssa_44, ssa_198, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_200 = (float32)txf ssa_199 (texture_handle), ssa_195 (coord), ssa_0 (lod), 0 (texture) vec1 32 div ssa_201 = ffma ssa_200.x, ssa_189.z, ssa_189.w vec1 32 div ssa_202 = flt32! ssa_201, ssa_0 intrinsic demote_if (ssa_202) () vec16 32 con ssa_203 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 div ssa_204 = ffma ssa_203.e, ssa_173, ssa_203.i vec1 32 div ssa_205 = ffma ssa_203.f, ssa_176, ssa_203.j vec2 32 div ssa_206 = vec2 ssa_204, ssa_205 vec1 32 con ssa_207 = umin ssa_36, ssa_38 vec1 32 con ssa_208 = ishl ssa_207, ssa_27 vec1 32 con ssa_209 = iadd3 ssa_42, ssa_208, ssa_40 vec1 32 con ssa_210 = intrinsic resource_intel (ssa_44, ssa_209, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_211 = load_const (0x000007ff = 0.000000) vec1 32 con ssa_212 = umin ssa_70, ssa_211 vec1 32 con ssa_213 = intrinsic load_uniform (ssa_0) (base=248, range=4, dest_type=uint /*4*/) vec1 32 con ssa_214 = ishl ssa_212, ssa_28 vec1 32 con ssa_215 = iadd ssa_213, ssa_214 vec1 32 con ssa_216 = intrinsic resource_intel (ssa_44, ssa_215, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_217 = (float32)tex ssa_210 (texture_handle), ssa_216 (sampler_handle), ssa_206 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_218 = ffma ssa_217.x, ssa_9, ssa_10 vec1 32 div ssa_219 = ffma ssa_217.y, ssa_9, ssa_10 vec1 32 div ssa_220 = fneg! ssa_219 vec1 32 div ssa_221 = fmul! ssa_220, ssa_219 vec1 32 div ssa_222 = fneg! ssa_218 vec1 32 div ssa_223 = ffma! ssa_222, ssa_218, ssa_221 vec1 32 div ssa_224 = fadd! ssa_223, ssa_1 vec1 32 div ssa_225 = fsat! 
ssa_224 vec1 32 div ssa_226 = fsqrt ssa_225 vec1 32 div ssa_227 = fadd ssa_226, ssa_10 vec1 32 div ssa_228 = fmul ssa_203.a, ssa_218 vec1 32 div ssa_229 = fmul ssa_203.a, ssa_219 vec1 32 div ssa_230 = ffma ssa_203.a, ssa_227, ssa_1 vec1 32 div ssa_231 = fmul ssa_230, ssa_230 vec1 32 div ssa_232 = ffma ssa_229, ssa_229, ssa_231 vec1 32 div ssa_233 = ffma ssa_228, ssa_228, ssa_232 vec1 32 div ssa_234 = frsq ssa_233 vec1 32 div ssa_235 = fmul ssa_228, ssa_234 vec1 32 div ssa_236 = fmul ssa_229, ssa_234 vec1 32 con ssa_237 = f2u32 ssa_203.m vec16 32 con ssa_238 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_99) (access=80, align_mul=1073741824, align_offset=64) vec16 32 con ssa_239 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_42) (access=80, align_mul=1073741824, align_offset=128) vec1 32 con ssa_240 = f2u32 ssa_239.a vec1 32 con ssa_241 = f2u32 ssa_239.b vec1 32 div ssa_242 = fddx_coarse ssa_173 vec1 32 div ssa_243 = fddx_coarse ssa_176 vec1 32 div ssa_244 = fddy_coarse ssa_173 vec1 32 div ssa_245 = fddy_coarse ssa_176 vec1 32 div ssa_246 = ffract ssa_173 vec1 32 div ssa_247 = ffract ssa_176 vec1 32 div ssa_248 = fmul ssa_246, ssa_238.a vec1 32 div ssa_249 = fneg ssa_176 vec1 32 div ssa_250 = fadd ssa_1, ssa_249 vec1 32 div ssa_251 = ffract ssa_250 vec1 32 div ssa_252 = fmul ssa_246, ssa_238.e vec1 32 div ssa_253 = fmul ssa_251, ssa_238.f vec1 32 div ssa_254 = f2i32 ssa_252 vec1 32 div ssa_255 = f2i32 ssa_253 vec1 32 con ssa_256 = fceil ssa_238.e vec1 32 con ssa_257 = f2i32 ssa_256 vec1 32 div ssa_258 = imul ssa_255, ssa_257 vec1 32 div ssa_259 = iadd ssa_258, ssa_254 vec1 32 div ssa_260 = ishl ssa_259, ssa_29 vec2 32 div ssa_261 = intrinsic load_ssbo (ssa_50, ssa_260) (access=80, align_mul=8, align_offset=0) vec1 32 con ssa_262 = fceil ssa_238.f vec1 32 con ssa_263 = f2u32 ssa_256 vec1 32 con ssa_264 = f2u32 ssa_262 vec1 32 div ssa_265 = fmul ssa_246, ssa_238.g vec1 32 div ssa_266 = fmul ssa_251, ssa_238.h vec1 32 div ssa_267 = f2i32 ssa_265 vec1 32 div ssa_268 = f2i32 ssa_266 vec1 32 con ssa_269 = fceil ssa_238.g vec1 32 con ssa_270 = f2i32 ssa_269 vec1 32 div ssa_271 = imul ssa_270, ssa_268 vec1 32 con ssa_272 = imul ssa_264, ssa_263 vec1 32 div ssa_273 = iadd3 ssa_272, ssa_267, ssa_271 vec1 32 div ssa_274 = ishl ssa_273, ssa_29 vec2 32 div ssa_275 = intrinsic load_ssbo (ssa_50, ssa_274) (access=80, align_mul=8, align_offset=0) vec1 32 div ssa_276 = ior ssa_275.y, ssa_261.y vec1 32 div ssa_277 = iand ssa_276, ssa_239.e vec1 32 con ssa_278 = intrinsic reduce (ssa_277) (reduction_op=ior /*300*/, cluster_size=0) vec1 32 con ssa_279 = frcp ssa_238.i vec1 32 div ssa_280 = fmul ssa_246, ssa_279 vec1 32 div ssa_281 = fmul ssa_251, ssa_279 vec1 32 con ssa_282 = frcp ssa_238.m vec1 32 con ssa_283 = frcp ssa_238.n vec1 32 con ssa_284 = fmul ssa_282, ssa_238.i vec1 32 con ssa_285 = fmul ssa_283, ssa_238.i vec1 32 con ssa_286 = iadd ssa_278, ssa_30 vec1 32 con ssa_287 = iand ssa_286, ssa_278 vec1 32 con ssa_288 = ine32 ssa_287, ssa_0 vec1 32 con ssa_289 = inot ssa_105 /* succs: block_1 block_13 */ if ssa_288 { block block_1: /* preds: block_0 */ vec1 32 con ssa_290 = umin ssa_63, ssa_38 vec1 32 con ssa_291 = ishl ssa_290, ssa_27 vec1 32 con ssa_292 = iadd3 ssa_42, ssa_291, ssa_40 vec1 32 con ssa_293 = intrinsic resource_intel (ssa_44, ssa_292, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_294 = umin ssa_66, ssa_211 vec1 32 con ssa_295 = ishl ssa_294, ssa_28 vec1 32 con ssa_296 = iadd ssa_213, ssa_295 vec1 32 con ssa_297 = 
intrinsic resource_intel (ssa_44, ssa_296, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 con ssa_298 = load_const (0x00000068 = 0.000000) vec1 32 con ssa_299 = load_const (0x000004f0 = 0.000000) /* succs: block_2 */ loop { block block_2: /* preds: block_1 block_11 */ vec1 32 con ssa_300 = phi block_1: ssa_278, block_11: ssa_325 vec1 32 div ssa_301 = phi block_1: ssa_1, block_11: ssa_529 vec1 32 div ssa_302 = phi block_1: ssa_0, block_11: ssa_528 vec1 32 div ssa_303 = phi block_1: ssa_0, block_11: ssa_527 vec1 32 div ssa_304 = phi block_1: ssa_0, block_11: ssa_526 vec1 32 div ssa_305 = phi block_1: ssa_0, block_11: ssa_525 vec1 32 div ssa_306 = phi block_1: ssa_0, block_11: ssa_530 vec1 32 div ssa_307 = phi block_1: ssa_0, block_11: ssa_524 vec1 32 div ssa_308 = phi block_1: ssa_0, block_11: ssa_519 vec1 32 div ssa_309 = phi block_1: ssa_0, block_11: ssa_523 vec1 32 div ssa_310 = phi block_1: ssa_0, block_11: ssa_522 vec1 32 div ssa_311 = phi block_1: ssa_0, block_11: ssa_521 vec1 32 div ssa_312 = phi block_1: ssa_0, block_11: ssa_520 vec1 32 con ssa_313 = uclz ssa_300 vec1 32 con ssa_314 = ineg ssa_313 vec1 32 con ssa_315 = iadd ssa_31, ssa_314 vec1 32 con ssa_316 = ineg ssa_315 vec1 32 con ssa_317 = iadd ssa_31, ssa_316 vec1 32 con ssa_318 = ieq32 ssa_315, ssa_30 vec1 32 con ssa_319 = b32csel ssa_318, ssa_30, ssa_317 vec1 32 con ssa_320 = ineg ssa_319 vec1 32 con ssa_321 = iadd ssa_31, ssa_320 vec1 32 con ssa_322 = ieq32 ssa_319, ssa_30 vec1 32 con ssa_323 = b32csel ssa_322, ssa_30, ssa_321 vec1 32 con ssa_324 = ishl ssa_16, ssa_323 vec1 32 con ssa_325 = ixor ssa_324, ssa_300 vec1 32 div ssa_326 = iand ssa_324, ssa_277 vec1 32 div ssa_327 = ine32 ssa_326, ssa_0 vec1 32 div ssa_328 = flt32! ssa_0, ssa_301 vec1 32 div ssa_329 = iand ssa_328, ssa_327 /* succs: block_3 block_7 */ if ssa_329 { block block_3: /* preds: block_2 */ vec1 32 div ssa_330 = iand ssa_324, ssa_261.y vec1 32 div ssa_331 = ine32 ssa_330, ssa_0 vec1 32 div ssa_332 = b32csel ssa_331, ssa_261.x, ssa_275.x vec1 32 div ssa_333 = b32csel ssa_331, ssa_261.y, ssa_275.y vec1 32 con ssa_334 = bfm ssa_323, ssa_0 vec1 32 div ssa_335 = iand ssa_333, ssa_334 vec1 32 div ssa_336 = bit_count ssa_335 vec1 32 div ssa_337 = iadd ssa_336, ssa_332 vec1 32 div ssa_338 = ishl ssa_337, ssa_17 vec1 32 div ssa_339 = intrinsic load_ssbo (ssa_50, ssa_338) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_340 = ushr ssa_339, ssa_23 vec1 32 div ssa_341 = iand ssa_339, ssa_33 vec1 32 div ssa_342 = iand ssa_340, ssa_33 vec1 32 div ssa_343 = u2f32 ssa_341 vec1 32 div ssa_344 = u2f32 ssa_342 vec1 32 div ssa_345 = ushr ssa_339, ssa_32 vec1 32 div ssa_346 = extract_u8 ssa_339, ssa_29 vec1 32 div ssa_347 = iand ssa_345, ssa_24 vec1 32 div ssa_348 = iand ssa_346, ssa_24 vec1 32 div ssa_349 = ushr ssa_240, ssa_347 vec1 32 div ssa_350 = ushr ssa_241, ssa_348 vec1 32 div ssa_351 = u2f32 ssa_349 vec1 32 div ssa_352 = u2f32 ssa_350 vec1 32 div ssa_353 = fmul ssa_351, ssa_280 vec1 32 div ssa_354 = fmul ssa_352, ssa_281 vec1 32 div ssa_355 = ffract ssa_353 vec1 32 div ssa_356 = ffract ssa_354 vec1 32 div ssa_357 = ffma ssa_284, ssa_355, ssa_282 vec1 32 div ssa_358 = ffma ssa_285, ssa_356, ssa_283 vec1 32 div ssa_359 = ffma ssa_343, ssa_238.o, ssa_357 vec1 32 div ssa_360 = ffma ssa_344, ssa_238.p, ssa_358 vec2 32 div ssa_361 = vec2 ssa_359, ssa_360 vec4 32 div ssa_362 = (float32)txl ssa_293 (texture_handle), ssa_297 (sampler_handle), ssa_361 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div 
ssa_363 = flt32! ssa_0, ssa_362.x /* succs: block_4 block_5 */ if ssa_363 { block block_4: /* preds: block_3 */ vec1 32 con ssa_364 = iadd ssa_323, ssa_237 vec1 32 con ssa_365 = ishl ssa_364, ssa_26 vec16 32 con ssa_366 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_365) (access=80, align_mul=128, align_offset=0) vec1 32 con ssa_367 = iadd ssa_365, ssa_99 vec8 32 con ssa_368 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_367) (access=80, align_mul=128, align_offset=64) vec1 32 con ssa_369 = iadd ssa_365, ssa_69 vec4 32 con ssa_370 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_369) (access=80, align_mul=128, align_offset=96) vec8 32 con ssa_371 = intrinsic load_ssbo_uniform_block_intel (ssa_88, ssa_299) (access=80, align_mul=1073741824, align_offset=1264) vec1 32 con ssa_372 = fmul ssa_371.e, ssa_371.h vec1 32 con ssa_373 = fsat! ssa_372 vec1 32 con ssa_374 = ffma ssa_371.f, ssa_371.c, ssa_10 vec1 32 con ssa_375 = ffma ssa_371.f, ssa_371.d, ssa_10 vec1 32 con ssa_376 = ffma ssa_373, ssa_374, ssa_1 vec1 32 con ssa_377 = ffma ssa_373, ssa_375, ssa_1 vec1 32 div ssa_378 = ffma ssa_368.a, ssa_248, ssa_368.g vec1 32 div ssa_379 = ffma ssa_368.a, ssa_247, ssa_368.h vec1 32 div ssa_380 = fmul ssa_368.a, ssa_242 vec1 32 div ssa_381 = fmul ssa_368.a, ssa_243 vec1 32 div ssa_382 = fmul ssa_368.a, ssa_244 vec1 32 div ssa_383 = fmul ssa_368.a, ssa_245 vec1 32 div ssa_384 = ffma ssa_368.b, ssa_248, ssa_368.e vec1 32 div ssa_385 = ffma ssa_368.b, ssa_247, ssa_368.f vec1 32 con ssa_386 = fmul ssa_368.b, ssa_239.m vec1 32 div ssa_387 = fmul ssa_386, ssa_243 vec1 32 div ssa_388 = fmul ssa_386, ssa_245 vec1 32 div ssa_389 = fmul ssa_242, ssa_238.a vec1 32 div ssa_390 = fmul ssa_389, ssa_386 vec1 32 div ssa_391 = fmul ssa_390, ssa_376 vec1 32 div ssa_392 = fmul ssa_387, ssa_376 vec1 32 div ssa_393 = fmul ssa_244, ssa_238.a vec1 32 div ssa_394 = fmul ssa_393, ssa_386 vec1 32 div ssa_395 = fmul ssa_394, ssa_377 vec1 32 div ssa_396 = fmul ssa_388, ssa_377 vec1 32 con ssa_397 = intrinsic load_uniform (ssa_298) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_398 = iadd ssa_397, ssa_20 vec1 32 con ssa_399 = iadd3 ssa_21, ssa_370.y, ssa_398 vec2 32 div ssa_400 = vec2 ssa_384, ssa_385 vec2 32 div ssa_401 = vec2 ssa_391, ssa_392 vec2 32 div ssa_402 = vec2 ssa_395, ssa_396 vec1 32 con ssa_403 = umin ssa_399, ssa_38 vec1 32 con ssa_404 = ishl ssa_403, ssa_27 vec1 32 con ssa_405 = iadd3 ssa_42, ssa_404, ssa_40 vec1 32 con ssa_406 = intrinsic resource_intel (ssa_44, ssa_405, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_407 = (float32)txd ssa_406 (texture_handle), ssa_216 (sampler_handle), ssa_400 (coord), ssa_401 (ddx), ssa_402 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_408 = ffma ssa_407.x, ssa_9, ssa_10 vec1 32 div ssa_409 = ffma ssa_407.y, ssa_9, ssa_10 vec1 32 div ssa_410 = fneg ssa_407.w vec1 32 div ssa_411 = fadd ssa_1, ssa_410 vec1 32 div ssa_412 = fneg ssa_411 vec1 32 div ssa_413 = fadd ssa_362.x, ssa_412 vec1 32 div ssa_414 = ffma ssa_413, ssa_368.c, ssa_411 vec1 32 div ssa_415 = fsat! ssa_414 vec1 32 div ssa_416 = fadd ssa_415, ssa_5 vec1 32 div ssa_417 = fabs ssa_416 vec1 32 div ssa_418 = fneg ssa_417 vec1 32 div ssa_419 = ffma ssa_418, ssa_9, ssa_1 vec1 32 div ssa_420 = fsqrt ssa_419 vec1 32 div ssa_421 = fsat! ssa_420 vec1 32 div ssa_422 = fneg ssa_312 vec1 32 div ssa_423 = ffma ssa_421, ssa_366.d, ssa_422 vec1 32 div ssa_424 = fsat! 
ssa_423 vec1 32 div ssa_425 = fmul ssa_415, ssa_366.d vec1 32 div ssa_426 = fadd ssa_425, ssa_312 vec1 32 div ssa_427 = fneg ssa_310 vec1 32 div ssa_428 = ffma ssa_408, ssa_368.d, ssa_427 vec1 32 div ssa_429 = fneg ssa_309 vec1 32 div ssa_430 = ffma ssa_409, ssa_368.d, ssa_429 vec1 32 div ssa_431 = ffma ssa_424, ssa_428, ssa_310 vec1 32 div ssa_432 = ffma ssa_424, ssa_430, ssa_309 vec1 32 con ssa_433 = fabs ssa_368.d vec1 32 div ssa_434 = fmul ssa_433, ssa_424 vec1 32 div ssa_435 = fmax! ssa_311, ssa_434 vec1 32 div ssa_436 = fmin! ssa_301, ssa_425 vec1 32 div ssa_437 = fneg ssa_436 vec1 32 div ssa_438 = fadd ssa_301, ssa_437 vec1 32 con ssa_439 = extract_u16 ssa_370.w, ssa_16 vec1 32 con ssa_440 = iadd3 ssa_21, ssa_439, ssa_398 vec2 32 div ssa_441 = vec2 ssa_378, ssa_379 vec2 32 div ssa_442 = vec2 ssa_380, ssa_381 vec2 32 div ssa_443 = vec2 ssa_382, ssa_383 vec1 32 con ssa_444 = umin ssa_440, ssa_38 vec1 32 con ssa_445 = ishl ssa_444, ssa_27 vec1 32 con ssa_446 = iadd3 ssa_42, ssa_445, ssa_40 vec1 32 con ssa_447 = intrinsic resource_intel (ssa_44, ssa_446, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_448 = (float32)txd ssa_447 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_449 = ffma ssa_448.x, ssa_366.i, ssa_366.j vec1 32 div ssa_450 = fsat! ssa_449 vec1 32 div ssa_451 = ffma ssa_450, ssa_366.k, ssa_366.l vec1 32 div ssa_452 = fsat! ssa_451 vec1 32 div ssa_453 = ffma ssa_452, ssa_436, ssa_306 vec1 32 con ssa_454 = extract_u16 ssa_370.w, ssa_0 vec1 32 con ssa_455 = iadd3 ssa_21, ssa_454, ssa_398 vec1 32 con ssa_456 = umin ssa_455, ssa_38 vec1 32 con ssa_457 = ishl ssa_456, ssa_27 vec1 32 con ssa_458 = iadd3 ssa_42, ssa_457, ssa_40 vec1 32 con ssa_459 = intrinsic resource_intel (ssa_44, ssa_458, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_460 = (float32)txd ssa_459 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_461 = ffma ssa_460.x, ssa_366.e, ssa_366.f vec1 32 div ssa_462 = fsat! ssa_461 vec1 32 div ssa_463 = ffma ssa_462, ssa_366.g, ssa_366.h vec1 32 div ssa_464 = fsat! ssa_463 vec1 32 div ssa_465 = ffma ssa_464, ssa_436, ssa_305 vec1 32 con ssa_466 = extract_u16 ssa_370.z, ssa_0 vec1 32 con ssa_467 = iadd3 ssa_21, ssa_466, ssa_398 vec1 32 con ssa_468 = umin ssa_467, ssa_38 vec1 32 con ssa_469 = ishl ssa_468, ssa_27 vec1 32 con ssa_470 = iadd3 ssa_42, ssa_469, ssa_40 vec1 32 con ssa_471 = intrinsic resource_intel (ssa_44, ssa_470, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_472 = (float32)txd ssa_471 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_473 = ffma ssa_460.x, ssa_366.m, ssa_366.n vec1 32 div ssa_474 = fsat! ssa_473 vec1 32 div ssa_475 = ffma ssa_474, ssa_366.o, ssa_366.p vec1 32 div ssa_476 = fsat! 
ssa_475 vec1 32 con ssa_477 = fadd ssa_366.a, ssa_10 vec1 32 con ssa_478 = fadd ssa_366.b, ssa_10 vec1 32 con ssa_479 = fadd ssa_366.c, ssa_10 vec1 32 div ssa_480 = ffma ssa_476, ssa_477, ssa_1 vec1 32 div ssa_481 = ffma ssa_476, ssa_478, ssa_1 vec1 32 div ssa_482 = ffma ssa_476, ssa_479, ssa_1 vec1 32 div ssa_483 = fmul ssa_472.x, ssa_436 vec1 32 div ssa_484 = fmul ssa_472.y, ssa_436 vec1 32 div ssa_485 = fmul ssa_472.z, ssa_436 vec1 32 div ssa_486 = ffma ssa_483, ssa_480, ssa_304 vec1 32 div ssa_487 = ffma ssa_484, ssa_481, ssa_303 vec1 32 div ssa_488 = ffma ssa_485, ssa_482, ssa_302 vec1 32 div ssa_489 = fmul ssa_380, ssa_239.i vec1 32 div ssa_490 = fmul ssa_381, ssa_239.i vec1 32 div ssa_491 = fmul ssa_382, ssa_239.i vec1 32 div ssa_492 = fmul ssa_383, ssa_239.i vec1 32 con ssa_493 = extract_u16 ssa_370.z, ssa_16 vec1 32 con ssa_494 = iadd3 ssa_21, ssa_493, ssa_398 vec2 32 div ssa_495 = vec2 ssa_489, ssa_490 vec2 32 div ssa_496 = vec2 ssa_491, ssa_492 vec1 32 con ssa_497 = umin ssa_494, ssa_38 vec1 32 con ssa_498 = ishl ssa_497, ssa_27 vec1 32 con ssa_499 = iadd3 ssa_42, ssa_498, ssa_40 vec1 32 con ssa_500 = intrinsic resource_intel (ssa_44, ssa_499, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_501 = (float32)txd ssa_500 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_495 (ddx), ssa_496 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_502 = ffma ssa_501.x, ssa_9, ssa_10 vec1 32 div ssa_503 = ffma ssa_501.y, ssa_9, ssa_10 vec1 32 div ssa_504 = fmul ssa_436, ssa_370.x vec1 32 div ssa_505 = ffma ssa_504, ssa_502, ssa_308 vec1 32 div ssa_506 = ffma ssa_504, ssa_503, ssa_307 /* succs: block_6 */ } else { block block_5: /* preds: block_3 */ /* succs: block_6 */ } block block_6: /* preds: block_4 block_5 */ vec1 32 div ssa_507 = phi block_4: ssa_505, block_5: ssa_308 vec1 32 div ssa_508 = phi block_4: ssa_426, block_5: ssa_312 vec1 32 div ssa_509 = phi block_4: ssa_435, block_5: ssa_311 vec1 32 div ssa_510 = phi block_4: ssa_431, block_5: ssa_310 vec1 32 div ssa_511 = phi block_4: ssa_432, block_5: ssa_309 vec1 32 div ssa_512 = phi block_4: ssa_506, block_5: ssa_307 vec1 32 div ssa_513 = phi block_4: ssa_465, block_5: ssa_305 vec1 32 div ssa_514 = phi block_4: ssa_486, block_5: ssa_304 vec1 32 div ssa_515 = phi block_4: ssa_487, block_5: ssa_303 vec1 32 div ssa_516 = phi block_4: ssa_488, block_5: ssa_302 vec1 32 div ssa_517 = phi block_4: ssa_438, block_5: ssa_301 vec1 32 div ssa_518 = phi block_4: ssa_453, block_5: ssa_306 /* succs: block_8 */ } else { block block_7: /* preds: block_2 */ /* succs: block_8 */ } block block_8: /* preds: block_6 block_7 */ vec1 32 div ssa_519 = phi block_6: ssa_507, block_7: ssa_308 vec1 32 div ssa_520 = phi block_6: ssa_508, block_7: ssa_312 vec1 32 div ssa_521 = phi block_6: ssa_509, block_7: ssa_311 vec1 32 div ssa_522 = phi block_6: ssa_510, block_7: ssa_310 vec1 32 div ssa_523 = phi block_6: ssa_511, block_7: ssa_309 vec1 32 div ssa_524 = phi block_6: ssa_512, block_7: ssa_307 vec1 32 div ssa_525 = phi block_6: ssa_513, block_7: ssa_305 vec1 32 div ssa_526 = phi block_6: ssa_514, block_7: ssa_304 vec1 32 div ssa_527 = phi block_6: ssa_515, block_7: ssa_303 vec1 32 div ssa_528 = phi block_6: ssa_516, block_7: ssa_302 vec1 32 div ssa_529 = phi block_6: ssa_517, block_7: ssa_301 vec1 32 div ssa_530 = phi block_6: ssa_518, block_7: ssa_306 vec1 32 con ssa_531 = iadd ssa_325, ssa_30 vec1 32 con ssa_532 = iand ssa_531, ssa_325 vec1 32 con ssa_533 = ieq32 ssa_532, ssa_0 /* succs: block_9 
block_10 */ if ssa_533 { block block_9: /* preds: block_8 */ break /* succs: block_12 */ } else { block block_10: /* preds: block_8 */ /* succs: block_11 */ } block block_11: /* preds: block_10 */ /* succs: block_2 */ } block block_12: /* preds: block_9 */ /* succs: block_14 */ } else { block block_13: /* preds: block_0 */ /* succs: block_14 */ } block block_14: /* preds: block_12 block_13 */ vec1 32 div ssa_534 = phi block_13: ssa_0, block_12: ssa_520 vec1 32 div ssa_535 = phi block_13: ssa_0, block_12: ssa_521 vec1 32 div ssa_536 = phi block_13: ssa_0, block_12: ssa_522 vec1 32 div ssa_537 = phi block_13: ssa_0, block_12: ssa_523 vec1 32 div ssa_538 = phi block_13: ssa_0, block_12: ssa_519 vec1 32 div ssa_539 = phi block_13: ssa_0, block_12: ssa_524 vec1 32 div ssa_540 = phi block_13: ssa_0, block_12: ssa_530 vec1 32 div ssa_541 = phi block_13: ssa_0, block_12: ssa_525 vec1 32 div ssa_542 = phi block_13: ssa_0, block_12: ssa_526 vec1 32 div ssa_543 = phi block_13: ssa_0, block_12: ssa_527 vec1 32 div ssa_544 = phi block_13: ssa_0, block_12: ssa_528 vec1 32 div ssa_545 = phi block_13: ssa_1, block_12: ssa_529 vec1 32 con ssa_546 = phi block_13: ssa_278, block_12: ssa_325 vec1 32 div ssa_547 = flt32! ssa_0, ssa_545 vec1 32 con ssa_548 = ine32 ssa_546, ssa_0 vec1 32 div ssa_549 = iand ssa_547, ssa_548 /* succs: block_15 block_16 */ if ssa_549 { block block_15: /* preds: block_14 */ vec1 32 con ssa_550 = uclz ssa_546 vec1 32 con ssa_551 = ineg ssa_550 vec1 32 con ssa_552 = iadd ssa_31, ssa_551 vec1 32 con ssa_553 = ineg ssa_552 vec1 32 con ssa_554 = iadd ssa_31, ssa_553 vec1 32 con ssa_555 = ieq32 ssa_552, ssa_30 vec1 32 con ssa_556 = b32csel ssa_555, ssa_30, ssa_554 vec1 32 con ssa_557 = ineg ssa_556 vec1 32 con ssa_558 = iadd ssa_31, ssa_557 vec1 32 con ssa_559 = ieq32 ssa_556, ssa_30 vec1 32 con ssa_560 = b32csel ssa_559, ssa_30, ssa_558 vec1 32 con ssa_561 = iadd ssa_560, ssa_237 vec1 32 con ssa_562 = ishl ssa_561, ssa_26 vec16 32 con ssa_563 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_562) (access=80, align_mul=128, align_offset=0) vec1 32 con ssa_564 = iadd ssa_562, ssa_99 vec8 32 con ssa_565 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_564) (access=80, align_mul=128, align_offset=64) vec1 32 con ssa_566 = iadd ssa_562, ssa_69 vec4 32 con ssa_567 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_566) (access=80, align_mul=128, align_offset=96) vec1 32 con ssa_568 = load_const (0x00000068 = 0.000000) vec1 32 con ssa_569 = load_const (0x000004f0 = 0.000000) vec8 32 con ssa_570 = intrinsic load_ssbo_uniform_block_intel (ssa_88, ssa_569) (access=80, align_mul=1073741824, align_offset=1264) vec1 32 con ssa_571 = fmul ssa_570.e, ssa_570.h vec1 32 con ssa_572 = fsat! 
ssa_571 vec1 32 con ssa_573 = ffma ssa_570.f, ssa_570.c, ssa_10 vec1 32 con ssa_574 = ffma ssa_570.f, ssa_570.d, ssa_10 vec1 32 con ssa_575 = ffma ssa_572, ssa_573, ssa_1 vec1 32 con ssa_576 = ffma ssa_572, ssa_574, ssa_1 vec1 32 div ssa_577 = ffma ssa_565.a, ssa_248, ssa_565.g vec1 32 div ssa_578 = ffma ssa_565.a, ssa_247, ssa_565.h vec1 32 div ssa_579 = fmul ssa_565.a, ssa_242 vec1 32 div ssa_580 = fmul ssa_565.a, ssa_243 vec1 32 div ssa_581 = fmul ssa_565.a, ssa_244 vec1 32 div ssa_582 = fmul ssa_565.a, ssa_245 vec1 32 div ssa_583 = ffma ssa_565.b, ssa_248, ssa_565.e vec1 32 div ssa_584 = ffma ssa_565.b, ssa_247, ssa_565.f vec1 32 con ssa_585 = fmul ssa_565.b, ssa_239.m vec1 32 div ssa_586 = fmul ssa_585, ssa_243 vec1 32 div ssa_587 = fmul ssa_585, ssa_245 vec1 32 div ssa_588 = fmul ssa_242, ssa_238.a vec1 32 div ssa_589 = fmul ssa_588, ssa_585 vec1 32 div ssa_590 = fmul ssa_589, ssa_575 vec1 32 div ssa_591 = fmul ssa_586, ssa_575 vec1 32 div ssa_592 = fmul ssa_244, ssa_238.a vec1 32 div ssa_593 = fmul ssa_592, ssa_585 vec1 32 div ssa_594 = fmul ssa_593, ssa_576 vec1 32 div ssa_595 = fmul ssa_587, ssa_576 vec1 32 con ssa_596 = intrinsic load_uniform (ssa_568) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_597 = iadd ssa_596, ssa_20 vec1 32 con ssa_598 = iadd3 ssa_21, ssa_567.y, ssa_597 vec2 32 div ssa_599 = vec2 ssa_583, ssa_584 vec2 32 div ssa_600 = vec2 ssa_590, ssa_591 vec2 32 div ssa_601 = vec2 ssa_594, ssa_595 vec1 32 con ssa_602 = umin ssa_598, ssa_38 vec1 32 con ssa_603 = ishl ssa_602, ssa_27 vec1 32 con ssa_604 = iadd3 ssa_42, ssa_603, ssa_40 vec1 32 con ssa_605 = intrinsic resource_intel (ssa_44, ssa_604, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_606 = (float32)txd ssa_605 (texture_handle), ssa_216 (sampler_handle), ssa_599 (coord), ssa_600 (ddx), ssa_601 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_607 = ffma ssa_606.x, ssa_9, ssa_10 vec1 32 div ssa_608 = ffma ssa_606.y, ssa_9, ssa_10 vec1 32 div ssa_609 = fneg ssa_606.w vec1 32 div ssa_610 = fadd ssa_609, ssa_1 vec1 32 div ssa_611 = ffma ssa_606.w, ssa_565.c, ssa_610 vec1 32 div ssa_612 = fsat! ssa_611 vec1 32 div ssa_613 = fadd ssa_612, ssa_5 vec1 32 div ssa_614 = fabs ssa_613 vec1 32 div ssa_615 = fneg ssa_614 vec1 32 div ssa_616 = ffma ssa_615, ssa_9, ssa_1 vec1 32 div ssa_617 = fsqrt ssa_616 vec1 32 div ssa_618 = fsat! ssa_617 vec1 32 div ssa_619 = fneg ssa_534 vec1 32 div ssa_620 = ffma ssa_618, ssa_563.d, ssa_619 vec1 32 div ssa_621 = fsat! ssa_620 vec1 32 div ssa_622 = fmul ssa_612, ssa_563.d vec1 32 div ssa_623 = fneg ssa_536 vec1 32 div ssa_624 = ffma ssa_607, ssa_565.d, ssa_623 vec1 32 div ssa_625 = fneg ssa_537 vec1 32 div ssa_626 = ffma ssa_608, ssa_565.d, ssa_625 vec1 32 div ssa_627 = ffma ssa_621, ssa_624, ssa_536 vec1 32 div ssa_628 = ffma ssa_621, ssa_626, ssa_537 vec1 32 con ssa_629 = fabs ssa_565.d vec1 32 div ssa_630 = fmul ssa_629, ssa_621 vec1 32 div ssa_631 = fmax! ssa_535, ssa_630 vec1 32 div ssa_632 = fmin! 
ssa_545, ssa_622 vec1 32 con ssa_633 = extract_u16 ssa_567.w, ssa_16 vec1 32 con ssa_634 = iadd3 ssa_21, ssa_633, ssa_597 vec2 32 div ssa_635 = vec2 ssa_577, ssa_578 vec2 32 div ssa_636 = vec2 ssa_579, ssa_580 vec2 32 div ssa_637 = vec2 ssa_581, ssa_582 vec1 32 con ssa_638 = umin ssa_634, ssa_38 vec1 32 con ssa_639 = ishl ssa_638, ssa_27 vec1 32 con ssa_640 = iadd3 ssa_42, ssa_639, ssa_40 vec1 32 con ssa_641 = intrinsic resource_intel (ssa_44, ssa_640, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_642 = (float32)txd ssa_641 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_643 = ffma ssa_642.x, ssa_563.i, ssa_563.j vec1 32 div ssa_644 = fsat! ssa_643 vec1 32 div ssa_645 = ffma ssa_644, ssa_563.k, ssa_563.l vec1 32 div ssa_646 = fsat! ssa_645 vec1 32 div ssa_647 = ffma ssa_646, ssa_632, ssa_540 vec1 32 con ssa_648 = extract_u16 ssa_567.w, ssa_0 vec1 32 con ssa_649 = iadd3 ssa_21, ssa_648, ssa_597 vec1 32 con ssa_650 = umin ssa_649, ssa_38 vec1 32 con ssa_651 = ishl ssa_650, ssa_27 vec1 32 con ssa_652 = iadd3 ssa_42, ssa_651, ssa_40 vec1 32 con ssa_653 = intrinsic resource_intel (ssa_44, ssa_652, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_654 = (float32)txd ssa_653 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_655 = ffma ssa_654.x, ssa_563.e, ssa_563.f vec1 32 div ssa_656 = fsat! ssa_655 vec1 32 div ssa_657 = ffma ssa_656, ssa_563.g, ssa_563.h vec1 32 div ssa_658 = fsat! ssa_657 vec1 32 div ssa_659 = ffma ssa_658, ssa_632, ssa_541 vec1 32 con ssa_660 = extract_u16 ssa_567.z, ssa_0 vec1 32 con ssa_661 = iadd3 ssa_21, ssa_660, ssa_597 vec1 32 con ssa_662 = umin ssa_661, ssa_38 vec1 32 con ssa_663 = ishl ssa_662, ssa_27 vec1 32 con ssa_664 = iadd3 ssa_42, ssa_663, ssa_40 vec1 32 con ssa_665 = intrinsic resource_intel (ssa_44, ssa_664, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_666 = (float32)txd ssa_665 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_667 = ffma ssa_654.x, ssa_563.m, ssa_563.n vec1 32 div ssa_668 = fsat! ssa_667 vec1 32 div ssa_669 = ffma ssa_668, ssa_563.o, ssa_563.p vec1 32 div ssa_670 = fsat! 
ssa_669 vec1 32 con ssa_671 = fadd ssa_563.a, ssa_10 vec1 32 con ssa_672 = fadd ssa_563.b, ssa_10 vec1 32 con ssa_673 = fadd ssa_563.c, ssa_10 vec1 32 div ssa_674 = ffma ssa_670, ssa_671, ssa_1 vec1 32 div ssa_675 = ffma ssa_670, ssa_672, ssa_1 vec1 32 div ssa_676 = ffma ssa_670, ssa_673, ssa_1 vec1 32 div ssa_677 = fmul ssa_666.x, ssa_632 vec1 32 div ssa_678 = fmul ssa_666.y, ssa_632 vec1 32 div ssa_679 = fmul ssa_666.z, ssa_632 vec1 32 div ssa_680 = ffma ssa_677, ssa_674, ssa_542 vec1 32 div ssa_681 = ffma ssa_678, ssa_675, ssa_543 vec1 32 div ssa_682 = ffma ssa_679, ssa_676, ssa_544 vec1 32 div ssa_683 = fmul ssa_579, ssa_239.i vec1 32 div ssa_684 = fmul ssa_580, ssa_239.i vec1 32 div ssa_685 = fmul ssa_581, ssa_239.i vec1 32 div ssa_686 = fmul ssa_582, ssa_239.i vec1 32 con ssa_687 = extract_u16 ssa_567.z, ssa_16 vec1 32 con ssa_688 = iadd3 ssa_21, ssa_687, ssa_597 vec2 32 div ssa_689 = vec2 ssa_683, ssa_684 vec2 32 div ssa_690 = vec2 ssa_685, ssa_686 vec1 32 con ssa_691 = umin ssa_688, ssa_38 vec1 32 con ssa_692 = ishl ssa_691, ssa_27 vec1 32 con ssa_693 = iadd3 ssa_42, ssa_692, ssa_40 vec1 32 con ssa_694 = intrinsic resource_intel (ssa_44, ssa_693, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_695 = (float32)txd ssa_694 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_689 (ddx), ssa_690 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_696 = ffma ssa_695.x, ssa_9, ssa_10 vec1 32 div ssa_697 = ffma ssa_695.y, ssa_9, ssa_10 vec1 32 div ssa_698 = fmul ssa_632, ssa_567.x vec1 32 div ssa_699 = ffma ssa_698, ssa_696, ssa_538 vec1 32 div ssa_700 = ffma ssa_698, ssa_697, ssa_539 /* succs: block_17 */ } else { block block_16: /* preds: block_14 */ /* succs: block_17 */ } block block_17: /* preds: block_15 block_16 */ vec1 32 div ssa_701 = phi block_15: ssa_631, block_16: ssa_535 vec1 32 div ssa_702 = phi block_15: ssa_627, block_16: ssa_536 vec1 32 div ssa_703 = phi block_15: ssa_628, block_16: ssa_537 vec1 32 div ssa_704 = phi block_15: ssa_699, block_16: ssa_538 vec1 32 div ssa_705 = phi block_15: ssa_700, block_16: ssa_539 vec1 32 div ssa_706 = phi block_15: ssa_647, block_16: ssa_540 vec1 32 div ssa_707 = phi block_15: ssa_659, block_16: ssa_541 vec1 32 div ssa_708 = phi block_15: ssa_680, block_16: ssa_542 vec1 32 div ssa_709 = phi block_15: ssa_681, block_16: ssa_543 vec1 32 div ssa_710 = phi block_15: ssa_682, block_16: ssa_544 vec1 32 div ssa_711 = fneg ssa_705 vec1 32 div ssa_712 = fmul ssa_711, ssa_705 vec1 32 div ssa_713 = fneg ssa_704 vec1 32 div ssa_714 = ffma ssa_713, ssa_704, ssa_712 vec1 32 div ssa_715 = fadd ssa_1, ssa_714 vec1 32 div ssa_716 = fsat! ssa_715 vec1 32 div ssa_717 = fsqrt ssa_716 vec1 32 div ssa_718 = fneg ssa_703 vec1 32 div ssa_719 = fmul ssa_718, ssa_703 vec1 32 div ssa_720 = fneg ssa_702 vec1 32 div ssa_721 = ffma ssa_720, ssa_702, ssa_719 vec1 32 div ssa_722 = fadd ssa_1, ssa_721 vec1 32 div ssa_723 = fsat! 
ssa_722 vec1 32 div ssa_724 = fsqrt ssa_723 vec1 32 div ssa_725 = fadd ssa_702, ssa_713 vec1 32 div ssa_726 = fadd ssa_703, ssa_711 vec1 32 div ssa_727 = fneg ssa_717 vec1 32 div ssa_728 = fadd ssa_724, ssa_727 vec1 32 div ssa_729 = ffma ssa_725, ssa_701, ssa_704 vec1 32 div ssa_730 = ffma ssa_726, ssa_701, ssa_705 vec1 32 div ssa_731 = ffma ssa_728, ssa_701, ssa_717 vec1 32 div ssa_732 = ffma ssa_230, ssa_234, ssa_1 vec1 32 div ssa_733 = fneg ssa_729 vec1 32 div ssa_734 = fneg ssa_730 vec1 32 div ssa_735 = fmul ssa_732, ssa_731 vec1 32 div ssa_736 = ffma ssa_236, ssa_734, ssa_735 vec1 32 div ssa_737 = ffma ssa_235, ssa_733, ssa_736 vec1 32 div ssa_738 = fmul ssa_729, ssa_732 vec1 32 div ssa_739 = fmul ssa_730, ssa_732 vec1 32 div ssa_740 = ffma ssa_737, ssa_235, ssa_738 vec1 32 div ssa_741 = ffma ssa_737, ssa_236, ssa_739 vec1 32 div ssa_742 = fneg ssa_731 vec1 32 div ssa_743 = fadd ssa_737, ssa_742 vec1 32 div ssa_744 = fmul ssa_743, ssa_732 vec1 32 div ssa_745 = fmul ssa_744, ssa_744 vec1 32 div ssa_746 = ffma ssa_741, ssa_741, ssa_745 vec1 32 div ssa_747 = ffma ssa_740, ssa_740, ssa_746 vec1 32 div ssa_748 = frsq ssa_747 vec1 32 div ssa_749 = fmul ssa_740, ssa_748 vec1 32 div ssa_750 = fmul ssa_741, ssa_748 vec1 32 div ssa_751 = fmul ssa_744, ssa_748 vec1 32 div ssa_752 = ffma ssa_106.x, ssa_113.z, ssa_19 vec1 32 div ssa_753 = fadd ssa_752, ssa_114 vec1 32 div ssa_754 = fsat! ssa_753 vec1 32 div ssa_755 = fmul ssa_750, ssa_164 vec1 32 div ssa_756 = fmul ssa_750, ssa_167 vec1 32 div ssa_757 = fmul ssa_750, ssa_170 vec1 32 div ssa_758 = ffma ssa_749, ssa_152, ssa_755 vec1 32 div ssa_759 = ffma ssa_749, ssa_155, ssa_756 vec1 32 div ssa_760 = ffma ssa_749, ssa_158, ssa_757 vec1 32 div ssa_761 = ffma ssa_751, ssa_179, ssa_758 vec1 32 div ssa_762 = ffma ssa_751, ssa_182, ssa_759 vec1 32 div ssa_763 = ffma ssa_751, ssa_161, ssa_760 vec4 32 con ssa_764 = intrinsic load_ssbo_uniform_block_intel (ssa_104, ssa_69) (access=80, align_mul=1073741824, align_offset=96) vec4 32 con ssa_765 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_42) (access=80, align_mul=1073741824, align_offset=128) vec1 32 div ssa_766 = ffma ssa_179, ssa_6, ssa_146 vec1 32 div ssa_767 = ffma ssa_182, ssa_6, ssa_149 vec1 32 div ssa_768 = ffma ssa_161, ssa_6, ssa_132 vec1 32 con ssa_769 = load_const (0x000001d0 = 0.000000) vec8 32 con ssa_770 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_769) (access=80, align_mul=1073741824, align_offset=464) vec1 32 div ssa_771 = ffma ssa_770.a, ssa_766, ssa_770.e vec1 32 div ssa_772 = ffma ssa_770.b, ssa_767, ssa_770.f vec1 32 div ssa_773 = ffma ssa_770.c, ssa_768, ssa_770.g vec1 32 div ssa_774 = fneg ssa_771 vec1 32 div ssa_775 = fadd ssa_1, ssa_774 vec1 32 div ssa_776 = fneg ssa_772 vec1 32 div ssa_777 = fadd ssa_1, ssa_776 vec1 32 div ssa_778 = fneg ssa_773 vec1 32 div ssa_779 = fadd ssa_1, ssa_778 vec1 32 div ssa_780 = fmin! ssa_771, ssa_775 vec1 32 div ssa_781 = fmin! ssa_772, ssa_777 vec1 32 div ssa_782 = fmin! ssa_773, ssa_779 vec1 32 div ssa_783 = fmin! ssa_781, ssa_782 vec1 32 div ssa_784 = fmin! ssa_780, ssa_783 vec1 32 div ssa_785 = fmul ssa_784, ssa_18 vec1 32 div ssa_786 = fsat! 
ssa_785 vec3 32 div ssa_787 = vec3 ssa_771, ssa_772, ssa_773 vec1 32 con ssa_788 = umin ssa_59, ssa_38 vec1 32 con ssa_789 = ishl ssa_788, ssa_27 vec1 32 con ssa_790 = iadd3 ssa_42, ssa_789, ssa_40 vec1 32 con ssa_791 = intrinsic resource_intel (ssa_44, ssa_790, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_792 = umin ssa_66, ssa_211 vec1 32 con ssa_793 = ishl ssa_792, ssa_28 vec1 32 con ssa_794 = iadd ssa_213, ssa_793 vec1 32 con ssa_795 = intrinsic resource_intel (ssa_44, ssa_794, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_796 = (float32)txl ssa_791 (texture_handle), ssa_795 (sampler_handle), ssa_787 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div ssa_797 = fneg ssa_796.x vec1 32 div ssa_798 = ffma ssa_797, ssa_786, ssa_1 vec1 32 div ssa_799 = fsat! ssa_798 vec1 32 div ssa_800 = flt32! ssa_0, ssa_799 /* succs: block_18 block_54 */ if ssa_800 { block block_18: /* preds: block_17 */ vec1 32 con ssa_801 = load_const (0x45fa0000 = 8000.000000) vec1 32 con ssa_802 = load_const (0x467a0000 = 16000.000000) vec1 32 con ssa_803 = load_const (0x00000350 = 0.000000) vec4 32 con ssa_804 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_803) (access=80, align_mul=1073741824, align_offset=848) vec1 32 div ssa_805 = ffma ssa_804.x, ssa_766, ssa_804.z vec1 32 div ssa_806 = ffma ssa_804.y, ssa_767, ssa_804.w vec1 32 div ssa_807 = fadd ssa_805, ssa_5 vec1 32 div ssa_808 = fneg ssa_806 vec1 32 div ssa_809 = fadd ssa_6, ssa_808 vec1 32 div ssa_810 = fabs ssa_807 vec1 32 div ssa_811 = fabs ssa_809 vec1 32 div ssa_812 = flt32! ssa_811, ssa_6 vec1 32 div ssa_813 = flt32! ssa_810, ssa_6 vec1 32 div ssa_814 = iand ssa_813, ssa_812 vec1 32 div ssa_815 = fadd ssa_768, ssa_801 vec1 32 con ssa_816 = load_const (0x00000360 = 0.000000) vec1 32 con ssa_817 = umin ssa_62, ssa_38 vec1 32 con ssa_818 = ishl ssa_817, ssa_27 vec1 32 con ssa_819 = iadd3 ssa_42, ssa_818, ssa_40 vec1 32 con ssa_820 = intrinsic resource_intel (ssa_44, ssa_819, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_821 = umin ssa_67, ssa_211 vec1 32 con ssa_822 = ishl ssa_821, ssa_28 vec1 32 con ssa_823 = iadd ssa_213, ssa_822 vec1 32 con ssa_824 = intrinsic resource_intel (ssa_44, ssa_823, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) /* succs: block_19 */ loop { block block_19: /* preds: block_18 */ /* succs: block_20 block_21 */ if ssa_814 { block block_20: /* preds: block_19 */ /* succs: block_25 */ } else { block block_21: /* preds: block_19 */ vec4 32 con ssa_825 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) vec1 32 div ssa_826 = ffma ssa_825.x, ssa_766, ssa_825.z vec1 32 div ssa_827 = ffma ssa_825.y, ssa_767, ssa_825.w vec1 32 div ssa_828 = fadd ssa_826, ssa_5 vec1 32 div ssa_829 = fneg ssa_827 vec1 32 div ssa_830 = fadd ssa_6, ssa_829 vec1 32 div ssa_831 = fabs ssa_828 vec1 32 div ssa_832 = fabs ssa_830 vec1 32 div ssa_833 = flt32! ssa_832, ssa_6 vec1 32 div ssa_834 = flt32! 
ssa_831, ssa_6 vec1 32 div ssa_835 = iand ssa_834, ssa_833 /* succs: block_22 block_23 */ if ssa_835 { block block_22: /* preds: block_21 */ /* succs: block_24 */ } else { block block_23: /* preds: block_21 */ break /* succs: block_26 */ } block block_24: /* preds: block_22 */ /* succs: block_25 */ } block block_25: /* preds: block_20 block_24 */ vec1 32 div ssa_836 = phi block_20: ssa_806, block_24: ssa_827 vec1 32 div ssa_837 = phi block_20: ssa_805, block_24: ssa_826 vec1 32 div ssa_838 = phi block_20: ssa_0, block_24: ssa_1 vec1 32 div ssa_839 = fneg! ssa_836 vec1 32 div ssa_840 = fadd! ssa_1, ssa_839 vec3 32 div ssa_841 = vec3 ssa_837, ssa_840, ssa_838 vec4 32 div ssa_842 = (float32)txl ssa_820 (texture_handle), ssa_824 (sampler_handle), ssa_841 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div ssa_843 = fmul ssa_842.x, ssa_802 vec1 32 div ssa_844 = flt32! ssa_815, ssa_843 break /* succs: block_26 */ } block block_26: /* preds: block_23 block_25 */ vec1 32 div ssa_845 = phi block_23: ssa_2, block_25: ssa_844 /* succs: block_27 block_28 */ if ssa_845 { block block_27: /* preds: block_26 */ /* succs: block_53 */ } else { block block_28: /* preds: block_26 */ vec1 32 con ssa_846 = load_const (0x3883126f = 0.000063) /* succs: block_29 block_30 */ if ssa_814 { block block_29: /* preds: block_28 */ /* succs: block_31 */ } else { block block_30: /* preds: block_28 */ vec4 32 con ssa_847 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) vec1 32 div ssa_848 = ffma ssa_847.x, ssa_766, ssa_847.z vec1 32 div ssa_849 = ffma ssa_847.y, ssa_767, ssa_847.w vec1 32 div ssa_850 = fadd ssa_848, ssa_5 vec1 32 div ssa_851 = fneg ssa_849 vec1 32 div ssa_852 = fadd ssa_6, ssa_851 vec1 32 div ssa_853 = fabs ssa_850 vec1 32 div ssa_854 = fabs ssa_852 vec1 32 div ssa_855 = flt32! ssa_854, ssa_6 vec1 32 div ssa_856 = flt32! ssa_853, ssa_6 vec1 32 div ssa_857 = iand ssa_856, ssa_855 vec1 32 div ssa_858 = b32csel ssa_857, ssa_16, ssa_17 /* succs: block_31 */ } block block_31: /* preds: block_29 block_30 */ vec1 32 div ssa_859 = phi block_29: ssa_0, block_30: ssa_858 vec1 32 div ssa_860 = phi block_29: ssa_805, block_30: ssa_848 vec1 32 div ssa_861 = phi block_29: ssa_806, block_30: ssa_849 vec1 32 div ssa_862 = u2f32 ssa_859 vec1 32 div ssa_863 = flt32! ssa_862, ssa_9 /* succs: block_32 block_33 */ if ssa_863 { block block_32: /* preds: block_31 */ vec1 32 div ssa_864 = fneg! ssa_861 vec1 32 div ssa_865 = fadd! 
ssa_1, ssa_864 vec1 32 div ssa_866 = fneg ssa_815 vec1 32 div ssa_867 = ffma ssa_866, ssa_846, ssa_1 vec3 32 div ssa_868 = vec3 ssa_860, ssa_865, ssa_862 vec1 32 con ssa_869 = umin ssa_68, ssa_211 vec1 32 con ssa_870 = ishl ssa_869, ssa_28 vec1 32 con ssa_871 = iadd ssa_213, ssa_870 vec1 32 con ssa_872 = intrinsic resource_intel (ssa_44, ssa_871, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 div ssa_873 = (float32)txl ssa_820 (texture_handle), ssa_872 (sampler_handle), ssa_868 (coord), ssa_867 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_34 */ } else { block block_33: /* preds: block_31 */ /* succs: block_34 */ } block block_34: /* preds: block_32 block_33 */ vec1 32 div ssa_874 = phi block_32: ssa_873, block_33: ssa_1 /* succs: block_35 block_36 */ if ssa_814 { block block_35: /* preds: block_34 */ /* succs: block_37 */ } else { block block_36: /* preds: block_34 */ vec4 32 con ssa_875 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) vec1 32 div ssa_876 = ffma ssa_875.x, ssa_766, ssa_875.z vec1 32 div ssa_877 = ffma ssa_875.y, ssa_767, ssa_875.w vec1 32 div ssa_878 = fadd ssa_876, ssa_5 vec1 32 div ssa_879 = fneg ssa_877 vec1 32 div ssa_880 = fadd ssa_6, ssa_879 vec1 32 div ssa_881 = fabs ssa_878 vec1 32 div ssa_882 = fabs ssa_880 vec1 32 div ssa_883 = flt32! ssa_882, ssa_6 vec1 32 div ssa_884 = flt32! ssa_881, ssa_6 vec1 32 div ssa_885 = iand ssa_884, ssa_883 vec1 32 div ssa_886 = b32csel ssa_885, ssa_16, ssa_17 /* succs: block_37 */ } block block_37: /* preds: block_35 block_36 */ vec1 32 div ssa_887 = phi block_35: ssa_0, block_36: ssa_886 vec1 32 div ssa_888 = phi block_35: ssa_805, block_36: ssa_876 vec1 32 div ssa_889 = phi block_35: ssa_806, block_36: ssa_877 vec1 32 div ssa_890 = u2f32 ssa_887 vec1 32 div ssa_891 = flt32! ssa_890, ssa_9 /* succs: block_38 block_39 */ if ssa_891 { block block_38: /* preds: block_37 */ vec1 32 div ssa_892 = fneg! ssa_889 vec1 32 div ssa_893 = fadd! ssa_1, ssa_892 vec1 32 div ssa_894 = fneg ssa_815 vec1 32 div ssa_895 = ffma ssa_894, ssa_846, ssa_1 vec3 32 div ssa_896 = vec3 ssa_888, ssa_893, ssa_890 vec1 32 con ssa_897 = umin ssa_68, ssa_211 vec1 32 con ssa_898 = ishl ssa_897, ssa_28 vec1 32 con ssa_899 = iadd ssa_213, ssa_898 vec1 32 con ssa_900 = intrinsic resource_intel (ssa_44, ssa_899, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 div ssa_901 = (float32)txl ssa_820 (texture_handle), ssa_900 (sampler_handle), ssa_896 (coord), ssa_895 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_40 */ } else { block block_39: /* preds: block_37 */ /* succs: block_40 */ } block block_40: /* preds: block_38 block_39 */ vec1 32 div ssa_902 = phi block_38: ssa_901, block_39: ssa_1 vec1 32 div ssa_903 = fadd ssa_902, ssa_874 /* succs: block_41 block_42 */ if ssa_814 { block block_41: /* preds: block_40 */ /* succs: block_43 */ } else { block block_42: /* preds: block_40 */ vec4 32 con ssa_904 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) vec1 32 div ssa_905 = ffma ssa_904.x, ssa_766, ssa_904.z vec1 32 div ssa_906 = ffma ssa_904.y, ssa_767, ssa_904.w vec1 32 div ssa_907 = fadd ssa_905, ssa_5 vec1 32 div ssa_908 = fneg ssa_906 vec1 32 div ssa_909 = fadd ssa_6, ssa_908 vec1 32 div ssa_910 = fabs ssa_907 vec1 32 div ssa_911 = fabs ssa_909 vec1 32 div ssa_912 = flt32! 
ssa_911, ssa_6 vec1 32 div ssa_913 = flt32! ssa_910, ssa_6 vec1 32 div ssa_914 = iand ssa_913, ssa_912 vec1 32 div ssa_915 = b32csel ssa_914, ssa_16, ssa_17 /* succs: block_43 */ } block block_43: /* preds: block_41 block_42 */ vec1 32 div ssa_916 = phi block_41: ssa_0, block_42: ssa_915 vec1 32 div ssa_917 = phi block_41: ssa_805, block_42: ssa_905 vec1 32 div ssa_918 = phi block_41: ssa_806, block_42: ssa_906 vec1 32 div ssa_919 = u2f32 ssa_916 vec1 32 div ssa_920 = flt32! ssa_919, ssa_9 /* succs: block_44 block_45 */ if ssa_920 { block block_44: /* preds: block_43 */ vec1 32 div ssa_921 = fneg! ssa_918 vec1 32 div ssa_922 = fadd! ssa_1, ssa_921 vec1 32 div ssa_923 = fneg ssa_815 vec1 32 div ssa_924 = ffma ssa_923, ssa_846, ssa_1 vec3 32 div ssa_925 = vec3 ssa_917, ssa_922, ssa_919 vec1 32 con ssa_926 = umin ssa_68, ssa_211 vec1 32 con ssa_927 = ishl ssa_926, ssa_28 vec1 32 con ssa_928 = iadd ssa_213, ssa_927 vec1 32 con ssa_929 = intrinsic resource_intel (ssa_44, ssa_928, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 div ssa_930 = (float32)txl ssa_820 (texture_handle), ssa_929 (sampler_handle), ssa_925 (coord), ssa_924 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_46 */ } else { block block_45: /* preds: block_43 */ /* succs: block_46 */ } block block_46: /* preds: block_44 block_45 */ vec1 32 div ssa_931 = phi block_44: ssa_930, block_45: ssa_1 vec1 32 div ssa_932 = fadd ssa_931, ssa_903 /* succs: block_47 block_48 */ if ssa_814 { block block_47: /* preds: block_46 */ /* succs: block_49 */ } else { block block_48: /* preds: block_46 */ vec4 32 con ssa_933 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) vec1 32 div ssa_934 = ffma ssa_933.x, ssa_766, ssa_933.z vec1 32 div ssa_935 = ffma ssa_933.y, ssa_767, ssa_933.w vec1 32 div ssa_936 = fadd ssa_934, ssa_5 vec1 32 div ssa_937 = fneg ssa_935 vec1 32 div ssa_938 = fadd ssa_6, ssa_937 vec1 32 div ssa_939 = fabs ssa_936 vec1 32 div ssa_940 = fabs ssa_938 vec1 32 div ssa_941 = flt32! ssa_940, ssa_6 vec1 32 div ssa_942 = flt32! ssa_939, ssa_6 vec1 32 div ssa_943 = iand ssa_942, ssa_941 vec1 32 div ssa_944 = b32csel ssa_943, ssa_16, ssa_17 /* succs: block_49 */ } block block_49: /* preds: block_47 block_48 */ vec1 32 div ssa_945 = phi block_47: ssa_0, block_48: ssa_944 vec1 32 div ssa_946 = phi block_47: ssa_805, block_48: ssa_934 vec1 32 div ssa_947 = phi block_47: ssa_806, block_48: ssa_935 vec1 32 div ssa_948 = u2f32 ssa_945 vec1 32 div ssa_949 = flt32! ssa_948, ssa_9 /* succs: block_50 block_51 */ if ssa_949 { block block_50: /* preds: block_49 */ vec1 32 div ssa_950 = fneg! ssa_947 vec1 32 div ssa_951 = fadd! 
ssa_1, ssa_950 vec1 32 div ssa_952 = fneg ssa_815 vec1 32 div ssa_953 = ffma ssa_952, ssa_846, ssa_1 vec3 32 div ssa_954 = vec3 ssa_946, ssa_951, ssa_948 vec1 32 con ssa_955 = umin ssa_68, ssa_211 vec1 32 con ssa_956 = ishl ssa_955, ssa_28 vec1 32 con ssa_957 = iadd ssa_213, ssa_956 vec1 32 con ssa_958 = intrinsic resource_intel (ssa_44, ssa_957, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 div ssa_959 = (float32)txl ssa_820 (texture_handle), ssa_958 (sampler_handle), ssa_954 (coord), ssa_953 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_52 */ } else { block block_51: /* preds: block_49 */ /* succs: block_52 */ } block block_52: /* preds: block_50 block_51 */ vec1 32 div ssa_960 = phi block_50: ssa_959, block_51: ssa_1 vec1 32 div ssa_961 = fadd ssa_960, ssa_932 vec1 32 div ssa_962 = fmul ssa_961, ssa_11 /* succs: block_53 */ } block block_53: /* preds: block_27 block_52 */ vec1 32 div ssa_963 = phi block_27: ssa_1, block_52: ssa_962 vec1 32 div ssa_964 = fmul ssa_963, ssa_799 /* succs: block_55 */ } else { block block_54: /* preds: block_17 */ /* succs: block_55 */ } block block_55: /* preds: block_53 block_54 */ vec1 32 div ssa_965 = phi block_53: ssa_964, block_54: ssa_799 vec1 32 div ssa_966 = flt32! ssa_0, ssa_965 /* succs: block_56 block_60 */ if ssa_966 { block block_56: /* preds: block_55 */ vec1 32 con ssa_967 = intrinsic resource_intel (ssa_44, ssa_94, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_968 = load_const (0x41aaaaab = 21.333334) vec1 32 con ssa_969 = load_const (0x3d400000 = 0.046875) vec1 32 con ssa_970 = load_const (0x3daaaaab = 0.083333) vec1 32 con ssa_971 = load_const (0x00000340 = 0.000000) vec4 32 con ssa_972 = intrinsic load_ssbo_uniform_block_intel (ssa_967, ssa_971) (access=80, align_mul=1073741824, align_offset=832) vec1 32 div ssa_973 = flt32! ssa_132, ssa_972.w vec1 32 con ssa_974 = flt32! ssa_0, ssa_972.z vec1 32 div ssa_975 = iand ssa_974, ssa_973 vec1 32 div ssa_976 = b2f32 ssa_975 vec1 32 div ssa_977 = fneg ssa_976 vec1 32 div ssa_978 = fadd ssa_1, ssa_977 vec1 32 div ssa_979 = fmul ssa_978, ssa_965 vec1 32 div ssa_980 = ffma ssa_179, ssa_12, ssa_146 vec1 32 div ssa_981 = fneg ssa_182 vec1 32 div ssa_982 = fneg ssa_149 vec1 32 div ssa_983 = ffma ssa_981, ssa_12, ssa_982 vec1 32 div ssa_984 = ffma ssa_161, ssa_12, ssa_132 vec1 32 con ssa_985 = load_const (0x00000240 = 0.000000) vec4 32 con ssa_986 = intrinsic load_ssbo_uniform_block_intel (ssa_967, ssa_985) (access=80, align_mul=1073741824, align_offset=576) vec1 32 con ssa_987 = fmul ssa_986.x, ssa_968 vec1 32 con ssa_988 = fmul ssa_986.y, ssa_968 vec1 32 con ssa_989 = ffloor ssa_987 vec1 32 con ssa_990 = ffloor ssa_988 vec1 32 con ssa_991 = fneg ssa_989 vec1 32 div ssa_992 = ffma ssa_991, ssa_969, ssa_980 vec1 32 div ssa_993 = ffma ssa_990, ssa_969, ssa_983 vec1 32 div ssa_994 = ffma ssa_992, ssa_970, ssa_6 vec1 32 div ssa_995 = ffma ssa_993, ssa_970, ssa_6 vec1 32 div ssa_996 = fmin! ssa_994, ssa_995 vec1 32 div ssa_997 = fmax! ssa_994, ssa_995 vec1 32 div ssa_998 = flt32! ssa_1, ssa_997 vec1 32 div ssa_999 = flt32! 
ssa_996, ssa_0 vec1 32 div ssa_1000 = ior ssa_999, ssa_998 /* succs: block_57 block_58 */ if ssa_1000 { block block_57: /* preds: block_56 */ /* succs: block_59 */ } else { block block_58: /* preds: block_56 */ vec1 32 con ssa_1001 = load_const (0x43008000 = 128.500000) vec1 32 con ssa_1002 = load_const (0x3b000080 = 0.001953) vec1 32 con ssa_1003 = load_const (0x42800000 = 64.000000) vec1 32 con ssa_1004 = fmul ssa_986.z, ssa_968 vec1 32 con ssa_1005 = ffloor ssa_1004 vec2 32 div ssa_1006 = vec2 ssa_994, ssa_995 vec1 32 con ssa_1007 = umin ssa_58, ssa_38 vec1 32 con ssa_1008 = ishl ssa_1007, ssa_27 vec1 32 con ssa_1009 = iadd3 ssa_42, ssa_1008, ssa_40 vec1 32 con ssa_1010 = intrinsic resource_intel (ssa_44, ssa_1009, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1011 = umin ssa_65, ssa_211 vec1 32 con ssa_1012 = ishl ssa_1011, ssa_28 vec1 32 con ssa_1013 = iadd ssa_213, ssa_1012 vec1 32 con ssa_1014 = intrinsic resource_intel (ssa_44, ssa_1013, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1015 = (uint32)tg4 ssa_1010 (texture_handle), ssa_1014 (sampler_handle), ssa_1006 (coord), 0 (gather_component), 0 (texture), 0 (sampler) vec1 32 con ssa_1016 = umin ssa_53, ssa_38 vec1 32 con ssa_1017 = ishl ssa_1016, ssa_27 vec1 32 con ssa_1018 = iadd3 ssa_42, ssa_1017, ssa_40 vec1 32 con ssa_1019 = intrinsic resource_intel (ssa_44, ssa_1018, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_1020 = (uint32)tg4 ssa_1019 (texture_handle), ssa_1014 (sampler_handle), ssa_1006 (coord), 0 (gather_component), 0 (texture), 0 (sampler) vec1 32 div ssa_1021 = u2f32 ssa_1015.x vec1 32 div ssa_1022 = u2f32 ssa_1015.y vec1 32 div ssa_1023 = u2f32 ssa_1015.z vec1 32 div ssa_1024 = u2f32 ssa_1015.w vec1 32 con ssa_1025 = ffma ssa_1005, ssa_969, ssa_1003 vec1 32 div ssa_1026 = fneg ssa_1021 vec1 32 div ssa_1027 = ffma ssa_1026, ssa_1002, ssa_1025 vec1 32 div ssa_1028 = fneg ssa_1022 vec1 32 div ssa_1029 = ffma ssa_1028, ssa_1002, ssa_1025 vec1 32 div ssa_1030 = fneg ssa_1023 vec1 32 div ssa_1031 = ffma ssa_1030, ssa_1002, ssa_1025 vec1 32 div ssa_1032 = fneg ssa_1024 vec1 32 div ssa_1033 = ffma ssa_1032, ssa_1002, ssa_1025 vec1 32 div ssa_1034 = u2f32 ssa_1020.x vec1 32 div ssa_1035 = u2f32 ssa_1020.y vec1 32 div ssa_1036 = u2f32 ssa_1020.z vec1 32 div ssa_1037 = u2f32 ssa_1020.w vec1 32 div ssa_1038 = fneg ssa_1034 vec1 32 div ssa_1039 = ffma ssa_1038, ssa_1002, ssa_1025 vec1 32 div ssa_1040 = fneg ssa_1035 vec1 32 div ssa_1041 = ffma ssa_1040, ssa_1002, ssa_1025 vec1 32 div ssa_1042 = fneg ssa_1036 vec1 32 div ssa_1043 = ffma ssa_1042, ssa_1002, ssa_1025 vec1 32 div ssa_1044 = fneg ssa_1037 vec1 32 div ssa_1045 = ffma ssa_1044, ssa_1002, ssa_1025 vec1 32 div ssa_1046 = ffma ssa_992, ssa_968, ssa_1001 vec1 32 div ssa_1047 = ffma ssa_993, ssa_968, ssa_1001 vec1 32 div ssa_1048 = ffract ssa_1046 vec1 32 div ssa_1049 = ffract ssa_1047 vec1 32 div ssa_1050 = flt32! ssa_1039, ssa_984 vec1 32 div ssa_1051 = flt32! ssa_984, ssa_1027 vec1 32 div ssa_1052 = iand ssa_1051, ssa_1050 vec1 32 div ssa_1053 = flt32! ssa_984, ssa_1029 vec1 32 div ssa_1054 = flt32! ssa_1041, ssa_984 vec1 32 div ssa_1055 = iand ssa_1053, ssa_1054 vec1 32 div ssa_1056 = flt32! ssa_984, ssa_1031 vec1 32 div ssa_1057 = flt32! ssa_1043, ssa_984 vec1 32 div ssa_1058 = iand ssa_1056, ssa_1057 vec1 32 div ssa_1059 = flt32! ssa_984, ssa_1033 vec1 32 div ssa_1060 = flt32! 
ssa_1045, ssa_984 vec1 32 div ssa_1061 = iand ssa_1059, ssa_1060 vec1 32 div ssa_1062 = b2f32 ssa_1052 vec1 32 div ssa_1063 = b2f32 ssa_1055 vec1 32 div ssa_1064 = b2f32 ssa_1058 vec1 32 div ssa_1065 = b2f32 ssa_1061 vec1 32 div ssa_1066 = fneg ssa_1048 vec1 32 div ssa_1067 = fadd ssa_1, ssa_1066 vec1 32 div ssa_1068 = fmul ssa_1063, ssa_1048 vec1 32 div ssa_1069 = ffma ssa_1062, ssa_1067, ssa_1068 vec1 32 div ssa_1070 = fneg ssa_1049 vec1 32 div ssa_1071 = fadd ssa_1, ssa_1070 vec1 32 div ssa_1072 = fmul ssa_1071, ssa_1048 vec1 32 div ssa_1073 = fmul ssa_1072, ssa_1064 vec1 32 div ssa_1074 = fmul ssa_1071, ssa_1067 vec1 32 div ssa_1075 = ffma ssa_1074, ssa_1065, ssa_1073 vec1 32 div ssa_1076 = ffma ssa_1069, ssa_1049, ssa_1075 /* succs: block_59 */ } block block_59: /* preds: block_57 block_58 */ vec1 32 div ssa_1077 = phi block_57: ssa_0, block_58: ssa_1076 vec1 32 div ssa_1078 = fneg ssa_1077 vec1 32 div ssa_1079 = fadd ssa_1, ssa_1078 vec1 32 div ssa_1080 = fmul ssa_979, ssa_1079 /* succs: block_61 */ } else { block block_60: /* preds: block_55 */ /* succs: block_61 */ } block block_61: /* preds: block_59 block_60 */ vec1 32 div ssa_1081 = phi block_59: ssa_1080, block_60: ssa_965 vec4 32 con ssa_1082 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 con ssa_1083 = fadd ssa_764.x, ssa_764.y vec1 32 con ssa_1084 = flt32! ssa_1083, ssa_13 vec1 32 div ssa_1085 = flt32! ssa_1081, ssa_14 vec1 32 div ssa_1086 = ior ssa_1084, ssa_1085 /* succs: block_62 block_63 */ if ssa_1086 { block block_62: /* preds: block_61 */ /* succs: block_76 */ } else { block block_63: /* preds: block_61 */ vec1 32 con ssa_1087 = load_const (0x411ffffe = 9.999998) vec1 32 con ssa_1088 = load_const (0xbf400000 = -0.750000) vec1 32 con ssa_1089 = load_const (0x41700000 = 15.000000) vec1 32 con ssa_1090 = load_const (0x41f00000 = 30.000000) vec1 32 con ssa_1091 = load_const (0x42700000 = 60.000000) vec1 32 con ssa_1092 = load_const (0x42f00000 = 120.000000) vec1 32 div ssa_1093 = flt32! ssa_135, ssa_1092 /* succs: block_64 block_65 */ if ssa_1093 { block block_64: /* preds: block_63 */ vec1 32 div ssa_1094 = fadd ssa_161, ssa_6 vec1 32 div ssa_1095 = fsat! ssa_1094 vec1 32 con ssa_1096 = fadd ssa_764.y, ssa_10 vec1 32 div ssa_1097 = ffma ssa_1095, ssa_1081, ssa_1096 vec1 32 div ssa_1098 = fsat! ssa_1097 vec1 32 div ssa_1099 = fneg ssa_1081 vec1 32 div ssa_1100 = fmul ssa_1099, ssa_765.w vec1 32 div ssa_1101 = ffma ssa_1100, ssa_1098, ssa_1 vec1 32 div ssa_1102 = fadd ssa_10, ssa_1081 vec1 32 div ssa_1103 = fmul ssa_1102, ssa_765.z vec1 32 div ssa_1104 = ffma ssa_1103, ssa_1098, ssa_1 /* succs: block_66 */ } else { block block_65: /* preds: block_63 */ /* succs: block_66 */ } block block_66: /* preds: block_64 block_65 */ vec1 32 div ssa_1105 = phi block_64: ssa_1101, block_65: ssa_1 vec1 32 div ssa_1106 = phi block_64: ssa_1104, block_65: ssa_1 vec1 32 div ssa_1107 = flt32! 
ssa_135, ssa_1091 /* succs: block_67 block_68 */ if ssa_1107 { block block_67: /* preds: block_66 */ vec1 32 con ssa_1108 = load_const (0x41800000 = 16.000000) vec1 32 con ssa_1109 = load_const (0x3fc00000 = 1.500000) vec1 32 con ssa_1110 = load_const (0x40a00001 = 5.000000) vec1 32 con ssa_1111 = load_const (0xbe99999a = -0.300000) vec1 32 con ssa_1112 = load_const (0x3d23d70a = 0.040000) vec1 32 con ssa_1113 = load_const (0x3e4ccccd = 0.200000) vec1 32 div ssa_1114 = fmul ssa_146, ssa_1113 vec1 32 div ssa_1115 = fmul ssa_149, ssa_1113 vec1 32 div ssa_1116 = ffract ssa_1114 vec1 32 div ssa_1117 = ffract ssa_1115 vec2 32 div ssa_1118 = vec2 ssa_1116, ssa_1117 vec1 32 con ssa_1119 = umin ssa_55, ssa_38 vec1 32 con ssa_1120 = ishl ssa_1119, ssa_27 vec1 32 con ssa_1121 = iadd3 ssa_42, ssa_1120, ssa_40 vec1 32 con ssa_1122 = intrinsic resource_intel (ssa_44, ssa_1121, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1123 = umin ssa_67, ssa_211 vec1 32 con ssa_1124 = ishl ssa_1123, ssa_28 vec1 32 con ssa_1125 = iadd ssa_213, ssa_1124 vec1 32 con ssa_1126 = intrinsic resource_intel (ssa_44, ssa_1125, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1127 = (float32)tex ssa_1122 (texture_handle), ssa_1126 (sampler_handle), ssa_1118 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1128 = fmul ssa_146, ssa_1112 vec1 32 div ssa_1129 = fmul ssa_149, ssa_1112 vec1 32 div ssa_1130 = ffract ssa_1128 vec1 32 div ssa_1131 = ffract ssa_1129 vec2 32 div ssa_1132 = vec2 ssa_1130, ssa_1131 vec4 32 div ssa_1133 = (float32)tex ssa_1122 (texture_handle), ssa_1126 (sampler_handle), ssa_1132 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1134 = ffma ssa_1133.w, ssa_1127.w, ssa_10 vec1 32 div ssa_1135 = fsat! ssa_1134 vec1 32 div ssa_1136 = fadd ssa_1135, ssa_5 vec1 32 div ssa_1137 = fmul ssa_1136, ssa_1087 vec1 32 div ssa_1138 = fsat! ssa_1137 vec1 32 div ssa_1139 = fadd ssa_1135, ssa_1111 vec1 32 div ssa_1140 = fmul ssa_1139, ssa_1110 vec1 32 div ssa_1141 = fsat! 
ssa_1140 vec1 32 div ssa_1142 = fneg ssa_1081 vec1 32 div ssa_1143 = ffma ssa_1142, ssa_765.w, ssa_1 vec1 32 div ssa_1144 = fneg ssa_1105 vec1 32 div ssa_1145 = fadd ssa_1143, ssa_1144 vec1 32 div ssa_1146 = ffma ssa_1141, ssa_1145, ssa_1105 vec1 32 div ssa_1147 = fneg ssa_1146 vec1 32 div ssa_1148 = fadd ssa_1143, ssa_1147 vec1 32 div ssa_1149 = ffma ssa_1148, ssa_1138, ssa_1146 vec1 32 div ssa_1150 = fadd ssa_10, ssa_1081 vec1 32 div ssa_1151 = fneg ssa_1106 vec1 32 div ssa_1152 = fadd ssa_1151, ssa_1 vec1 32 div ssa_1153 = ffma ssa_1150, ssa_765.z, ssa_1152 vec1 32 div ssa_1154 = ffma ssa_1141, ssa_1153, ssa_1106 vec1 32 div ssa_1155 = fneg ssa_1154 vec1 32 div ssa_1156 = ffma ssa_1155, ssa_1138, ssa_1154 vec1 32 div ssa_1157 = fneg ssa_1138 vec1 32 div ssa_1158 = fadd ssa_1, ssa_1157 vec1 32 div ssa_1159 = fmul ssa_146, ssa_1109 vec1 32 div ssa_1160 = fmul ssa_149, ssa_1109 vec1 32 div ssa_1161 = ffract ssa_1159 vec1 32 div ssa_1162 = ffract ssa_1160 vec1 32 con ssa_1163 = ffract ssa_1082.z vec1 32 con ssa_1164 = fmul ssa_1163, ssa_1108 vec1 32 con ssa_1165 = ffloor ssa_1164 vec1 32 con ssa_1166 = fmul ssa_1165, ssa_11 vec1 32 con ssa_1167 = ffract ssa_1166 vec1 32 con ssa_1168 = ffloor ssa_1166 vec1 32 div ssa_1169 = ffract ssa_1161 vec1 32 div ssa_1170 = ffract ssa_1162 vec1 32 div ssa_1171 = ffma ssa_1169, ssa_11, ssa_1167 vec1 32 con ssa_1172 = fneg ssa_1168 vec1 32 div ssa_1173 = fadd ssa_1170, ssa_1172 vec1 32 div ssa_1174 = fmul ssa_1173, ssa_11 vec1 32 div ssa_1175 = ffract ssa_1171 vec1 32 div ssa_1176 = ffract ssa_1174 vec2 32 div ssa_1177 = vec2 ssa_1175, ssa_1176 vec1 32 con ssa_1178 = umin ssa_57, ssa_38 vec1 32 con ssa_1179 = ishl ssa_1178, ssa_27 vec1 32 con ssa_1180 = iadd3 ssa_42, ssa_1179, ssa_40 vec1 32 con ssa_1181 = intrinsic resource_intel (ssa_44, ssa_1180, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_1182 = (float32)tex ssa_1181 (texture_handle), ssa_1126 (sampler_handle), ssa_1177 (coord), 0 (texture), 0 (sampler) vec1 32 con ssa_1183 = fmul ssa_764.x, ssa_6 vec1 32 div ssa_1184 = fadd ssa_1182.x, ssa_5 vec1 32 div ssa_1185 = fadd ssa_1182.y, ssa_5 vec1 32 div ssa_1186 = ffma ssa_1184, ssa_1183, ssa_6 vec1 32 div ssa_1187 = ffma ssa_1185, ssa_1183, ssa_6 vec1 32 div ssa_1188 = fmul ssa_1186, ssa_1138 vec1 32 div ssa_1189 = fmul ssa_1187, ssa_1138 /* succs: block_69 */ } else { block block_68: /* preds: block_66 */ /* succs: block_69 */ } block block_69: /* preds: block_67 block_68 */ vec1 32 div ssa_1190 = phi block_67: ssa_1149, block_68: ssa_1105 vec1 32 div ssa_1191 = phi block_67: ssa_1156, block_68: ssa_1106 vec1 32 div ssa_1192 = phi block_67: ssa_1158, block_68: ssa_1 vec1 32 div ssa_1193 = phi block_67: ssa_1188, block_68: ssa_0 vec1 32 div ssa_1194 = phi block_67: ssa_1189, block_68: ssa_0 vec1 32 div ssa_1195 = phi block_67: ssa_1138, block_68: ssa_0 vec1 32 div ssa_1196 = flt32! 
ssa_135, ssa_1090 /* succs: block_70 block_71 */ if ssa_1196 { block block_70: /* preds: block_69 */ vec1 32 con ssa_1197 = load_const (0xc0000000 = -2.000000) vec1 32 con ssa_1198 = load_const (0x3e7d70a4 = 0.247500) vec1 32 con ssa_1199 = load_const (0x3c75c290 = 0.015000) vec1 32 con ssa_1200 = load_const (0x3ea8f5c3 = 0.330000) vec1 32 con ssa_1201 = load_const (0x3f400000 = 0.750000) vec1 32 div ssa_1202 = fabs ssa_182 vec1 32 div ssa_1203 = fmul ssa_149, ssa_1201 vec1 32 div ssa_1204 = fmul ssa_132, ssa_11 vec1 32 div ssa_1205 = ffma ssa_1082.z, ssa_12, ssa_1204 vec1 32 div ssa_1206 = fmul ssa_146, ssa_1201 vec2 32 div ssa_1207 = vec2 ssa_1203, ssa_1205 vec1 32 con ssa_1208 = umin ssa_56, ssa_38 vec1 32 con ssa_1209 = ishl ssa_1208, ssa_27 vec1 32 con ssa_1210 = iadd3 ssa_42, ssa_1209, ssa_40 vec1 32 con ssa_1211 = intrinsic resource_intel (ssa_44, ssa_1210, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1212 = umin ssa_67, ssa_211 vec1 32 con ssa_1213 = ishl ssa_1212, ssa_28 vec1 32 con ssa_1214 = iadd ssa_213, ssa_1213 vec1 32 con ssa_1215 = intrinsic resource_intel (ssa_44, ssa_1214, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1216 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1207 (coord), 0 (texture), 0 (sampler) vec2 32 div ssa_1217 = vec2 ssa_1206, ssa_1205 vec4 32 div ssa_1218 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1217 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1219 = fneg ssa_1216.x vec1 32 div ssa_1220 = fadd ssa_1218.x, ssa_1219 vec1 32 div ssa_1221 = fneg ssa_1216.y vec1 32 div ssa_1222 = fadd ssa_1218.y, ssa_1221 vec1 32 div ssa_1223 = fneg ssa_1216.z vec1 32 div ssa_1224 = fadd ssa_1218.z, ssa_1223 vec1 32 div ssa_1225 = fmul ssa_149, ssa_1198 vec1 32 con ssa_1226 = fmul ssa_1082.z, ssa_1199 vec1 32 div ssa_1227 = ffma ssa_1205, ssa_1200, ssa_1226 vec1 32 div ssa_1228 = fmul ssa_146, ssa_1198 vec2 32 div ssa_1229 = vec2 ssa_1225, ssa_1227 vec4 32 div ssa_1230 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1229 (coord), 0 (texture), 0 (sampler) vec2 32 div ssa_1231 = vec2 ssa_1228, ssa_1227 vec4 32 div ssa_1232 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1231 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1233 = fneg ssa_1230.x vec1 32 div ssa_1234 = fadd ssa_1232.x, ssa_1233 vec1 32 div ssa_1235 = fneg ssa_1230.y vec1 32 div ssa_1236 = fadd ssa_1232.y, ssa_1235 vec1 32 div ssa_1237 = fneg ssa_1230.z vec1 32 div ssa_1238 = fadd ssa_1232.z, ssa_1237 vec1 32 div ssa_1239 = fadd ssa_1238, ssa_1224 vec1 32 div ssa_1240 = fadd ssa_1230.z, ssa_1216.z vec1 32 div ssa_1241 = ffma ssa_1239, ssa_1202, ssa_1240 vec1 32 div ssa_1242 = fabs ssa_161 vec1 32 div ssa_1243 = fadd ssa_1242, ssa_1088 vec1 32 div ssa_1244 = fmul ssa_1243, ssa_1197 vec1 32 div ssa_1245 = fsat! 
ssa_1244 vec1 32 con ssa_1246 = fmul ssa_764.x, ssa_764.y vec1 32 div ssa_1247 = fmul ssa_1246, ssa_1245 vec1 32 div ssa_1248 = fneg ssa_1081 vec1 32 div ssa_1249 = ffma ssa_1248, ssa_765.z, ssa_1 vec1 32 div ssa_1250 = fadd ssa_1234, ssa_1220 vec1 32 div ssa_1251 = fadd ssa_1216.x, ssa_10 vec1 32 div ssa_1252 = fadd ssa_1251, ssa_1230.x vec1 32 div ssa_1253 = ffma ssa_1250, ssa_1202, ssa_1252 vec1 32 div ssa_1254 = fadd ssa_1236, ssa_1222 vec1 32 div ssa_1255 = fadd ssa_1216.y, ssa_10 vec1 32 div ssa_1256 = fadd ssa_1255, ssa_1230.y vec1 32 div ssa_1257 = ffma ssa_1254, ssa_1202, ssa_1256 vec1 32 div ssa_1258 = fneg ssa_1193 vec1 32 div ssa_1259 = ffma ssa_1253, ssa_1249, ssa_1258 vec1 32 div ssa_1260 = fneg ssa_1194 vec1 32 div ssa_1261 = ffma ssa_1257, ssa_1249, ssa_1260 vec1 32 div ssa_1262 = ffma ssa_1259, ssa_1247, ssa_1193 vec1 32 div ssa_1263 = ffma ssa_1261, ssa_1247, ssa_1194 vec1 32 div ssa_1264 = fneg ssa_1247 vec1 32 div ssa_1265 = ffma ssa_1264, ssa_1195, ssa_1195 vec1 32 div ssa_1266 = fadd ssa_10, ssa_1081 vec1 32 div ssa_1267 = ffma ssa_1266, ssa_765.z, ssa_1 vec1 32 div ssa_1268 = fneg ssa_1191 vec1 32 div ssa_1269 = fadd ssa_1267, ssa_1268 vec1 32 div ssa_1270 = fmul ssa_1269, ssa_6 vec1 32 div ssa_1271 = fmul ssa_1270, ssa_1247 vec1 32 div ssa_1272 = ffma ssa_1271, ssa_1241, ssa_1191 /* succs: block_72 */ } else { block block_71: /* preds: block_69 */ /* succs: block_72 */ } block block_72: /* preds: block_70 block_71 */ vec1 32 div ssa_1273 = phi block_70: ssa_1272, block_71: ssa_1191 vec1 32 div ssa_1274 = phi block_70: ssa_1262, block_71: ssa_1193 vec1 32 div ssa_1275 = phi block_70: ssa_1263, block_71: ssa_1194 vec1 32 div ssa_1276 = phi block_70: ssa_1265, block_71: ssa_1195 vec1 32 div ssa_1277 = flt32! ssa_135, ssa_1089 /* succs: block_73 block_74 */ if ssa_1277 { block block_73: /* preds: block_72 */ vec1 32 con ssa_1278 = load_const (0x40d55558 = 6.666668) vec1 32 con ssa_1279 = load_const (0x3e199998 = 0.150000) vec1 32 con ssa_1280 = load_const (0x3ecccccd = 0.400000) vec1 32 div ssa_1281 = fadd ssa_161, ssa_1088 vec1 32 div ssa_1282 = fmul ssa_1281, ssa_1087 vec1 32 div ssa_1283 = fsat! ssa_1282 vec1 32 div ssa_1284 = ffract ssa_146 vec1 32 div ssa_1285 = ffract ssa_149 vec2 32 div ssa_1286 = vec2 ssa_1284, ssa_1285 vec1 32 con ssa_1287 = umin ssa_55, ssa_38 vec1 32 con ssa_1288 = ishl ssa_1287, ssa_27 vec1 32 con ssa_1289 = iadd3 ssa_42, ssa_1288, ssa_40 vec1 32 con ssa_1290 = intrinsic resource_intel (ssa_44, ssa_1289, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1291 = umin ssa_67, ssa_211 vec1 32 con ssa_1292 = ishl ssa_1291, ssa_28 vec1 32 con ssa_1293 = iadd ssa_213, ssa_1292 vec1 32 con ssa_1294 = intrinsic resource_intel (ssa_44, ssa_1293, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1295 = (float32)tex ssa_1290 (texture_handle), ssa_1294 (sampler_handle), ssa_1286 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1296 = ffma ssa_1082.z, ssa_1280, ssa_1295.x vec1 32 div ssa_1297 = ffract ssa_1296 vec1 32 div ssa_1298 = fneg ssa_1297 vec1 32 div ssa_1299 = fadd ssa_1279, ssa_1298 vec1 32 div ssa_1300 = fsat! 
ssa_1299 vec1 32 con ssa_1301 = fmul ssa_764.x, ssa_1278 vec1 32 con ssa_1302 = fmul ssa_1301, ssa_764.x vec1 32 div ssa_1303 = fmul ssa_1302, ssa_1081 vec1 32 div ssa_1304 = fmul ssa_1303, ssa_1283 vec1 32 div ssa_1305 = fmul ssa_1304, ssa_1295.z vec1 32 div ssa_1306 = fmul ssa_1305, ssa_1300 vec1 32 con ssa_1307 = fadd ssa_764.y, ssa_10 vec1 32 div ssa_1308 = fadd ssa_1307, ssa_1295.y vec1 32 div ssa_1309 = fmul ssa_1308, ssa_1087 vec1 32 div ssa_1310 = fsat! ssa_1309 vec1 32 div ssa_1311 = fneg ssa_1190 vec1 32 div ssa_1312 = fadd ssa_1311, ssa_1 vec1 32 div ssa_1313 = fneg ssa_1081 vec1 32 div ssa_1314 = ffma ssa_1313, ssa_765.w, ssa_1312 vec1 32 div ssa_1315 = fmul ssa_1310, ssa_1314 vec1 32 div ssa_1316 = ffma ssa_1315, ssa_1306, ssa_1190 vec1 32 div ssa_1317 = fneg ssa_1310 vec1 32 div ssa_1318 = fmul ssa_1317, ssa_1273 vec1 32 div ssa_1319 = ffma ssa_1318, ssa_1306, ssa_1273 /* succs: block_75 */ } else { block block_74: /* preds: block_72 */ /* succs: block_75 */ } block block_75: /* preds: block_73 block_74 */ vec1 32 div ssa_1320 = phi block_73: ssa_1319, block_74: ssa_1273 vec1 32 div ssa_1321 = phi block_73: ssa_1316, block_74: ssa_1190 /* succs: block_76 */ } block block_76: /* preds: block_62 block_75 */ vec1 32 div ssa_1322 = phi block_62: ssa_1, block_75: ssa_1321 vec1 32 div ssa_1323 = phi block_62: ssa_1, block_75: ssa_1320 vec1 32 div ssa_1324 = phi block_62: ssa_1, block_75: ssa_1192 vec1 32 div ssa_1325 = phi block_62: ssa_0, block_75: ssa_1274 vec1 32 div ssa_1326 = phi block_62: ssa_0, block_75: ssa_1275 vec1 32 div ssa_1327 = phi block_62: ssa_0, block_75: ssa_1276 vec1 32 div ssa_1328 = fmul ssa_1322, ssa_708 vec1 32 div ssa_1329 = fmul ssa_1322, ssa_709 vec1 32 div ssa_1330 = fmul ssa_1322, ssa_710 vec1 32 div ssa_1331 = fmul ssa_1323, ssa_707 vec1 32 div ssa_1332 = ffma ssa_1324, ssa_761, ssa_1325 vec1 32 div ssa_1333 = ffma ssa_1324, ssa_762, ssa_1326 vec1 32 div ssa_1334 = ffma ssa_1324, ssa_763, ssa_1327 /* succs: block_77 block_78 */ if ssa_289 { block block_77: /* preds: block_76 */ vec1 32 div ssa_1335 = fmul ssa_161, ssa_1334 vec1 32 div ssa_1336 = ffma ssa_182, ssa_1333, ssa_1335 vec1 32 div ssa_1337 = ffma ssa_179, ssa_1332, ssa_1336 vec1 32 div ssa_1338 = fneg ssa_1332 vec1 32 div ssa_1339 = fmul ssa_1338, ssa_9 vec1 32 div ssa_1340 = fneg ssa_1333 vec1 32 div ssa_1341 = fmul ssa_1340, ssa_9 vec1 32 div ssa_1342 = fneg ssa_1334 vec1 32 div ssa_1343 = fmul ssa_1342, ssa_9 vec1 32 div ssa_1344 = ffma ssa_1339, ssa_1337, ssa_179 vec1 32 div ssa_1345 = ffma ssa_1341, ssa_1337, ssa_182 vec1 32 div ssa_1346 = ffma ssa_1343, ssa_1337, ssa_161 /* succs: block_79 */ } else { block block_78: /* preds: block_76 */ /* succs: block_79 */ } block block_79: /* preds: block_77 block_78 */ vec1 32 div ssa_1347 = phi block_77: ssa_1344, block_78: ssa_1332 vec1 32 div ssa_1348 = phi block_77: ssa_1345, block_78: ssa_1333 vec1 32 div ssa_1349 = phi block_77: ssa_1346, block_78: ssa_1334 vec1 32 con ssa_1350 = fmul ssa_189.y, ssa_3 vec1 32 div ssa_1351 = fsqrt ssa_1328 vec1 32 div ssa_1352 = fsqrt ssa_1329 vec1 32 div ssa_1353 = fsqrt ssa_1330 vec1 32 div ssa_1354 = fabs ssa_1349 vec1 32 div ssa_1355 = fabs ssa_1348 vec1 32 div ssa_1356 = fmax! ssa_1355, ssa_1354 vec1 32 div ssa_1357 = fabs ssa_1347 vec1 32 div ssa_1358 = fmax! 
ssa_1357, ssa_1356 vec1 32 div ssa_1359 = frcp ssa_1358 vec1 32 div ssa_1360 = fmul ssa_1347, ssa_6 vec1 32 div ssa_1361 = fmul ssa_1348, ssa_6 vec1 32 div ssa_1362 = fmul ssa_1349, ssa_6 vec1 32 div ssa_1363 = ffma ssa_1360, ssa_1359, ssa_6 vec1 32 div ssa_1364 = ffma ssa_1361, ssa_1359, ssa_6 vec1 32 div ssa_1365 = ffma ssa_1362, ssa_1359, ssa_6 vec1 32 div ssa_1366 = fmul ssa_754, ssa_8 vec1 32 div ssa_1367 = f2u32 ssa_1366 vec1 32 div ssa_1368 = u2f32 ssa_1367 vec1 32 div ssa_1369 = fmul ssa_1368, ssa_7 vec1 32 div ssa_1370 = frcp ssa_123 vec1 32 div ssa_1371 = fmul ssa_139, ssa_1370 vec1 32 div ssa_1372 = fmul ssa_143, ssa_1370 vec1 32 div ssa_1373 = fmul ssa_120, ssa_1370 vec1 32 div ssa_1374 = frcp ssa_112 vec1 32 div ssa_1375 = ffma ssa_126, ssa_1374, ssa_1371 vec1 32 div ssa_1376 = ffma ssa_129, ssa_1374, ssa_1372 vec1 32 div ssa_1377 = ffma ssa_109, ssa_1374, ssa_1373 vec1 32 div ssa_1378 = fmul ssa_1375, ssa_6 vec1 32 div ssa_1379 = fmul ssa_1376, ssa_5 vec1 32 div ssa_1380 = fmul ssa_1377, ssa_4 vec4 32 div ssa_1381 = vec4 ssa_1351, ssa_1352, ssa_1353, ssa_1350 intrinsic store_output (ssa_1381, ssa_0) (base=8, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA0 slots=1 /*132*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target */ vec4 32 div ssa_1382 = vec4 ssa_1363, ssa_1364, ssa_1365, ssa_0 intrinsic store_output (ssa_1382, ssa_0) (base=10, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA1 slots=1 /*133*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_1 */ vec4 32 div ssa_1383 = vec4 ssa_706, ssa_1331, ssa_3, ssa_1369 intrinsic store_output (ssa_1383, ssa_0) (base=12, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA2 slots=1 /*134*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_2 */ vec4 32 div ssa_1384 = vec4 ssa_1378, ssa_1379, ssa_1380, ssa_1 intrinsic store_output (ssa_1384, ssa_0) (base=14, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA3 slots=1 /*135*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_3 */ /* succs: block_80 */ block block_80: } NIR (final form) for fragment shader: shader: MESA_SHADER_FRAGMENT source_sha1: {0x54969a6b, 0xf3bf0d03, 0x19f918a7, 0x34f0be6e, 0x8aacdd2a} stage: 4 next_stage: 0 num_ssbos: 5 inputs_read: 33-39 outputs_written: 4-7 system_values_read: 0x00000000'00000000'02480000 subgroup_size: 2 uses_wide_subgroup_intrinsics: true uses_texture_gather: true divergence_analysis_run: true bit_sizes_float: 0x20 bit_sizes_int: 0x21 separate_shader: true uses_discard: true uses_demote: true needs_quad_helper_invocations: true needs_all_helper_invocations: true origin_upper_left: true inputs: 0 outputs: 0 uniforms: 256 decl_var push_const INTERP_MODE_NONE RootConstants registers decl_var uniform INTERP_MODE_NONE restrict texture2D[] @0 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict texture2DArray[] @1 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict texture3D[] @2 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict utexture2D[] @3 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @4 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @5 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @6 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly SSBO[] @7 (~0, 0, 2) decl_var ssbo INTERP_MODE_NONE restrict readonly BindlessCBV[] @8 (~0, 0, 2) decl_var uniform INTERP_MODE_NONE restrict sampler[] @9 (~0, 0, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target 
(FRAG_RESULT_DATA0.xyzw, 8, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_1 (FRAG_RESULT_DATA1.xyzw, 10, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_2 (FRAG_RESULT_DATA2.xyzw, 12, 0) decl_var shader_out INTERP_MODE_NONE vec4 SV_Target_3 (FRAG_RESULT_DATA3.xyzw, 14, 0) decl_var INTERP_MODE_NONE float TEXCOORD_6 decl_var INTERP_MODE_NONE float TEXCOORD_6@10 decl_var INTERP_MODE_NONE float TEXCOORD_6@11 decl_var INTERP_MODE_NONE float TEXCOORD_5 decl_var INTERP_MODE_NONE float TEXCOORD_5@12 decl_var INTERP_MODE_NONE float TEXCOORD_5@13 decl_var INTERP_MODE_NONE float TEXCOORD_5@14 decl_var INTERP_MODE_NONE float TEXCOORD_4 decl_var INTERP_MODE_NONE float TEXCOORD_4@15 decl_var INTERP_MODE_NONE float TEXCOORD_4@16 decl_var INTERP_MODE_NONE float TEXCOORD_4@17 decl_var INTERP_MODE_NONE float TEXCOORD_3 decl_var INTERP_MODE_NONE float TEXCOORD_3@18 decl_var INTERP_MODE_NONE float TEXCOORD_2 decl_var INTERP_MODE_NONE float TEXCOORD_2@19 decl_var INTERP_MODE_NONE float TEXCOORD_2@20 decl_var INTERP_MODE_NONE float TEXCOORD_1 decl_var INTERP_MODE_NONE float TEXCOORD_1@21 decl_var INTERP_MODE_NONE float TEXCOORD_1@22 decl_var INTERP_MODE_NONE float TEXCOORD_1@23 decl_var INTERP_MODE_NONE float TEXCOORD decl_var INTERP_MODE_NONE float TEXCOORD@24 decl_var INTERP_MODE_NONE float TEXCOORD@25 decl_var INTERP_MODE_NONE float TEXCOORD@26 decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD@27 (VARYING_SLOT_VAR1.xyzw, 33, 0) decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_1@28 (VARYING_SLOT_VAR2.xyzw, 34, 0) decl_var shader_in INTERP_MODE_SMOOTH vec3 TEXCOORD_2@29 (VARYING_SLOT_VAR3.xyz, 35, 0) decl_var shader_in INTERP_MODE_SMOOTH vec2 TEXCOORD_3@30 (VARYING_SLOT_VAR4.zw, 36, 0) decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_4@31 (VARYING_SLOT_VAR5.xyzw, 37, 0) decl_var shader_in INTERP_MODE_SMOOTH vec4 TEXCOORD_5@32 (VARYING_SLOT_VAR6.xyzw, 38, 0) decl_var shader_in INTERP_MODE_SMOOTH vec3 TEXCOORD_6@33 (VARYING_SLOT_VAR7.xyz, 39, 0) decl_function main (0 params) impl main { decl_reg vec1 32 con r7 decl_reg vec1 32 div r8 decl_reg vec1 32 div r9 decl_reg vec1 32 div r10 decl_reg vec1 32 div r11 decl_reg vec1 32 div r12 decl_reg vec1 32 div r13 decl_reg vec1 32 div r14 decl_reg vec1 32 div r15 decl_reg vec1 32 div r16 decl_reg vec1 32 div r17 decl_reg vec1 32 div r18 decl_reg vec1 32 div r19 decl_reg vec1 32 div r20 decl_reg vec1 32 div r21 decl_reg vec1 32 div r22 decl_reg vec1 32 div r23 decl_reg vec1 32 div r24 decl_reg vec1 32 div r25 decl_reg vec1 32 div r26 decl_reg vec1 32 div r27 decl_reg vec1 32 div r28 decl_reg vec1 32 div r29 decl_reg vec1 32 div r30 decl_reg vec1 32 div r31 decl_reg vec1 32 div r32 decl_reg vec1 32 div r33 decl_reg vec1 32 div r34 decl_reg vec1 32 div r35 decl_reg vec1 32 div r36 decl_reg vec1 32 div r37 decl_reg vec1 32 div r38 decl_reg vec1 32 div r39 decl_reg vec1 32 div r40 decl_reg vec1 32 div r41 decl_reg vec1 32 div r42 decl_reg vec1 32 div r43 decl_reg vec1 32 div r44 decl_reg vec1 32 div r45 decl_reg vec1 32 div r46 decl_reg vec1 32 div r47 decl_reg vec1 32 div r48 decl_reg vec1 32 div r49 decl_reg vec1 32 div r50 decl_reg vec1 32 div r51 block block_0: /* preds: */ vec1 32 con ssa_0 = load_const (0x00000000 = 0.000000) vec1 32 con ssa_1 = load_const (0x3f800000 = 1.000000) vec1 32 con ssa_2 = load_const (0xffffffff = -nan) vec1 32 con ssa_3 = load_const (0x3eaaaaab = 0.333333) vec1 32 con ssa_4 = load_const (0x447a0000 = 1000.000000) vec1 32 con ssa_5 = load_const (0xbf000000 = -0.500000) vec1 32 con ssa_6 = load_const (0x3f000000 = 
0.500000) vec1 32 con ssa_7 = load_const (0x3b808081 = 0.003922) vec1 32 con ssa_8 = load_const (0x427c0000 = 63.000000) vec1 32 con ssa_9 = load_const (0x40000000 = 2.000000) vec1 32 con ssa_10 = load_const (0xbf800000 = -1.000000) vec1 32 con ssa_11 = load_const (0x3e800000 = 0.250000) vec1 32 con ssa_12 = load_const (0x3dcccccd = 0.100000) vec1 32 con ssa_13 = load_const (0x3a83126f = 0.001000) vec1 32 con ssa_14 = load_const (0x3c23d70a = 0.010000) vec1 32 con ssa_15 = load_const (0x00000004 = 0.000000) vec1 32 con ssa_16 = load_const (0x00000001 = 0.000000) vec1 32 con ssa_17 = load_const (0x00000002 = 0.000000) vec1 32 con ssa_18 = load_const (0x41200000 = 10.000000) vec1 32 con ssa_19 = load_const (0x37a7c5ac = 0.000020) vec1 32 con ssa_20 = load_const (0xffffff7e = -nan) vec1 32 con ssa_21 = load_const (0x00000082 = 0.000000) vec1 32 con ssa_22 = load_const (0x00000010 = 0.000000) vec1 32 con ssa_23 = load_const (0x0000000a = 0.000000) vec1 32 con ssa_24 = load_const (0x0000000f = 0.000000) vec1 32 con ssa_25 = load_const (0x0000000e = 0.000000) vec1 32 con ssa_26 = load_const (0x00000007 = 0.000000) vec1 32 con ssa_27 = load_const (0x00000006 = 0.000000) vec1 32 con ssa_28 = load_const (0x00000005 = 0.000000) vec1 32 con ssa_29 = load_const (0x00000003 = 0.000000) vec1 32 con ssa_30 = load_const (0xffffffff = -nan) vec1 32 con ssa_31 = load_const (0x0000001f = 0.000000) vec1 32 con ssa_32 = load_const (0x00000014 = 0.000000) vec1 32 con ssa_33 = load_const (0x000003ff = 0.000000) vec1 32 con ssa_34 = load_const (0x0000003f = 0.000000) vec1 32 con ssa_35 = load_const (0x00000048 = 0.000000) vec1 32 con ssa_36 = intrinsic load_uniform (ssa_35) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_37 = iadd ssa_36, ssa_29 vec1 32 con ssa_38 = load_const (0x000f423f = 0.000000) vec1 32 con ssa_39 = umin ssa_37, ssa_38 vec1 32 con ssa_40 = intrinsic load_uniform (ssa_0) (base=252, range=4, dest_type=uint /*4*/) vec1 32 con ssa_41 = ishl ssa_39, ssa_27 vec1 32 con ssa_42 = load_const (0x00000080 = 0.000000) vec1 32 con ssa_43 = iadd3 ssa_42, ssa_41, ssa_40 vec1 32 con ssa_44 = load_const (0xdeaddeed = -6264355898823540736.000000) vec1 32 con ssa_45 = intrinsic resource_intel (ssa_44, ssa_43, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_46 = iadd ssa_36, ssa_17 vec1 32 con ssa_47 = umin ssa_46, ssa_38 vec1 32 con ssa_48 = ishl ssa_47, ssa_27 vec1 32 con ssa_49 = iadd3 ssa_42, ssa_48, ssa_40 vec1 32 con ssa_50 = intrinsic resource_intel (ssa_44, ssa_49, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_51 = load_const (0x00000054 = 0.000000) vec1 32 con ssa_52 = intrinsic load_uniform (ssa_51) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_53 = iadd ssa_52, ssa_22 vec1 32 con ssa_54 = iadd ssa_52, ssa_25 vec1 32 con ssa_55 = iadd ssa_52, ssa_26 vec1 32 con ssa_56 = iadd ssa_52, ssa_27 vec1 32 con ssa_57 = iadd ssa_52, ssa_28 vec1 32 con ssa_58 = iadd ssa_52, ssa_15 vec1 32 con ssa_59 = iadd ssa_52, ssa_16 vec1 32 con ssa_60 = load_const (0x00000050 = 0.000000) vec1 32 con ssa_61 = intrinsic load_uniform (ssa_60) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_62 = iadd ssa_61, ssa_22 vec1 32 con ssa_63 = iadd ssa_36, ssa_16 vec1 32 con ssa_64 = load_const (0x00000064 = 0.000000) vec1 32 con ssa_65 = intrinsic load_uniform (ssa_64) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_66 = iadd ssa_65, ssa_29 vec1 32 con ssa_67 = 
iadd ssa_65, ssa_17 vec1 32 con ssa_68 = iadd ssa_65, ssa_16 vec1 32 con ssa_69 = load_const (0x00000060 = 0.000000) vec1 32 con ssa_70 = intrinsic load_uniform (ssa_69) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_71 = load_const (0x00000044 = 0.000000) vec1 32 con ssa_72 = intrinsic load_uniform (ssa_71) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_73 = iadd ssa_72, ssa_28 vec1 32 con ssa_74 = umin ssa_73, ssa_38 vec1 32 con ssa_75 = ishl ssa_74, ssa_27 vec1 32 con ssa_76 = iadd3 ssa_42, ssa_75, ssa_40 vec1 32 con ssa_77 = intrinsic resource_intel (ssa_44, ssa_76, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_78 = load_const (0x00000038 = 0.000000) vec1 32 con ssa_79 = intrinsic load_uniform (ssa_78) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_80 = umin ssa_79, ssa_38 vec1 32 con ssa_81 = ishl ssa_80, ssa_27 vec1 32 con ssa_82 = iadd3 ssa_42, ssa_81, ssa_40 vec1 32 con ssa_83 = intrinsic resource_intel (ssa_44, ssa_82, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_84 = iadd ssa_72, ssa_15 vec1 32 con ssa_85 = umin ssa_84, ssa_38 vec1 32 con ssa_86 = ishl ssa_85, ssa_27 vec1 32 con ssa_87 = iadd3 ssa_42, ssa_86, ssa_40 vec1 32 con ssa_88 = intrinsic resource_intel (ssa_44, ssa_87, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_89 = load_const (0x00000034 = 0.000000) vec1 32 con ssa_90 = intrinsic load_uniform (ssa_89) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_91 = iadd ssa_90, ssa_16 vec1 32 con ssa_92 = umin ssa_91, ssa_38 vec1 32 con ssa_93 = ishl ssa_92, ssa_27 vec1 32 con ssa_94 = iadd3 ssa_42, ssa_93, ssa_40 vec1 32 con ssa_95 = umin ssa_90, ssa_38 vec1 32 con ssa_96 = ishl ssa_95, ssa_27 vec1 32 con ssa_97 = iadd3 ssa_42, ssa_96, ssa_40 vec1 32 con ssa_98 = intrinsic resource_intel (ssa_44, ssa_97, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_99 = load_const (0x00000040 = 0.000000) vec1 32 con ssa_100 = intrinsic load_uniform (ssa_99) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_101 = umin ssa_100, ssa_38 vec1 32 con ssa_102 = ishl ssa_101, ssa_27 vec1 32 con ssa_103 = iadd3 ssa_42, ssa_102, ssa_40 vec1 32 con ssa_104 = intrinsic resource_intel (ssa_44, ssa_103, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_105 = intrinsic load_front_face () () vec2 32 div ssa_106 = intrinsic load_barycentric_pixel () (interp_mode=1) vec3 32 con ssa_107 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=0, io location=39 slots=1 /*167*/) vec1 32 div ssa_108 = ffma ssa_106.y, ssa_107.y, ssa_107.x vec1 32 div ssa_109 = ffma ssa_106.x, ssa_107.z, ssa_108 vec3 32 con ssa_110 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=1, io location=39 slots=1 /*167*/) vec1 32 div ssa_111 = ffma ssa_106.y, ssa_110.y, ssa_110.x vec1 32 div ssa_112 = ffma ssa_106.x, ssa_110.z, ssa_111 vec3 32 con ssa_113 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=39, component=2, io location=39 slots=1 /*167*/) vec1 32 div ssa_114 = ffma ssa_106.y, ssa_113.y, ssa_113.x vec3 32 con ssa_115 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=0, io location=38 slots=1 /*166*/) vec1 32 div ssa_116 = fneg ssa_106.y vec1 32 con ssa_117 = fneg ssa_115.x vec1 32 div ssa_118 = ffma ssa_116, ssa_115.y, 
ssa_117 vec1 32 div ssa_119 = fneg ssa_106.x vec1 32 div ssa_120 = ffma ssa_119, ssa_115.z, ssa_118 vec3 32 con ssa_121 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=1, io location=38 slots=1 /*166*/) vec1 32 div ssa_122 = ffma ssa_106.y, ssa_121.y, ssa_121.x vec1 32 div ssa_123 = ffma ssa_106.x, ssa_121.z, ssa_122 vec3 32 con ssa_124 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=2, io location=38 slots=1 /*166*/) vec1 32 div ssa_125 = ffma ssa_106.y, ssa_124.y, ssa_124.x vec1 32 div ssa_126 = ffma ssa_106.x, ssa_124.z, ssa_125 vec3 32 con ssa_127 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=38, component=3, io location=38 slots=1 /*166*/) vec1 32 div ssa_128 = ffma ssa_106.y, ssa_127.y, ssa_127.x vec1 32 div ssa_129 = ffma ssa_106.x, ssa_127.z, ssa_128 vec3 32 con ssa_130 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=0, io location=37 slots=1 /*165*/) vec1 32 div ssa_131 = ffma ssa_106.y, ssa_130.y, ssa_130.x vec1 32 div ssa_132 = ffma ssa_106.x, ssa_130.z, ssa_131 vec3 32 con ssa_133 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=1, io location=37 slots=1 /*165*/) vec1 32 div ssa_134 = ffma ssa_106.y, ssa_133.y, ssa_133.x vec1 32 div ssa_135 = ffma ssa_106.x, ssa_133.z, ssa_134 vec3 32 con ssa_136 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=2, io location=37 slots=1 /*165*/) vec1 32 con ssa_137 = fneg ssa_136.x vec1 32 div ssa_138 = ffma ssa_116, ssa_136.y, ssa_137 vec1 32 div ssa_139 = ffma ssa_119, ssa_136.z, ssa_138 vec3 32 con ssa_140 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=37, component=3, io location=37 slots=1 /*165*/) vec1 32 con ssa_141 = fneg ssa_140.x vec1 32 div ssa_142 = ffma ssa_116, ssa_140.y, ssa_141 vec1 32 div ssa_143 = ffma ssa_119, ssa_140.z, ssa_142 vec3 32 con ssa_144 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=36, component=2, io location=36 slots=1 /*164*/) vec1 32 div ssa_145 = ffma ssa_106.y, ssa_144.y, ssa_144.x vec1 32 div ssa_146 = ffma ssa_106.x, ssa_144.z, ssa_145 vec3 32 con ssa_147 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=36, component=3, io location=36 slots=1 /*164*/) vec1 32 div ssa_148 = ffma ssa_106.y, ssa_147.y, ssa_147.x vec1 32 div ssa_149 = ffma ssa_106.x, ssa_147.z, ssa_148 vec3 32 con ssa_150 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=0, io location=35 slots=1 /*163*/) vec1 32 div ssa_151 = ffma ssa_106.y, ssa_150.y, ssa_150.x vec1 32 div ssa_152 = ffma ssa_106.x, ssa_150.z, ssa_151 vec3 32 con ssa_153 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=1, io location=35 slots=1 /*163*/) vec1 32 div ssa_154 = ffma ssa_106.y, ssa_153.y, ssa_153.x vec1 32 div ssa_155 = ffma ssa_106.x, ssa_153.z, ssa_154 vec3 32 con ssa_156 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=35, component=2, io location=35 slots=1 /*163*/) vec1 32 div ssa_157 = ffma ssa_106.y, ssa_156.y, ssa_156.x vec1 32 div ssa_158 = ffma ssa_106.x, ssa_156.z, ssa_157 vec3 32 con ssa_159 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=0, io location=34 slots=1 /*162*/) vec1 32 div ssa_160 = ffma ssa_106.y, ssa_159.y, ssa_159.x vec1 32 div ssa_161 = ffma ssa_106.x, ssa_159.z, ssa_160 vec3 32 con ssa_162 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=1, io location=34 slots=1 /*162*/) vec1 32 div ssa_163 = ffma ssa_106.y, ssa_162.y, ssa_162.x vec1 32 div ssa_164 = ffma ssa_106.x, ssa_162.z, ssa_163 vec3 32 con ssa_165 = intrinsic 
load_fs_input_interp_deltas (ssa_0) (base=34, component=2, io location=34 slots=1 /*162*/) vec1 32 div ssa_166 = ffma ssa_106.y, ssa_165.y, ssa_165.x vec1 32 div ssa_167 = ffma ssa_106.x, ssa_165.z, ssa_166 vec3 32 con ssa_168 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=34, component=3, io location=34 slots=1 /*162*/) vec1 32 div ssa_169 = ffma ssa_106.y, ssa_168.y, ssa_168.x vec1 32 div ssa_170 = ffma ssa_106.x, ssa_168.z, ssa_169 vec3 32 con ssa_171 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=0, io location=33 slots=1 /*161*/) vec1 32 div ssa_172 = ffma ssa_106.y, ssa_171.y, ssa_171.x vec1 32 div ssa_173 = ffma ssa_106.x, ssa_171.z, ssa_172 vec3 32 con ssa_174 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=1, io location=33 slots=1 /*161*/) vec1 32 div ssa_175 = ffma ssa_106.y, ssa_174.y, ssa_174.x vec1 32 div ssa_176 = ffma ssa_106.x, ssa_174.z, ssa_175 vec3 32 con ssa_177 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=2, io location=33 slots=1 /*161*/) vec1 32 div ssa_178 = ffma ssa_106.y, ssa_177.y, ssa_177.x vec1 32 div ssa_179 = ffma ssa_106.x, ssa_177.z, ssa_178 vec3 32 con ssa_180 = intrinsic load_fs_input_interp_deltas (ssa_0) (base=33, component=3, io location=33 slots=1 /*161*/) vec1 32 div ssa_181 = ffma ssa_106.y, ssa_180.y, ssa_180.x vec1 32 div ssa_182 = ffma ssa_106.x, ssa_180.z, ssa_181 vec4 32 div ssa_183 = intrinsic load_frag_coord () () vec2 32 div ssa_184 = intrinsic load_sample_pos_or_center () () vec1 32 div ssa_185 = fadd ssa_183.x, ssa_184.x vec1 32 div ssa_186 = fadd ssa_183.y, ssa_184.y vec1 32 div ssa_187 = f2u32 ssa_185 vec1 32 div ssa_188 = f2u32 ssa_186 vec4 32 con ssa_189 = intrinsic load_ssbo_uniform_block_intel (ssa_104, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 div ssa_190 = iand ssa_187, ssa_34 vec1 32 div ssa_191 = iand ssa_188, ssa_34 vec1 32 con ssa_192 = load_const (0x000001c0 = 0.000000) vec4 32 con ssa_193 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_192) (access=80, align_mul=1073741824, align_offset=448) vec1 32 con ssa_194 = iand ssa_193.y, ssa_34 vec3 32 div ssa_195 = vec3 ssa_190, ssa_191, ssa_194 vec1 32 con ssa_196 = umin ssa_54, ssa_38 vec1 32 con ssa_197 = ishl ssa_196, ssa_27 vec1 32 con ssa_198 = iadd3 ssa_42, ssa_197, ssa_40 vec1 32 con ssa_199 = intrinsic resource_intel (ssa_44, ssa_198, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_200 = (float32)txf ssa_199 (texture_handle), ssa_195 (coord), ssa_0 (lod), 0 (texture) vec1 32 div ssa_201 = ffma ssa_200.x, ssa_189.z, ssa_189.w vec1 32 div ssa_202 = flt32! 
ssa_201, ssa_0 intrinsic demote_if (ssa_202) () vec16 32 con ssa_203 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 div ssa_204 = ffma ssa_203.e, ssa_173, ssa_203.i vec1 32 div ssa_205 = ffma ssa_203.f, ssa_176, ssa_203.j vec2 32 div ssa_206 = vec2 ssa_204, ssa_205 vec1 32 con ssa_207 = umin ssa_36, ssa_38 vec1 32 con ssa_208 = ishl ssa_207, ssa_27 vec1 32 con ssa_209 = iadd3 ssa_42, ssa_208, ssa_40 vec1 32 con ssa_210 = intrinsic resource_intel (ssa_44, ssa_209, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_211 = load_const (0x000007ff = 0.000000) vec1 32 con ssa_212 = umin ssa_70, ssa_211 vec1 32 con ssa_213 = intrinsic load_uniform (ssa_0) (base=248, range=4, dest_type=uint /*4*/) vec1 32 con ssa_214 = ishl ssa_212, ssa_28 vec1 32 con ssa_215 = iadd ssa_213, ssa_214 vec1 32 con ssa_216 = intrinsic resource_intel (ssa_44, ssa_215, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_217 = (float32)tex ssa_210 (texture_handle), ssa_216 (sampler_handle), ssa_206 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_218 = ffma ssa_217.x, ssa_9, ssa_10 vec1 32 div ssa_219 = ffma ssa_217.y, ssa_9, ssa_10 vec1 32 div ssa_220 = fneg! ssa_219 vec1 32 div ssa_221 = fmul! ssa_220, ssa_219 vec1 32 div ssa_222 = fneg! ssa_218 vec1 32 div ssa_223 = ffma! ssa_222, ssa_218, ssa_221 vec1 32 div ssa_224 = fadd! ssa_223, ssa_1 vec1 32 div ssa_225 = fsat! ssa_224 vec1 32 div ssa_226 = fsqrt ssa_225 vec1 32 div ssa_227 = fadd ssa_226, ssa_10 vec1 32 div ssa_228 = fmul ssa_203.a, ssa_218 vec1 32 div ssa_229 = fmul ssa_203.a, ssa_219 vec1 32 div ssa_230 = ffma ssa_203.a, ssa_227, ssa_1 vec1 32 div ssa_231 = fmul ssa_230, ssa_230 vec1 32 div ssa_232 = ffma ssa_229, ssa_229, ssa_231 vec1 32 div ssa_233 = ffma ssa_228, ssa_228, ssa_232 vec1 32 div ssa_234 = frsq ssa_233 vec1 32 div ssa_235 = fmul ssa_228, ssa_234 vec1 32 div ssa_236 = fmul ssa_229, ssa_234 vec1 32 con ssa_237 = f2u32 ssa_203.m vec16 32 con ssa_238 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_99) (access=80, align_mul=1073741824, align_offset=64) vec16 32 con ssa_239 = intrinsic load_ssbo_uniform_block_intel (ssa_83, ssa_42) (access=80, align_mul=1073741824, align_offset=128) vec1 32 con ssa_240 = f2u32 ssa_239.a vec1 32 con ssa_241 = f2u32 ssa_239.b vec1 32 div ssa_242 = fddx_coarse ssa_173 vec1 32 div ssa_243 = fddx_coarse ssa_176 vec1 32 div ssa_244 = fddy_coarse ssa_173 vec1 32 div ssa_245 = fddy_coarse ssa_176 vec1 32 div ssa_246 = ffract ssa_173 vec1 32 div ssa_247 = ffract ssa_176 vec1 32 div ssa_248 = fmul ssa_246, ssa_238.a vec1 32 div ssa_249 = fneg ssa_176 vec1 32 div ssa_250 = fadd ssa_1, ssa_249 vec1 32 div ssa_251 = ffract ssa_250 vec1 32 div ssa_252 = fmul ssa_246, ssa_238.e vec1 32 div ssa_253 = fmul ssa_251, ssa_238.f vec1 32 div ssa_254 = f2i32 ssa_252 vec1 32 div ssa_255 = f2i32 ssa_253 vec1 32 con ssa_256 = fceil ssa_238.e vec1 32 con ssa_257 = f2i32 ssa_256 vec1 32 div ssa_258 = imul ssa_255, ssa_257 vec1 32 div ssa_259 = iadd ssa_258, ssa_254 vec1 32 div ssa_260 = ishl ssa_259, ssa_29 vec2 32 div ssa_261 = intrinsic load_ssbo (ssa_50, ssa_260) (access=80, align_mul=8, align_offset=0) vec1 32 con ssa_262 = fceil ssa_238.f vec1 32 con ssa_263 = f2u32 ssa_256 vec1 32 con ssa_264 = f2u32 ssa_262 vec1 32 div ssa_265 = fmul ssa_246, ssa_238.g vec1 32 div ssa_266 = fmul ssa_251, ssa_238.h vec1 32 div ssa_267 = f2i32 ssa_265 vec1 32 div ssa_268 
= f2i32 ssa_266 vec1 32 con ssa_269 = fceil ssa_238.g vec1 32 con ssa_270 = f2i32 ssa_269 vec1 32 div ssa_271 = imul ssa_270, ssa_268 vec1 32 con ssa_272 = imul ssa_264, ssa_263 vec1 32 div ssa_273 = iadd3 ssa_272, ssa_267, ssa_271 vec1 32 div ssa_274 = ishl ssa_273, ssa_29 vec2 32 div ssa_275 = intrinsic load_ssbo (ssa_50, ssa_274) (access=80, align_mul=8, align_offset=0) vec1 32 div ssa_276 = ior ssa_275.y, ssa_261.y vec1 32 div ssa_277 = iand ssa_276, ssa_239.e con r7 = intrinsic reduce (ssa_277) (reduction_op=ior /*300*/, cluster_size=0) vec1 32 con ssa_279 = frcp ssa_238.i vec1 32 div ssa_280 = fmul ssa_246, ssa_279 vec1 32 div ssa_281 = fmul ssa_251, ssa_279 vec1 32 con ssa_282 = frcp ssa_238.m vec1 32 con ssa_283 = frcp ssa_238.n vec1 32 con ssa_284 = fmul ssa_282, ssa_238.i vec1 32 con ssa_285 = fmul ssa_283, ssa_238.i vec1 32 con ssa_286 = iadd r7, ssa_30 vec1 32 con ssa_287 = iand ssa_286, r7 vec1 32 con ssa_288 = ine32 ssa_287, ssa_0 vec1 32 con ssa_289 = inot ssa_105 /* succs: block_1 block_13 */ if ssa_288 { block block_1: /* preds: block_0 */ vec1 32 con ssa_290 = umin ssa_63, ssa_38 vec1 32 con ssa_291 = ishl ssa_290, ssa_27 vec1 32 con ssa_292 = iadd3 ssa_42, ssa_291, ssa_40 vec1 32 con ssa_293 = intrinsic resource_intel (ssa_44, ssa_292, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_294 = umin ssa_66, ssa_211 vec1 32 con ssa_295 = ishl ssa_294, ssa_28 vec1 32 con ssa_296 = iadd ssa_213, ssa_295 vec1 32 con ssa_297 = intrinsic resource_intel (ssa_44, ssa_296, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec1 32 con ssa_298 = load_const (0x00000068 = 0.000000) vec1 32 con ssa_299 = load_const (0x000004f0 = 0.000000) div r19 = mov ssa_0 div r18 = mov ssa_0 div r17 = mov ssa_0 div r16 = mov ssa_0 div r15 = mov ssa_0 div r14 = mov ssa_0 div r13 = mov ssa_0 div r12 = mov ssa_0 div r11 = mov ssa_0 div r10 = mov ssa_0 div r9 = mov ssa_0 div r8 = mov ssa_1 /* succs: block_2 */ loop { block block_2: /* preds: block_1 block_11 */ vec1 32 con ssa_313 = uclz r7 vec1 32 con ssa_314 = ineg ssa_313 vec1 32 con ssa_315 = iadd ssa_31, ssa_314 vec1 32 con ssa_316 = ineg ssa_315 vec1 32 con ssa_317 = iadd ssa_31, ssa_316 vec1 32 con ssa_318 = ieq32 ssa_315, ssa_30 vec1 32 con ssa_319 = b32csel ssa_318, ssa_30, ssa_317 vec1 32 con ssa_320 = ineg ssa_319 vec1 32 con ssa_321 = iadd ssa_31, ssa_320 vec1 32 con ssa_322 = ieq32 ssa_319, ssa_30 vec1 32 con ssa_323 = b32csel ssa_322, ssa_30, ssa_321 vec1 32 con ssa_324 = ishl ssa_16, ssa_323 con r7 = ixor ssa_324, r7 vec1 32 div ssa_326 = iand ssa_324, ssa_277 vec1 32 div ssa_327 = ine32 ssa_326, ssa_0 vec1 32 div ssa_328 = flt32! 
ssa_0, r8 vec1 32 div ssa_329 = iand ssa_328, ssa_327 /* succs: block_3 block_7 */ if ssa_329 { block block_3: /* preds: block_2 */ vec1 32 div ssa_330 = iand ssa_324, ssa_261.y vec1 32 div ssa_331 = ine32 ssa_330, ssa_0 vec1 32 div ssa_332 = b32csel ssa_331, ssa_261.x, ssa_275.x vec1 32 div ssa_333 = b32csel ssa_331, ssa_261.y, ssa_275.y vec1 32 con ssa_334 = bfm ssa_323, ssa_0 vec1 32 div ssa_335 = iand ssa_333, ssa_334 vec1 32 div ssa_336 = bit_count ssa_335 vec1 32 div ssa_337 = iadd ssa_336, ssa_332 vec1 32 div ssa_338 = ishl ssa_337, ssa_17 vec1 32 div ssa_339 = intrinsic load_ssbo (ssa_50, ssa_338) (access=80, align_mul=4, align_offset=0) vec1 32 div ssa_340 = ushr ssa_339, ssa_23 vec1 32 div ssa_341 = iand ssa_339, ssa_33 vec1 32 div ssa_342 = iand ssa_340, ssa_33 vec1 32 div ssa_343 = u2f32 ssa_341 vec1 32 div ssa_344 = u2f32 ssa_342 vec1 32 div ssa_345 = ushr ssa_339, ssa_32 vec1 32 div ssa_346 = extract_u8 ssa_339, ssa_29 vec1 32 div ssa_347 = iand ssa_345, ssa_24 vec1 32 div ssa_348 = iand ssa_346, ssa_24 vec1 32 div ssa_349 = ushr ssa_240, ssa_347 vec1 32 div ssa_350 = ushr ssa_241, ssa_348 vec1 32 div ssa_351 = u2f32 ssa_349 vec1 32 div ssa_352 = u2f32 ssa_350 vec1 32 div ssa_353 = fmul ssa_351, ssa_280 vec1 32 div ssa_354 = fmul ssa_352, ssa_281 vec1 32 div ssa_355 = ffract ssa_353 vec1 32 div ssa_356 = ffract ssa_354 vec1 32 div ssa_357 = ffma ssa_284, ssa_355, ssa_282 vec1 32 div ssa_358 = ffma ssa_285, ssa_356, ssa_283 vec1 32 div ssa_359 = ffma ssa_343, ssa_238.o, ssa_357 vec1 32 div ssa_360 = ffma ssa_344, ssa_238.p, ssa_358 vec2 32 div ssa_361 = vec2 ssa_359, ssa_360 vec4 32 div ssa_362 = (float32)txl ssa_293 (texture_handle), ssa_297 (sampler_handle), ssa_361 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div ssa_363 = flt32! ssa_0, ssa_362.x /* succs: block_4 block_5 */ if ssa_363 { block block_4: /* preds: block_3 */ vec1 32 con ssa_364 = iadd ssa_323, ssa_237 vec1 32 con ssa_365 = ishl ssa_364, ssa_26 vec16 32 con ssa_366 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_365) (access=80, align_mul=128, align_offset=0) vec1 32 con ssa_367 = iadd ssa_365, ssa_99 vec8 32 con ssa_368 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_367) (access=80, align_mul=128, align_offset=64) vec1 32 con ssa_369 = iadd ssa_365, ssa_69 vec4 32 con ssa_370 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_369) (access=80, align_mul=128, align_offset=96) vec8 32 con ssa_371 = intrinsic load_ssbo_uniform_block_intel (ssa_88, ssa_299) (access=80, align_mul=1073741824, align_offset=1264) vec1 32 con ssa_372 = fmul ssa_371.e, ssa_371.h vec1 32 con ssa_373 = fsat! 
ssa_372 vec1 32 con ssa_374 = ffma ssa_371.f, ssa_371.c, ssa_10 vec1 32 con ssa_375 = ffma ssa_371.f, ssa_371.d, ssa_10 vec1 32 con ssa_376 = ffma ssa_373, ssa_374, ssa_1 vec1 32 con ssa_377 = ffma ssa_373, ssa_375, ssa_1 vec1 32 div ssa_378 = ffma ssa_368.a, ssa_248, ssa_368.g vec1 32 div ssa_379 = ffma ssa_368.a, ssa_247, ssa_368.h vec1 32 div ssa_380 = fmul ssa_368.a, ssa_242 vec1 32 div ssa_381 = fmul ssa_368.a, ssa_243 vec1 32 div ssa_382 = fmul ssa_368.a, ssa_244 vec1 32 div ssa_383 = fmul ssa_368.a, ssa_245 vec1 32 div ssa_384 = ffma ssa_368.b, ssa_248, ssa_368.e vec1 32 div ssa_385 = ffma ssa_368.b, ssa_247, ssa_368.f vec1 32 con ssa_386 = fmul ssa_368.b, ssa_239.m vec1 32 div ssa_387 = fmul ssa_386, ssa_243 vec1 32 div ssa_388 = fmul ssa_386, ssa_245 vec1 32 div ssa_389 = fmul ssa_242, ssa_238.a vec1 32 div ssa_390 = fmul ssa_389, ssa_386 vec1 32 div ssa_391 = fmul ssa_390, ssa_376 vec1 32 div ssa_392 = fmul ssa_387, ssa_376 vec1 32 div ssa_393 = fmul ssa_244, ssa_238.a vec1 32 div ssa_394 = fmul ssa_393, ssa_386 vec1 32 div ssa_395 = fmul ssa_394, ssa_377 vec1 32 div ssa_396 = fmul ssa_388, ssa_377 vec1 32 con ssa_397 = intrinsic load_uniform (ssa_298) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_398 = iadd ssa_397, ssa_20 vec1 32 con ssa_399 = iadd3 ssa_21, ssa_370.y, ssa_398 vec2 32 div ssa_400 = vec2 ssa_384, ssa_385 vec2 32 div ssa_401 = vec2 ssa_391, ssa_392 vec2 32 div ssa_402 = vec2 ssa_395, ssa_396 vec1 32 con ssa_403 = umin ssa_399, ssa_38 vec1 32 con ssa_404 = ishl ssa_403, ssa_27 vec1 32 con ssa_405 = iadd3 ssa_42, ssa_404, ssa_40 vec1 32 con ssa_406 = intrinsic resource_intel (ssa_44, ssa_405, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_407 = (float32)txd ssa_406 (texture_handle), ssa_216 (sampler_handle), ssa_400 (coord), ssa_401 (ddx), ssa_402 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_408 = ffma ssa_407.x, ssa_9, ssa_10 vec1 32 div ssa_409 = ffma ssa_407.y, ssa_9, ssa_10 vec1 32 div ssa_410 = fneg ssa_407.w vec1 32 div ssa_411 = fadd ssa_1, ssa_410 vec1 32 div ssa_412 = fneg ssa_411 vec1 32 div ssa_413 = fadd ssa_362.x, ssa_412 vec1 32 div ssa_414 = ffma ssa_413, ssa_368.c, ssa_411 vec1 32 div ssa_415 = fsat! ssa_414 vec1 32 div ssa_416 = fadd ssa_415, ssa_5 vec1 32 div ssa_417 = fabs ssa_416 vec1 32 div ssa_418 = fneg ssa_417 vec1 32 div ssa_419 = ffma ssa_418, ssa_9, ssa_1 vec1 32 div ssa_420 = fsqrt ssa_419 vec1 32 div ssa_421 = fsat! ssa_420 vec1 32 div ssa_422 = fneg r19 vec1 32 div ssa_423 = ffma ssa_421, ssa_366.d, ssa_422 vec1 32 div ssa_424 = fsat! ssa_423 vec1 32 div ssa_425 = fmul ssa_415, ssa_366.d div r19 = fadd ssa_425, r19 vec1 32 div ssa_427 = fneg r17 vec1 32 div ssa_428 = ffma ssa_408, ssa_368.d, ssa_427 vec1 32 div ssa_429 = fneg r16 vec1 32 div ssa_430 = ffma ssa_409, ssa_368.d, ssa_429 div r17 = ffma ssa_424, ssa_428, r17 div r16 = ffma ssa_424, ssa_430, r16 vec1 32 con ssa_433 = fabs ssa_368.d vec1 32 div ssa_434 = fmul ssa_433, ssa_424 div r18 = fmax! r18, ssa_434 vec1 32 div ssa_436 = fmin! 
r8, ssa_425 vec1 32 div ssa_437 = fneg ssa_436 div r8 = fadd r8, ssa_437 vec1 32 con ssa_439 = extract_u16 ssa_370.w, ssa_16 vec1 32 con ssa_440 = iadd3 ssa_21, ssa_439, ssa_398 vec2 32 div ssa_441 = vec2 ssa_378, ssa_379 vec2 32 div ssa_442 = vec2 ssa_380, ssa_381 vec2 32 div ssa_443 = vec2 ssa_382, ssa_383 vec1 32 con ssa_444 = umin ssa_440, ssa_38 vec1 32 con ssa_445 = ishl ssa_444, ssa_27 vec1 32 con ssa_446 = iadd3 ssa_42, ssa_445, ssa_40 vec1 32 con ssa_447 = intrinsic resource_intel (ssa_44, ssa_446, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_448 = (float32)txd ssa_447 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_449 = ffma ssa_448.x, ssa_366.i, ssa_366.j vec1 32 div ssa_450 = fsat! ssa_449 vec1 32 div ssa_451 = ffma ssa_450, ssa_366.k, ssa_366.l vec1 32 div ssa_452 = fsat! ssa_451 div r13 = ffma ssa_452, ssa_436, r13 vec1 32 con ssa_454 = extract_u16 ssa_370.w, ssa_0 vec1 32 con ssa_455 = iadd3 ssa_21, ssa_454, ssa_398 vec1 32 con ssa_456 = umin ssa_455, ssa_38 vec1 32 con ssa_457 = ishl ssa_456, ssa_27 vec1 32 con ssa_458 = iadd3 ssa_42, ssa_457, ssa_40 vec1 32 con ssa_459 = intrinsic resource_intel (ssa_44, ssa_458, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_460 = (float32)txd ssa_459 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_461 = ffma ssa_460.x, ssa_366.e, ssa_366.f vec1 32 div ssa_462 = fsat! ssa_461 vec1 32 div ssa_463 = ffma ssa_462, ssa_366.g, ssa_366.h vec1 32 div ssa_464 = fsat! ssa_463 div r12 = ffma ssa_464, ssa_436, r12 vec1 32 con ssa_466 = extract_u16 ssa_370.z, ssa_0 vec1 32 con ssa_467 = iadd3 ssa_21, ssa_466, ssa_398 vec1 32 con ssa_468 = umin ssa_467, ssa_38 vec1 32 con ssa_469 = ishl ssa_468, ssa_27 vec1 32 con ssa_470 = iadd3 ssa_42, ssa_469, ssa_40 vec1 32 con ssa_471 = intrinsic resource_intel (ssa_44, ssa_470, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_472 = (float32)txd ssa_471 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_442 (ddx), ssa_443 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_473 = ffma ssa_460.x, ssa_366.m, ssa_366.n vec1 32 div ssa_474 = fsat! ssa_473 vec1 32 div ssa_475 = ffma ssa_474, ssa_366.o, ssa_366.p vec1 32 div ssa_476 = fsat! 
ssa_475 vec1 32 con ssa_477 = fadd ssa_366.a, ssa_10 vec1 32 con ssa_478 = fadd ssa_366.b, ssa_10 vec1 32 con ssa_479 = fadd ssa_366.c, ssa_10 vec1 32 div ssa_480 = ffma ssa_476, ssa_477, ssa_1 vec1 32 div ssa_481 = ffma ssa_476, ssa_478, ssa_1 vec1 32 div ssa_482 = ffma ssa_476, ssa_479, ssa_1 vec1 32 div ssa_483 = fmul ssa_472.x, ssa_436 vec1 32 div ssa_484 = fmul ssa_472.y, ssa_436 vec1 32 div ssa_485 = fmul ssa_472.z, ssa_436 div r11 = ffma ssa_483, ssa_480, r11 div r10 = ffma ssa_484, ssa_481, r10 div r9 = ffma ssa_485, ssa_482, r9 vec1 32 div ssa_489 = fmul ssa_380, ssa_239.i vec1 32 div ssa_490 = fmul ssa_381, ssa_239.i vec1 32 div ssa_491 = fmul ssa_382, ssa_239.i vec1 32 div ssa_492 = fmul ssa_383, ssa_239.i vec1 32 con ssa_493 = extract_u16 ssa_370.z, ssa_16 vec1 32 con ssa_494 = iadd3 ssa_21, ssa_493, ssa_398 vec2 32 div ssa_495 = vec2 ssa_489, ssa_490 vec2 32 div ssa_496 = vec2 ssa_491, ssa_492 vec1 32 con ssa_497 = umin ssa_494, ssa_38 vec1 32 con ssa_498 = ishl ssa_497, ssa_27 vec1 32 con ssa_499 = iadd3 ssa_42, ssa_498, ssa_40 vec1 32 con ssa_500 = intrinsic resource_intel (ssa_44, ssa_499, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_501 = (float32)txd ssa_500 (texture_handle), ssa_216 (sampler_handle), ssa_441 (coord), ssa_495 (ddx), ssa_496 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_502 = ffma ssa_501.x, ssa_9, ssa_10 vec1 32 div ssa_503 = ffma ssa_501.y, ssa_9, ssa_10 vec1 32 div ssa_504 = fmul ssa_436, ssa_370.x div r15 = ffma ssa_504, ssa_502, r15 div r14 = ffma ssa_504, ssa_503, r14 /* succs: block_6 */ } else { block block_5: /* preds: block_3 */ /* succs: block_6 */ } block block_6: /* preds: block_4 block_5 */ /* succs: block_8 */ } else { block block_7: /* preds: block_2 */ /* succs: block_8 */ } block block_8: /* preds: block_6 block_7 */ vec1 32 con ssa_531 = iadd r7, ssa_30 vec1 32 con ssa_532 = iand ssa_531, r7 vec1 32 con ssa_533 = ieq32 ssa_532, ssa_0 /* succs: block_9 block_10 */ if ssa_533 { block block_9: /* preds: block_8 */ break /* succs: block_12 */ } else { block block_10: /* preds: block_8 */ /* succs: block_11 */ } block block_11: /* preds: block_10 */ /* succs: block_2 */ } block block_12: /* preds: block_9 */ /* succs: block_14 */ } else { block block_13: /* preds: block_0 */ div r8 = mov ssa_1 div r9 = mov ssa_0 div r10 = mov ssa_0 div r11 = mov ssa_0 div r12 = mov ssa_0 div r13 = mov ssa_0 div r14 = mov ssa_0 div r15 = mov ssa_0 div r16 = mov ssa_0 div r17 = mov ssa_0 div r18 = mov ssa_0 div r19 = mov ssa_0 /* succs: block_14 */ } block block_14: /* preds: block_12 block_13 */ vec1 32 div ssa_547 = flt32! 
ssa_0, r8 vec1 32 con ssa_548 = ine32 r7, ssa_0 vec1 32 div ssa_549 = iand ssa_547, ssa_548 /* succs: block_15 block_16 */ if ssa_549 { block block_15: /* preds: block_14 */ vec1 32 con ssa_550 = uclz r7 vec1 32 con ssa_551 = ineg ssa_550 vec1 32 con ssa_552 = iadd ssa_31, ssa_551 vec1 32 con ssa_553 = ineg ssa_552 vec1 32 con ssa_554 = iadd ssa_31, ssa_553 vec1 32 con ssa_555 = ieq32 ssa_552, ssa_30 vec1 32 con ssa_556 = b32csel ssa_555, ssa_30, ssa_554 vec1 32 con ssa_557 = ineg ssa_556 vec1 32 con ssa_558 = iadd ssa_31, ssa_557 vec1 32 con ssa_559 = ieq32 ssa_556, ssa_30 vec1 32 con ssa_560 = b32csel ssa_559, ssa_30, ssa_558 vec1 32 con ssa_561 = iadd ssa_560, ssa_237 vec1 32 con ssa_562 = ishl ssa_561, ssa_26 vec16 32 con ssa_563 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_562) (access=80, align_mul=128, align_offset=0) vec1 32 con ssa_564 = iadd ssa_562, ssa_99 vec8 32 con ssa_565 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_564) (access=80, align_mul=128, align_offset=64) vec1 32 con ssa_566 = iadd ssa_562, ssa_69 vec4 32 con ssa_567 = intrinsic load_ssbo_uniform_block_intel (ssa_45, ssa_566) (access=80, align_mul=128, align_offset=96) vec1 32 con ssa_568 = load_const (0x00000068 = 0.000000) vec1 32 con ssa_569 = load_const (0x000004f0 = 0.000000) vec8 32 con ssa_570 = intrinsic load_ssbo_uniform_block_intel (ssa_88, ssa_569) (access=80, align_mul=1073741824, align_offset=1264) vec1 32 con ssa_571 = fmul ssa_570.e, ssa_570.h vec1 32 con ssa_572 = fsat! ssa_571 vec1 32 con ssa_573 = ffma ssa_570.f, ssa_570.c, ssa_10 vec1 32 con ssa_574 = ffma ssa_570.f, ssa_570.d, ssa_10 vec1 32 con ssa_575 = ffma ssa_572, ssa_573, ssa_1 vec1 32 con ssa_576 = ffma ssa_572, ssa_574, ssa_1 vec1 32 div ssa_577 = ffma ssa_565.a, ssa_248, ssa_565.g vec1 32 div ssa_578 = ffma ssa_565.a, ssa_247, ssa_565.h vec1 32 div ssa_579 = fmul ssa_565.a, ssa_242 vec1 32 div ssa_580 = fmul ssa_565.a, ssa_243 vec1 32 div ssa_581 = fmul ssa_565.a, ssa_244 vec1 32 div ssa_582 = fmul ssa_565.a, ssa_245 vec1 32 div ssa_583 = ffma ssa_565.b, ssa_248, ssa_565.e vec1 32 div ssa_584 = ffma ssa_565.b, ssa_247, ssa_565.f vec1 32 con ssa_585 = fmul ssa_565.b, ssa_239.m vec1 32 div ssa_586 = fmul ssa_585, ssa_243 vec1 32 div ssa_587 = fmul ssa_585, ssa_245 vec1 32 div ssa_588 = fmul ssa_242, ssa_238.a vec1 32 div ssa_589 = fmul ssa_588, ssa_585 vec1 32 div ssa_590 = fmul ssa_589, ssa_575 vec1 32 div ssa_591 = fmul ssa_586, ssa_575 vec1 32 div ssa_592 = fmul ssa_244, ssa_238.a vec1 32 div ssa_593 = fmul ssa_592, ssa_585 vec1 32 div ssa_594 = fmul ssa_593, ssa_576 vec1 32 div ssa_595 = fmul ssa_587, ssa_576 vec1 32 con ssa_596 = intrinsic load_uniform (ssa_568) (base=0, range=108, dest_type=invalid /*256*/) vec1 32 con ssa_597 = iadd ssa_596, ssa_20 vec1 32 con ssa_598 = iadd3 ssa_21, ssa_567.y, ssa_597 vec2 32 div ssa_599 = vec2 ssa_583, ssa_584 vec2 32 div ssa_600 = vec2 ssa_590, ssa_591 vec2 32 div ssa_601 = vec2 ssa_594, ssa_595 vec1 32 con ssa_602 = umin ssa_598, ssa_38 vec1 32 con ssa_603 = ishl ssa_602, ssa_27 vec1 32 con ssa_604 = iadd3 ssa_42, ssa_603, ssa_40 vec1 32 con ssa_605 = intrinsic resource_intel (ssa_44, ssa_604, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_606 = (float32)txd ssa_605 (texture_handle), ssa_216 (sampler_handle), ssa_599 (coord), ssa_600 (ddx), ssa_601 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_607 = ffma ssa_606.x, ssa_9, ssa_10 vec1 32 div ssa_608 = ffma ssa_606.y, ssa_9, ssa_10 vec1 32 div ssa_609 = fneg ssa_606.w 
vec1 32 div ssa_610 = fadd ssa_609, ssa_1 vec1 32 div ssa_611 = ffma ssa_606.w, ssa_565.c, ssa_610 vec1 32 div ssa_612 = fsat! ssa_611 vec1 32 div ssa_613 = fadd ssa_612, ssa_5 vec1 32 div ssa_614 = fabs ssa_613 vec1 32 div ssa_615 = fneg ssa_614 vec1 32 div ssa_616 = ffma ssa_615, ssa_9, ssa_1 vec1 32 div ssa_617 = fsqrt ssa_616 vec1 32 div ssa_618 = fsat! ssa_617 vec1 32 div ssa_619 = fneg r19 vec1 32 div ssa_620 = ffma ssa_618, ssa_563.d, ssa_619 vec1 32 div ssa_621 = fsat! ssa_620 vec1 32 div ssa_622 = fmul ssa_612, ssa_563.d vec1 32 div ssa_623 = fneg r17 vec1 32 div ssa_624 = ffma ssa_607, ssa_565.d, ssa_623 vec1 32 div ssa_625 = fneg r16 vec1 32 div ssa_626 = ffma ssa_608, ssa_565.d, ssa_625 div r17 = ffma ssa_621, ssa_624, r17 div r16 = ffma ssa_621, ssa_626, r16 vec1 32 con ssa_629 = fabs ssa_565.d vec1 32 div ssa_630 = fmul ssa_629, ssa_621 div r18 = fmax! r18, ssa_630 vec1 32 div ssa_632 = fmin! r8, ssa_622 vec1 32 con ssa_633 = extract_u16 ssa_567.w, ssa_16 vec1 32 con ssa_634 = iadd3 ssa_21, ssa_633, ssa_597 vec2 32 div ssa_635 = vec2 ssa_577, ssa_578 vec2 32 div ssa_636 = vec2 ssa_579, ssa_580 vec2 32 div ssa_637 = vec2 ssa_581, ssa_582 vec1 32 con ssa_638 = umin ssa_634, ssa_38 vec1 32 con ssa_639 = ishl ssa_638, ssa_27 vec1 32 con ssa_640 = iadd3 ssa_42, ssa_639, ssa_40 vec1 32 con ssa_641 = intrinsic resource_intel (ssa_44, ssa_640, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_642 = (float32)txd ssa_641 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_643 = ffma ssa_642.x, ssa_563.i, ssa_563.j vec1 32 div ssa_644 = fsat! ssa_643 vec1 32 div ssa_645 = ffma ssa_644, ssa_563.k, ssa_563.l vec1 32 div ssa_646 = fsat! ssa_645 div r13 = ffma ssa_646, ssa_632, r13 vec1 32 con ssa_648 = extract_u16 ssa_567.w, ssa_0 vec1 32 con ssa_649 = iadd3 ssa_21, ssa_648, ssa_597 vec1 32 con ssa_650 = umin ssa_649, ssa_38 vec1 32 con ssa_651 = ishl ssa_650, ssa_27 vec1 32 con ssa_652 = iadd3 ssa_42, ssa_651, ssa_40 vec1 32 con ssa_653 = intrinsic resource_intel (ssa_44, ssa_652, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_654 = (float32)txd ssa_653 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_655 = ffma ssa_654.x, ssa_563.e, ssa_563.f vec1 32 div ssa_656 = fsat! ssa_655 vec1 32 div ssa_657 = ffma ssa_656, ssa_563.g, ssa_563.h vec1 32 div ssa_658 = fsat! ssa_657 div r12 = ffma ssa_658, ssa_632, r12 vec1 32 con ssa_660 = extract_u16 ssa_567.z, ssa_0 vec1 32 con ssa_661 = iadd3 ssa_21, ssa_660, ssa_597 vec1 32 con ssa_662 = umin ssa_661, ssa_38 vec1 32 con ssa_663 = ishl ssa_662, ssa_27 vec1 32 con ssa_664 = iadd3 ssa_42, ssa_663, ssa_40 vec1 32 con ssa_665 = intrinsic resource_intel (ssa_44, ssa_664, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_666 = (float32)txd ssa_665 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_636 (ddx), ssa_637 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_667 = ffma ssa_654.x, ssa_563.m, ssa_563.n vec1 32 div ssa_668 = fsat! ssa_667 vec1 32 div ssa_669 = ffma ssa_668, ssa_563.o, ssa_563.p vec1 32 div ssa_670 = fsat! 
ssa_669 vec1 32 con ssa_671 = fadd ssa_563.a, ssa_10 vec1 32 con ssa_672 = fadd ssa_563.b, ssa_10 vec1 32 con ssa_673 = fadd ssa_563.c, ssa_10 vec1 32 div ssa_674 = ffma ssa_670, ssa_671, ssa_1 vec1 32 div ssa_675 = ffma ssa_670, ssa_672, ssa_1 vec1 32 div ssa_676 = ffma ssa_670, ssa_673, ssa_1 vec1 32 div ssa_677 = fmul ssa_666.x, ssa_632 vec1 32 div ssa_678 = fmul ssa_666.y, ssa_632 vec1 32 div ssa_679 = fmul ssa_666.z, ssa_632 div r11 = ffma ssa_677, ssa_674, r11 div r10 = ffma ssa_678, ssa_675, r10 div r9 = ffma ssa_679, ssa_676, r9 vec1 32 div ssa_683 = fmul ssa_579, ssa_239.i vec1 32 div ssa_684 = fmul ssa_580, ssa_239.i vec1 32 div ssa_685 = fmul ssa_581, ssa_239.i vec1 32 div ssa_686 = fmul ssa_582, ssa_239.i vec1 32 con ssa_687 = extract_u16 ssa_567.z, ssa_16 vec1 32 con ssa_688 = iadd3 ssa_21, ssa_687, ssa_597 vec2 32 div ssa_689 = vec2 ssa_683, ssa_684 vec2 32 div ssa_690 = vec2 ssa_685, ssa_686 vec1 32 con ssa_691 = umin ssa_688, ssa_38 vec1 32 con ssa_692 = ishl ssa_691, ssa_27 vec1 32 con ssa_693 = iadd3 ssa_42, ssa_692, ssa_40 vec1 32 con ssa_694 = intrinsic resource_intel (ssa_44, ssa_693, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_695 = (float32)txd ssa_694 (texture_handle), ssa_216 (sampler_handle), ssa_635 (coord), ssa_689 (ddx), ssa_690 (ddy), 0 (texture), 0 (sampler) vec1 32 div ssa_696 = ffma ssa_695.x, ssa_9, ssa_10 vec1 32 div ssa_697 = ffma ssa_695.y, ssa_9, ssa_10 vec1 32 div ssa_698 = fmul ssa_632, ssa_567.x div r15 = ffma ssa_698, ssa_696, r15 div r14 = ffma ssa_698, ssa_697, r14 /* succs: block_17 */ } else { block block_16: /* preds: block_14 */ /* succs: block_17 */ } block block_17: /* preds: block_15 block_16 */ vec1 32 div ssa_711 = fneg r14 vec1 32 div ssa_712 = fmul ssa_711, r14 vec1 32 div ssa_713 = fneg r15 vec1 32 div ssa_714 = ffma ssa_713, r15, ssa_712 vec1 32 div ssa_715 = fadd ssa_1, ssa_714 vec1 32 div ssa_716 = fsat! ssa_715 vec1 32 div ssa_717 = fsqrt ssa_716 vec1 32 div ssa_718 = fneg r16 vec1 32 div ssa_719 = fmul ssa_718, r16 vec1 32 div ssa_720 = fneg r17 vec1 32 div ssa_721 = ffma ssa_720, r17, ssa_719 vec1 32 div ssa_722 = fadd ssa_1, ssa_721 vec1 32 div ssa_723 = fsat! 
ssa_722 vec1 32 div ssa_724 = fsqrt ssa_723 vec1 32 div ssa_725 = fadd r17, ssa_713 vec1 32 div ssa_726 = fadd r16, ssa_711 vec1 32 div ssa_727 = fneg ssa_717 vec1 32 div ssa_728 = fadd ssa_724, ssa_727 vec1 32 div ssa_729 = ffma ssa_725, r18, r15 vec1 32 div ssa_730 = ffma ssa_726, r18, r14 vec1 32 div ssa_731 = ffma ssa_728, r18, ssa_717 vec1 32 div ssa_732 = ffma ssa_230, ssa_234, ssa_1 vec1 32 div ssa_733 = fneg ssa_729 vec1 32 div ssa_734 = fneg ssa_730 vec1 32 div ssa_735 = fmul ssa_732, ssa_731 vec1 32 div ssa_736 = ffma ssa_236, ssa_734, ssa_735 vec1 32 div ssa_737 = ffma ssa_235, ssa_733, ssa_736 vec1 32 div ssa_738 = fmul ssa_729, ssa_732 vec1 32 div ssa_739 = fmul ssa_730, ssa_732 vec1 32 div ssa_740 = ffma ssa_737, ssa_235, ssa_738 vec1 32 div ssa_741 = ffma ssa_737, ssa_236, ssa_739 vec1 32 div ssa_742 = fneg ssa_731 vec1 32 div ssa_743 = fadd ssa_737, ssa_742 vec1 32 div ssa_744 = fmul ssa_743, ssa_732 vec1 32 div ssa_745 = fmul ssa_744, ssa_744 vec1 32 div ssa_746 = ffma ssa_741, ssa_741, ssa_745 vec1 32 div ssa_747 = ffma ssa_740, ssa_740, ssa_746 vec1 32 div ssa_748 = frsq ssa_747 vec1 32 div ssa_749 = fmul ssa_740, ssa_748 vec1 32 div ssa_750 = fmul ssa_741, ssa_748 vec1 32 div ssa_751 = fmul ssa_744, ssa_748 vec1 32 div ssa_752 = ffma ssa_106.x, ssa_113.z, ssa_19 vec1 32 div ssa_753 = fadd ssa_752, ssa_114 vec1 32 div ssa_754 = fsat! ssa_753 vec1 32 div ssa_755 = fmul ssa_750, ssa_164 vec1 32 div ssa_756 = fmul ssa_750, ssa_167 vec1 32 div ssa_757 = fmul ssa_750, ssa_170 vec1 32 div ssa_758 = ffma ssa_749, ssa_152, ssa_755 vec1 32 div ssa_759 = ffma ssa_749, ssa_155, ssa_756 vec1 32 div ssa_760 = ffma ssa_749, ssa_158, ssa_757 vec1 32 div ssa_761 = ffma ssa_751, ssa_179, ssa_758 vec1 32 div ssa_762 = ffma ssa_751, ssa_182, ssa_759 vec1 32 div ssa_763 = ffma ssa_751, ssa_161, ssa_760 vec4 32 con ssa_764 = intrinsic load_ssbo_uniform_block_intel (ssa_104, ssa_69) (access=80, align_mul=1073741824, align_offset=96) vec4 32 con ssa_765 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_42) (access=80, align_mul=1073741824, align_offset=128) vec1 32 div ssa_766 = ffma ssa_179, ssa_6, ssa_146 vec1 32 div ssa_767 = ffma ssa_182, ssa_6, ssa_149 vec1 32 div ssa_768 = ffma ssa_161, ssa_6, ssa_132 vec1 32 con ssa_769 = load_const (0x000001d0 = 0.000000) vec8 32 con ssa_770 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_769) (access=80, align_mul=1073741824, align_offset=464) vec1 32 div ssa_771 = ffma ssa_770.a, ssa_766, ssa_770.e vec1 32 div ssa_772 = ffma ssa_770.b, ssa_767, ssa_770.f vec1 32 div ssa_773 = ffma ssa_770.c, ssa_768, ssa_770.g vec1 32 div ssa_774 = fneg ssa_771 vec1 32 div ssa_775 = fadd ssa_1, ssa_774 vec1 32 div ssa_776 = fneg ssa_772 vec1 32 div ssa_777 = fadd ssa_1, ssa_776 vec1 32 div ssa_778 = fneg ssa_773 vec1 32 div ssa_779 = fadd ssa_1, ssa_778 vec1 32 div ssa_780 = fmin! ssa_771, ssa_775 vec1 32 div ssa_781 = fmin! ssa_772, ssa_777 vec1 32 div ssa_782 = fmin! ssa_773, ssa_779 vec1 32 div ssa_783 = fmin! ssa_781, ssa_782 vec1 32 div ssa_784 = fmin! ssa_780, ssa_783 vec1 32 div ssa_785 = fmul ssa_784, ssa_18 vec1 32 div ssa_786 = fsat! 
ssa_785 vec3 32 div ssa_787 = vec3 ssa_771, ssa_772, ssa_773 vec1 32 con ssa_788 = umin ssa_59, ssa_38 vec1 32 con ssa_789 = ishl ssa_788, ssa_27 vec1 32 con ssa_790 = iadd3 ssa_42, ssa_789, ssa_40 vec1 32 con ssa_791 = intrinsic resource_intel (ssa_44, ssa_790, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_792 = umin ssa_66, ssa_211 vec1 32 con ssa_793 = ishl ssa_792, ssa_28 vec1 32 con ssa_794 = iadd ssa_213, ssa_793 vec1 32 con ssa_795 = intrinsic resource_intel (ssa_44, ssa_794, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_796 = (float32)txl ssa_791 (texture_handle), ssa_795 (sampler_handle), ssa_787 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div ssa_797 = fneg ssa_796.x vec1 32 div ssa_798 = ffma ssa_797, ssa_786, ssa_1 div r20 = fsat! ssa_798 vec1 32 div ssa_800 = flt32! ssa_0, r20 /* succs: block_18 block_54 */ if ssa_800 { block block_18: /* preds: block_17 */ vec1 32 con ssa_801 = load_const (0x45fa0000 = 8000.000000) vec1 32 con ssa_802 = load_const (0x467a0000 = 16000.000000) vec1 32 con ssa_803 = load_const (0x00000350 = 0.000000) vec4 32 con ssa_804 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_803) (access=80, align_mul=1073741824, align_offset=848) div r21 = ffma ssa_804.x, ssa_766, ssa_804.z div r22 = ffma ssa_804.y, ssa_767, ssa_804.w vec1 32 div ssa_807 = fadd r21, ssa_5 vec1 32 div ssa_808 = fneg r22 vec1 32 div ssa_809 = fadd ssa_6, ssa_808 vec1 32 div ssa_810 = fabs ssa_807 vec1 32 div ssa_811 = fabs ssa_809 vec1 32 div ssa_812 = flt32! ssa_811, ssa_6 vec1 32 div ssa_813 = flt32! ssa_810, ssa_6 vec1 32 div ssa_814 = iand ssa_813, ssa_812 vec1 32 div ssa_815 = fadd ssa_768, ssa_801 vec1 32 con ssa_816 = load_const (0x00000360 = 0.000000) vec1 32 con ssa_817 = umin ssa_62, ssa_38 vec1 32 con ssa_818 = ishl ssa_817, ssa_27 vec1 32 con ssa_819 = iadd3 ssa_42, ssa_818, ssa_40 vec1 32 con ssa_820 = intrinsic resource_intel (ssa_44, ssa_819, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_821 = umin ssa_67, ssa_211 vec1 32 con ssa_822 = ishl ssa_821, ssa_28 vec1 32 con ssa_823 = iadd ssa_213, ssa_822 vec1 32 con ssa_824 = intrinsic resource_intel (ssa_44, ssa_823, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) /* succs: block_19 */ loop { block block_19: /* preds: block_18 */ /* succs: block_20 block_21 */ if ssa_814 { block block_20: /* preds: block_19 */ div r25 = mov ssa_0 div r24 = mov r21 div r23 = mov r22 /* succs: block_25 */ } else { block block_21: /* preds: block_19 */ vec4 32 con ssa_825 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) div r24 = ffma ssa_825.x, ssa_766, ssa_825.z div r23 = ffma ssa_825.y, ssa_767, ssa_825.w vec1 32 div ssa_828 = fadd r24, ssa_5 vec1 32 div ssa_829 = fneg r23 vec1 32 div ssa_830 = fadd ssa_6, ssa_829 vec1 32 div ssa_831 = fabs ssa_828 vec1 32 div ssa_832 = fabs ssa_830 vec1 32 div ssa_833 = flt32! ssa_832, ssa_6 vec1 32 div ssa_834 = flt32! 
ssa_831, ssa_6 vec1 32 div ssa_835 = iand ssa_834, ssa_833 /* succs: block_22 block_23 */ if ssa_835 { block block_22: /* preds: block_21 */ /* succs: block_24 */ } else { block block_23: /* preds: block_21 */ div r26 = mov ssa_2 break /* succs: block_26 */ } block block_24: /* preds: block_22 */ div r25 = mov ssa_1 /* succs: block_25 */ } block block_25: /* preds: block_20 block_24 */ vec1 32 div ssa_839 = fneg! r23 vec1 32 div ssa_840 = fadd! ssa_1, ssa_839 vec3 32 div ssa_841 = vec3 r24, ssa_840, r25 vec4 32 div ssa_842 = (float32)txl ssa_820 (texture_handle), ssa_824 (sampler_handle), ssa_841 (coord), ssa_0 (lod), 0 (texture), 0 (sampler) vec1 32 div ssa_843 = fmul ssa_842.x, ssa_802 div r26 = flt32! ssa_815, ssa_843 break /* succs: block_26 */ } block block_26: /* preds: block_23 block_25 */ /* succs: block_27 block_28 */ if r26 { block block_27: /* preds: block_26 */ div r27 = mov ssa_1 /* succs: block_53 */ } else { block block_28: /* preds: block_26 */ vec1 32 con ssa_846 = load_const (0x3883126f = 0.000063) /* succs: block_29 block_30 */ if ssa_814 { block block_29: /* preds: block_28 */ div r30 = mov r22 div r29 = mov r21 div r28 = mov ssa_0 /* succs: block_31 */ } else { block block_30: /* preds: block_28 */ vec4 32 con ssa_847 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) div r29 = ffma ssa_847.x, ssa_766, ssa_847.z div r30 = ffma ssa_847.y, ssa_767, ssa_847.w vec1 32 div ssa_850 = fadd r29, ssa_5 vec1 32 div ssa_851 = fneg r30 vec1 32 div ssa_852 = fadd ssa_6, ssa_851 vec1 32 div ssa_853 = fabs ssa_850 vec1 32 div ssa_854 = fabs ssa_852 vec1 32 div ssa_855 = flt32! ssa_854, ssa_6 vec1 32 div ssa_856 = flt32! ssa_853, ssa_6 vec1 32 div ssa_857 = iand ssa_856, ssa_855 div r28 = b32csel ssa_857, ssa_16, ssa_17 /* succs: block_31 */ } block block_31: /* preds: block_29 block_30 */ vec1 32 div ssa_862 = u2f32 r28 vec1 32 div ssa_863 = flt32! ssa_862, ssa_9 /* succs: block_32 block_33 */ if ssa_863 { block block_32: /* preds: block_31 */ vec1 32 div ssa_864 = fneg! r30 vec1 32 div ssa_865 = fadd! ssa_1, ssa_864 vec1 32 div ssa_866 = fneg ssa_815 vec1 32 div ssa_867 = ffma ssa_866, ssa_846, ssa_1 vec3 32 div ssa_868 = vec3 r29, ssa_865, ssa_862 vec1 32 con ssa_869 = umin ssa_68, ssa_211 vec1 32 con ssa_870 = ishl ssa_869, ssa_28 vec1 32 con ssa_871 = iadd ssa_213, ssa_870 vec1 32 con ssa_872 = intrinsic resource_intel (ssa_44, ssa_871, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) div r31 = (float32)txl ssa_820 (texture_handle), ssa_872 (sampler_handle), ssa_868 (coord), ssa_867 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_34 */ } else { block block_33: /* preds: block_31 */ div r31 = mov ssa_1 /* succs: block_34 */ } block block_34: /* preds: block_32 block_33 */ /* succs: block_35 block_36 */ if ssa_814 { block block_35: /* preds: block_34 */ div r34 = mov r22 div r33 = mov r21 div r32 = mov ssa_0 /* succs: block_37 */ } else { block block_36: /* preds: block_34 */ vec4 32 con ssa_875 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) div r33 = ffma ssa_875.x, ssa_766, ssa_875.z div r34 = ffma ssa_875.y, ssa_767, ssa_875.w vec1 32 div ssa_878 = fadd r33, ssa_5 vec1 32 div ssa_879 = fneg r34 vec1 32 div ssa_880 = fadd ssa_6, ssa_879 vec1 32 div ssa_881 = fabs ssa_878 vec1 32 div ssa_882 = fabs ssa_880 vec1 32 div ssa_883 = flt32! ssa_882, ssa_6 vec1 32 div ssa_884 = flt32! 
ssa_881, ssa_6 vec1 32 div ssa_885 = iand ssa_884, ssa_883 div r32 = b32csel ssa_885, ssa_16, ssa_17 /* succs: block_37 */ } block block_37: /* preds: block_35 block_36 */ vec1 32 div ssa_890 = u2f32 r32 vec1 32 div ssa_891 = flt32! ssa_890, ssa_9 /* succs: block_38 block_39 */ if ssa_891 { block block_38: /* preds: block_37 */ vec1 32 div ssa_892 = fneg! r34 vec1 32 div ssa_893 = fadd! ssa_1, ssa_892 vec1 32 div ssa_894 = fneg ssa_815 vec1 32 div ssa_895 = ffma ssa_894, ssa_846, ssa_1 vec3 32 div ssa_896 = vec3 r33, ssa_893, ssa_890 vec1 32 con ssa_897 = umin ssa_68, ssa_211 vec1 32 con ssa_898 = ishl ssa_897, ssa_28 vec1 32 con ssa_899 = iadd ssa_213, ssa_898 vec1 32 con ssa_900 = intrinsic resource_intel (ssa_44, ssa_899, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) div r35 = (float32)txl ssa_820 (texture_handle), ssa_900 (sampler_handle), ssa_896 (coord), ssa_895 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_40 */ } else { block block_39: /* preds: block_37 */ div r35 = mov ssa_1 /* succs: block_40 */ } block block_40: /* preds: block_38 block_39 */ vec1 32 div ssa_903 = fadd r35, r31 /* succs: block_41 block_42 */ if ssa_814 { block block_41: /* preds: block_40 */ div r38 = mov r22 div r37 = mov r21 div r36 = mov ssa_0 /* succs: block_43 */ } else { block block_42: /* preds: block_40 */ vec4 32 con ssa_904 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) div r37 = ffma ssa_904.x, ssa_766, ssa_904.z div r38 = ffma ssa_904.y, ssa_767, ssa_904.w vec1 32 div ssa_907 = fadd r37, ssa_5 vec1 32 div ssa_908 = fneg r38 vec1 32 div ssa_909 = fadd ssa_6, ssa_908 vec1 32 div ssa_910 = fabs ssa_907 vec1 32 div ssa_911 = fabs ssa_909 vec1 32 div ssa_912 = flt32! ssa_911, ssa_6 vec1 32 div ssa_913 = flt32! ssa_910, ssa_6 vec1 32 div ssa_914 = iand ssa_913, ssa_912 div r36 = b32csel ssa_914, ssa_16, ssa_17 /* succs: block_43 */ } block block_43: /* preds: block_41 block_42 */ vec1 32 div ssa_919 = u2f32 r36 vec1 32 div ssa_920 = flt32! ssa_919, ssa_9 /* succs: block_44 block_45 */ if ssa_920 { block block_44: /* preds: block_43 */ vec1 32 div ssa_921 = fneg! r38 vec1 32 div ssa_922 = fadd! 
ssa_1, ssa_921 vec1 32 div ssa_923 = fneg ssa_815 vec1 32 div ssa_924 = ffma ssa_923, ssa_846, ssa_1 vec3 32 div ssa_925 = vec3 r37, ssa_922, ssa_919 vec1 32 con ssa_926 = umin ssa_68, ssa_211 vec1 32 con ssa_927 = ishl ssa_926, ssa_28 vec1 32 con ssa_928 = iadd ssa_213, ssa_927 vec1 32 con ssa_929 = intrinsic resource_intel (ssa_44, ssa_928, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) div r39 = (float32)txl ssa_820 (texture_handle), ssa_929 (sampler_handle), ssa_925 (coord), ssa_924 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_46 */ } else { block block_45: /* preds: block_43 */ div r39 = mov ssa_1 /* succs: block_46 */ } block block_46: /* preds: block_44 block_45 */ vec1 32 div ssa_932 = fadd r39, ssa_903 /* succs: block_47 block_48 */ if ssa_814 { block block_47: /* preds: block_46 */ div r40 = mov ssa_0 /* succs: block_49 */ } else { block block_48: /* preds: block_46 */ vec4 32 con ssa_933 = intrinsic load_ssbo_uniform_block_intel (ssa_77, ssa_816) (access=80, align_mul=1073741824, align_offset=864) div r21 = ffma ssa_933.x, ssa_766, ssa_933.z div r22 = ffma ssa_933.y, ssa_767, ssa_933.w vec1 32 div ssa_936 = fadd r21, ssa_5 vec1 32 div ssa_937 = fneg r22 vec1 32 div ssa_938 = fadd ssa_6, ssa_937 vec1 32 div ssa_939 = fabs ssa_936 vec1 32 div ssa_940 = fabs ssa_938 vec1 32 div ssa_941 = flt32! ssa_940, ssa_6 vec1 32 div ssa_942 = flt32! ssa_939, ssa_6 vec1 32 div ssa_943 = iand ssa_942, ssa_941 div r40 = b32csel ssa_943, ssa_16, ssa_17 /* succs: block_49 */ } block block_49: /* preds: block_47 block_48 */ vec1 32 div ssa_948 = u2f32 r40 vec1 32 div ssa_949 = flt32! ssa_948, ssa_9 /* succs: block_50 block_51 */ if ssa_949 { block block_50: /* preds: block_49 */ vec1 32 div ssa_950 = fneg! r22 vec1 32 div ssa_951 = fadd! ssa_1, ssa_950 vec1 32 div ssa_952 = fneg ssa_815 vec1 32 div ssa_953 = ffma ssa_952, ssa_846, ssa_1 vec3 32 div ssa_954 = vec3 r21, ssa_951, ssa_948 vec1 32 con ssa_955 = umin ssa_68, ssa_211 vec1 32 con ssa_956 = ishl ssa_955, ssa_28 vec1 32 con ssa_957 = iadd ssa_213, ssa_956 vec1 32 con ssa_958 = intrinsic resource_intel (ssa_44, ssa_957, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) div r41 = (float32)txl ssa_820 (texture_handle), ssa_958 (sampler_handle), ssa_954 (coord), ssa_953 (comparator), ssa_0 (lod), 0 (texture), 0 (sampler) /* succs: block_52 */ } else { block block_51: /* preds: block_49 */ div r41 = mov ssa_1 /* succs: block_52 */ } block block_52: /* preds: block_50 block_51 */ vec1 32 div ssa_961 = fadd r41, ssa_932 div r27 = fmul ssa_961, ssa_11 /* succs: block_53 */ } block block_53: /* preds: block_27 block_52 */ div r20 = fmul r27, r20 /* succs: block_55 */ } else { block block_54: /* preds: block_17 */ /* succs: block_55 */ } block block_55: /* preds: block_53 block_54 */ vec1 32 div ssa_966 = flt32! ssa_0, r20 /* succs: block_56 block_60 */ if ssa_966 { block block_56: /* preds: block_55 */ vec1 32 con ssa_967 = intrinsic resource_intel (ssa_44, ssa_94, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_968 = load_const (0x41aaaaab = 21.333334) vec1 32 con ssa_969 = load_const (0x3d400000 = 0.046875) vec1 32 con ssa_970 = load_const (0x3daaaaab = 0.083333) vec1 32 con ssa_971 = load_const (0x00000340 = 0.000000) vec4 32 con ssa_972 = intrinsic load_ssbo_uniform_block_intel (ssa_967, ssa_971) (access=80, align_mul=1073741824, align_offset=832) vec1 32 div ssa_973 = flt32! 
ssa_132, ssa_972.w vec1 32 con ssa_974 = flt32! ssa_0, ssa_972.z vec1 32 div ssa_975 = iand ssa_974, ssa_973 vec1 32 div ssa_976 = b2f32 ssa_975 vec1 32 div ssa_977 = fneg ssa_976 vec1 32 div ssa_978 = fadd ssa_1, ssa_977 vec1 32 div ssa_979 = fmul ssa_978, r20 vec1 32 div ssa_980 = ffma ssa_179, ssa_12, ssa_146 vec1 32 div ssa_981 = fneg ssa_182 vec1 32 div ssa_982 = fneg ssa_149 vec1 32 div ssa_983 = ffma ssa_981, ssa_12, ssa_982 vec1 32 div ssa_984 = ffma ssa_161, ssa_12, ssa_132 vec1 32 con ssa_985 = load_const (0x00000240 = 0.000000) vec4 32 con ssa_986 = intrinsic load_ssbo_uniform_block_intel (ssa_967, ssa_985) (access=80, align_mul=1073741824, align_offset=576) vec1 32 con ssa_987 = fmul ssa_986.x, ssa_968 vec1 32 con ssa_988 = fmul ssa_986.y, ssa_968 vec1 32 con ssa_989 = ffloor ssa_987 vec1 32 con ssa_990 = ffloor ssa_988 vec1 32 con ssa_991 = fneg ssa_989 vec1 32 div ssa_992 = ffma ssa_991, ssa_969, ssa_980 vec1 32 div ssa_993 = ffma ssa_990, ssa_969, ssa_983 vec1 32 div ssa_994 = ffma ssa_992, ssa_970, ssa_6 vec1 32 div ssa_995 = ffma ssa_993, ssa_970, ssa_6 vec1 32 div ssa_996 = fmin! ssa_994, ssa_995 vec1 32 div ssa_997 = fmax! ssa_994, ssa_995 vec1 32 div ssa_998 = flt32! ssa_1, ssa_997 vec1 32 div ssa_999 = flt32! ssa_996, ssa_0 vec1 32 div ssa_1000 = ior ssa_999, ssa_998 /* succs: block_57 block_58 */ if ssa_1000 { block block_57: /* preds: block_56 */ div r42 = mov ssa_0 /* succs: block_59 */ } else { block block_58: /* preds: block_56 */ vec1 32 con ssa_1001 = load_const (0x43008000 = 128.500000) vec1 32 con ssa_1002 = load_const (0x3b000080 = 0.001953) vec1 32 con ssa_1003 = load_const (0x42800000 = 64.000000) vec1 32 con ssa_1004 = fmul ssa_986.z, ssa_968 vec1 32 con ssa_1005 = ffloor ssa_1004 vec2 32 div ssa_1006 = vec2 ssa_994, ssa_995 vec1 32 con ssa_1007 = umin ssa_58, ssa_38 vec1 32 con ssa_1008 = ishl ssa_1007, ssa_27 vec1 32 con ssa_1009 = iadd3 ssa_42, ssa_1008, ssa_40 vec1 32 con ssa_1010 = intrinsic resource_intel (ssa_44, ssa_1009, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1011 = umin ssa_65, ssa_211 vec1 32 con ssa_1012 = ishl ssa_1011, ssa_28 vec1 32 con ssa_1013 = iadd ssa_213, ssa_1012 vec1 32 con ssa_1014 = intrinsic resource_intel (ssa_44, ssa_1013, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1015 = (uint32)tg4 ssa_1010 (texture_handle), ssa_1014 (sampler_handle), ssa_1006 (coord), 0 (gather_component), 0 (texture), 0 (sampler) vec1 32 con ssa_1016 = umin ssa_53, ssa_38 vec1 32 con ssa_1017 = ishl ssa_1016, ssa_27 vec1 32 con ssa_1018 = iadd3 ssa_42, ssa_1017, ssa_40 vec1 32 con ssa_1019 = intrinsic resource_intel (ssa_44, ssa_1018, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_1020 = (uint32)tg4 ssa_1019 (texture_handle), ssa_1014 (sampler_handle), ssa_1006 (coord), 0 (gather_component), 0 (texture), 0 (sampler) vec1 32 div ssa_1021 = u2f32 ssa_1015.x vec1 32 div ssa_1022 = u2f32 ssa_1015.y vec1 32 div ssa_1023 = u2f32 ssa_1015.z vec1 32 div ssa_1024 = u2f32 ssa_1015.w vec1 32 con ssa_1025 = ffma ssa_1005, ssa_969, ssa_1003 vec1 32 div ssa_1026 = fneg ssa_1021 vec1 32 div ssa_1027 = ffma ssa_1026, ssa_1002, ssa_1025 vec1 32 div ssa_1028 = fneg ssa_1022 vec1 32 div ssa_1029 = ffma ssa_1028, ssa_1002, ssa_1025 vec1 32 div ssa_1030 = fneg ssa_1023 vec1 32 div ssa_1031 = ffma ssa_1030, ssa_1002, ssa_1025 vec1 32 div ssa_1032 = fneg ssa_1024 vec1 32 div ssa_1033 = ffma 
ssa_1032, ssa_1002, ssa_1025 vec1 32 div ssa_1034 = u2f32 ssa_1020.x vec1 32 div ssa_1035 = u2f32 ssa_1020.y vec1 32 div ssa_1036 = u2f32 ssa_1020.z vec1 32 div ssa_1037 = u2f32 ssa_1020.w vec1 32 div ssa_1038 = fneg ssa_1034 vec1 32 div ssa_1039 = ffma ssa_1038, ssa_1002, ssa_1025 vec1 32 div ssa_1040 = fneg ssa_1035 vec1 32 div ssa_1041 = ffma ssa_1040, ssa_1002, ssa_1025 vec1 32 div ssa_1042 = fneg ssa_1036 vec1 32 div ssa_1043 = ffma ssa_1042, ssa_1002, ssa_1025 vec1 32 div ssa_1044 = fneg ssa_1037 vec1 32 div ssa_1045 = ffma ssa_1044, ssa_1002, ssa_1025 vec1 32 div ssa_1046 = ffma ssa_992, ssa_968, ssa_1001 vec1 32 div ssa_1047 = ffma ssa_993, ssa_968, ssa_1001 vec1 32 div ssa_1048 = ffract ssa_1046 vec1 32 div ssa_1049 = ffract ssa_1047 vec1 32 div ssa_1050 = flt32! ssa_1039, ssa_984 vec1 32 div ssa_1051 = flt32! ssa_984, ssa_1027 vec1 32 div ssa_1052 = iand ssa_1051, ssa_1050 vec1 32 div ssa_1053 = flt32! ssa_984, ssa_1029 vec1 32 div ssa_1054 = flt32! ssa_1041, ssa_984 vec1 32 div ssa_1055 = iand ssa_1053, ssa_1054 vec1 32 div ssa_1056 = flt32! ssa_984, ssa_1031 vec1 32 div ssa_1057 = flt32! ssa_1043, ssa_984 vec1 32 div ssa_1058 = iand ssa_1056, ssa_1057 vec1 32 div ssa_1059 = flt32! ssa_984, ssa_1033 vec1 32 div ssa_1060 = flt32! ssa_1045, ssa_984 vec1 32 div ssa_1061 = iand ssa_1059, ssa_1060 vec1 32 div ssa_1062 = b2f32 ssa_1052 vec1 32 div ssa_1063 = b2f32 ssa_1055 vec1 32 div ssa_1064 = b2f32 ssa_1058 vec1 32 div ssa_1065 = b2f32 ssa_1061 vec1 32 div ssa_1066 = fneg ssa_1048 vec1 32 div ssa_1067 = fadd ssa_1, ssa_1066 vec1 32 div ssa_1068 = fmul ssa_1063, ssa_1048 vec1 32 div ssa_1069 = ffma ssa_1062, ssa_1067, ssa_1068 vec1 32 div ssa_1070 = fneg ssa_1049 vec1 32 div ssa_1071 = fadd ssa_1, ssa_1070 vec1 32 div ssa_1072 = fmul ssa_1071, ssa_1048 vec1 32 div ssa_1073 = fmul ssa_1072, ssa_1064 vec1 32 div ssa_1074 = fmul ssa_1071, ssa_1067 vec1 32 div ssa_1075 = ffma ssa_1074, ssa_1065, ssa_1073 div r42 = ffma ssa_1069, ssa_1049, ssa_1075 /* succs: block_59 */ } block block_59: /* preds: block_57 block_58 */ vec1 32 div ssa_1078 = fneg r42 vec1 32 div ssa_1079 = fadd ssa_1, ssa_1078 div r20 = fmul ssa_979, ssa_1079 /* succs: block_61 */ } else { block block_60: /* preds: block_55 */ /* succs: block_61 */ } block block_61: /* preds: block_59 block_60 */ vec4 32 con ssa_1082 = intrinsic load_ssbo_uniform_block_intel (ssa_98, ssa_0) (access=80, align_mul=1073741824, align_offset=0) vec1 32 con ssa_1083 = fadd ssa_764.x, ssa_764.y vec1 32 con ssa_1084 = flt32! ssa_1083, ssa_13 vec1 32 div ssa_1085 = flt32! r20, ssa_14 vec1 32 div ssa_1086 = ior ssa_1084, ssa_1085 /* succs: block_62 block_63 */ if ssa_1086 { block block_62: /* preds: block_61 */ div r48 = mov ssa_0 div r47 = mov ssa_0 div r46 = mov ssa_0 div r45 = mov ssa_1 div r44 = mov ssa_1 div r43 = mov ssa_1 /* succs: block_76 */ } else { block block_63: /* preds: block_61 */ vec1 32 con ssa_1087 = load_const (0x411ffffe = 9.999998) vec1 32 con ssa_1088 = load_const (0xbf400000 = -0.750000) vec1 32 con ssa_1089 = load_const (0x41700000 = 15.000000) vec1 32 con ssa_1090 = load_const (0x41f00000 = 30.000000) vec1 32 con ssa_1091 = load_const (0x42700000 = 60.000000) vec1 32 con ssa_1092 = load_const (0x42f00000 = 120.000000) vec1 32 div ssa_1093 = flt32! ssa_135, ssa_1092 /* succs: block_64 block_65 */ if ssa_1093 { block block_64: /* preds: block_63 */ vec1 32 div ssa_1094 = fadd ssa_161, ssa_6 vec1 32 div ssa_1095 = fsat! 
ssa_1094 vec1 32 con ssa_1096 = fadd ssa_764.y, ssa_10 vec1 32 div ssa_1097 = ffma ssa_1095, r20, ssa_1096 vec1 32 div ssa_1098 = fsat! ssa_1097 vec1 32 div ssa_1099 = fneg r20 vec1 32 div ssa_1100 = fmul ssa_1099, ssa_765.w div r43 = ffma ssa_1100, ssa_1098, ssa_1 vec1 32 div ssa_1102 = fadd ssa_10, r20 vec1 32 div ssa_1103 = fmul ssa_1102, ssa_765.z div r44 = ffma ssa_1103, ssa_1098, ssa_1 /* succs: block_66 */ } else { block block_65: /* preds: block_63 */ div r44 = mov ssa_1 div r43 = mov ssa_1 /* succs: block_66 */ } block block_66: /* preds: block_64 block_65 */ vec1 32 div ssa_1107 = flt32! ssa_135, ssa_1091 /* succs: block_67 block_68 */ if ssa_1107 { block block_67: /* preds: block_66 */ vec1 32 con ssa_1108 = load_const (0x41800000 = 16.000000) vec1 32 con ssa_1109 = load_const (0x3fc00000 = 1.500000) vec1 32 con ssa_1110 = load_const (0x40a00001 = 5.000000) vec1 32 con ssa_1111 = load_const (0xbe99999a = -0.300000) vec1 32 con ssa_1112 = load_const (0x3d23d70a = 0.040000) vec1 32 con ssa_1113 = load_const (0x3e4ccccd = 0.200000) vec1 32 div ssa_1114 = fmul ssa_146, ssa_1113 vec1 32 div ssa_1115 = fmul ssa_149, ssa_1113 vec1 32 div ssa_1116 = ffract ssa_1114 vec1 32 div ssa_1117 = ffract ssa_1115 vec2 32 div ssa_1118 = vec2 ssa_1116, ssa_1117 vec1 32 con ssa_1119 = umin ssa_55, ssa_38 vec1 32 con ssa_1120 = ishl ssa_1119, ssa_27 vec1 32 con ssa_1121 = iadd3 ssa_42, ssa_1120, ssa_40 vec1 32 con ssa_1122 = intrinsic resource_intel (ssa_44, ssa_1121, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1123 = umin ssa_67, ssa_211 vec1 32 con ssa_1124 = ishl ssa_1123, ssa_28 vec1 32 con ssa_1125 = iadd ssa_213, ssa_1124 vec1 32 con ssa_1126 = intrinsic resource_intel (ssa_44, ssa_1125, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1127 = (float32)tex ssa_1122 (texture_handle), ssa_1126 (sampler_handle), ssa_1118 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1128 = fmul ssa_146, ssa_1112 vec1 32 div ssa_1129 = fmul ssa_149, ssa_1112 vec1 32 div ssa_1130 = ffract ssa_1128 vec1 32 div ssa_1131 = ffract ssa_1129 vec2 32 div ssa_1132 = vec2 ssa_1130, ssa_1131 vec4 32 div ssa_1133 = (float32)tex ssa_1122 (texture_handle), ssa_1126 (sampler_handle), ssa_1132 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1134 = ffma ssa_1133.w, ssa_1127.w, ssa_10 vec1 32 div ssa_1135 = fsat! ssa_1134 vec1 32 div ssa_1136 = fadd ssa_1135, ssa_5 vec1 32 div ssa_1137 = fmul ssa_1136, ssa_1087 div r48 = fsat! ssa_1137 vec1 32 div ssa_1139 = fadd ssa_1135, ssa_1111 vec1 32 div ssa_1140 = fmul ssa_1139, ssa_1110 vec1 32 div ssa_1141 = fsat! 
ssa_1140 vec1 32 div ssa_1142 = fneg r20 vec1 32 div ssa_1143 = ffma ssa_1142, ssa_765.w, ssa_1 vec1 32 div ssa_1144 = fneg r43 vec1 32 div ssa_1145 = fadd ssa_1143, ssa_1144 vec1 32 div ssa_1146 = ffma ssa_1141, ssa_1145, r43 vec1 32 div ssa_1147 = fneg ssa_1146 vec1 32 div ssa_1148 = fadd ssa_1143, ssa_1147 div r43 = ffma ssa_1148, r48, ssa_1146 vec1 32 div ssa_1150 = fadd ssa_10, r20 vec1 32 div ssa_1151 = fneg r44 vec1 32 div ssa_1152 = fadd ssa_1151, ssa_1 vec1 32 div ssa_1153 = ffma ssa_1150, ssa_765.z, ssa_1152 vec1 32 div ssa_1154 = ffma ssa_1141, ssa_1153, r44 vec1 32 div ssa_1155 = fneg ssa_1154 div r44 = ffma ssa_1155, r48, ssa_1154 vec1 32 div ssa_1157 = fneg r48 div r45 = fadd ssa_1, ssa_1157 vec1 32 div ssa_1159 = fmul ssa_146, ssa_1109 vec1 32 div ssa_1160 = fmul ssa_149, ssa_1109 vec1 32 div ssa_1161 = ffract ssa_1159 vec1 32 div ssa_1162 = ffract ssa_1160 vec1 32 con ssa_1163 = ffract ssa_1082.z vec1 32 con ssa_1164 = fmul ssa_1163, ssa_1108 vec1 32 con ssa_1165 = ffloor ssa_1164 vec1 32 con ssa_1166 = fmul ssa_1165, ssa_11 vec1 32 con ssa_1167 = ffract ssa_1166 vec1 32 con ssa_1168 = ffloor ssa_1166 vec1 32 div ssa_1169 = ffract ssa_1161 vec1 32 div ssa_1170 = ffract ssa_1162 vec1 32 div ssa_1171 = ffma ssa_1169, ssa_11, ssa_1167 vec1 32 con ssa_1172 = fneg ssa_1168 vec1 32 div ssa_1173 = fadd ssa_1170, ssa_1172 vec1 32 div ssa_1174 = fmul ssa_1173, ssa_11 vec1 32 div ssa_1175 = ffract ssa_1171 vec1 32 div ssa_1176 = ffract ssa_1174 vec2 32 div ssa_1177 = vec2 ssa_1175, ssa_1176 vec1 32 con ssa_1178 = umin ssa_57, ssa_38 vec1 32 con ssa_1179 = ishl ssa_1178, ssa_27 vec1 32 con ssa_1180 = iadd3 ssa_42, ssa_1179, ssa_40 vec1 32 con ssa_1181 = intrinsic resource_intel (ssa_44, ssa_1180, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec4 32 div ssa_1182 = (float32)tex ssa_1181 (texture_handle), ssa_1126 (sampler_handle), ssa_1177 (coord), 0 (texture), 0 (sampler) vec1 32 con ssa_1183 = fmul ssa_764.x, ssa_6 vec1 32 div ssa_1184 = fadd ssa_1182.x, ssa_5 vec1 32 div ssa_1185 = fadd ssa_1182.y, ssa_5 vec1 32 div ssa_1186 = ffma ssa_1184, ssa_1183, ssa_6 vec1 32 div ssa_1187 = ffma ssa_1185, ssa_1183, ssa_6 div r46 = fmul ssa_1186, r48 div r47 = fmul ssa_1187, r48 /* succs: block_69 */ } else { block block_68: /* preds: block_66 */ div r48 = mov ssa_0 div r47 = mov ssa_0 div r46 = mov ssa_0 div r45 = mov ssa_1 /* succs: block_69 */ } block block_69: /* preds: block_67 block_68 */ vec1 32 div ssa_1196 = flt32! 
ssa_135, ssa_1090 /* succs: block_70 block_71 */ if ssa_1196 { block block_70: /* preds: block_69 */ vec1 32 con ssa_1197 = load_const (0xc0000000 = -2.000000) vec1 32 con ssa_1198 = load_const (0x3e7d70a4 = 0.247500) vec1 32 con ssa_1199 = load_const (0x3c75c290 = 0.015000) vec1 32 con ssa_1200 = load_const (0x3ea8f5c3 = 0.330000) vec1 32 con ssa_1201 = load_const (0x3f400000 = 0.750000) vec1 32 div ssa_1202 = fabs ssa_182 vec1 32 div ssa_1203 = fmul ssa_149, ssa_1201 vec1 32 div ssa_1204 = fmul ssa_132, ssa_11 vec1 32 div ssa_1205 = ffma ssa_1082.z, ssa_12, ssa_1204 vec1 32 div ssa_1206 = fmul ssa_146, ssa_1201 vec2 32 div ssa_1207 = vec2 ssa_1203, ssa_1205 vec1 32 con ssa_1208 = umin ssa_56, ssa_38 vec1 32 con ssa_1209 = ishl ssa_1208, ssa_27 vec1 32 con ssa_1210 = iadd3 ssa_42, ssa_1209, ssa_40 vec1 32 con ssa_1211 = intrinsic resource_intel (ssa_44, ssa_1210, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1212 = umin ssa_67, ssa_211 vec1 32 con ssa_1213 = ishl ssa_1212, ssa_28 vec1 32 con ssa_1214 = iadd ssa_213, ssa_1213 vec1 32 con ssa_1215 = intrinsic resource_intel (ssa_44, ssa_1214, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1216 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1207 (coord), 0 (texture), 0 (sampler) vec2 32 div ssa_1217 = vec2 ssa_1206, ssa_1205 vec4 32 div ssa_1218 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1217 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1219 = fneg ssa_1216.x vec1 32 div ssa_1220 = fadd ssa_1218.x, ssa_1219 vec1 32 div ssa_1221 = fneg ssa_1216.y vec1 32 div ssa_1222 = fadd ssa_1218.y, ssa_1221 vec1 32 div ssa_1223 = fneg ssa_1216.z vec1 32 div ssa_1224 = fadd ssa_1218.z, ssa_1223 vec1 32 div ssa_1225 = fmul ssa_149, ssa_1198 vec1 32 con ssa_1226 = fmul ssa_1082.z, ssa_1199 vec1 32 div ssa_1227 = ffma ssa_1205, ssa_1200, ssa_1226 vec1 32 div ssa_1228 = fmul ssa_146, ssa_1198 vec2 32 div ssa_1229 = vec2 ssa_1225, ssa_1227 vec4 32 div ssa_1230 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1229 (coord), 0 (texture), 0 (sampler) vec2 32 div ssa_1231 = vec2 ssa_1228, ssa_1227 vec4 32 div ssa_1232 = (float32)tex ssa_1211 (texture_handle), ssa_1215 (sampler_handle), ssa_1231 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1233 = fneg ssa_1230.x vec1 32 div ssa_1234 = fadd ssa_1232.x, ssa_1233 vec1 32 div ssa_1235 = fneg ssa_1230.y vec1 32 div ssa_1236 = fadd ssa_1232.y, ssa_1235 vec1 32 div ssa_1237 = fneg ssa_1230.z vec1 32 div ssa_1238 = fadd ssa_1232.z, ssa_1237 vec1 32 div ssa_1239 = fadd ssa_1238, ssa_1224 vec1 32 div ssa_1240 = fadd ssa_1230.z, ssa_1216.z vec1 32 div ssa_1241 = ffma ssa_1239, ssa_1202, ssa_1240 vec1 32 div ssa_1242 = fabs ssa_161 vec1 32 div ssa_1243 = fadd ssa_1242, ssa_1088 vec1 32 div ssa_1244 = fmul ssa_1243, ssa_1197 vec1 32 div ssa_1245 = fsat! 
ssa_1244 vec1 32 con ssa_1246 = fmul ssa_764.x, ssa_764.y vec1 32 div ssa_1247 = fmul ssa_1246, ssa_1245 vec1 32 div ssa_1248 = fneg r20 vec1 32 div ssa_1249 = ffma ssa_1248, ssa_765.z, ssa_1 vec1 32 div ssa_1250 = fadd ssa_1234, ssa_1220 vec1 32 div ssa_1251 = fadd ssa_1216.x, ssa_10 vec1 32 div ssa_1252 = fadd ssa_1251, ssa_1230.x vec1 32 div ssa_1253 = ffma ssa_1250, ssa_1202, ssa_1252 vec1 32 div ssa_1254 = fadd ssa_1236, ssa_1222 vec1 32 div ssa_1255 = fadd ssa_1216.y, ssa_10 vec1 32 div ssa_1256 = fadd ssa_1255, ssa_1230.y vec1 32 div ssa_1257 = ffma ssa_1254, ssa_1202, ssa_1256 vec1 32 div ssa_1258 = fneg r46 vec1 32 div ssa_1259 = ffma ssa_1253, ssa_1249, ssa_1258 vec1 32 div ssa_1260 = fneg r47 vec1 32 div ssa_1261 = ffma ssa_1257, ssa_1249, ssa_1260 div r46 = ffma ssa_1259, ssa_1247, r46 div r47 = ffma ssa_1261, ssa_1247, r47 vec1 32 div ssa_1264 = fneg ssa_1247 div r48 = ffma ssa_1264, r48, r48 vec1 32 div ssa_1266 = fadd ssa_10, r20 vec1 32 div ssa_1267 = ffma ssa_1266, ssa_765.z, ssa_1 vec1 32 div ssa_1268 = fneg r44 vec1 32 div ssa_1269 = fadd ssa_1267, ssa_1268 vec1 32 div ssa_1270 = fmul ssa_1269, ssa_6 vec1 32 div ssa_1271 = fmul ssa_1270, ssa_1247 div r44 = ffma ssa_1271, ssa_1241, r44 /* succs: block_72 */ } else { block block_71: /* preds: block_69 */ /* succs: block_72 */ } block block_72: /* preds: block_70 block_71 */ vec1 32 div ssa_1277 = flt32! ssa_135, ssa_1089 /* succs: block_73 block_74 */ if ssa_1277 { block block_73: /* preds: block_72 */ vec1 32 con ssa_1278 = load_const (0x40d55558 = 6.666668) vec1 32 con ssa_1279 = load_const (0x3e199998 = 0.150000) vec1 32 con ssa_1280 = load_const (0x3ecccccd = 0.400000) vec1 32 div ssa_1281 = fadd ssa_161, ssa_1088 vec1 32 div ssa_1282 = fmul ssa_1281, ssa_1087 vec1 32 div ssa_1283 = fsat! ssa_1282 vec1 32 div ssa_1284 = ffract ssa_146 vec1 32 div ssa_1285 = ffract ssa_149 vec2 32 div ssa_1286 = vec2 ssa_1284, ssa_1285 vec1 32 con ssa_1287 = umin ssa_55, ssa_38 vec1 32 con ssa_1288 = ishl ssa_1287, ssa_27 vec1 32 con ssa_1289 = iadd3 ssa_42, ssa_1288, ssa_40 vec1 32 con ssa_1290 = intrinsic resource_intel (ssa_44, ssa_1289, ssa_44) (desc_set=1, binding=2, resource_intel=bindless /*1*/, resource_block_intel=-1) vec1 32 con ssa_1291 = umin ssa_67, ssa_211 vec1 32 con ssa_1292 = ishl ssa_1291, ssa_28 vec1 32 con ssa_1293 = iadd ssa_213, ssa_1292 vec1 32 con ssa_1294 = intrinsic resource_intel (ssa_44, ssa_1293, ssa_44) (desc_set=0, binding=0, resource_intel=bindless|sampler /*5*/, resource_block_intel=-1) vec4 32 div ssa_1295 = (float32)tex ssa_1290 (texture_handle), ssa_1294 (sampler_handle), ssa_1286 (coord), 0 (texture), 0 (sampler) vec1 32 div ssa_1296 = ffma ssa_1082.z, ssa_1280, ssa_1295.x vec1 32 div ssa_1297 = ffract ssa_1296 vec1 32 div ssa_1298 = fneg ssa_1297 vec1 32 div ssa_1299 = fadd ssa_1279, ssa_1298 vec1 32 div ssa_1300 = fsat! ssa_1299 vec1 32 con ssa_1301 = fmul ssa_764.x, ssa_1278 vec1 32 con ssa_1302 = fmul ssa_1301, ssa_764.x vec1 32 div ssa_1303 = fmul ssa_1302, r20 vec1 32 div ssa_1304 = fmul ssa_1303, ssa_1283 vec1 32 div ssa_1305 = fmul ssa_1304, ssa_1295.z vec1 32 div ssa_1306 = fmul ssa_1305, ssa_1300 vec1 32 con ssa_1307 = fadd ssa_764.y, ssa_10 vec1 32 div ssa_1308 = fadd ssa_1307, ssa_1295.y vec1 32 div ssa_1309 = fmul ssa_1308, ssa_1087 vec1 32 div ssa_1310 = fsat! 
ssa_1309 vec1 32 div ssa_1311 = fneg r43 vec1 32 div ssa_1312 = fadd ssa_1311, ssa_1 vec1 32 div ssa_1313 = fneg r20 vec1 32 div ssa_1314 = ffma ssa_1313, ssa_765.w, ssa_1312 vec1 32 div ssa_1315 = fmul ssa_1310, ssa_1314 div r43 = ffma ssa_1315, ssa_1306, r43 vec1 32 div ssa_1317 = fneg ssa_1310 vec1 32 div ssa_1318 = fmul ssa_1317, r44 div r44 = ffma ssa_1318, ssa_1306, r44 /* succs: block_75 */ } else { block block_74: /* preds: block_72 */ /* succs: block_75 */ } block block_75: /* preds: block_73 block_74 */ /* succs: block_76 */ } block block_76: /* preds: block_62 block_75 */ vec1 32 div ssa_1328 = fmul r43, r11 vec1 32 div ssa_1329 = fmul r43, r10 vec1 32 div ssa_1330 = fmul r43, r9 vec1 32 div ssa_1331 = fmul r44, r12 div r49 = ffma r45, ssa_761, r46 div r50 = ffma r45, ssa_762, r47 div r51 = ffma r45, ssa_763, r48 /* succs: block_77 block_78 */ if ssa_289 { block block_77: /* preds: block_76 */ vec1 32 div ssa_1335 = fmul ssa_161, r51 vec1 32 div ssa_1336 = ffma ssa_182, r50, ssa_1335 vec1 32 div ssa_1337 = ffma ssa_179, r49, ssa_1336 vec1 32 div ssa_1338 = fneg r49 vec1 32 div ssa_1339 = fmul ssa_1338, ssa_9 vec1 32 div ssa_1340 = fneg r50 vec1 32 div ssa_1341 = fmul ssa_1340, ssa_9 vec1 32 div ssa_1342 = fneg r51 vec1 32 div ssa_1343 = fmul ssa_1342, ssa_9 div r49 = ffma ssa_1339, ssa_1337, ssa_179 div r50 = ffma ssa_1341, ssa_1337, ssa_182 div r51 = ffma ssa_1343, ssa_1337, ssa_161 /* succs: block_79 */ } else { block block_78: /* preds: block_76 */ /* succs: block_79 */ } block block_79: /* preds: block_77 block_78 */ vec1 32 con ssa_1350 = fmul ssa_189.y, ssa_3 vec1 32 div ssa_1351 = fsqrt ssa_1328 vec1 32 div ssa_1352 = fsqrt ssa_1329 vec1 32 div ssa_1353 = fsqrt ssa_1330 vec1 32 div ssa_1354 = fabs r51 vec1 32 div ssa_1355 = fabs r50 vec1 32 div ssa_1356 = fmax! ssa_1355, ssa_1354 vec1 32 div ssa_1357 = fabs r49 vec1 32 div ssa_1358 = fmax! 
ssa_1357, ssa_1356 vec1 32 div ssa_1359 = frcp ssa_1358 vec1 32 div ssa_1360 = fmul r49, ssa_6 vec1 32 div ssa_1361 = fmul r50, ssa_6 vec1 32 div ssa_1362 = fmul r51, ssa_6 vec1 32 div ssa_1363 = ffma ssa_1360, ssa_1359, ssa_6 vec1 32 div ssa_1364 = ffma ssa_1361, ssa_1359, ssa_6 vec1 32 div ssa_1365 = ffma ssa_1362, ssa_1359, ssa_6 vec1 32 div ssa_1366 = fmul ssa_754, ssa_8 vec1 32 div ssa_1367 = f2u32 ssa_1366 vec1 32 div ssa_1368 = u2f32 ssa_1367 vec1 32 div ssa_1369 = fmul ssa_1368, ssa_7 vec1 32 div ssa_1370 = frcp ssa_123 vec1 32 div ssa_1371 = fmul ssa_139, ssa_1370 vec1 32 div ssa_1372 = fmul ssa_143, ssa_1370 vec1 32 div ssa_1373 = fmul ssa_120, ssa_1370 vec1 32 div ssa_1374 = frcp ssa_112 vec1 32 div ssa_1375 = ffma ssa_126, ssa_1374, ssa_1371 vec1 32 div ssa_1376 = ffma ssa_129, ssa_1374, ssa_1372 vec1 32 div ssa_1377 = ffma ssa_109, ssa_1374, ssa_1373 vec1 32 div ssa_1378 = fmul ssa_1375, ssa_6 vec1 32 div ssa_1379 = fmul ssa_1376, ssa_5 vec1 32 div ssa_1380 = fmul ssa_1377, ssa_4 vec4 32 div ssa_1381 = vec4 ssa_1351, ssa_1352, ssa_1353, ssa_1350 intrinsic store_output (ssa_1381, ssa_0) (base=8, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA0 slots=1 /*132*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target */ vec4 32 div ssa_1382 = vec4 ssa_1363, ssa_1364, ssa_1365, ssa_0 intrinsic store_output (ssa_1382, ssa_0) (base=10, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA1 slots=1 /*133*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_1 */ vec4 32 div ssa_1383 = vec4 r13, ssa_1331, ssa_3, ssa_1369 intrinsic store_output (ssa_1383, ssa_0) (base=12, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA2 slots=1 /*134*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_2 */ vec4 32 div ssa_1384 = vec4 ssa_1378, ssa_1379, ssa_1380, ssa_1 intrinsic store_output (ssa_1384, ssa_0) (base=14, wrmask=xyzw /*15*/, component=0, src_type=float32 /*160*/, io location=FRAG_RESULT_DATA3 slots=1 /*135*/, xfb() /*0*/, xfb2() /*0*/) /* SV_Target_3 */ /* succs: block_80 */ block block_80: }
SIMD16 FS compile failed: Failure to register allocate. Reduce number of live scalar values to avoid this.
Native code for unnamed fragment shader (null) (sha1 9751963a7ab4003fcd9d5bc365439b48fc93519f)
SIMD8 shader: 2049 instructions. 2 loops. 38446 cycles. 0:0 spills:fills, 61 sends, scheduled with mode non-lifo. Promoted 16 constants.
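These two messages are the payoff of the whole dump: the Intel back end first tried to compile the fragment shader at SIMD16, failed register allocation, and fell back to SIMD8, which fit cleanly (0:0 spills:fills). The arithmetic behind the fallback is simple. Assuming the conventional DG2/Xe-HPG register file of 128 GRFs of 32 bytes each (4 KiB per thread):

    SIMD8:  one live 32-bit value =  8 lanes x 4 B = 32 B = 1 GRF  -> roughly 128 values fit
    SIMD16: one live 32-bit value = 16 lanes x 4 B = 64 B = 2 GRFs -> roughly  64 values fit

Doubling the lane count halves the budget of live values, which is exactly what "Reduce number of live scalar values to avoid this" is hinting at; a 2049-instruction shader with two loops and this much texture sampling simply keeps too many values alive for the SIMD16 variant.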
Compacted 32784 to 29216 bytes (11%) and(16) g127<1>UW g1<0,1,0>UB 0x0f000f00V { align1 WE_all 1H }; and(16) g4<1>UW g1.1<0,1,0>UB 0x0f0f0000V { align1 WE_all 1H }; shr(8) g20<2>UW g1<0,1,0>UB 0x00000001UD { align1 1Q }; shr(8) g28<2>UW g1.1<0,1,0>UB 0x00000001UD { align1 1Q }; add(16) g105<1>UW g1.4<2,8,0>UW g127<16,16,1>UW { align1 WE_all 1H I@4 }; add(16) g81<1>UW g1.5<2,8,0>UW g4<16,16,1>UW { align1 WE_all 1H I@4 }; mov(8) g61<1>UW g20<16,8,2>UW { align1 1Q I@4 }; mov(8) g112<1>UW g28<16,8,2>UW { align1 1Q I@4 }; add(16) g34<1>UW g105<16,16,1>UW g61<0,1,0>UW { align1 WE_all 1H I@2 }; add(16) g35<1>UW g81<16,16,1>UW g112<0,1,0>UW { align1 WE_all 1H I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; mov(8) g125<1>F g34<16,8,2>UW { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mov(8) g126<1>F g35<16,8,2>UW { align1 1Q }; mov(1) f1<1>UW g1.14<0,1,0>UW { align1 WE_all 1N }; add(8) g53<1>D g8.2<0,1,0>D 3D { align1 1Q compacted }; add(8) g77<1>D g8.2<0,1,0>D 2D { align1 1Q compacted }; add(8) g83<1>D g8.1<0,1,0>D 5D { align1 1Q compacted }; add(8) g89<1>D g8.1<0,1,0>D 4D { align1 1Q compacted }; add(8) g92<1>D g7.5<0,1,0>D 1D { align1 1Q compacted }; sel.l(8) g86<1>UD g7.6<0,1,0>UD 0x000f423fUD { align1 1Q }; sel.l(8) g95<1>UD g7.5<0,1,0>UD 0x000f423fUD { align1 1Q }; sel.l(8) g97<1>UD g8<0,1,0>UD 0x000f423fUD { align1 1Q }; mad(8) g100<1>F g26.3<0,1,0>F g26.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g101<1>F g26.7<0,1,0>F g26.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g51<1>F -g24.3<0,1,0>F g24.1<0,1,0>F -g3<1,1,1>F { align1 1Q }; mad(8) g44<1>F g24.7<0,1,0>F g24.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g104<1>F g25.3<0,1,0>F g25.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g10<1>F g25.7<0,1,0>F g25.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g106<1>F g22.3<0,1,0>F g22.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g107<1>F g22.7<0,1,0>F g22.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g108<1>F -g23.3<0,1,0>F g23.1<0,1,0>F -g3<1,1,1>F { align1 1Q }; mad(8) g109<1>F -g23.7<0,1,0>F g23.5<0,1,0>F -g3<1,1,1>F { align1 1Q }; mad(8) g110<1>F g21.3<0,1,0>F g21.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g111<1>F g21.7<0,1,0>F g21.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g55<1>F g18.3<0,1,0>F g18.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g5<1>F g18.7<0,1,0>F g18.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g114<1>F g19.3<0,1,0>F g19.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g115<1>F g16.3<0,1,0>F g16.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g71<1>F g16.7<0,1,0>F g16.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g117<1>F g17.3<0,1,0>F g17.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g118<1>F g17.7<0,1,0>F g17.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g119<1>F g14.3<0,1,0>F g14.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g121<1>F g14.7<0,1,0>F g14.5<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g123<1>F g15.3<0,1,0>F g15.1<0,1,0>F g3<1,1,1>F { align1 1Q }; mad(8) g124<1>F g15.7<0,1,0>F g15.5<0,1,0>F g3<1,1,1>F { align1 1Q }; add(8) g12<1>D g8.5<0,1,0>D 16D { align1 1Q compacted }; add(8) g11<1>D g8.5<0,1,0>D 14D { align1 1Q compacted }; add(8) g103<1>D g8.5<0,1,0>D 7D { align1 1Q compacted }; add(8) g68<1>D g8.5<0,1,0>D 6D { align1 1Q compacted }; add(8) g69<1>D g8.5<0,1,0>D 5D { align1 1Q compacted }; add(8) g116<1>D g8.5<0,1,0>D 4D { align1 1Q compacted }; add(8) g72<1>D g8.5<0,1,0>D 1D { align1 1Q compacted }; add(8) g52<1>D g8.4<0,1,0>D 16D { align1 1Q compacted }; add(8) g82<1>D g8.2<0,1,0>D 1D { align1 1Q compacted }; add(8) g47<1>D g9.1<0,1,0>D 2D { align1 1Q compacted }; add(8) 
g48<1>D g9.1<0,1,0>D 1D { align1 1Q compacted }; mov(1) g38<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g38<1>UD g38<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; add(8) g125<1>F g125<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted }; add(8) g126<1>F g126<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted }; sel.l(8) g75<1>UD g53<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g42<1>UD g77<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g84<1>UD g83<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g90<1>UD g89<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g93<1>UD g92<8,8,1>UD 0x000f423fUD { align1 1Q }; asr(8) g29<2>W g1.2<0,1,0>W 15D { align1 1Q }; shl(8) g87<1>D g86<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g96<1>D g95<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g43<1>D g97<8,8,1>D 0x00000006UD { align1 1Q }; mad(8) g46<1>F g100<8,8,1>F g26.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g54<1>F g101<8,8,1>F g26.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g6<1>F g51<8,8,1>F g24.0<0,1,0>F -g2<1,1,1>F { align1 1Q }; mad(8) g56<1>F g44<8,8,1>F g24.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g40<1>F g104<8,8,1>F g25.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g58<1>F g10<8,8,1>F g25.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g59<1>F g106<8,8,1>F g22.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g60<1>F g107<8,8,1>F g22.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g61<1>F g108<8,8,1>F g23.0<0,1,0>F -g2<1,1,1>F { align1 1Q }; mad(8) g112<1>F g109<8,8,1>F g23.4<0,1,0>F -g2<1,1,1>F { align1 1Q }; mad(8) g7<1>F g110<8,8,1>F g21.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g105<1>F g111<8,8,1>F g21.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g113<1>F g114<8,8,1>F g19.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g81<1>F g115<8,8,1>F g16.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g79<1>F g71<8,8,1>F g16.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g62<1>F g117<8,8,1>F g17.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g63<1>F g118<8,8,1>F g17.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g120<1>F g119<8,8,1>F g14.0<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g122<1>F g121<8,8,1>F g14.4<0,1,0>F g2<1,1,1>F { align1 1Q }; mad(8) g41<1>F g123<8,8,1>F g15.0<0,1,0>F g2<1,1,1>F { align1 1Q }; add(8) g53<1>D g9.1<0,1,0>D 3D { align1 1Q compacted }; mad(8) g77<1>F g55<8,8,1>F g18.0<0,1,0>F g2<1,1,1>F { align1 1Q I@7 }; mov(8) g127<1>UD g125<8,8,1>F { align1 1Q }; mov(8) g4<1>UD g126<8,8,1>F { align1 1Q }; shl(8) g76<1>D g75<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g80<1>D g42<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g85<1>D g84<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g91<1>D g90<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g94<1>D g93<8,8,1>D 0x00000006UD { align1 1Q }; mov(8) g99<1>UW g29<16,8,2>UW { align1 1Q }; add3(8) g88<1>D g13.7<0,1,0>D g87<8,8,1>D 128W { align1 1Q }; add3(8) g102<1>D g13.7<0,1,0>D g96<8,8,1>D 128W { align1 1Q }; mad(8) g19<1>F g124<8,8,1>F g15.4<0,1,0>F g2<1,1,1>F { align1 1Q }; add3(8) g75<1>D g13.7<0,1,0>D g43<8,8,1>D 128W { align1 1Q }; mad(8) g42<1>F g5<8,8,1>F g18.4<0,1,0>F g2<1,1,1>F { align1 1Q I@7 }; add3(8) g100<1>D g13.7<0,1,0>D g76<8,8,1>D 128W { align1 1Q I@7 }; add3(8) g101<1>D g13.7<0,1,0>D g80<8,8,1>D 128W { align1 1Q I@7 }; add3(8) g49<1>D g13.7<0,1,0>D g85<8,8,1>D 128W { align1 1Q I@7 }; add3(8) g51<1>D g13.7<0,1,0>D g91<8,8,1>D 128W { align1 1Q I@7 }; add3(8) g50<1>D g13.7<0,1,0>D g94<8,8,1>D 128W { align1 1Q I@7 }; mad(8) g76<1>F g27.3<0,1,0>F g27.1<0,1,0>F g3<1,1,1>F { align1 1Q I@5 }; not(8) g45<1>D g99<8,8,1>W { align1 1Q I@7 }; and(1) g38<1>UD mask0<0,1,0>UD 
g38<0,1,0>UD { align1 WE_all 1N }; sel.l(8) g20<1>UD g11<8,8,1>UD 0x000f423fUD { align1 1Q }; mov(8) g18<1>UD 0x000001c0UD { align1 WE_all 1Q F@2 }; and(8) g14<1>UD g127<1,1,0>UD 0x0000003fUD { align1 1Q F@6 compacted }; and(8) g15<1>UD g4<1,1,0>UD 0x0000003fUD { align1 1Q F@3 compacted }; fbl(1) g80<1>UD g38<0,1,0>UD { align1 WE_all 1N I@5 }; shl(8) g21<1>D g20<8,8,1>D 0x00000006UD { align1 1Q I@5 }; shl(1) a0<1>UD g80<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g16<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; add3(8) g22<1>D g13.7<0,1,0>D g21<8,8,1>D 128W { align1 1Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g16<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g17UD g18UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $0 }; shl(1) a0<1>UD g80<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g23<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.dst }; and(8) g16<1>UD g17.1<0,1,0>UD 0x0000003fUD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g23<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g16UD g14UD nullUD 0x0613a0fc a0.1<0>UD sampler MsgDesc: ld_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 3 rlen 1 { align1 1Q @1 $1 }; shl(1) a0<1>UD g80<0,1,0>UD 0x00000002UD { align1 WE_all 1N $1.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g14<1>UD g[a0 352]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g15<1>UD 0x00000000UD { align1 WE_all 1Q $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g14<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(1) g14UD g15UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; mad(8) g24<1>F g14.3<0,1,0>F g14.2<0,1,0>F g16<1,1,1>F { align1 1Q $1.dst }; (+f1.0) cmp.ge.f1.0(8) null<1>F g24<8,8,1>F 0x0F /* 0F */ { align1 1Q F@1 }; (-f1.0.any4h) halt(8) JIP: LABEL1 UIP: LABEL0 { align1 1Q }; mov(1) g39<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g39<1>UD g39<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g39<1>UD mask0<0,1,0>UD g39<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g26<1>UD 0x00000000UD { align1 WE_all 1Q }; fbl(1) g55<1>UD g39<0,1,0>UD { align1 WE_all 1N A@2 }; shl(1) a0<1>UD g55<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g5<1>UD g[a0 256]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g123<1>UD f0<0,1,0>UD { align1 WE_all 1N F@7 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g5<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g25UD g26UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $3 }; mov(1) f0<1>UD g123<0,1,0>UD { align1 WE_all 1N I@2 }; sel.l(8) g31<1>UD g9<0,1,0>UD 0x000007ffUD { align1 1Q compacted }; sel.l(8) g28<1>UD g8.2<0,1,0>UD 
0x000f423fUD { align1 1Q }; mov(8) g22<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; mad(8) g23<1>F g26.0<0,1,0>F g120<8,8,1>F g25.4<0,1,0>F { align1 1Q F@7 }; mad(8) g24<1>F g26.1<0,1,0>F g122<8,8,1>F g25.5<0,1,0>F { align1 1Q F@7 }; mov(8) g104<1>UD g26.4<0,1,0>F { align1 1Q }; mov(8) g67<1>UD 0x00000040UD { align1 WE_all 1Q }; mov(1) g8<1>D 1073741824D { align1 WE_all 1N }; shl(8) g32<1>D g31<8,8,1>D 0x00000005UD { align1 1Q I@6 }; shl(8) g29<1>D g28<8,8,1>D 0x00000006UD { align1 1Q I@6 }; mov(1) g22.2<1>UD 0x0000c000UD { align1 WE_all 1N I@6 }; mov(1) g8.2<1>D 1065353216D { align1 WE_all 1N I@4 }; add(8) g44<1>D g13.6<0,1,0>D g32<1,1,0>D { align1 1Q I@4 compacted }; add3(8) g30<1>D g13.7<0,1,0>D g29<8,8,1>D 128W { align1 1Q I@4 }; shl(1) a0<1>UD g55<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g34<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g55<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g33<1>UD g[a0 448]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g22.3<1>UD g34<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g33<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g17UD g22UD g23UD 0x022a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 2 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; mad(8) g36<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g18<1,1,1>F { align1 1Q $4.dst }; mad(8) g35<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g17<1,1,1>F { align1 1Q $4.dst }; mul(8) g37<1>F -g36<1,1,0>F g36<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g98<1>F g25<0,1,0>F g36<1,1,0>F { align1 1Q compacted }; mul(8) g78<1>F g25<0,1,0>F g35<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g38<1>F g37<8,8,1>F g35<8,8,1>F -g35<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; add.sat(8) g39<1>F g38<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; math sqrt(8) g57<1>F g39<8,8,1>F null<8,8,1>F { align1 1Q @1 $5 }; add(8) g70<1>F g57<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q $5.dst compacted }; mad(8) g86<1>F g8.2<0,1,0>F g70<8,8,1>F g25.0<0,1,0>F { align1 1Q F@1 }; mul(8) g64<1>F g86<1,1,0>F g86<1,1,0>F { align1 1Q F@1 compacted }; mad(8) g65<1>F g64<8,8,1>F g98<8,8,1>F g98<1,1,1>F { align1 1Q F@1 }; mad(8) g66<1>F g65<8,8,1>F g78<8,8,1>F g78<1,1,1>F { align1 1Q F@1 }; math rsq(8) g87<1>F g66<8,8,1>F null<8,8,1>F { align1 1Q @1 $6 }; mul(8) g88<1>F g78<1,1,0>F g87<1,1,0>F { align1 1Q $6.dst compacted }; mul(8) g89<1>F g98<1,1,0>F g87<1,1,0>F { align1 1Q compacted }; mov(1) g121<1>UD f0<0,1,0>UD { align1 WE_all 1N compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; or(1) a0.1<1>UD g5<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g17UD g67UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $7 }; mov(1) f0<1>UD g121<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g80<1>F -g122<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.dst }; rndd(8) g91<1>F -g17.4<0,1,0>F { align1 1Q compacted }; rndd(8) g95<1>F -g17.5<0,1,0>F { align1 1Q compacted }; 
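The scalar stretch just above, from mul(8) g37 through math rsq(8) g87 and the two trailing muls, is the machine code for the NIR around ssa_712-ssa_751: reconstruct the z component of a unit normal from its x/y channels, then renormalize the result. A hedged C illustration of the pattern (register names taken from the listing; the intensity blend between the sqrt and the rsq is elided):

    #include <math.h>

    /* Illustration only, not decompiled game code. */
    static void reconstruct_normal(float x, float y, float n[3])
    {
        /* mul g37 / mad g38 / add.sat g39 / math sqrt g57:
         * z = sqrt(saturate(1 - x*x - y*y)) */
        float t = 1.0f - x * x - y * y;
        if (t < 0.0f) t = 0.0f;            /* add.sat clamps to [0, 1] */
        float z = sqrtf(t);

        /* mul g64 / mad g65,g66 / math rsq g87 / mul g88,g89:
         * renormalize with a reciprocal square root */
        float inv = 1.0f / sqrtf(x * x + y * y + z * z);
        n[0] = x * inv;
        n[1] = y * inv;
        n[2] = z * inv;
    }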
rndd(8) g107<1>F -g17.6<0,1,0>F { align1 1Q compacted }; frc(8) g74<1>F g120<1,1,0>F { align1 1Q compacted }; shl(1) a0<1>UD g55<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g114<1>UD g[a0 160]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g73<1>UD 0x00000080UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; mov(8) g30<1>UD g120<4,4,0>UD { align1 1Q }; mov(8) g31<1>UD g120.1<4,4,0>UD { align1 1Q }; mov(8) g32<1>UD g122<4,4,0>UD { align1 1Q }; mov(8) g33<1>UD g122.1<4,4,0>UD { align1 1Q }; mov(8) g34<1>UD g120<4,4,0>UD { align1 1Q }; mov(8) g35<1>UD g120.2<4,4,0>UD { align1 1Q }; mov(8) g36<1>UD g122<4,4,0>UD { align1 1Q }; mov(8) g37<1>UD g122.2<4,4,0>UD { align1 1Q }; mov(1) g8.1<1>D 1D { align1 WE_all 1N }; frc(8) g11<1>F g80<1,1,0>F { align1 1Q F@5 compacted }; mov(8) g92<1>D -g91<1,1,0>F { align1 1Q F@5 compacted }; mov(8) g96<1>UD -g91<8,8,1>F { align1 1Q }; mov(8) g97<1>UD -g95<8,8,1>F { align1 1Q F@4 }; mov(8) g108<1>D -g107<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g83<1>F g74<1,1,0>F g17.4<0,1,0>F { align1 1Q F@2 compacted }; mul(8) g43<1>F g74<1,1,0>F g17.6<0,1,0>F { align1 1Q compacted }; add(8) g91<1>F -g32<1,1,0>F g33<1,1,0>F { align1 1Q I@3 compacted }; mul(8) g95<1>F g74<1,1,0>F g17<0,1,0>F { align1 1Q I@2 compacted }; mul(8) g84<1>F g11<1,1,0>F g17.5<0,1,0>F { align1 1Q F@5 compacted }; mul(8) g99<1>F g11<1,1,0>F g17.7<0,1,0>F { align1 1Q compacted }; mul(8) g110<1>D g97<8,8,1>D g96<16,8,2>UW { align1 1Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; mul(8) g15<1>D g97<8,8,1>D g96.1<16,8,2>UW { align1 1Q }; mov(8) g85<1>D g83<1,1,0>F { align1 1Q F@6 compacted }; mov(8) g10<1>D g43<1,1,0>F { align1 1Q F@5 compacted }; mov(8) g90<1>D g84<1,1,0>F { align1 1Q F@2 compacted }; mov(8) g106<1>D g99<1,1,0>F { align1 1Q F@1 compacted }; add(8) g110.1<2>UW g110.1<16,8,2>UW g15<16,8,2>UW { align1 1Q I@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g5<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g15UD g73UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $8 }; mul(8) g93<1>D g90<8,8,1>D g92<16,8,2>UW { align1 1Q I@3 }; mul(8) g3<1>D g90<8,8,1>D g92.1<16,8,2>UW { align1 1Q }; mul(8) g109<1>D g108<8,8,1>D g106<16,8,2>UW { align1 1Q I@4 }; mul(8) g4<1>D g108<8,8,1>D g106.1<16,8,2>UW { align1 1Q }; add(8) g90<1>F -g30<1,1,0>F g31<1,1,0>F { align1 1Q I@3 compacted }; add(8) g92<1>F -g34<1,1,0>F g35<1,1,0>F { align1 1Q I@3 compacted }; add(8) g93.1<2>UW g93.1<16,8,2>UW g3<16,8,2>UW { align1 1Q I@3 }; add(8) g109.1<2>UW g109.1<16,8,2>UW g4<16,8,2>UW { align1 1Q I@2 }; add(8) g94<1>D g93<1,1,0>D g85<1,1,0>D { align1 1Q I@2 compacted }; add(8) g93<1>F -g36<1,1,0>F g37<1,1,0>F { align1 1Q I@1 compacted }; add3(8) g111<1>D g110<8,8,1>D g10<8,8,1>D g109<1,1,1>D { align1 1Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; math inv(8) g110<1>F g18.4<0,1,0>F null<8,8,1>F { align1 1Q $9 }; shl(8) g127<1>D g94<8,8,1>D 0x00000003UD { align1 1Q I@2 }; frc(8) g94<1>F g122<1,1,0>F { align1 1Q I@1 compacted }; shl(8) g3<1>D g111<8,8,1>D 0x00000003UD { align1 1Q I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; math inv(8) g111<1>F g18.5<0,1,0>F null<8,8,1>F { align1 1Q $10 }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g114<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g64UD g127UD nullUD 0x22203502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, xy, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $11 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g114<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g66UD g3UD nullUD 0x22203502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, xy, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; math inv(8) g114<1>F g18<0,1,0>F null<8,8,1>F { align1 1Q $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.dst }; mul(8) g108<1>F g74<1,1,0>F g114<1,1,0>F { align1 1Q I@7 compacted }; mul(8) g109<1>F g11<1,1,0>F g114<1,1,0>F { align1 1Q I@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; mov(8) g10<1>UD g15<0,1,0>F { align1 1Q }; mov(8) g106<1>UD g15.1<0,1,0>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst }; or(8) g55<1>UD g67<1,1,0>UD g65<1,1,0>UD { align1 1Q $12.dst compacted }; and(8) g107<1>UD g55<1,1,0>UD g15.4<0,1,0>UD { align1 1Q I@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mul(8) g55<1>F g110<1,1,0>F g18<0,1,0>F { align1 1Q I@1 compacted }; mov(8) g5<1>UD 0x00000000UD { align1 WE_all 1Q I@1 }; mov(8) g5<1>UD g107<8,8,1>UD { align1 1Q }; or(4) g5.1<2>UD g5<8,4,2>UD g5.1<8,4,2>UD { align1 WE_all 1N I@1 }; or(2) g5.2<4>UD g5.1<8,2,4>UD g5.2<8,2,4>UD { align1 WE_all 1N I@1 }; or(2) g5.3<4>UD g5.1<8,2,4>UD g5.3<8,2,4>UD { align1 WE_all 1N I@1 }; or(4) g5.4<1>UD g5.3<0,1,0>UD g5.4<4,4,1>UD { align1 WE_all 1N I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mov(8) g43<1>UD g5.7<0,1,0>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; mul(8) g5<1>F g111<1,1,0>F g18<0,1,0>F { align1 1Q I@1 compacted }; add(8) g115<1>D g43<1,1,0>D -1D { align1 1Q I@1 compacted }; and.nz.f0.0(8) null<1>UD g115<8,8,1>UD g43<8,8,1>UD { align1 1Q I@1 }; (+f0.0) if(8) JIP: LABEL3 UIP: LABEL2 { align1 1Q }; END B0 ->B1 ->B9 START B1 <-B0 (22 cycles) sel.l(8) g71<1>UD g82<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g118<1>UD g53<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; mov(8) g99<1>UD 0x00000000UD { align1 1Q }; mov(8) g85<1>UD 0x00000000UD { align1 1Q }; mov(8) g84<1>UD 0x00000000UD { align1 1Q }; mov(8) g83<1>UD 0x00000000UD { align1 1Q }; mov(8) g11<1>UD 0x00000000UD { align1 1Q F@3 }; mov(8) g3<1>UD 0x00000000UD { align1 1Q $12.src }; mov(8) g98<1>UD 0x00000000UD { align1 1Q }; mov(8) g78<1>UD 0x00000000UD { align1 1Q }; mov(8) g70<1>UD 0x00000000UD { align1 1Q }; mov(8) g57<1>UD 0x00000000UD { align1 1Q }; mov(8) g80<1>UD 0x3f800000UD { align1 1Q }; mov(8) g82<1>UD 0x00000000UD { align1 1Q }; shl(8) g117<1>D g71<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g119<1>D g118<8,8,1>D 0x00000005UD { align1 1Q }; add3(8) g114<1>D g13.7<0,1,0>D g117<8,8,1>D 128W { align1 1Q A@2 }; add(8) g115<1>D g13.6<0,1,0>D g119<1,1,0>D { align1 1Q I@2 compacted }; END B1 ->B2 START B3 <-B2 <-B7 (820 cycles) LABEL7: lzd(8) g120<1>UD g43<8,8,1>UD { align1 1Q F@5 }; END B2 ->B3 ->B8 sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; 
add(8) g121<1>D -g120<1,1,0>D 31D { align1 1Q compacted }; add(8) g122<1>D -g121<1,1,0>D 31D { align1 1Q A@1 compacted }; cmp.z.f0.0(8) null<1>D g121<8,8,1>D -1D { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; (-f0.0) sel(8) g123<1>UD g122<8,8,1>UD 0xffffffffUD { align1 1Q }; add(8) g124<1>D -g123<1,1,0>D 31D { align1 1Q I@1 compacted }; cmp.z.f0.0(8) null<1>D g123<8,8,1>D -1D { align1 1Q }; (-f0.0) sel(8) g125<1>UD g124<8,8,1>UD 0xffffffffUD { align1 1Q A@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; shl(8) g126<1>D g8.1<0,1,0>D g125<8,8,1>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; and(8) g127<1>UD g126<1,1,0>UD g107<1,1,0>UD { align1 1Q A@1 compacted }; xor(8) g43<1>UD g126<1,1,0>UD g43<1,1,0>UD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; cmp.nz.f0.0(8) g20<1>D g127<8,8,1>D 0D { align1 1Q A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; cmp.g.f0.0(8) g21<1>F g80<8,8,1>F 0x0F /* 0F */ { align1 1Q F@1 }; and.nz.f0.0(8) null<1>UD g21<8,8,1>UD g20<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL4 UIP: LABEL4 { align1 1Q }; END B3 ->B4 ->B7 START B4 <-B3 (5705 cycles) and.nz.f0.0(8) null<1>UD g126<8,8,1>UD g65<8,8,1>UD { align1 1Q }; bfi1(8) g24<1>UD g125<8,8,1>D 0D { align1 1Q $4.src }; mov(1) g96<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g96<1>UD g96<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; (+f0.0) sel(8) g23<1>UD g65<8,8,1>UD g67<8,8,1>UD { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; (+f0.0) sel(8) g22<1>UD g64<8,8,1>UD g66<8,8,1>UD { align1 1Q F@2 }; and(8) g25<1>UD g23<1,1,0>UD g24<1,1,0>UD { align1 1Q I@2 compacted }; cbit(8) g26<1>UD g25<8,8,1>UD { align1 1Q I@1 }; add(8) g28<1>D g26<1,1,0>D g22<1,1,0>D { align1 1Q A@1 compacted }; shl(8) g20<1>D g28<8,8,1>D 0x00000002UD { align1 1Q A@1 }; and(1) g96<1>UD mask0<0,1,0>UD g96<0,1,0>UD { align1 WE_all 1N I@7 }; mov(8) g24<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; fbl(1) g124<1>UD g96<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g24.2<1>UD 0x0000e000UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g124<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g30<1>UD g[a0 160]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g124<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g126<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g124<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g124<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g30<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g29UD g20UD nullUD 0x22101502 a0.1<0>UD ugm MsgDesc: ( load_cmask, a32, d32, x, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 1Q @1 $14 }; or(1) g24.3<1>UD g126<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.dst }; shr(8) g36<1>UD g29<1,1,0>UD 0x00000014UD { align1 1Q F@7 compacted }; mov(8) g37<1>UD g29.3<32,8,4>UB { align1 1Q F@7 }; shr(8) g31<1>UD g29<1,1,0>UD 0x0000000aUD { align1 1Q F@7 compacted }; and(8) g32<1>UD g29<1,1,0>UD 0x000003ffUD { align1 
1Q compacted }; and(8) g38<1>UD g36<1,1,0>UD 0x0000000fUD { align1 1Q I@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; and(8) g39<1>UD g37<1,1,0>UD 0x0000000fUD { align1 1Q I@4 compacted }; and(8) g33<1>UD g31<1,1,0>UD 0x000003ffUD { align1 1Q I@4 compacted }; mov(8) g34<1>F g32<1,1,0>UD { align1 1Q I@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; shr(8) g73<1>UD g10<1,1,0>UD g38<1,1,0>UD { align1 1Q compacted }; shr(8) g74<1>UD g106<1,1,0>UD g39<1,1,0>UD { align1 1Q A@3 compacted }; mov(8) g35<1>F g33<1,1,0>UD { align1 1Q I@3 compacted }; mov(8) g71<1>F g73<1,1,0>UD { align1 1Q I@2 compacted }; mov(8) g117<1>F g74<1,1,0>UD { align1 1Q I@1 compacted }; mul(8) g118<1>F g71<1,1,0>F g108<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g119<1>F g117<1,1,0>F g109<1,1,0>F { align1 1Q F@2 compacted }; frc(8) g120<1>F g118<1,1,0>F { align1 1Q F@2 compacted }; frc(8) g121<1>F g119<1,1,0>F { align1 1Q F@2 compacted }; mad(8) g122<1>F g110<8,8,1>F g120<8,8,1>F g55<1,1,1>F { align1 1Q F@2 }; mad(8) g123<1>F g111<8,8,1>F g121<8,8,1>F g5<1,1,1>F { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mad(8) g20<1>F g122<8,8,1>F g18.6<0,1,0>F g34<1,1,1>F { align1 1Q F@2 }; mad(8) g21<1>F g123<8,8,1>F g18.7<0,1,0>F g35<1,1,1>F { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g124<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g121UD g24UD g20UD 0x021b80fc a0.1<0>UD sampler MsgDesc: sample_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $4 }; cmp.g.f0.0(8) null<1>F g121<8,8,1>F 0x0F /* 0F */ { align1 1Q $4.dst }; (+f0.0) if(8) JIP: LABEL5 UIP: LABEL5 { align1 1Q }; END B4 ->B5 ->B6 START B5 <-B4 (14960 cycles) add(8) g127<1>D g125<1,1,0>D g104<1,1,0>D { align1 1Q compacted }; mov(1) g97<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g97<1>UD g97<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(8) g20<1>D g127<8,8,1>D 0x00000007UD { align1 1Q I@2 }; and(1) g97<1>UD mask0<0,1,0>UD g97<0,1,0>UD { align1 WE_all 1N I@2 }; fbl(1) g26<1>UD g97<0,1,0>UD { align1 WE_all 1N I@1 }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g117<1>UD g[a0 128]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g21<1>UD g[a0 128]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g120<1>UD f0<0,1,0>UD { align1 WE_all 1N F@5 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g117<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g73UD g21UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $15 }; mov(1) f0<1>UD g120<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g22<1>D g20<1,1,0>D 64D { align1 1Q compacted }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g23<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g119<1>UD f0<0,1,0>UD { align1 WE_all 1N F@6 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { 
align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g117<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g71UD g23UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $0 }; mov(1) f0<1>UD g119<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g24<1>D g20<1,1,0>D 96D { align1 1Q $4.src compacted }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g29<1>UD g[a0 256]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g118<1>UD f0<0,1,0>UD { align1 WE_all 1N F@7 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g117<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g117UD g29UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $1 }; mov(1) f0<1>UD g118<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g118<1>D g9.2<0,1,0>D -130D { align1 1Q compacted }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g30<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g32<1>UD 0x000004f0UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; mul(8) g38<1>F g71.1<0,1,0>F g16.4<0,1,0>F { align1 1Q }; mul(8) g120<1>F g90<1,1,0>F g17<0,1,0>F { align1 1Q compacted }; mul(8) g123<1>F g92<1,1,0>F g17<0,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.dst }; mov(8) g28<1>UD g117.3<0,1,0>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; mad(8) g23<1>F g71.5<0,1,0>F g94<8,8,1>F g71.1<0,1,0>F { align1 1Q }; add3(8) g125<1>D g118<8,8,1>D g117.1<0,1,0>D 130W { align1 1Q I@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g30<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g31UD g32UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; mov(8) g30<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mul(8) g39<1>F g38<1,1,0>F g91<1,1,0>F { align1 1Q F@4 compacted }; mul(8) g119<1>F g38<1,1,0>F g93<1,1,0>F { align1 1Q compacted }; mul(8) g122<1>F g120<1,1,0>F g38<1,1,0>F { align1 1Q F@5 compacted }; mul(8) g124<1>F g123<1,1,0>F g38<1,1,0>F { align1 1Q F@5 compacted }; mov(8) g120<1>UD g117.6<0,1,0>UW { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; sel.l(8) g126<1>UD g125<8,8,1>UD 0x000f423fUD { align1 1Q }; shl(8) g127<1>D g126<8,8,1>D 0x00000006UD { align1 1Q I@1 }; add3(8) g20<1>D g13.7<0,1,0>D g127<8,8,1>D 128W { align1 1Q I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g29<1>UD g[a0 128]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mad(8) g20<1>F g71.4<0,1,0>F g95<8,8,1>F g71.1<0,1,0>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; mul.sat(8) g33<1>F 
g31.4<0,1,0>F g31.7<0,1,0>F { align1 1Q }; mad(8) g34<1>F -g8.2<0,1,0>F g31.2<0,1,0>F g31.5<0,1,0>F { align1 1Q }; mad(8) g35<1>F -g8.2<0,1,0>F g31.3<0,1,0>F g31.5<0,1,0>F { align1 1Q }; mov(8) g31<1>UD g117.4<0,1,0>UW { align1 1Q F@1 }; mad(8) g36<1>F g8.2<0,1,0>F g34<8,8,1>F g33<1,1,1>F { align1 1Q F@2 }; add3(8) g34<1>D g118<8,8,1>D g31<8,8,1>D 130W { align1 1Q A@1 }; mad(8) g37<1>F g8.2<0,1,0>F g35<8,8,1>F g33<1,1,1>F { align1 1Q F@2 }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N F@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g35<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mul(8) g21<1>F g122<1,1,0>F g36<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g24<1>F g39<1,1,0>F g36<1,1,0>F { align1 1Q compacted }; add3(8) g122<1>D g118<8,8,1>D g120<8,8,1>D 130W { align1 1Q A@2 }; or(1) g30.3<1>UD g35<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sel.l(8) g36<1>UD g34<8,8,1>UD 0x000f423fUD { align1 1Q A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mul(8) g22<1>F g124<1,1,0>F g37<1,1,0>F { align1 1Q compacted }; mul(8) g25<1>F g119<1,1,0>F g37<1,1,0>F { align1 1Q compacted }; mov(8) g37<1>UD g0<8,8,1>UD { align1 WE_all 1Q F@1 }; sel.l(8) g123<1>UD g122<8,8,1>UD 0x000f423fUD { align1 1Q I@4 }; shl(8) g120<1>D g36<8,8,1>D 0x00000006UD { align1 1Q I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g29<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g30UD g20UD 0x024a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $3 }; mov(8) g29<1>UD g28.1<16,8,2>UW { align1 1Q }; mov(8) g24<1>UD g117.2<0,1,0>UD { align1 1Q $3.src }; mov(1) g37.2<1>UD 0x0000e000UD { align1 WE_all 1N I@5 }; mad(8) g28<1>F g71.6<0,1,0>F g95<8,8,1>F g71.0<0,1,0>F { align1 1Q I@3 }; shl(8) g124<1>D g123<8,8,1>D 0x00000006UD { align1 1Q A@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add3(8) g30<1>D g118<8,8,1>D g29<8,8,1>D 130W { align1 1Q I@4 }; mul(8) g29<1>F g71<0,1,0>F g90<1,1,0>F { align1 1Q I@1 compacted }; mov(8) g39<1>UD g24.1<16,8,2>UW { align1 1Q A@4 }; or(1) g37.3<1>UD g35<0,1,0>D 0x00000001UD { align1 WE_all 1N I@4 }; add3(8) g125<1>D g13.7<0,1,0>D g124<8,8,1>D 128W { align1 1Q I@4 }; add3(8) g124<1>D g13.7<0,1,0>D g120<8,8,1>D 128W { align1 1Q I@7 }; sel.l(8) g31<1>UD g30<8,8,1>UD 0x000f423fUD { align1 1Q I@5 }; mul(8) g30<1>F g71<0,1,0>F g92<1,1,0>F { align1 1Q I@1 compacted }; add3(8) g118<1>D g118<8,8,1>D g39<8,8,1>D 130W { align1 1Q I@5 }; mov(8) g39<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@5 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g126<1>UD g[a0 416]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@5 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g125<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g39.2<1>UD 0x0000e000UD { align1 WE_all 1N I@3 }; or(1) g39.3<1>UD g35<0,1,0>D 0x00000001UD { align1 WE_all 1N I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; add(8) g32<1>F -g23<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q $3.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.dst }; add(8) g22<1>F g73.2<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q $3.dst compacted }; add(8) g33<1>F g121<1,1,0>F -g32<1,1,0>F { align1 1Q F@2 compacted }; sel.l(8) g121<1>UD g118<8,8,1>UD 0x000f423fUD { align1 1Q A@1 
}; mov(8) g118<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mad.sat(8) g119<1>F g32<8,8,1>F g71.2<0,1,0>F g33<1,1,1>F { align1 1Q F@1 }; shl(8) g32<1>D g31<8,8,1>D 0x00000006UD { align1 1Q A@1 }; mul(8) g33<1>F g71<0,1,0>F g93<1,1,0>F { align1 1Q compacted }; shl(8) g122<1>D g121<8,8,1>D 0x00000006UD { align1 1Q I@3 }; mad(8) g31<1>F g71.7<0,1,0>F g94<8,8,1>F g71.0<0,1,0>F { align1 1Q I@2 }; mov(1) g118.2<1>UD 0x00008000UD { align1 WE_all 1N I@3 }; mov(8) g121<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; add3(8) g38<1>D g13.7<0,1,0>D g32<8,8,1>D 128W { align1 1Q I@4 }; mul(8) g32<1>F g71<0,1,0>F g91<1,1,0>F { align1 1Q I@1 compacted }; add(8) g34<1>F g119<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@4 compacted }; or(1) g118.3<1>UD g35<0,1,0>D 0x00000001UD { align1 WE_all 1N I@3 }; mov(1) g121.2<1>UD 0x0000c000UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g126<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g123UD g39UD g28UD 0x021a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $4 }; add3(8) g126<1>D g13.7<0,1,0>D g122<8,8,1>D 128W { align1 1Q I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g39<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; mad(8) g36<1>F g8.2<0,1,0>F g8.0<0,1,0>F -(abs)g34<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mul(8) g38<1>F g32<1,1,0>F g16<0,1,0>F { align1 1Q $4.src compacted }; mov(8) g34<1>F g28<1,1,0>F { align1 1Q $4.src compacted }; or(1) g121.3<1>UD g35<0,1,0>D 0x00000001UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mul(8) g35<1>F g29<1,1,0>F g16<0,1,0>F { align1 1Q $4.src compacted }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@3 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g127<1>UD g[a0 448]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mul(8) g126<1>F g119<1,1,0>F g73.3<0,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g125<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g118UD g28UD 0x023a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g39<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g122UD g37UD g28UD 0x021a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $6 }; mul(8) g39<1>F g33<1,1,0>F g16<0,1,0>F { align1 1Q $6.src compacted }; mov(8) g37<1>F g31<1,1,0>F { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; math.sat sqrt(8) g124<1>F g36<8,8,1>F null<8,8,1>F { align1 1Q $7 }; mad(8) g31<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g21<1,1,1>F { align1 1Q $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; mul(8) g36<1>F g30<1,1,0>F g16<0,1,0>F { align1 1Q $6.src compacted }; add(8) g21<1>F g73.1<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; mad(8) g30<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g20<1,1,1>F { align1 1Q $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sel.l(8) 
g28<1>F g80<1,1,0>F g126<1,1,0>F { align1 1Q F@7 compacted }; add(8) g20<1>F g73<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; mad.sat(8) g125<1>F -g99<8,8,1>F g73.3<0,1,0>F g124<1,1,1>F { align1 1Q $7.dst }; add(8) g99<1>F g126<1,1,0>F g99<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g127<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g34UD g121UD g34UD 0x022a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 2 { align1 1Q @1 $5 }; add(8) g80<1>F g80<1,1,0>F -g28<1,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mad.sat(8) g29<1>F g73.5<0,1,0>F g73.4<0,1,0>F g123<1,1,1>F { align1 1Q $4.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.dst }; mad.sat(8) g120<1>F g74.5<0,1,0>F g74.4<0,1,0>F g123<1,1,1>F { align1 1Q }; mad.sat(8) g127<1>F g74.7<0,1,0>F g74.6<0,1,0>F g120<1,1,1>F { align1 1Q F@1 }; mul(8) g36<1>F g24<1,1,0>F g28<1,1,0>F { align1 1Q $5.dst compacted }; mul(8) g37<1>F g25<1,1,0>F g28<1,1,0>F { align1 1Q $5.dst compacted }; mul(8) g38<1>F g26<1,1,0>F g28<1,1,0>F { align1 1Q $5.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad.sat(8) g118<1>F g74.1<0,1,0>F g74.0<0,1,0>F g122<1,1,1>F { align1 1Q $6.dst compacted }; mad(8) g33<1>F g8.2<0,1,0>F g22<8,8,1>F g127<1,1,1>F { align1 1Q F@5 }; mad(8) g32<1>F g8.2<0,1,0>F g21<8,8,1>F g127<1,1,1>F { align1 1Q $6.src }; mad(8) g23<1>F g8.2<0,1,0>F g20<8,8,1>F g127<1,1,1>F { align1 1Q }; mul(8) g22<1>F g28<1,1,0>F g117<0,1,0>F { align1 1Q compacted }; mul(8) g21<1>F (abs)g71.3<0,1,0>F g125<1,1,0>F { align1 1Q compacted }; mad(8) g20<1>F -g83<8,8,1>F g71.3<0,1,0>F g31<1,1,1>F { align1 1Q }; mad(8) g127<1>F -g84<8,8,1>F g71.3<0,1,0>F g30<1,1,1>F { align1 1Q }; mad.sat(8) g119<1>F g74.3<0,1,0>F g74.2<0,1,0>F g118<1,1,1>F { align1 1Q F@7 }; mad.sat(8) g30<1>F g73.7<0,1,0>F g73.6<0,1,0>F g29<1,1,1>F { align1 1Q }; mad(8) g57<1>F g57<8,8,1>F g33<8,8,1>F g38<1,1,1>F { align1 1Q F@7 }; mad(8) g70<1>F g70<8,8,1>F g32<8,8,1>F g37<1,1,1>F { align1 1Q F@7 }; sel.ge(8) g85<1>F g85<1,1,0>F g21<1,1,0>F { align1 1Q F@7 compacted }; mad(8) g78<1>F g78<8,8,1>F g23<8,8,1>F g36<1,1,1>F { align1 1Q F@7 }; mad(8) g21<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g35<1,1,1>F { align1 1Q $5.dst }; mad(8) g83<1>F g83<8,8,1>F g20<8,8,1>F g125<1,1,1>F { align1 1Q F@7 }; mad(8) g20<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g34<1,1,1>F { align1 1Q $5.dst }; mad(8) g84<1>F g84<8,8,1>F g127<8,8,1>F g125<1,1,1>F { align1 1Q F@7 }; mad(8) g3<1>F g3<8,8,1>F g28<8,8,1>F g119<1,1,1>F { align1 1Q F@7 }; mad(8) g98<1>F g98<8,8,1>F g28<8,8,1>F g30<1,1,1>F { align1 1Q F@7 }; mad(8) g11<1>F g11<8,8,1>F g21<8,8,1>F g22<1,1,1>F { align1 1Q F@6 }; mad(8) g82<1>F g82<8,8,1>F g20<8,8,1>F g22<1,1,1>F { align1 1Q F@5 }; END B5 ->B6 START B6 <-B5 <-B4 (40 cycles) LABEL5: endif(8) JIP: LABEL4 { align1 1Q }; END B6 ->B7 START B7 <-B6 <-B3 (230 cycles) LABEL4: endif(8) JIP: LABEL6 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g23<1>D g43<1,1,0>D -1D { align1 1Q A@5 compacted }; and.z.f0.0(8) null<1>UD g23<8,8,1>UD g43<8,8,1>UD { align1 1Q I@1 }; LABEL6: (-f0.0) while(8) JIP: LABEL7 { align1 1Q }; END B7 ->B2 ->B8 ->B3 START B8 <-B2 <-B7 (4 cycles) else(8) JIP: LABEL2 UIP: LABEL2 { align1 1Q }; END B8 ->B9 ->B10 START B9 <-B0 <-B8 (13 cycles) LABEL3: mov(8) g80<1>UD 0x3f800000UD { align1 1Q A@1 }; mov(8) g57<1>UD 0x00000000UD { align1 1Q I@7 }; mov(8) g70<1>UD 0x00000000UD { align1 1Q }; mov(8) g78<1>UD 
0x00000000UD { align1 1Q F@7 }; mov(8) g98<1>UD 0x00000000UD { align1 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mov(8) g3<1>UD 0x00000000UD { align1 1Q F@4 }; mov(8) g11<1>UD 0x00000000UD { align1 1Q F@2 }; mov(8) g82<1>UD 0x00000000UD { align1 1Q F@1 }; mov(8) g83<1>UD 0x00000000UD { align1 1Q F@7 }; mov(8) g84<1>UD 0x00000000UD { align1 1Q F@5 }; mov(8) g85<1>UD 0x00000000UD { align1 1Q F@7 }; mov(8) g99<1>UD 0x00000000UD { align1 1Q }; END B9 ->B10 START B10 <-B9 <-B8 (33 cycles) LABEL2: endif(8) JIP: LABEL1 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; cmp.g.f0.0(8) g24<1>F g80<8,8,1>F 0x0F /* 0F */ { align1 1Q A@1 }; cmp.nz.f0.0(8) g25<1>D g43<8,8,1>D 0D { align1 1Q }; and.nz.f0.0(8) null<1>UD g24<8,8,1>UD g25<8,8,1>UD { align1 1Q A@1 }; (+f0.0) if(8) JIP: LABEL8 UIP: LABEL8 { align1 1Q }; END B10 ->B11 ->B12 START B11 <-B10 (798 cycles) lzd(8) g26<1>UD g43<8,8,1>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst }; mov(1) g64<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g64<1>UD g64<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; add(8) g28<1>D -g26<1,1,0>D 31D { align1 1Q A@2 compacted }; add(8) g29<1>D -g28<1,1,0>D 31D { align1 1Q I@1 compacted }; cmp.z.f0.0(8) null<1>D g28<8,8,1>D -1D { align1 1Q }; (-f0.0) sel(8) g30<1>UD g29<8,8,1>UD 0xffffffffUD { align1 1Q A@2 }; add(8) g31<1>D -g30<1,1,0>D 31D { align1 1Q A@1 compacted }; cmp.z.f0.0(8) null<1>D g30<8,8,1>D -1D { align1 1Q }; (-f0.0) sel(8) g32<1>UD g31<8,8,1>UD 0xffffffffUD { align1 1Q I@2 }; add(8) g33<1>D g32<1,1,0>D g104<1,1,0>D { align1 1Q I@1 compacted }; shl(8) g35<1>D g33<8,8,1>D 0x00000007UD { align1 1Q A@1 }; and(1) g64<1>UD mask0<0,1,0>UD g64<0,1,0>UD { align1 WE_all 1N I@7 }; fbl(1) g34<1>UD g64<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g118<1>UD g[a0 128]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g36<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g74<1>UD f0<0,1,0>UD { align1 WE_all 1N F@5 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g118<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g64UD g36UD nullUD 0x2220d500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V16, transpose, L1STATE_L3MOCS dst_len = 2, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $8 }; mov(1) f0<1>UD g74<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g37<1>D g35<1,1,0>D 64D { align1 1Q F@7 compacted }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g67<1>UD g[a0 160]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g73<1>UD f0<0,1,0>UD { align1 WE_all 1N $8.src compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g118<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g66UD g67UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $9 }; mov(1) f0<1>UD g73<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) 
g73<1>D g35<1,1,0>D 96D { align1 1Q compacted }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g97<1>UD g[a0 288]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g67<1>UD f0<0,1,0>UD { align1 WE_all 1N $9.src compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g118<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g96UD g97UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $10 }; mov(1) f0<1>UD g67<0,1,0>UD { align1 WE_all 1N I@2 }; add(8) g115<1>D g9.2<0,1,0>D -130D { align1 1Q compacted }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g43<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g101<1>UD 0x000004f0UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; mul(8) g108<1>F g66.1<0,1,0>F g16.4<0,1,0>F { align1 1Q F@5 }; mul(8) g111<1>F g90<1,1,0>F g17<0,1,0>F { align1 1Q F@7 compacted }; mul(8) g5<1>F g92<1,1,0>F g17<0,1,0>F { align1 1Q F@4 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; mov(8) g74<1>UD g96.3<0,1,0>UD { align1 1Q }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N $8.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g36<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g67<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; mov(8) g97<1>UD g96.6<0,1,0>UW { align1 1Q }; mov(8) g26<1>UD g96.2<0,1,0>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mad(8) g20<1>F g66.4<0,1,0>F g95<8,8,1>F g66.1<0,1,0>F { align1 1Q F@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mad(8) g23<1>F g66.5<0,1,0>F g94<8,8,1>F g66.1<0,1,0>F { align1 1Q F@7 }; mad(8) g28<1>F g66.6<0,1,0>F g95<8,8,1>F g66.0<0,1,0>F { align1 1Q }; mad(8) g31<1>F g66.7<0,1,0>F g94<8,8,1>F g66.0<0,1,0>F { align1 1Q }; mul(8) g29<1>F g66<0,1,0>F g90<1,1,0>F { align1 1Q compacted }; mul(8) g30<1>F g66<0,1,0>F g92<1,1,0>F { align1 1Q compacted }; add3(8) g71<1>D g115<8,8,1>D g96.1<0,1,0>D 130W { align1 1Q I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; or(1) a0.1<1>UD g43<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g100UD g101UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $11 }; mul(8) g109<1>F g108<1,1,0>F g91<1,1,0>F { align1 1Q F@7 compacted }; mul(8) g110<1>F g108<1,1,0>F g93<1,1,0>F { align1 1Q compacted }; mul(8) g55<1>F g111<1,1,0>F g108<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mul(8) g114<1>F g5<1,1,0>F g108<1,1,0>F { align1 1Q compacted }; mov(8) g32<1>UD g74.1<16,8,2>UW { align1 1Q I@6 }; or(1) g67.3<1>UD g36<0,1,0>D 0x00000001UD { align1 WE_all 1N I@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; mov(8) g123<1>UD g26.1<16,8,2>UW { align1 1Q }; add3(8) g43<1>D g115<8,8,1>D g97<8,8,1>D 130W { align1 1Q I@6 }; mov(8) g74<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sel.l(8) g117<1>UD g71<8,8,1>UD 
0x000f423fUD { align1 1Q I@6 }; mov(8) g97<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; add(8) g71<1>F g64.2<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q I@2 compacted }; add3(8) g33<1>D g115<8,8,1>D g32<8,8,1>D 130W { align1 1Q I@7 }; mul(8) g32<1>F g66<0,1,0>F g91<1,1,0>F { align1 1Q I@1 compacted }; add3(8) g125<1>D g115<8,8,1>D g123<8,8,1>D 130W { align1 1Q I@6 }; mov(1) g74.2<1>UD 0x0000e000UD { align1 WE_all 1N I@5 }; shl(8) g118<1>D g117<8,8,1>D 0x00000006UD { align1 1Q I@5 }; mov(1) g97.2<1>UD 0x0000e000UD { align1 WE_all 1N I@5 }; sel.l(8) g35<1>UD g33<8,8,1>UD 0x000f423fUD { align1 1Q I@5 }; mul(8) g33<1>F g66<0,1,0>F g93<1,1,0>F { align1 1Q I@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; sel.l(8) g126<1>UD g125<8,8,1>UD 0x000f423fUD { align1 1Q }; or(1) g74.3<1>UD g36<0,1,0>D 0x00000001UD { align1 WE_all 1N I@5 }; add3(8) g119<1>D g13.7<0,1,0>D g118<8,8,1>D 128W { align1 1Q I@5 }; or(1) g97.3<1>UD g36<0,1,0>D 0x00000001UD { align1 WE_all 1N I@5 }; shl(8) g37<1>D g35<8,8,1>D 0x00000006UD { align1 1Q I@5 }; mul(8) g35<1>F g29<1,1,0>F g16<0,1,0>F { align1 1Q A@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; shl(8) g127<1>D g126<8,8,1>D 0x00000006UD { align1 1Q I@5 }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g120<1>UD g[a0 224]<0,1,0>UD { align1 WE_all 1N A@1 }; add3(8) g38<1>D g13.7<0,1,0>D g37<8,8,1>D 128W { align1 1Q I@3 }; mov(8) g37<1>F g31<1,1,0>F { align1 1Q I@1 compacted }; add3(8) g17<1>D g13.7<0,1,0>D g127<8,8,1>D 128W { align1 1Q I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000400UD { align1 WE_all 1N A@1 }; mov(1) g39<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mul(8) g38<1>F g32<1,1,0>F g16<0,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst }; mul.sat(8) g51<1>F g100.4<0,1,0>F g100.7<0,1,0>F { align1 1Q }; mad(8) g104<1>F -g8.2<0,1,0>F g100.2<0,1,0>F g100.5<0,1,0>F { align1 1Q }; mad(8) g10<1>F -g8.2<0,1,0>F g100.3<0,1,0>F g100.5<0,1,0>F { align1 1Q }; sel.l(8) g100<1>UD g43<8,8,1>UD 0x000f423fUD { align1 1Q F@1 }; mov(8) g43<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; shl(8) g101<1>D g100<8,8,1>D 0x00000006UD { align1 1Q I@2 }; mad(8) g106<1>F g8.2<0,1,0>F g104<8,8,1>F g51<1,1,1>F { align1 1Q F@2 }; mov(8) g100<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mov(1) g43.2<1>UD 0x00008000UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; mad(8) g107<1>F g8.2<0,1,0>F g10<8,8,1>F g51<1,1,1>F { align1 1Q }; add3(8) g51<1>D g13.7<0,1,0>D g101<8,8,1>D 128W { align1 1Q A@1 }; mov(1) g100.2<1>UD 0x0000c000UD { align1 WE_all 1N I@3 }; or(1) g43.3<1>UD g36<0,1,0>D 0x00000001UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mul(8) g21<1>F g55<1,1,0>F g106<1,1,0>F { align1 1Q F@2 compacted }; mul(8) g24<1>F g109<1,1,0>F g106<1,1,0>F { align1 1Q compacted }; mov(8) g106<1>UD g96.4<0,1,0>UW { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mul(8) g22<1>F g114<1,1,0>F g107<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g25<1>F g110<1,1,0>F g107<1,1,0>F { align1 1Q compacted }; add(8) g114<1>F g64<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; shl(1) a0<1>UD 
g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g44<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g100.3<1>UD g36<0,1,0>D 0x00000001UD { align1 WE_all 1N I@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mul(8) g36<1>F g30<1,1,0>F g16<0,1,0>F { align1 1Q compacted }; add3(8) g107<1>D g115<8,8,1>D g106<8,8,1>D 130W { align1 1Q A@3 }; add(8) g115<1>F g64.1<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q I@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; or(1) a0.1<1>UD g120<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g67UD g20UD 0x024a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; or(1) a0.1<1>UD g39<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g67UD g74UD g28UD 0x021a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $12 }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N $4.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g24<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; mul(8) g39<1>F g33<1,1,0>F g16<0,1,0>F { align1 1Q $12.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@4 }; or(1) a0.1<1>UD g44<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g73UD g97UD g28UD 0x021a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $13 }; sel.l(8) g108<1>UD g107<8,8,1>UD 0x000f423fUD { align1 1Q I@2 }; shl(8) g109<1>D g108<8,8,1>D 0x00000006UD { align1 1Q A@1 }; add3(8) g110<1>D g13.7<0,1,0>D g109<8,8,1>D 128W { align1 1Q A@1 }; shl(1) a0<1>UD g34<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g111<1>UD g[a0 448]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mov(8) g34<1>F g28<1,1,0>F { align1 1Q $13.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g111<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g15UD g43UD g28UD 0x023a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g24<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g24UD g100UD g34UD 0x022a40fc a0.1<0>UD sampler MsgDesc: sample_d SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 2 { align1 1Q @1 $5 }; add(8) g123<1>F -g23<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q $4.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g121<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g20<1,1,1>F { align1 1Q $4.dst }; mad(8) g122<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g21<1,1,1>F { align1 1Q $4.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; mad.sat(8) g67<1>F g65.1<0,1,0>F g65.0<0,1,0>F g67<1,1,1>F { align1 1Q $12.dst compacted }; mad.sat(8) g55<1>F g65.5<0,1,0>F g65.4<0,1,0>F g73<1,1,1>F { align1 1Q $13.dst }; mad.sat(8) g104<1>F g64.5<0,1,0>F g64.4<0,1,0>F g73<1,1,1>F { align1 1Q }; mad.sat(8) g124<1>F g123<8,8,1>F g66.2<0,1,0>F g23<1,1,1>F { align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mad(8) g28<1>F -g84<8,8,1>F g66.3<0,1,0>F g121<1,1,1>F 
{ align1 1Q F@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mad(8) g29<1>F -g83<8,8,1>F g66.3<0,1,0>F g122<1,1,1>F { align1 1Q F@6 }; mad.sat(8) g73<1>F g65.3<0,1,0>F g65.2<0,1,0>F g67<1,1,1>F { align1 1Q F@6 }; mad.sat(8) g5<1>F g65.7<0,1,0>F g65.6<0,1,0>F g55<1,1,1>F { align1 1Q F@6 }; mad.sat(8) g10<1>F g64.7<0,1,0>F g64.6<0,1,0>F g104<1,1,1>F { align1 1Q F@6 }; add(8) g125<1>F g124<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@6 compacted }; mul(8) g26<1>F g124<1,1,0>F g64.3<0,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mad(8) g119<1>F g8.2<0,1,0>F g71<8,8,1>F g5<1,1,1>F { align1 1Q }; mad(8) g118<1>F g8.2<0,1,0>F g115<8,8,1>F g5<1,1,1>F { align1 1Q }; mad(8) g117<1>F g8.2<0,1,0>F g114<8,8,1>F g5<1,1,1>F { align1 1Q }; mad(8) g126<1>F g8.2<0,1,0>F g8.0<0,1,0>F -(abs)g125<1,1,1>F { align1 1Q F@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sel.l(8) g31<1>F g80<1,1,0>F g26<1,1,0>F { align1 1Q F@5 compacted }; mad(8) g98<1>F g98<8,8,1>F g31<8,8,1>F g10<1,1,1>F { align1 1Q F@1 }; mad(8) g3<1>F g3<8,8,1>F g31<8,8,1>F g73<1,1,1>F { align1 1Q }; math.sat sqrt(8) g127<1>F g126<8,8,1>F null<8,8,1>F { align1 1Q @4 $15 }; mul(8) g120<1>F g15<1,1,0>F g31<1,1,0>F { align1 1Q $14.dst compacted }; mul(8) g121<1>F g16<1,1,0>F g31<1,1,0>F { align1 1Q $14.dst compacted }; mul(8) g122<1>F g17<1,1,0>F g31<1,1,0>F { align1 1Q $14.dst compacted }; mad(8) g26<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g24<1,1,1>F { align1 1Q $5.dst }; mad(8) g78<1>F g78<8,8,1>F g117<8,8,1>F g120<1,1,1>F { align1 1Q F@4 }; mad.sat(8) g18<1>F -g99<8,8,1>F g64.3<0,1,0>F g127<1,1,1>F { align1 1Q $15.dst }; mad(8) g70<1>F g70<8,8,1>F g118<8,8,1>F g121<1,1,1>F { align1 1Q F@5 }; mad(8) g57<1>F g57<8,8,1>F g119<8,8,1>F g122<1,1,1>F { align1 1Q F@5 }; mad(8) g83<1>F g83<8,8,1>F g29<8,8,1>F g18<1,1,1>F { align1 1Q F@3 }; mad(8) g84<1>F g84<8,8,1>F g28<8,8,1>F g18<1,1,1>F { align1 1Q }; mul(8) g30<1>F (abs)g66.3<0,1,0>F g18<1,1,0>F { align1 1Q $14.src compacted }; mul(8) g29<1>F g31<1,1,0>F g96<0,1,0>F { align1 1Q compacted }; mad(8) g28<1>F -g8.2<0,1,0>F g8.0<0,1,0>F g25<1,1,1>F { align1 1Q $5.dst }; sel.ge(8) g85<1>F g85<1,1,0>F g30<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g82<1>F g82<8,8,1>F g26<8,8,1>F g29<1,1,1>F { align1 1Q F@3 }; mad(8) g11<1>F g11<8,8,1>F g28<8,8,1>F g29<1,1,1>F { align1 1Q F@3 }; END B11 ->B12 START B12 <-B11 <-B10 (670 cycles) LABEL8: endif(8) JIP: LABEL1 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mad(8) g73<1>F g8.2<0,1,0>F g87<8,8,1>F g86<1,1,1>F { align1 1Q }; mul(8) g30<1>F -g11<1,1,0>F g11<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mul(8) g34<1>F -g83<1,1,0>F g83<1,1,0>F { align1 1Q A@7 compacted }; add(8) g39<1>F g83<1,1,0>F -g11<1,1,0>F { align1 1Q $5.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add(8) g38<1>F g84<1,1,0>F -g82<1,1,0>F { align1 1Q A@6 compacted }; mov(1) g8.4<1>D 933741996D { align1 WE_all 1N F@5 }; mad(8) g31<1>F g30<8,8,1>F g82<8,8,1>F -g82<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g35<1>F g34<8,8,1>F g84<8,8,1>F -g84<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; mad(8) g66<1>F g11<8,8,1>F g85<8,8,1>F g39<1,1,1>F { align1 1Q A@4 }; mad(8) g65<1>F g82<8,8,1>F g85<8,8,1>F g38<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mad(8) g95<1>F g8.4<0,1,0>F 
g27.0<0,1,0>F g2<1,1,1>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; add.sat(8) g32<1>F g31<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add.sat(8) g36<1>F g35<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@5 compacted }; mul(8) g83<1>F g66<1,1,0>F g73<1,1,0>F { align1 1Q F@5 compacted }; mul(8) g82<1>F g65<1,1,0>F g73<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; add.sat(8) g96<1>F g95<1,1,0>F g76<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; math sqrt(8) g33<1>F g32<8,8,1>F null<8,8,1>F { align1 1Q @5 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; math sqrt(8) g37<1>F g36<8,8,1>F null<8,8,1>F { align1 1Q @4 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst }; add(8) g64<1>F g37<1,1,0>F -g33<1,1,0>F { align1 1Q $1.dst compacted }; mad(8) g67<1>F g33<8,8,1>F g85<8,8,1>F g64<1,1,1>F { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mul(8) g74<1>F g73<1,1,0>F g67<1,1,0>F { align1 1Q F@1 compacted }; mad(8) g80<1>F g74<8,8,1>F -g66<8,8,1>F g89<1,1,1>F { align1 1Q F@1 }; mad(8) g11<1>F g80<8,8,1>F -g65<8,8,1>F g88<1,1,1>F { align1 1Q F@1 }; mov(1) g65<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g65<1>UD g65<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; add(8) g86<1>F g11<1,1,0>F -g67<1,1,0>F { align1 1Q F@1 compacted }; mad(8) g85<1>F g83<8,8,1>F g89<8,8,1>F g11<1,1,1>F { align1 1Q F@7 }; mad(8) g84<1>F g82<8,8,1>F g88<8,8,1>F g11<1,1,1>F { align1 1Q F@7 }; mul(8) g87<1>F g86<1,1,0>F g73<1,1,0>F { align1 1Q F@3 compacted }; mul(8) g88<1>F g87<1,1,0>F g87<1,1,0>F { align1 1Q F@1 compacted }; mad(8) g89<1>F g88<8,8,1>F g85<8,8,1>F g85<1,1,1>F { align1 1Q F@1 }; mad(8) g90<1>F g89<8,8,1>F g84<8,8,1>F g84<1,1,1>F { align1 1Q F@1 }; math rsq(8) g91<1>F g90<8,8,1>F null<8,8,1>F { align1 1Q @1 $2 }; mul(8) g93<1>F g85<1,1,0>F g91<1,1,0>F { align1 1Q $2.dst compacted }; mul(8) g92<1>F g84<1,1,0>F g91<1,1,0>F { align1 1Q compacted }; mul(8) g94<1>F g87<1,1,0>F g91<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mul(8) g97<1>F g93<1,1,0>F g79<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mul(8) g43<1>F g93<1,1,0>F g62<1,1,0>F { align1 1Q I@6 compacted }; mul(8) g99<1>F g93<1,1,0>F g63<1,1,0>F { align1 1Q I@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g100<1>F g97<8,8,1>F g77<8,8,1>F g92<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g101<1>F g43<8,8,1>F g42<8,8,1>F g92<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g51<1>F g99<8,8,1>F g113<8,8,1>F g92<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g44<1>F g100<8,8,1>F g41<8,8,1>F g94<1,1,1>F { align1 1Q }; mad(8) g104<1>F g101<8,8,1>F g19<8,8,1>F g94<1,1,1>F { align1 1Q F@3 }; mad(8) g10<1>F g51<8,8,1>F g81<8,8,1>F g94<1,1,1>F { align1 1Q F@3 }; and(1) g65<1>UD mask0<0,1,0>UD g65<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g108<1>UD 0x00000060UD { align1 WE_all 1Q I@7 }; fbl(1) g15<1>UD g65<0,1,0>UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g15<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { 
align1 WE_all 1N A@1 }; mov(1) g106<1>UD g[a0 352]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(1) g62<1>UD f0<0,1,0>UD { align1 WE_all 1N F@7 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g106<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g107UD g108UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $3 }; mov(1) f0<1>UD g62<0,1,0>UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g15<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g109<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g111<1>UD 0x00000080UD { align1 WE_all 1Q }; mov(1) g113<1>UD f0<0,1,0>UD { align1 WE_all 1N F@4 compacted }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g109<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g110UD g111UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $4 }; mov(1) f0<1>UD g113<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g8.3<1>D 1056964608D { align1 WE_all 1N }; shl(1) a0<1>UD g15<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g115<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g117<1>UD 0x000001d0UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sel.l(8) g20<1>UD g53<1,1,0>UD 0x000007ffUD { align1 1Q $4.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; sel.l(8) g16<1>UD g72<8,8,1>UD 0x000f423fUD { align1 1Q }; mov(8) g101<1>UD g0<8,8,1>UD { align1 WE_all 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; mad(8) g5<1>F g105<8,8,1>F g8.3<0,1,0>F g19<1,1,1>F { align1 1Q }; mad(8) g114<1>F g59<8,8,1>F g8.3<0,1,0>F g81<1,1,1>F { align1 1Q }; mad(8) g55<1>F g7<8,8,1>F g8.3<0,1,0>F g41<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; or(1) a0.1<1>UD g115<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g71UD g117UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(8) g21<1>D g20<8,8,1>D 0x00000005UD { align1 1Q I@3 }; shl(8) g17<1>D g16<8,8,1>D 0x00000006UD { align1 1Q I@3 }; mov(1) g101.2<1>UD 0x0000e000UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; add(8) g22<1>D g13.6<0,1,0>D g21<1,1,0>D { align1 1Q $4.dst compacted }; add3(8) g18<1>D g13.7<0,1,0>D g17<8,8,1>D 128W { align1 1Q I@3 }; shl(1) a0<1>UD g15<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g24<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(1) a0<1>UD g15<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g23<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g101.3<1>UD g24<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; mad(8) g124<1>F 
g71.6<0,1,0>F g114<8,8,1>F g71.2<0,1,0>F { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mad(8) g123<1>F g71.5<0,1,0>F g5<8,8,1>F g71.1<0,1,0>F { align1 1Q }; mad(8) g122<1>F g71.4<0,1,0>F g55<8,8,1>F g71.0<0,1,0>F { align1 1Q F@3 }; add(8) g120<1>F -g124<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@3 compacted }; add(8) g119<1>F -g123<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@3 compacted }; add(8) g118<1>F -g122<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g23<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g74UD g101UD g122UD 0x021b80fc a0.1<0>UD sampler MsgDesc: sample_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; sel.l(8) g126<1>F g124<1,1,0>F g120<1,1,0>F { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; sel.l(8) g125<1>F g123<1,1,0>F g119<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; sel.l(8) g121<1>F g122<1,1,0>F g118<1,1,0>F { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; sel.l(8) g127<1>F g125<1,1,0>F g126<1,1,0>F { align1 1Q F@2 compacted }; sel.l(8) g2<1>F g121<1,1,0>F g127<1,1,0>F { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mul.sat(8) g15<1>F g2<1,1,0>F 0x41200000F /* 10F */ { align1 1Q compacted }; mad.sat.g.f0.0(8) g27<1>F g8.2<0,1,0>F g15<8,8,1>F -g74<1,1,1>F { align1 1Q @1 $6.dst }; (+f0.0) if(8) JIP: LABEL9 UIP: LABEL9 { align1 1Q }; END B12 ->B13 ->B50 mov(1) g66<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g66<1>UD g66<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g66<1>UD mask0<0,1,0>UD g66<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g30<1>UD 0x00000350UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sel.l(8) g64<1>UD g52<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g67<1>UD g47<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; add(8) g63<1>F g114<8,8,1>F 0x45fa0000F /* 8000F */ { align1 1Q }; fbl(1) g25<1>UD g66<0,1,0>UD { align1 WE_all 1N I@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; shl(8) g65<1>D g64<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g73<1>D g67<8,8,1>D 0x00000005UD { align1 1Q I@3 }; shl(1) a0<1>UD g25<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@3 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g26<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; add3(8) g66<1>D g13.7<0,1,0>D g65<8,8,1>D 128W { align1 1Q I@3 }; add(8) g74<1>D g13.6<0,1,0>D g73<1,1,0>D { align1 1Q A@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g26<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g29UD g30UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.dst }; mad(8) g28<1>F g29.3<0,1,0>F g5<8,8,1>F g29.1<0,1,0>F { align1 1Q }; mad(8) g21<1>F g29.2<0,1,0>F g55<8,8,1>F g29.0<0,1,0>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; add(8) g32<1>F -g28<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; add(8) g31<1>F g21<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; 
cmp.l.f0.0(8) g33<1>F (abs)g32<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g34<1>F (abs)g31<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and(8) g62<1>UD g34<1,1,0>UD g33<1,1,0>UD { align1 1Q F@1 compacted }; END B13 ->B14 START B15 <-B14 <-B21 (170 cycles) LABEL14: mov.nz.f0.0(8) null<1>D g62<8,8,1>D { align1 1Q I@1 }; END B14 ->B15 ->B22 (+f0.0) if(8) JIP: LABEL11 UIP: LABEL10 { align1 1Q }; END B15 ->B16 ->B17 START B16 <-B15 (80 cycles) mov(8) g125<1>UD 0x00000000UD { align1 1Q $8.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mov(8) g123<1>UD g21<8,8,1>UD { align1 1Q F@3 }; mov(8) g29<1>UD g28<8,8,1>UD { align1 1Q F@3 }; else(8) JIP: LABEL10 UIP: LABEL10 { align1 1Q }; END B16 ->B17 ->B20 LABEL11: mov(1) g67<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g67<1>UD g67<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g67<1>UD mask0<0,1,0>UD g67<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g75<1>UD 0x00000360UD { align1 WE_all 1Q $9.src }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; fbl(1) g72<1>UD g67<0,1,0>UD { align1 WE_all 1N A@2 }; shl(1) a0<1>UD g72<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g52<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g52<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g53UD g75UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $9 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.dst }; mad(8) g29<1>F g53.3<0,1,0>F g5<8,8,1>F g53.1<0,1,0>F { align1 1Q A@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g123<1>F g53.2<0,1,0>F g55<8,8,1>F g53.0<0,1,0>F { align1 1Q A@4 }; add(8) g77<1>F -g29<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; add(8) g76<1>F g123<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g42<1>F (abs)g77<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g79<1>F (abs)g76<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g79<8,8,1>UD g42<8,8,1>UD { align1 1Q F@1 }; (-f0.0) if(8) JIP: LABEL12 UIP: LABEL12 { align1 1Q }; END B17 ->B18 ->B19 START B18 <-B17 (70 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mov(8) g30<1>UD 0xffffffffUD { align1 1Q }; break(8) JIP: LABEL12 UIP: LABEL13 { align1 1Q }; END B18 ->B14 ->B22 ->B19 START B19 <-B18 <-B17 (50 cycles) LABEL12: endif(8) JIP: LABEL10 { align1 1Q }; mov(8) g125<1>UD 0x3f800000UD { align1 1Q $8.src }; END B19 ->B20 START B20 <-B19 <-B16 (4420 cycles) LABEL10: endif(8) JIP: LABEL13 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g124<1>F -g29<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q A@3 compacted }; mov(1) g73<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g73<1>UD g73<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g73<1>UD mask0<0,1,0>UD g73<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g51<1>UD g0<8,8,1>UD { align1 WE_all 1Q $8.src }; fbl(1) g16<1>UD g73<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g51.2<1>UD 0x0000e000UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g16<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) 
g11<1>UD g[a0 320]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g16<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g80<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g51.3<1>UD g11<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g80<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g72UD g51UD g123UD 0x021b80fc a0.1<0>UD sampler MsgDesc: sample_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $8 }; mul(8) g82<1>F g72<8,8,1>F 0x467a0000F /* 16000F */ { align1 1Q $8.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; cmp.l.f0.0(8) g30<1>F g63<1,1,0>F g82<1,1,0>F { align1 1Q compacted }; break(8) JIP: LABEL13 UIP: LABEL13 { align1 1Q }; END B20 ->B14 ->B22 ->B21 START B21 <-B20 (40 cycles) LABEL13: while(8) JIP: LABEL14 { align1 1Q }; END B21 ->B15 START B22 <-B14 <-B18 <-B20 (14 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mov.nz.f0.0(8) null<1>D g30<8,8,1>D { align1 1Q }; (+f0.0) if(8) JIP: LABEL16 UIP: LABEL15 { align1 1Q }; END B22 ->B23 ->B24 START B23 <-B22 (5 cycles) mov(8) g31<1>UD 0x3f800000UD { align1 1Q F@1 }; else(8) JIP: LABEL15 UIP: LABEL15 { align1 1Q }; END B23 ->B24 ->B49 START B24 <-B22 <-B23 (12 cycles) LABEL16: mov.nz.f0.0(8) null<1>D g62<8,8,1>D { align1 1Q I@3 }; (+f0.0) if(8) JIP: LABEL18 UIP: LABEL17 { align1 1Q }; END B24 ->B25 ->B26 START B25 <-B24 (8 cycles) mov(8) g33<1>UD g28<8,8,1>UD { align1 1Q I@5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; mov(8) g16<1>UD g21<8,8,1>UD { align1 1Q }; mov(8) g32<1>UD 0x00000000UD { align1 1Q F@2 }; else(8) JIP: LABEL17 UIP: LABEL17 { align1 1Q }; END B25 ->B26 ->B27 LABEL18: mov(1) g74<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g74<1>UD g74<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g74<1>UD mask0<0,1,0>UD g74<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g86<1>UD 0x00000360UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; fbl(1) g83<1>UD g74<0,1,0>UD { align1 WE_all 1N I@3 }; shl(1) a0<1>UD g83<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g84<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g84<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g85UD g86UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $10 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst }; mad(8) g33<1>F g85.3<0,1,0>F g5<8,8,1>F g85.1<0,1,0>F { align1 1Q I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@7 }; mad(8) g16<1>F g85.2<0,1,0>F g55<8,8,1>F g85.0<0,1,0>F { align1 1Q }; add(8) g88<1>F -g33<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; add(8) g87<1>F g16<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g89<1>F (abs)g88<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; cmp.l.f0.0(8) g90<1>F (abs)g87<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g90<8,8,1>UD g89<8,8,1>UD { align1 1Q F@1 }; (+f0.0) sel(8) g32<1>UD g8.1<0,1,0>UD 0x00000002UD { 
align1 1Q A@7 }; END B26 ->B27 START B27 <-B26 <-B25 (23 cycles) LABEL17: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; mov(8) g18<1>F g32<1,1,0>UD { align1 1Q compacted }; mov(1) g8.4<1>D 948114031D { align1 WE_all 1N A@2 }; cmp.l.f0.0(8) null<1>F g18<1,1,0>F 0x40000000F /* 2F */ { align1 1Q F@1 compacted }; (+f0.0) if(8) JIP: LABEL20 UIP: LABEL19 { align1 1Q }; END B27 ->B28 ->B29 START B28 <-B27 (58 cycles) add(8) g17<1>F -g33<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q A@6 compacted }; sel.l(8) g91<1>UD g48<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; mad(8) g15<1>F g8.2<0,1,0>F g8.4<0,1,0>F -g63<1,1,1>F { align1 1Q }; mov(1) g72<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g72<1>UD g72<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; shl(8) g92<1>D g91<8,8,1>D 0x00000005UD { align1 1Q I@2 }; add(8) g93<1>D g13.6<0,1,0>D g92<1,1,0>D { align1 1Q I@1 compacted }; and(1) g72<1>UD mask0<0,1,0>UD g72<0,1,0>UD { align1 WE_all 1N I@3 }; mov(8) g106<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; fbl(1) g24<1>UD g72<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g106.2<1>UD 0x0000e000UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g24<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g95<1>UD g[a0 416]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g24<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g94<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g106.3<1>UD g95<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g94<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g34UD g106UD g15UD 0x021b90fc a0.1<0>UD sampler MsgDesc: sample_c_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $3 }; else(8) JIP: LABEL19 UIP: LABEL19 { align1 1Q }; END B28 ->B29 ->B30 START B29 <-B27 <-B28 (377 cycles) LABEL20: mov(8) g34<1>UD 0x3f800000UD { align1 1Q $3.dst }; END B29 ->B30 START B30 <-B29 <-B28 (16 cycles) LABEL19: endif(8) JIP: LABEL15 { align1 1Q }; mov.nz.f0.0(8) null<1>D g62<8,8,1>D { align1 1Q }; (+f0.0) if(8) JIP: LABEL22 UIP: LABEL21 { align1 1Q }; END B30 ->B31 ->B32 START B31 <-B30 (8 cycles) mov(8) g36<1>UD g28<8,8,1>UD { align1 1Q $1.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mov(8) g16<1>UD g21<8,8,1>UD { align1 1Q F@5 }; mov(8) g35<1>UD 0x00000000UD { align1 1Q }; else(8) JIP: LABEL21 UIP: LABEL21 { align1 1Q }; END B31 ->B32 ->B33 LABEL22: mov(1) g52<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g52<1>UD g52<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g52<1>UD mask0<0,1,0>UD g52<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g100<1>UD 0x00000360UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; fbl(1) g97<1>UD g52<0,1,0>UD { align1 WE_all 1N I@3 }; shl(1) a0<1>UD g97<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g43<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g43<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g99UD g100UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $11 }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N $11.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mad(8) g36<1>F g99.3<0,1,0>F g5<8,8,1>F g99.1<0,1,0>F { align1 1Q I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g16<1>F g99.2<0,1,0>F g55<8,8,1>F g99.0<0,1,0>F { align1 1Q A@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g51<1>F -g36<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; add(8) g101<1>F g16<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; cmp.l.f0.0(8) g106<1>F (abs)g51<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; cmp.l.f0.0(8) g108<1>F (abs)g101<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g108<8,8,1>UD g106<8,8,1>UD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; (+f0.0) sel(8) g35<1>UD g8.1<0,1,0>UD 0x00000002UD { align1 1Q }; END B32 ->B33 START B33 <-B32 <-B31 (23 cycles) LABEL21: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mov(8) g18<1>F g35<1,1,0>UD { align1 1Q A@1 compacted }; cmp.l.f0.0(8) null<1>F g18<1,1,0>F 0x40000000F /* 2F */ { align1 1Q F@1 compacted }; (+f0.0) if(8) JIP: LABEL24 UIP: LABEL23 { align1 1Q }; END B33 ->B34 ->B35 START B34 <-B33 (59 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add(8) g17<1>F -g36<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q A@6 compacted }; sel.l(8) g109<1>UD g48<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; mad(8) g15<1>F g8.2<0,1,0>F g8.4<0,1,0>F -g63<1,1,1>F { align1 1Q $3.src }; mov(1) g53<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g53<1>UD g53<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; shl(8) g111<1>D g109<8,8,1>D 0x00000005UD { align1 1Q I@2 }; add(8) g114<1>D g13.6<0,1,0>D g111<1,1,0>D { align1 1Q I@1 compacted }; and(1) g53<1>UD mask0<0,1,0>UD g53<0,1,0>UD { align1 WE_all 1N I@3 }; mov(8) g108<1>UD g0<8,8,1>UD { align1 WE_all 1Q $3.src }; fbl(1) g25<1>UD g53<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g108.2<1>UD 0x0000e000UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g25<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g71<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g25<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g115<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g108.3<1>UD g71<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g115<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g37UD g108UD g15UD 0x021b90fc a0.1<0>UD sampler MsgDesc: sample_c_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $3 }; else(8) JIP: LABEL23 UIP: LABEL23 { align1 1Q }; END B34 ->B35 ->B36 START B35 <-B33 <-B34 (377 cycles) LABEL24: mov(8) g37<1>UD 0x3f800000UD { align1 1Q $3.dst }; END B35 ->B36 START B36 <-B35 <-B34 (20 cycles) LABEL23: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; add(8) g117<1>F g37<1,1,0>F g34<1,1,0>F { align1 1Q compacted }; 
mov.nz.f0.0(8) null<1>D g62<8,8,1>D { align1 1Q }; (+f0.0) if(8) JIP: LABEL26 UIP: LABEL25 { align1 1Q }; END B36 ->B37 ->B38 START B37 <-B36 (8 cycles) mov(8) g39<1>UD g28<8,8,1>UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mov(8) g16<1>UD g21<8,8,1>UD { align1 1Q A@6 }; mov(8) g38<1>UD 0x00000000UD { align1 1Q }; else(8) JIP: LABEL25 UIP: LABEL25 { align1 1Q }; END B37 ->B38 ->B39 LABEL26: sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; mov(1) g75<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g75<1>UD g75<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g75<1>UD mask0<0,1,0>UD g75<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g121<1>UD 0x00000360UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; fbl(1) g118<1>UD g75<0,1,0>UD { align1 WE_all 1N I@3 }; shl(1) a0<1>UD g118<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g119<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g119<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g120UD g121UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $12 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst }; mad(8) g39<1>F g120.3<0,1,0>F g5<8,8,1>F g120.1<0,1,0>F { align1 1Q I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g16<1>F g120.2<0,1,0>F g55<8,8,1>F g120.0<0,1,0>F { align1 1Q A@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g123<1>F -g39<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; add(8) g122<1>F g16<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; cmp.l.f0.0(8) g124<1>F (abs)g123<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; cmp.l.f0.0(8) g125<1>F (abs)g122<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g125<8,8,1>UD g124<8,8,1>UD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; (+f0.0) sel(8) g38<1>UD g8.1<0,1,0>UD 0x00000002UD { align1 1Q }; END B38 ->B39 START B39 <-B38 <-B37 (23 cycles) LABEL25: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mov(8) g18<1>F g38<1,1,0>UD { align1 1Q A@2 compacted }; cmp.l.f0.0(8) null<1>F g18<1,1,0>F 0x40000000F /* 2F */ { align1 1Q F@1 compacted }; (+f0.0) if(8) JIP: LABEL28 UIP: LABEL27 { align1 1Q }; END B39 ->B40 ->B41 START B40 <-B39 (59 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add(8) g17<1>F -g39<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q A@6 compacted }; sel.l(8) g126<1>UD g48<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; mad(8) g15<1>F g8.2<0,1,0>F g8.4<0,1,0>F -g63<1,1,1>F { align1 1Q $3.src }; mov(1) g76<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g76<1>UD g76<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; shl(8) g127<1>D g126<8,8,1>D 0x00000005UD { align1 1Q I@2 }; add(8) g2<1>D g13.6<0,1,0>D g127<1,1,0>D { align1 1Q I@1 compacted }; and(1) g76<1>UD mask0<0,1,0>UD g76<0,1,0>UD { align1 WE_all 1N I@3 }; mov(8) g109<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; fbl(1) g26<1>UD g76<0,1,0>UD { align1 WE_all 1N I@2 }; mov(1) g109.2<1>UD 0x0000e000UD { 
align1 WE_all 1N I@2 }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; mov(1) g25<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g26<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g24<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g109.3<1>UD g25<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g24<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g113UD g109UD g15UD 0x021b90fc a0.1<0>UD sampler MsgDesc: sample_c_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $3 }; else(8) JIP: LABEL27 UIP: LABEL27 { align1 1Q }; END B40 ->B41 ->B42 START B41 <-B39 <-B40 (378 cycles) LABEL28: sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; mov(8) g113<1>UD 0x3f800000UD { align1 1Q }; END B41 ->B42 START B42 <-B41 <-B40 (19 cycles) LABEL27: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; add(8) g26<1>F g113<1,1,0>F g117<1,1,0>F { align1 1Q compacted }; mov.nz.f0.0(8) null<1>D g62<8,8,1>D { align1 1Q }; (+f0.0) if(8) JIP: LABEL30 UIP: LABEL29 { align1 1Q }; END B42 ->B43 ->B44 START B43 <-B42 (5 cycles) mov(8) g62<1>UD 0x00000000UD { align1 1Q }; else(8) JIP: LABEL29 UIP: LABEL29 { align1 1Q }; END B43 ->B44 ->B45 LABEL30: mov(1) g77<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g77<1>UD g77<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g77<1>UD mask0<0,1,0>UD g77<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g33<1>UD 0x00000360UD { align1 WE_all 1Q F@7 }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; fbl(1) g29<1>UD g77<0,1,0>UD { align1 WE_all 1N I@3 }; shl(1) a0<1>UD g29<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g30<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g30<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g32UD g33UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $13 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.dst }; mad(8) g28<1>F g32.3<0,1,0>F g5<8,8,1>F g32.1<0,1,0>F { align1 1Q }; mad(8) g21<1>F g32.2<0,1,0>F g55<8,8,1>F g32.0<0,1,0>F { align1 1Q }; add(8) g35<1>F -g28<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; add(8) g34<1>F g21<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g36<1>F (abs)g35<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; cmp.l.f0.0(8) g37<1>F (abs)g34<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q F@2 compacted }; and.nz.f0.0(8) null<1>UD g37<8,8,1>UD g36<8,8,1>UD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; (+f0.0) sel(8) g62<1>UD g8.1<0,1,0>UD 0x00000002UD { align1 1Q }; END B44 ->B45 START B45 <-B44 <-B43 (23 cycles) LABEL29: endif(8) JIP: LABEL15 { align1 1Q }; mov(8) g23<1>F g62<1,1,0>UD { align1 1Q I@2 compacted }; cmp.l.f0.0(8) null<1>F g23<1,1,0>F 0x40000000F /* 2F */ { align1 1Q F@1 compacted }; (+f0.0) if(8) JIP: LABEL32 UIP: LABEL31 { align1 1Q }; END B45 ->B46 ->B47 START B46 <-B45 (60 cycles) add(8) g22<1>F -g28<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; mad(8) g20<1>F g8.2<0,1,0>F 
g8.4<0,1,0>F -g63<1,1,1>F { align1 1Q }; sel.l(8) g38<1>UD g48<1,1,0>UD 0x000007ffUD { align1 1Q F@7 compacted }; mov(1) g42<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g42<1>UD g42<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; shl(8) g39<1>D g38<8,8,1>D 0x00000005UD { align1 1Q A@2 }; add(8) g48<1>D g13.6<0,1,0>D g39<1,1,0>D { align1 1Q I@1 compacted }; and(1) g42<1>UD mask0<0,1,0>UD g42<0,1,0>UD { align1 WE_all 1N I@3 }; mov(8) g111<1>UD g0<8,8,1>UD { align1 WE_all 1Q $4.src }; fbl(1) g28<1>UD g42<0,1,0>UD { align1 WE_all 1N A@2 }; mov(1) g111.2<1>UD 0x0000e000UD { align1 WE_all 1N I@2 }; shl(1) a0<1>UD g28<0,1,0>UD 0x00000002UD { align1 WE_all 1N A@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g113<1>UD g[a0]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g28<0,1,0>UD 0x00000002UD { align1 WE_all 1N }; add(1) a0<1>UD a0<0,1,0>UD 0x00000800UD { align1 WE_all 1N A@1 }; mov(1) g49<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g111.3<1>UD g113<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g49<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g63UD g111UD g20UD 0x021b90fc a0.1<0>UD sampler MsgDesc: sample_c_lz SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 1 { align1 1Q @1 $4 }; else(8) JIP: LABEL31 UIP: LABEL31 { align1 1Q }; END B46 ->B47 ->B48 START B47 <-B45 <-B46 (378 cycles) LABEL32: sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mov(8) g63<1>UD 0x3f800000UD { align1 1Q F@4 }; END B47 ->B48 START B48 <-B47 <-B46 (12 cycles) LABEL31: endif(8) JIP: LABEL15 { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; add(8) g62<1>F g63<1,1,0>F g26<1,1,0>F { align1 1Q A@2 compacted }; mul(8) g31<1>F g62<1,1,0>F 0x3e800000F /* 0.25F */ { align1 1Q F@1 compacted }; END B48 ->B49 START B49 <-B48 <-B23 (6 cycles) LABEL15: endif(8) JIP: LABEL9 { align1 1Q }; mul(8) g27<1>F g31<1,1,0>F g27<1,1,0>F { align1 1Q A@1 compacted }; END B49 ->B50 START B50 <-B49 <-B12 (17 cycles) LABEL9: endif(8) JIP: LABEL1 { align1 1Q }; cmp.g.f0.0(8) null<1>F g27<8,8,1>F 0x0F /* 0F */ { align1 1Q F@1 }; mov(1) g8.5<1>D 1036831949D { align1 WE_all 1N A@2 }; (+f0.0) if(8) JIP: LABEL33 UIP: LABEL33 { align1 1Q }; END B50 ->B51 ->B55 mov(1) g79<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g79<1>UD g79<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g79<1>UD mask0<0,1,0>UD g79<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g66<1>UD 0x00000340UD { align1 WE_all 1Q I@7 }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 }; mad(8) g75<1>F g7<8,8,1>F g8.5<0,1,0>F g41<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mad(8) g76<1>F -g105<8,8,1>F g8.5<0,1,0>F -g19<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; mad(8) g77<1>F g59<8,8,1>F g8.5<0,1,0>F g81<1,1,1>F { align1 1Q }; mov(1) g80<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g80<1>UD g80<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; fbl(1) g63<1>UD g79<0,1,0>UD { align1 WE_all 1N A@4 }; shl(1) a0<1>UD g63<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g64<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g64<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g65UD 
g66UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $14 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.dst }; cmp.l.f0.0(8) g67<1>F g59<1,1,0>F g65.3<0,1,0>F { align1 1Q compacted }; cmp.g.f0.0(8) g73<1>F g65.2<0,1,0>F 0x0F /* 0F */ { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; and(8) g74<1>UD g73<1,1,0>UD g67<1,1,0>UD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mov(8) g72<1>F -g74<1,1,0>D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; add(8) g52<1>F -g72<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; mul(8) g53<1>F g52<1,1,0>F g27<1,1,0>F { align1 1Q compacted }; and(1) g80<1>UD mask0<0,1,0>UD g80<0,1,0>UD { align1 WE_all 1N I@4 }; mov(8) g11<1>UD 0x00000240UD { align1 WE_all 1Q }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; mov(1) g8.6<1>D 1027604480D { align1 WE_all 1N F@6 }; fbl(1) g42<1>UD g80<0,1,0>UD { align1 WE_all 1N I@4 }; shl(1) a0<1>UD g42<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000600UD { align1 WE_all 1N A@1 }; mov(1) g79<1>UD g[a0 64]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g79<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g80UD g11UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $15 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.dst }; mul(8) g82<1>F g80<0,1,0>F 0x41aaaaabF /* 21.3333F */ { align1 1Q }; mul(8) g83<1>F g80.1<0,1,0>F 0x41aaaaabF /* 21.3333F */ { align1 1Q }; rndd(8) g84<1>F g82<1,1,0>F { align1 1Q F@2 compacted }; rndd(8) g85<1>F g83<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; mad(8) g86<1>F g75<8,8,1>F g8.6<0,1,0>F -g84<1,1,1>F { align1 1Q }; mad(8) g87<1>F g76<8,8,1>F g8.6<0,1,0>F g85<1,1,1>F { align1 1Q F@2 }; mov(1) g8.7<1>D 1034594987D { align1 WE_all 1N F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mad(8) g124<1>F g8.3<0,1,0>F g8.7<0,1,0>F g86<1,1,1>F { align1 1Q $8.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mad(8) g125<1>F g8.3<0,1,0>F g8.7<0,1,0>F g87<1,1,1>F { align1 1Q F@2 }; sel.ge(8) g89<1>F g124<1,1,0>F g125<1,1,0>F { align1 1Q F@1 compacted }; sel.l(8) g88<1>F g124<1,1,0>F g125<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; cmp.g.f0.0(8) g90<1>F g89<8,8,1>F 0x3f800000F /* 1F */ { align1 1Q F@2 }; cmp.l.f0.0(8) g91<1>F g88<1,1,0>F 0x0F /* 0F */ { align1 1Q F@2 compacted }; or.nz.f0.0(8) null<1>UD g91<8,8,1>UD g90<8,8,1>UD { align1 1Q F@1 }; (+f0.0) if(8) JIP: LABEL35 UIP: LABEL34 { align1 1Q }; END B51 ->B52 ->B53 START B52 <-B51 (5 cycles) mov(8) g64<1>UD 0x00000000UD { align1 1Q }; else(8) JIP: LABEL34 UIP: LABEL34 { align1 1Q }; END B52 ->B53 ->B54 START B53 <-B51 <-B52 (581 cycles) LABEL35: sel.l(8) g94<1>UD g116<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g43<1>UD g9.1<0,1,0>UD 0x000007ffUD { align1 1Q compacted }; mul(8) g92<1>F g80.2<0,1,0>F 0x41aaaaabF /* 21.3333F */ { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; mov(1) g11<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) 
g11<1>UD g11<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; shl(8) g95<1>D g94<8,8,1>D 0x00000006UD { align1 1Q }; shl(8) g99<1>D g43<8,8,1>D 0x00000005UD { align1 1Q I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; rndd(8) g93<1>F g92<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; add3(8) g97<1>D g13.7<0,1,0>D g95<8,8,1>D 128W { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; add(8) g100<1>D g13.6<0,1,0>D g99<1,1,0>D { align1 1Q compacted }; and(1) g11<1>UD mask0<0,1,0>UD g11<0,1,0>UD { align1 WE_all 1N I@5 }; sel.l(8) g51<1>UD g12<8,8,1>UD 0x000f423fUD { align1 1Q $8.src }; mov(8) g5<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mov(8) g55<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mov(1) g8.7<1>D 1115684864D { align1 WE_all 1N F@7 }; mov(1) g2<1>D 989855872D { align1 WE_all 1N }; fbl(1) g29<1>UD g11<0,1,0>UD { align1 WE_all 1N I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; shl(8) g106<1>D g51<8,8,1>D 0x00000006UD { align1 1Q I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mad(8) g115<1>F g8.7<0,1,0>F g8.6<0,1,0>F g93<1,1,1>F { align1 1Q }; shl(1) a0<1>UD g29<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g119<1>UD g[a0 128]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g29<0,1,0>UD 0x00000002UD { align1 WE_all 1N $6.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g101<1>UD g[a0 32]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; add3(8) g108<1>D g13.7<0,1,0>D g106<8,8,1>D 128W { align1 1Q I@3 }; or(1) g5.3<1>UD g119<0,1,0>D 0x00000001UD { align1 WE_all 1N I@3 }; or(1) g55.3<1>UD g119<0,1,0>D 0x00000001UD { align1 WE_all 1N I@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; shl(1) a0<1>UD g29<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@3 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g109<1>UD g[a0 384]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; or(1) a0.1<1>UD g101<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g15UD g55UD g124UD 0x024a80fc a0.1<0>UD sampler MsgDesc: gather4 SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $0 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g109<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g5UD g124UD 0x024a80fc a0.1<0>UD sampler MsgDesc: gather4 SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mov(8) g5<1>F g17<1,1,0>UD { align1 1Q $0.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mov(8) g111<1>F g15<1,1,0>UD { align1 1Q $0.dst compacted }; mov(8) g55<1>F g16<1,1,0>UD { align1 1Q $0.dst compacted }; mov(8) g114<1>F g18<1,1,0>UD { align1 1Q $0.dst compacted }; mov(8) g120<1>F g20<1,1,0>UD { align1 1Q $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; mov(8) g121<1>F g21<1,1,0>UD { align1 1Q $1.dst compacted }; mov(8) g122<1>F g22<1,1,0>UD { align1 1Q $1.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; mov(8) g123<1>F g23<1,1,0>UD { align1 1Q $1.dst compacted }; sync 
nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; mad(8) g118<1>F g115<8,8,1>F g2.0<0,1,0>F -g5<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g71<1>F g115<8,8,1>F g2.0<0,1,0>F -g111<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mad(8) g117<1>F g115<8,8,1>F g2.0<0,1,0>F -g55<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; mad(8) g119<1>F g115<8,8,1>F g2.0<0,1,0>F -g114<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mad(8) g124<1>F g115<8,8,1>F g2.0<0,1,0>F -g120<1,1,1>F { align1 1Q F@7 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; mad(8) g125<1>F g115<8,8,1>F g2.0<0,1,0>F -g121<1,1,1>F { align1 1Q F@7 }; mad(8) g126<1>F g115<8,8,1>F g2.0<0,1,0>F -g122<1,1,1>F { align1 1Q F@7 }; mad(8) g127<1>F g115<8,8,1>F g2.0<0,1,0>F -g123<1,1,1>F { align1 1Q F@7 }; mov(1) g2.1<1>D 1124106240D { align1 WE_all 1N F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; cmp.l.f0.0(8) g24<1>F g124<1,1,0>F g77<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; cmp.l.f0.0(8) g25<1>F g77<1,1,0>F g71<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; cmp.l.f0.0(8) g28<1>F g77<1,1,0>F g117<1,1,0>F { align1 1Q compacted }; mov(1) g2.2<1>D 1101703851D { align1 WE_all 1N I@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; cmp.l.f0.0(8) g29<1>F g125<1,1,0>F g77<1,1,0>F { align1 1Q compacted }; cmp.l.f0.0(8) g31<1>F g77<1,1,0>F g118<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; cmp.l.f0.0(8) g32<1>F g126<1,1,0>F g77<1,1,0>F { align1 1Q F@7 compacted }; cmp.l.f0.0(8) g34<1>F g77<1,1,0>F g119<1,1,0>F { align1 1Q compacted }; cmp.l.f0.0(8) g35<1>F g127<1,1,0>F g77<1,1,0>F { align1 1Q F@7 compacted }; and(8) g26<1>UD g25<1,1,0>UD g24<1,1,0>UD { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 }; mad(8) g9<1>F g2.1<0,1,0>F g2.2<0,1,0>F g87<1,1,1>F { align1 1Q }; mad(8) g2<1>F g2.1<0,1,0>F g2.2<0,1,0>F g86<1,1,1>F { align1 1Q }; and(8) g30<1>UD g28<1,1,0>UD g29<1,1,0>UD { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; and(8) g33<1>UD g31<1,1,0>UD g32<1,1,0>UD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $1.src }; and(8) g36<1>UD g34<1,1,0>UD g35<1,1,0>UD { align1 1Q F@3 compacted }; mov(8) g37<1>F -g26<1,1,0>D { align1 1Q I@4 compacted }; mov(8) g38<1>F -g30<1,1,0>D { align1 1Q I@3 compacted }; frc(8) g16<1>F g9<1,1,0>F { align1 1Q F@4 compacted }; mov(8) g39<1>F -g33<1,1,0>D { align1 1Q I@2 compacted }; frc(8) g15<1>F g2<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; mov(8) g48<1>F -g36<1,1,0>D { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; add(8) g62<1>F -g16<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; add(8) g49<1>F -g15<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mul(8) g50<1>F g38<1,1,0>F g15<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mul(8) g63<1>F g62<1,1,0>F g15<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; mul(8) g66<1>F g62<1,1,0>F g49<1,1,0>F { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { 
align1 WE_all 1N F@3 }; mad(8) g113<1>F g50<8,8,1>F g49<8,8,1>F g37<1,1,1>F { align1 1Q }; mul(8) g65<1>F g63<1,1,0>F g39<1,1,0>F { align1 1Q F@3 compacted }; mad(8) g67<1>F g65<8,8,1>F g48<8,8,1>F g66<1,1,1>F { align1 1Q F@1 }; mad(8) g64<1>F g67<8,8,1>F g16<8,8,1>F g113<1,1,1>F { align1 1Q F@1 }; END B53 ->B54 START B54 <-B53 <-B52 (12 cycles) LABEL34: endif(8) JIP: LABEL33 { align1 1Q }; add(8) g73<1>F -g64<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q A@1 compacted }; mul(8) g27<1>F g53<1,1,0>F g73<1,1,0>F { align1 1Q F@1 compacted }; END B54 ->B55 START B55 <-B54 <-B50 (67 cycles) LABEL33: endif(8) JIP: LABEL1 { align1 1Q }; mov(1) g82<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g82<1>UD g82<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; and(1) g82<1>UD mask0<0,1,0>UD g82<0,1,0>UD { align1 WE_all 1N I@1 }; mov(8) g72<1>UD 0x00000000UD { align1 WE_all 1Q F@4 }; mov(1) f0<1>UD mask0<0,1,0>UD { align1 WE_all 1N }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; add(8) g52<1>F g107<0,1,0>F g107.1<0,1,0>F { align1 1Q compacted }; fbl(1) g74<1>UD g82<0,1,0>UD { align1 WE_all 1N A@3 }; shl(1) a0<1>UD g74<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@1 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000c00UD { align1 WE_all 1N A@1 }; mov(1) g12<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g12<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; (+f0.0.any8h) send(1) g116UD g72UD nullUD 0x2210c500 a0.1<0>UD ugm MsgDesc: ( load, a32, d32, V8, transpose, L1STATE_L3MOCS dst_len = 1, src0_len = 1, src1_len = 0 bss ) ex_bso surface_state_index 0 { align1 WE_all 1N @1 $2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; cmp.l.f0.0(8) g53<1>F g52<8,8,1>F 0x3a83126fF /* 0.001F */ { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $9.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; cmp.l.f0.0(8) g75<1>F g27<8,8,1>F 0x3c23d70aF /* 0.01F */ { align1 1Q }; or.nz.f0.0(8) null<1>UD g53<8,8,1>UD g75<8,8,1>UD { align1 1Q F@1 }; (+f0.0) if(8) JIP: LABEL37 UIP: LABEL36 { align1 1Q }; END B55 ->B56 ->B57 START B56 <-B55 (15 cycles) mov(8) g12<1>UD 0x00000000UD { align1 1Q }; mov(8) g74<1>UD 0x00000000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; mov(8) g73<1>UD 0x00000000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; mov(8) g67<1>UD 0x3f800000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mov(8) g66<1>UD 0x3f800000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mov(8) g65<1>UD 0x3f800000UD { align1 1Q }; else(8) JIP: LABEL36 UIP: LABEL36 { align1 1Q }; END B56 ->B57 ->B68 START B57 <-B55 <-B56 (12 cycles) LABEL37: cmp.l.f0.0(8) null<1>F g60<1,1,0>F 0x42f00000F /* 120F */ { align1 1Q compacted }; (+f0.0) if(8) JIP: LABEL39 UIP: LABEL38 { align1 1Q }; END B57 ->B58 ->B59 START B58 <-B57 (25 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; add.sat(8) g76<1>F g81<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; add(8) g77<1>F g107.1<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; add(8) g80<1>F g27<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mul(8) g79<1>F -g27<1,1,0>F g110.3<0,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad.sat(8) g42<1>F g77<8,8,1>F g27<8,8,1>F g76<1,1,1>F { align1 1Q }; sync nop(1) 
null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mul(8) g11<1>F g80<1,1,0>F g110.2<0,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mad(8) g66<1>F g8.2<0,1,0>F g42<8,8,1>F g11<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; mad(8) g65<1>F g8.2<0,1,0>F g42<8,8,1>F g79<1,1,1>F { align1 1Q }; else(8) JIP: LABEL38 UIP: LABEL38 { align1 1Q }; END B58 ->B59 ->B60 START B59 <-B57 <-B58 (5 cycles) LABEL39: sync nop(1) null<0,1,0>UB { align1 WE_all 1N $14.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; mov(8) g66<1>UD 0x3f800000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mov(8) g65<1>UD 0x3f800000UD { align1 1Q }; END B59 ->B60 START B60 <-B59 <-B58 (16 cycles) LABEL38: endif(8) JIP: LABEL36 { align1 1Q }; cmp.l.f0.0(8) null<1>F g60<1,1,0>F 0x42700000F /* 60F */ { align1 1Q compacted }; (+f0.0) if(8) JIP: LABEL41 UIP: LABEL40 { align1 1Q }; END B60 ->B61 ->B62 START B61 <-B60 (906 cycles) sel.l(8) g84<1>UD g103<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g87<1>UD g47<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@7 }; mul(8) g82<1>F g7<8,8,1>F 0x3e4ccccdF /* 0.2F */ { align1 1Q }; mul(8) g83<1>F g105<8,8,1>F 0x3e4ccccdF /* 0.2F */ { align1 1Q }; shl(8) g85<1>D g84<8,8,1>D 0x00000006UD { align1 1Q I@2 }; shl(8) g88<1>D g87<8,8,1>D 0x00000005UD { align1 1Q A@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; frc(8) g117<1>F g82<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; frc(8) g118<1>F g83<1,1,0>F { align1 1Q compacted }; mov(1) g83<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g83<1>UD g83<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; add3(8) g86<1>D g13.7<0,1,0>D g85<8,8,1>D 128W { align1 1Q }; add(8) g89<1>D g13.6<0,1,0>D g88<1,1,0>D { align1 1Q I@3 compacted }; and(1) g83<1>UD mask0<0,1,0>UD g83<0,1,0>UD { align1 WE_all 1N I@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; frc(8) g114<1>F g116.2<0,1,0>F { align1 1Q compacted }; mul(8) g90<1>F g7<8,8,1>F 0x3d23d70aF /* 0.04F */ { align1 1Q $2.src }; mul(8) g91<1>F g105<8,8,1>F 0x3d23d70aF /* 0.04F */ { align1 1Q }; mov(8) g119<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; mul(8) g111<1>F g105<1,1,0>F 0x3fc00000F /* 1.5F */ { align1 1Q compacted }; mul(8) g109<1>F g7<1,1,0>F 0x3fc00000F /* 1.5F */ { align1 1Q $3.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mad(8) g97<1>F g8.2<0,1,0>F g110.3<0,1,0>F -g27<1,1,1>F { align1 1Q }; sel.l(8) g125<1>UD g69<8,8,1>UD 0x000f423fUD { align1 1Q $8.src }; add(8) g101<1>F g27<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q $6.src compacted }; add(8) g51<1>F -g66<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q $8.src compacted }; mul(8) g9<1>F g107<0,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted }; fbl(1) g30<1>UD g83<0,1,0>UD { align1 WE_all 1N I@3 }; mul(8) g115<1>F g114<1,1,0>F 0x41800000F /* 16F */ { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; frc(8) g121<1>F g90<1,1,0>F { align1 1Q compacted }; frc(8) g122<1>F g91<1,1,0>F { align1 1Q F@7 compacted }; frc(8) g5<1>F 
g111<1,1,0>F { align1 1Q F@7 compacted }; mov(8) g114<1>UD g0<8,8,1>UD { align1 WE_all 1Q F@4 }; frc(8) g55<1>F g109<1,1,0>F { align1 1Q F@7 compacted }; shl(8) g126<1>D g125<8,8,1>D 0x00000006UD { align1 1Q I@3 }; add(8) g43<1>F g97<1,1,0>F -g65<1,1,0>F { align1 1Q F@7 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g106<1>F g51<8,8,1>F g110.2<0,1,0>F g101<1,1,1>F { align1 1Q F@7 }; mov(8) g125<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; shl(1) a0<1>UD g30<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g124<1>UD g[a0 288]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g30<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@7 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 }; mov(1) g120<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; rndd(8) g71<1>F g115<1,1,0>F { align1 1Q compacted }; add3(8) g127<1>D g13.7<0,1,0>D g126<8,8,1>D 128W { align1 1Q I@4 }; mov(1) g125.2<1>UD 0x0000c000UD { align1 WE_all 1N I@4 }; or(1) g119.3<1>UD g124<0,1,0>D 0x00000001UD { align1 WE_all 1N I@4 }; or(1) g114.3<1>UD g124<0,1,0>D 0x00000001UD { align1 WE_all 1N I@7 }; shl(1) a0<1>UD g30<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@4 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000e00UD { align1 WE_all 1N A@1 }; mov(1) g2<1>UD g[a0 480]<0,1,0>UD { align1 WE_all 1N A@1 }; or(1) g125.3<1>UD g124<0,1,0>D 0x00000001UD { align1 WE_all 1N I@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 }; or(1) a0.1<1>UD g120<0,1,0>UD 0x00000000UD { align1 WE_all 1N $4.src }; send(8) g20UD g119UD g121UD 0x024a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; frc(8) g121<1>F g5<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; or(1) a0.1<1>UD g120<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g15UD g114UD g117UD 0x024a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 4 { align1 1Q @1 $3 }; frc(8) g120<1>F g55<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mul(8) g117<1>F g71<1,1,0>F 0x3e800000F /* 0.25F */ { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; rndd(8) g119<1>F g117<1,1,0>F { align1 1Q F@1 compacted }; frc(8) g118<1>F g117<1,1,0>F { align1 1Q $3.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; add(8) g123<1>F g121<1,1,0>F -g119<1,1,0>F { align1 1Q F@2 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mul(8) g124<1>F g123<1,1,0>F 0x3e800000F /* 0.25F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; frc(8) g127<1>F g124<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mad.sat(8) g92<1>F -g8.2<0,1,0>F g18<8,8,1>F g23<1,1,1>F { align1 1Q $3.dst }; mov(1) g8.7<1>D 1048576000D { align1 WE_all 1N F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mad(8) g122<1>F g118<8,8,1>F g8.7<0,1,0>F g120<1,1,1>F { align1 1Q $4.src }; add(8) g94<1>F g92<8,8,1>F 0xbe99999aF /* -0.3F */ { align1 1Q F@2 }; add(8) g93<1>F g92<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; 
mul.sat(8) g95<1>F g94<8,8,1>F 0x40a00001F /* 5F */ { align1 1Q }; mul.sat(8) g12<1>F g93<8,8,1>F 0x411ffffeF /* 10F */ { align1 1Q F@2 }; frc(8) g126<1>F g122<1,1,0>F { align1 1Q A@5 compacted }; mad(8) g99<1>F g65<8,8,1>F g43<8,8,1>F g95<1,1,1>F { align1 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; mad(8) g108<1>F g66<8,8,1>F g106<8,8,1>F g95<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@4 }; add(8) g67<1>F -g12<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g2<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g52UD g125UD g126UD 0x022a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 2 { align1 1Q @1 $8 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; add(8) g100<1>F g97<1,1,0>F -g99<1,1,0>F { align1 1Q compacted }; mad(8) g66<1>F g108<8,8,1>F g12<8,8,1>F -g108<1,1,1>F { align1 1Q F@3 }; mad(8) g65<1>F g99<8,8,1>F g12<8,8,1>F g100<1,1,1>F { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; add(8) g15<1>F g52<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q $8.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.dst }; add(8) g16<1>F g53<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q $8.dst compacted }; mad(8) g17<1>F g8.3<0,1,0>F g9<8,8,1>F g15<1,1,1>F { align1 1Q @2 $3.dst }; mad(8) g18<1>F g8.3<0,1,0>F g9<8,8,1>F g16<1,1,1>F { align1 1Q F@2 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; mul(8) g73<1>F g17<1,1,0>F g12<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; mul(8) g74<1>F g18<1,1,0>F g12<1,1,0>F { align1 1Q compacted }; else(8) JIP: LABEL40 UIP: LABEL40 { align1 1Q }; END B61 ->B62 ->B63 START B62 <-B60 <-B61 (7 cycles) LABEL41: mov(8) g12<1>UD 0x00000000UD { align1 1Q F@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; mov(8) g74<1>UD 0x00000000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; mov(8) g73<1>UD 0x00000000UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 }; mov(8) g67<1>UD 0x3f800000UD { align1 1Q }; END B62 ->B63 START B63 <-B62 <-B61 (16 cycles) LABEL40: endif(8) JIP: LABEL36 { align1 1Q }; cmp.l.f0.0(8) null<1>F g60<1,1,0>F 0x41f00000F /* 30F */ { align1 1Q compacted }; (+f0.0) if(8) JIP: LABEL42 UIP: LABEL42 { align1 1Q }; END B63 ->B64 ->B65 START B64 <-B63 (510 cycles) sel.l(8) g21<1>UD g68<8,8,1>UD 0x000f423fUD { align1 1Q $4.dst }; sel.l(8) g24<1>UD g47<1,1,0>UD 0x000007ffUD { align1 1Q compacted }; mul(8) g20<1>F g59<1,1,0>F 0x3e800000F /* 0.25F */ { align1 1Q $4.dst compacted }; mul(8) g28<1>F g105<1,1,0>F 0x3f400000F /* 0.75F */ { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src }; mul(8) g33<1>F g7<1,1,0>F 0x3f400000F /* 0.75F */ { align1 1Q compacted }; mov(1) g84<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted }; and(1) g84<1>UD g84<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; shl(8) g22<1>D g21<8,8,1>D 0x00000006UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@3 }; shl(8) g25<1>D g24<8,8,1>D 0x00000005UD { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g29<1>F g20<8,8,1>F g8.5<0,1,0>F g116.2<0,1,0>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add3(8) g23<1>D 
g13.7<0,1,0>D g22<8,8,1>D 128W { align1 1Q I@2 }; add(8) g26<1>D g13.6<0,1,0>D g25<1,1,0>D { align1 1Q I@2 compacted }; and(1) g84<1>UD mask0<0,1,0>UD g84<0,1,0>UD { align1 WE_all 1N I@5 }; mov(8) g127<1>UD g0<8,8,1>UD { align1 WE_all 1Q $8.src }; mov(8) g30<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mul(8) g31<1>F g116.2<0,1,0>F 0x3c75c290F /* 0.015F */ { align1 1Q }; mov(1) g8.7<1>D 1051260355D { align1 WE_all 1N F@2 }; mov(8) g35<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mov(8) g38<1>UD g0<8,8,1>UD { align1 WE_all 1Q }; mul(8) g36<1>F g105<8,8,1>F 0x3e7d70a4F /* 0.2475F */ { align1 1Q $1.src }; mul(8) g68<1>F g7<8,8,1>F 0x3e7d70a4F /* 0.2475F */ { align1 1Q }; add(8) g53<1>F g27<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; mov(8) g34<1>F g29<1,1,0>F { align1 1Q F@5 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $0.src }; fbl(1) g32<1>UD g84<0,1,0>UD { align1 WE_all 1N I@6 }; mov(1) g127.2<1>UD 0x00008000UD { align1 WE_all 1N I@6 }; mov(1) g30.2<1>UD 0x00008000UD { align1 WE_all 1N I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@5 }; mad(8) g37<1>F g31<8,8,1>F g8.7<0,1,0>F g29<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst }; mad(8) g48<1>F g8.2<0,1,0>F g110.2<0,1,0>F -g27<1,1,1>F { align1 1Q }; mov(1) g35.2<1>UD 0x00008000UD { align1 WE_all 1N I@5 }; mov(1) g38.2<1>UD 0x00008000UD { align1 WE_all 1N I@5 }; mad(8) g75<1>F g8.2<0,1,0>F g110.2<0,1,0>F g53<1,1,1>F { align1 1Q F@4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src }; shl(1) a0<1>UD g32<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@5 }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g126<1>UD g[a0 320]<0,1,0>UD { align1 WE_all 1N A@1 }; shl(1) a0<1>UD g32<0,1,0>UD 0x00000002UD { align1 WE_all 1N $8.src }; add(1) a0<1>UD a0<0,1,0>UD 0x00000200UD { align1 WE_all 1N A@1 }; mov(1) g125<1>UD g[a0 224]<0,1,0>UD { align1 WE_all 1N A@1 }; mov(8) g69<1>F g37<1,1,0>F { align1 1Q F@3 compacted }; or(1) g127.3<1>UD g126<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 }; or(1) g30.3<1>UD g126<0,1,0>D 0x00000001UD { align1 WE_all 1N I@6 }; or(1) g35.3<1>UD g126<0,1,0>D 0x00000001UD { align1 WE_all 1N I@6 }; or(1) g38.3<1>UD g126<0,1,0>D 0x00000001UD { align1 WE_all 1N I@6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@2 }; add(8) g76<1>F g75<1,1,0>F -g66<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@3 }; or(1) a0.1<1>UD g125<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g15UD g30UD g33UD 0x023a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $4 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@2 }; or(1) a0.1<1>UD g125<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g20UD g35UD g36UD 0x023a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $5 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 }; or(1) a0.1<1>UD g125<0,1,0>UD 0x00000000UD { align1 WE_all 1N }; send(8) g23UD g38UD g68UD 0x023a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $6 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@1 }; or(1) a0.1<1>UD g125<0,1,0>UD 0x00000000UD { align1 WE_all 1N $8.src }; send(8) g124UD g127UD g28UD 0x023a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $8 }; add(8) g38<1>F (abs)g81<1,1,0>F 0xbf400000F /* -0.75F */ { align1 1Q $6.src compacted }; 
mul(8) g68<1>F g107<0,1,0>F g107.1<0,1,0>F { align1 1Q $6.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mul(8) g77<1>F g76<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted }; mul.sat(8) g39<1>F g38<1,1,0>F 0xc0000000F /* -2F */ { align1 1Q F@3 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src }; mul(8) g69<1>F g68<1,1,0>F g39<1,1,0>F { align1 1Q F@1 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 }; mul(8) g42<1>F g77<1,1,0>F g69<1,1,0>F { align1 1Q compacted }; mad(8) g12<1>F g12<8,8,1>F g12<8,8,1>F -g69<1,1,1>F { align1 1Q }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@5 }; add(8) g32<1>F g23<1,1,0>F -g20<1,1,0>F { align1 1Q $6.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g33<1>F g24<1,1,0>F -g21<1,1,0>F { align1 1Q $6.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.dst }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src }; add(8) g34<1>F g25<1,1,0>F -g22<1,1,0>F { align1 1Q $6.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; add(8) g28<1>F g15<1,1,0>F -g124<1,1,0>F { align1 1Q $4.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; add(8) g29<1>F g16<1,1,0>F -g125<1,1,0>F { align1 1Q $4.dst compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.dst }; add(8) g30<1>F g17<1,1,0>F -g126<1,1,0>F { align1 1Q $4.dst compacted }; add(8) g50<1>F g124<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; add(8) g62<1>F g125<1,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted }; add(8) g36<1>F g22<1,1,0>F g126<1,1,0>F { align1 1Q $5.src compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; add(8) g49<1>F g32<1,1,0>F g28<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; add(8) g113<1>F g33<1,1,0>F g29<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; add(8) g35<1>F g34<1,1,0>F g30<1,1,0>F { align1 1Q F@6 compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; add(8) g102<1>F g50<1,1,0>F g20<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@6 }; add(8) g63<1>F g62<1,1,0>F g21<1,1,0>F { align1 1Q compacted }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src }; mad(8) g37<1>F g36<8,8,1>F (abs)g19<8,8,1>F g35<1,1,1>F { align1 1Q F@3 }; mad(8) g59<1>F g102<8,8,1>F (abs)g19<8,8,1>F g49<1,1,1>F { align1 1Q F@3 }; mad(8) g64<1>F g63<8,8,1>F (abs)g19<8,8,1>F g113<1,1,1>F { align1 1Q F@3 }; mad(8) g66<1>F g66<8,8,1>F g37<8,8,1>F g42<1,1,1>F { align1 1Q F@3 }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 }; mad(8) g72<1>F -g73<8,8,1>F g48<8,8,1>F g59<1,1,1>F { align1 1Q }; mad(8) g52<1>F -g74<8,8,1>F g48<8,8,1>F g64<1,1,1>F { align1 1Q F@3 }; mad(8) g73<1>F g73<8,8,1>F g69<8,8,1>F g72<1,1,1>F { align1 1Q F@2 }; mad(8) g74<1>F g74<8,8,1>F g69<8,8,1>F g52<1,1,1>F { align1 1Q F@2 }; END B64 ->B65 START B65 <-B64 <-B63 (16 cycles) LABEL42: endif(8) JIP: LABEL36 { align1 1Q }; cmp.l.f0.0(8) null<1>F g60<1,1,0>F 0x41700000F /* 15F */ { align1 1Q compacted }; (+f0.0) if(8) JIP: LABEL43 UIP: LABEL43 { align1 1Q }; END B65 ->B66 ->B67 START B66 <-B65 (480 cycles) sync nop(1) null<0,1,0>UB { align1 WE_all 1N $15.src }; sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 }; sel.l(8) g11<1>UD g103<8,8,1>UD 0x000f423fUD { align1 1Q }; sel.l(8) g84<1>UD g47<1,1,0>UD 
0x000007ffUD { align1 1Q compacted };
add(8) g79<1>F g81<1,1,0>F 0xbf400000F /* -0.75F */ { align1 1Q F@4 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 };
frc(8) g48<1>F g105<1,1,0>F { align1 1Q compacted };
frc(8) g47<1>F g7<1,1,0>F { align1 1Q I@1 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 };
shl(8) g82<1>D g11<8,8,1>D 0x00000006UD { align1 1Q };
shl(8) g85<1>D g84<8,8,1>D 0x00000005UD { align1 1Q I@2 };
mul.sat(8) g80<1>F g79<8,8,1>F 0x411ffffeF /* 10F */ { align1 1Q F@3 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 };
add3(8) g83<1>D g13.7<0,1,0>D g82<8,8,1>D 128W { align1 1Q };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@2 };
add(8) g86<1>D g13.6<0,1,0>D g85<1,1,0>D { align1 1Q compacted };
mov(1) g85<1>UD sr0.3<0,1,0>UD { align1 WE_all 1N A@1 compacted };
and(1) g85<1>UD g85<0,1,0>UD 0xffffffffUD { align1 WE_all 1N A@1 };
and(1) g85<1>UD mask0<0,1,0>UD g85<0,1,0>UD { align1 WE_all 1N I@1 };
mov(8) g69<1>UD g0<8,8,1>UD { align1 WE_all 1Q F@6 };
mul(8) g92<1>F g107<0,1,0>F 0x40d55558F /* 6.66667F */ { align1 1Q };
mov(1) g8.7<1>D 1053609165D { align1 WE_all 1N F@7 };
add(8) g99<1>F g107.1<0,1,0>F 0xbf800000F /* -1F */ { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src };
add(8) g51<1>F -g65<1,1,0>F 0x3f800000F /* 1F */ { align1 1Q F@7 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $13.src };
fbl(1) g33<1>UD g85<0,1,0>UD { align1 WE_all 1N I@3 };
mov(1) g69.2<1>UD 0x00008000UD { align1 WE_all 1N I@3 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 };
mul(8) g93<1>F g92<1,1,0>F g107<0,1,0>F { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src };
mad(8) g106<1>F g51<8,8,1>F g110.3<0,1,0>F -g27<1,1,1>F { align1 1Q F@2 };
shl(1) a0<1>UD g33<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@2 };
add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 };
mov(1) g88<1>UD g[a0 192]<0,1,0>UD { align1 WE_all 1N A@1 };
shl(1) a0<1>UD g33<0,1,0>UD 0x00000002UD { align1 WE_all 1N I@7 };
add(1) a0<1>UD a0<0,1,0>UD 0x00000a00UD { align1 WE_all 1N A@1 };
mov(1) g87<1>UD g[a0 96]<0,1,0>UD { align1 WE_all 1N A@1 };
mul(8) g94<1>F g93<1,1,0>F g27<1,1,0>F { align1 1Q F@2 compacted };
or(1) g69.3<1>UD g88<0,1,0>D 0x00000001UD { align1 WE_all 1N I@2 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
mul(8) g95<1>F g94<1,1,0>F g80<1,1,0>F { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@1 };
or(1) a0.1<1>UD g87<0,1,0>UD 0x00000000UD { align1 WE_all 1N };
send(8) g24UD g69UD g47UD 0x023a00fc a0.1<0>UD sampler MsgDesc: sample SIMD8 Surface = 252 Sampler = 0 ex_bso mlen 1 rlen 3 { align1 1Q @1 $7 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N I@6 };
mad(8) g89<1>F g24<8,8,1>F g8.7<0,1,0>F g116.2<0,1,0>F { align1 1Q $7.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 };
add(8) g100<1>F g99<1,1,0>F g25<1,1,0>F { align1 1Q $7.dst compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@3 };
mul(8) g97<1>F g95<1,1,0>F g26<1,1,0>F { align1 1Q $7.dst compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $6.src };
mul.sat(8) g101<1>F g100<8,8,1>F 0x411ffffeF /* 10F */ { align1 1Q F@2 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src };
frc(8) g90<1>F g89<1,1,0>F { align1 1Q F@4 compacted };
mul(8) g107<1>F g101<1,1,0>F g106<1,1,0>F { align1 1Q F@2 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src };
mul(8) g108<1>F -g101<1,1,0>F g66<1,1,0>F { align1 1Q compacted };
add.sat(8) g91<1>F -g90<8,8,1>F 0x3e199998F /* 0.15F */ { align1 1Q F@3 };
mul(8) g43<1>F g97<1,1,0>F g91<1,1,0>F { align1 1Q F@1 compacted };
mad(8) g65<1>F g65<8,8,1>F g43<8,8,1>F g107<1,1,1>F { align1 1Q F@1 };
mad(8) g66<1>F g66<8,8,1>F g43<8,8,1>F g108<1,1,1>F { align1 1Q F@4 };
END B66 ->B67
START B67 <-B66 <-B65 (4 cycles)
LABEL43:
endif(8) JIP: LABEL36 { align1 1Q };
END B67 ->B68
START B68 <-B67 <-B56 (23 cycles)
LABEL36:
endif(8) JIP: LABEL1 { align1 1Q };
mul(8) g4<1>F g66<1,1,0>F g98<1,1,0>F { align1 1Q A@1 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.dst };
mad(8) g116<1>F g73<8,8,1>F g44<8,8,1>F g67<1,1,1>F { align1 1Q A@4 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $2.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N A@4 };
mad(8) g72<1>F g74<8,8,1>F g104<8,8,1>F g67<1,1,1>F { align1 1Q };
mad(8) g52<1>F g12<8,8,1>F g10<8,8,1>F g67<1,1,1>F { align1 1Q A@6 };
mov.nz.f0.0(8) null<1>D g45<8,8,1>D { align1 1Q };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src };
mul(8) g109<1>F g65<1,1,0>F g78<1,1,0>F { align1 1Q A@4 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst };
mul(8) g110<1>F g65<1,1,0>F g70<1,1,0>F { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src };
mul(8) g111<1>F g65<1,1,0>F g57<1,1,0>F { align1 1Q compacted };
(-f0.0) if(8) JIP: LABEL44 UIP: LABEL44 { align1 1Q };
END B68 ->B69 ->B70
START B69 <-B68 (21 cycles)
mul(8) g55<1>F g81<1,1,0>F g52<1,1,0>F { align1 1Q F@4 compacted };
mul(8) g115<1>F -g116<1,1,0>F 0x40000000F /* 2F */ { align1 1Q F@7 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 };
mul(8) g71<1>F -g72<1,1,0>F 0x40000000F /* 2F */ { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $5.src };
mul(8) g117<1>F -g52<1,1,0>F 0x40000000F /* 2F */ { align1 1Q compacted };
mad(8) g5<1>F g55<8,8,1>F g72<8,8,1>F g19<1,1,1>F { align1 1Q F@4 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $3.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
mad(8) g114<1>F g5<8,8,1>F g116<8,8,1>F g41<1,1,1>F { align1 1Q };
mad(8) g116<1>F g41<8,8,1>F g114<8,8,1>F g115<1,1,1>F { align1 1Q F@1 };
mad(8) g72<1>F g19<8,8,1>F g114<8,8,1>F g71<1,1,1>F { align1 1Q F@5 };
mad(8) g52<1>F g81<8,8,1>F g114<8,8,1>F g117<1,1,1>F { align1 1Q F@5 };
END B69 ->B70
START B70 <-B69 <-B68 (1403 cycles)
LABEL44:
endif(8) JIP: LABEL1 { align1 1Q };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
sel.ge(8) g118<1>F (abs)g72<1,1,0>F (abs)g52<1,1,0>F { align1 1Q compacted };
mul(8) g124<1>F g96<8,8,1>F 0x427c0000F /* 63F */ { align1 1Q $8.src };
math inv(8) g127<1>F g56<8,8,1>F null<8,8,1>F { align1 1Q $8 };
math inv(8) g7<1>F g54<8,8,1>F null<8,8,1>F { align1 1Q $9 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.src };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@5 };
mul(8) g121<1>F g116<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted };
mul(8) g122<1>F g72<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q compacted };
mul(8) g123<1>F g52<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q $8.src compacted };
mul(8) g24<1>F g14.1<0,1,0>F 0x3eaaaaabF /* 0.333333F */ { align1 1Q };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst };
math sqrt(8) g21<1>F g109<8,8,1>F null<8,8,1>F { align1 1Q @7 $10 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@7 };
math sqrt(8) g22<1>F g110<8,8,1>F null<8,8,1>F { align1 1Q $11 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $4.src };
math sqrt(8) g23<1>F g111<8,8,1>F null<8,8,1>F { align1 1Q @7 $12 };
sel.ge(8) g119<1>F (abs)g116<1,1,0>F g118<1,1,0>F { align1 1Q F@6 compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src };
mov(8) g125<1>UD g124<8,8,1>F { align1 1Q F@6 };
mul(8) g2<1>F g61<1,1,0>F g127<1,1,0>F { align1 1Q $8.dst compacted };
mul(8) g61<1>F g112<1,1,0>F g127<1,1,0>F { align1 1Q compacted };
mul(8) g112<1>F g6<1,1,0>F g127<1,1,0>F { align1 1Q compacted };
math inv(8) g120<1>F g119<8,8,1>F null<8,8,1>F { align1 1Q @4 $13 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $8.src };
mov(8) g126<1>F g125<1,1,0>UD { align1 1Q I@1 compacted };
mad(8) g9<1>F g2<8,8,1>F g7<8,8,1>F g40<1,1,1>F { align1 1Q @4 $9.dst };
mad(8) g105<1>F g61<8,8,1>F g7<8,8,1>F g58<1,1,1>F { align1 1Q F@4 };
mad(8) g81<1>F g112<8,8,1>F g7<8,8,1>F g46<1,1,1>F { align1 1Q F@4 };
mul(8) g46<1>F g126<8,8,1>F 0x3b808081F /* 0.00392157F */ { align1 1Q F@4 };
mad(8) g50<1>F g8.3<0,1,0>F g120<8,8,1>F g123<1,1,1>F { align1 1Q $13.dst };
mad(8) g49<1>F g8.3<0,1,0>F g120<8,8,1>F g122<1,1,1>F { align1 1Q };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $7.src };
mad(8) g48<1>F g8.3<0,1,0>F g120<8,8,1>F g121<1,1,1>F { align1 1Q };
mul(8) g124<1>F g9<1,1,0>F 0x3f000000F /* 0.5F */ { align1 1Q A@1 compacted };
mul(8) g125<1>F g105<1,1,0>F 0xbf000000F /* -0.5F */ { align1 1Q F@7 compacted };
mul(8) g126<1>F g81<8,8,1>F 0x447a0000F /* 1000F */ { align1 1Q F@7 };
LABEL1:
halt(8) JIP: LABEL0 UIP: LABEL0 { align1 1Q };
LABEL0:
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $10.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $11.dst };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N $12.dst };
(+f1.0) sendc(8) nullUD g21UD nullUD 0x08070400 0x00000000 render MsgDesc: RT write SIMD8 CoarseWrite Surface = 0 mlen 4 ex_mlen 0 rlen 0 { align1 1Q $14 };
mov(8) g102<1>F 0x0F /* 0F */ { align1 1Q compacted };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
(+f1.0) sendc(8) nullUD g48UD g102UD 0x06070401 0x00001040 render MsgDesc: RT write SIMD8 CoarseWrite Surface = 1 mlen 3 ex_mlen 1 rlen 0 { align1 1Q $15 };
mov(8) g45<1>F 0x3eaaaaabF /* 0.333333F */ { align1 1Q I@4 };
sync nop(1) null<0,1,0>UB { align1 WE_all 1N F@1 };
(+f1.0) sendc(8) nullUD g3UD g45UD 0x04070402 0x00002080 render MsgDesc: RT write SIMD8 CoarseWrite Surface = 2 mlen 2 ex_mlen 2 rlen 0 { align1 1Q $0 };
mov(8) g123<1>F 0x3f800000F /* 1F */ { align1 1Q compacted };
(+f1.0) sendc(8) nullUD g124UD g123UD 0x06071403 0x00003040 render MsgDesc: RT write SIMD8 LastRT CoarseWrite Surface = 3 mlen 3 ex_mlen 1 rlen 0 { align1 1Q A@1 EOT };
END B70
../../SOURCE/master/src/intel/compiler/brw_compiler.h:2215:44: runtime error: shift exponent 65 is too large for 64-bit type 'long long unsigned int'
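The UBSan report above flags a left shift whose exponent (65) is at least the width of its 64-bit operand, which is undefined behavior in C. The expression at brw_compiler.h:2215 is not included in this log, so the following is only a minimal sketch of the failure class and the usual guard (in the spirit of the BITFIELD64_MASK helper in Mesa's util/macros.h); mask64_buggy and mask64_safe are hypothetical names, not the actual code.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical illustration: shifting a 64-bit value by a count >= 64
 * is undefined behavior in C. Compiling with -fsanitize=undefined and
 * calling the buggy variant with 65 produces the same "shift exponent
 * 65 is too large" diagnostic seen in the log above. */

static uint64_t mask64_buggy(unsigned bits)
{
   return (1ull << bits) - 1;   /* UB when bits >= 64 */
}

static uint64_t mask64_safe(unsigned bits)
{
   /* Special-case out-of-range counts before shifting. */
   return bits >= 64 ? ~0ull : (1ull << bits) - 1;
}

int main(void)
{
   /* 65 matches the exponent in the report; only the guarded call is defined. */
   printf("%016llx\n", (unsigned long long)mask64_safe(65));
   return 0;
}

Whether the real fix belongs at the shift site itself or in the caller that produced the out-of-range count depends on code not shown in this log.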
Fossilize INFO: Total binary size for Shader Module: 47416 (27368 compressed)
Fossilize INFO: Opening archive took 0 ms:
Fossilize INFO: Parsing archive took 10 ms:
Fossilize INFO: Playing back 2 shader modules took 0.001 s (accumulated time)
Fossilize INFO: Shader cache evicted 0 shader modules in total
Fossilize INFO: Playing back 1 graphics pipelines took 0.951 s (accumulated time)
Fossilize INFO: Playing back 0 compute pipelines took 0.000 s (accumulated time)
Fossilize INFO: Playing back 0 raytracing pipelines took 0.000 s (accumulated time)
Fossilize INFO: Threads were idling in total for 0.000 s (accumulated time)
Fossilize INFO: Threads were active in total for 0.954 s (accumulated time)
Fossilize INFO: Total peak memory consumption by parser: 0.197 MB.
Fossilize INFO: Replayed 39 objects in 1022 ms:
Fossilize INFO:   samplers: 2
Fossilize INFO:   descriptor set layouts: 11
Fossilize INFO:   pipeline layouts: 20
Fossilize INFO:   render passes: 3
Fossilize INFO:   compute pipelines: 0
Fossilize INFO:   graphics pipelines: 1
Fossilize INFO:   raytracing pipelines: 0