Reference API Documentation

xtc.itf

xtc.itf.back

xtc.itf.back.backend

xtc.itf.back.backend.Backend

An abstract backend-specific implementation of a Graph.

A Backend is constructed from an input Graph and provides backend-specific implementations of the graph operations. It serves as a bridge between the abstract graph representation and concrete backend implementations (e.g., MLIR, TVM, JIR).

The Backend provides access to associated Scheduler and Compiler instances for applying transformations and generating executable code.

Source code in xtc/itf/back/backend.py, lines 13-56
class Backend(ABC):
    """An abstract implementation of specific Graph implementation.

    A Backend is constructed from an input Graph and provides backend-specific
    implementations of the graph operations. It serves as a bridge between the abstract
    graph representation and concrete backend implementations (e.g., MLIR, TVM, JIR).

    The Backend provides access to associated Scheduler and Compiler instances
    for applying transformations and generating executable code.
    """

    @abstractmethod
    def get_scheduler(self, **kwargs: Any) -> Scheduler:
        """Returns the scheduler associated with this implementation.

        Args:
            kwargs: scheduler configuration

        Returns:
            The scheduler for applying transformations
        """
        ...

    @abstractmethod
    def get_compiler(self, **kwargs: Any) -> Compiler:
        """Returns the compiler associated with this implementation.

        Args:
            kwargs: compiler configuration

        Returns:
            The compiler for generating executable code
        """
        ...

    @property
    @abstractmethod
    def graph(self) -> Graph:
        """Returns the graph being implemented.

        Returns:
            The source graph for this implementation
        """
        ...
xtc.itf.back.backend.Backend.graph abstractmethod property

Returns the graph being implemented.

Returns:

Type Description
Graph

The source graph for this implementation

xtc.itf.back.backend.Backend.get_compiler(**kwargs) abstractmethod

Returns the compiler associated with this implementation.

Parameters:

Name Type Description Default
kwargs Any

compiler configuration

{}

Returns:

Type Description
Compiler

The compiler for generating executable code

Source code in xtc/itf/back/backend.py, lines 36-46
@abstractmethod
def get_compiler(self, **kwargs: Any) -> Compiler:
    """Returns the compiler associated with this implementation.

    Args:
        kwargs: compiler configuration

    Returns:
        The compiler for generating executable code
    """
    ...
xtc.itf.back.backend.Backend.get_scheduler(**kwargs) abstractmethod

Returns the scheduler associated with this implementation.

Parameters:

Name Type Description Default
kwargs Any

scheduler configuration

{}

Returns:

Type Description
Scheduler

The scheduler for applying transformations

Source code in xtc/itf/back/backend.py, lines 24-34
@abstractmethod
def get_scheduler(self, **kwargs: Any) -> Scheduler:
    """Returns the scheduler associated with this implementation.

    Args:
        kwargs: scheduler configuration

    Returns:
        The scheduler for applying transformations
    """
    ...
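
To make the contract concrete, here is a minimal sketch of a hypothetical concrete backend. `ToyBackend` and the stub `Graph`/`Scheduler`/`Compiler` classes are illustrative stand-ins, not part of the xtc API; a real backend would return its MLIR-, TVM-, or JIR-specific scheduler and compiler.

```python
from abc import ABC, abstractmethod
from typing import Any

# Hypothetical stand-ins for the real xtc interfaces, so the sketch is self-contained.
class Graph: ...
class Scheduler: ...
class Compiler: ...

class Backend(ABC):
    """Abstract bridge between a Graph and a concrete backend implementation."""

    @abstractmethod
    def get_scheduler(self, **kwargs: Any) -> Scheduler: ...

    @abstractmethod
    def get_compiler(self, **kwargs: Any) -> Compiler: ...

    @property
    @abstractmethod
    def graph(self) -> Graph: ...

class ToyBackend(Backend):
    """A trivial backend that remembers its graph and hands out fresh helpers."""

    def __init__(self, graph: Graph) -> None:
        self._graph = graph

    def get_scheduler(self, **kwargs: Any) -> Scheduler:
        return Scheduler()

    def get_compiler(self, **kwargs: Any) -> Compiler:
        return Compiler()

    @property
    def graph(self) -> Graph:
        return self._graph

g = Graph()
backend = ToyBackend(g)
assert backend.graph is g
```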

xtc.itf.comp

xtc.itf.comp.compiler

xtc.itf.comp.compiler.Compiler

An abstract implementation of a compiler for a given backend and schedule.

A Compiler takes a backend-specific implementation and schedule and generates an executable Module. It handles the final stage of converting the optimized intermediate representation into executable code for the target platform.

Source code in xtc/itf/comp/compiler.py, lines 11-39
class Compiler(ABC):
    """An abstract implementation of a compiler for a given backend and schedule.

    A Compiler takes a backend-specific implementation and schedule and generates
    an executable Module. It handles the final stage of converting the optimized
    intermediate representation into executable code for the target platform.
    """

    @abstractmethod
    def compile(self, schedule: Schedule) -> Module:
        """Compiles the implementation according to the given schedule.

        Args:
            schedule: The schedule specifying transformations and optimizations

        Returns:
            The compiled executable module
        """
        ...

    @property
    @abstractmethod
    def backend(self) -> "xtc.itf.back.Backend":
        """Returns the implementer associated with this compiler.

        Returns:
            The backend this compiler generates code for
        """
        ...
xtc.itf.comp.compiler.Compiler.backend abstractmethod property

Returns the backend associated with this compiler.

Returns:

Type Description
Backend

The backend this compiler generates code for

xtc.itf.comp.compiler.Compiler.compile(schedule) abstractmethod

Compiles the implementation according to the given schedule.

Parameters:

Name Type Description Default
schedule Schedule

The schedule specifying transformations and optimizations

required

Returns:

Type Description
Module

The compiled executable module

Source code in xtc/itf/comp/compiler.py, lines 19-29
@abstractmethod
def compile(self, schedule: Schedule) -> Module:
    """Compiles the implementation according to the given schedule.

    Args:
        schedule: The schedule specifying transformations and optimizations

    Returns:
        The compiled executable module
    """
    ...
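
A hypothetical compiler implementation can illustrate the `compile` flow. The `ToyCompiler`, `Schedule`, and `Module` stand-ins below are illustrative, not the real xtc types; a real compiler would lower the scheduled intermediate representation to target code.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Hypothetical stand-ins for the real xtc Schedule/Module/Backend types.
@dataclass
class Schedule:
    transforms: list[str]

@dataclass
class Module:
    name: str

class Backend: ...

class Compiler(ABC):
    @abstractmethod
    def compile(self, schedule: Schedule) -> Module: ...

    @property
    @abstractmethod
    def backend(self) -> Backend: ...

class ToyCompiler(Compiler):
    """Pretends to lower a schedule into an executable module."""

    def __init__(self, backend: Backend) -> None:
        self._backend = backend

    def compile(self, schedule: Schedule) -> Module:
        # A real compiler would apply schedule.transforms before code generation.
        return Module(name=f"module_with_{len(schedule.transforms)}_transforms")

    @property
    def backend(self) -> Backend:
        return self._backend

compiler = ToyCompiler(Backend())
mod = compiler.compile(Schedule(transforms=["tile", "vectorize"]))
assert mod.name == "module_with_2_transforms"
```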

xtc.itf.comp.module

xtc.itf.comp.module.Module

An abstract representation of an executable module.

A Module is the final output of the compilation process, representing compiled code that can be executed. It is produced by a Compiler after applying transformations specified by a Schedule to a Backend's representation of a Graph.

Modules can be exported as shared objects for direct execution and evaluation, or for usage in larger applications. They can be executed and evaluated using Executor and Evaluator classes to measure performance and validate correctness.

Source code in xtc/itf/comp/module.py, lines 11-106
class Module(ABC):
    """An abstract representation of an executable module.

    A Module is the final output of the compilation process, representing
    compiled code that can be executed. It is produced by a Compiler after
    applying transformations specified by a Schedule to a Backend's
    representation of a Graph.

    Modules can be exported as shared objects for direct execution and evaluation,
    or for usage in larger applications. They can be executed and evaluated using
    Executor and Evaluator classes to measure performance and validate correctness.
    """

    @property
    @abstractmethod
    def file_type(self) -> str:
        """The module type, can be target dependent.

        Available types are: "executable", "shlib"

        Returns:
            the type of the module
        """
        ...

    @property
    @abstractmethod
    def name(self) -> str:
        """The module name.

        The module name may be used to identify a module to
        execute.

        Returns:
            the name of the module
        """
        ...

    @property
    @abstractmethod
    def payload_name(self) -> str:
        """The payload name for the module.

        The name of the payload to execute for the module.
        Generally the entry point inside the module.

        Returns:
            the name of the executable payload inside the module
        """
        ...

    @property
    @abstractmethod
    def file_name(self) -> str:
        """The storage file name of the module.

        The file name extension should match the module file type.

        Returns:
            the path to the generated module file
        """
        ...

    @abstractmethod
    def export(self) -> None:
        """Exports the module to a format suitable for execution.

        This method handles the final step of making the compiled code
        available for execution, typically by writing it to a shared
        object file or similar executable format.
        """
        ...

    @abstractmethod
    def get_evaluator(self, **kwargs: Any) -> Evaluator:
        """Returns a suitable evaluator for the module.

        Args:
            kwargs: evaluator configuration

        Returns:
            The evaluator for executing the module
        """
        ...

    @abstractmethod
    def get_executor(self, **kwargs: Any) -> Executor:
        """Returns a suitable executor for the module.

        Args:
            kwargs: executor configuration

        Returns:
            The executor for executing the module
        """
        ...
xtc.itf.comp.module.Module.file_name abstractmethod property

The storage file name of the module.

The file name extension should match the module file type.

Returns:

Type Description
str

the path to the generated module file

xtc.itf.comp.module.Module.file_type abstractmethod property

The module type, which can be target dependent.

Available types are: "executable", "shlib"

Returns:

Type Description
str

the type of the module

xtc.itf.comp.module.Module.name abstractmethod property

The module name.

The module name may be used to identify a module to execute.

Returns:

Type Description
str

the name of the module

xtc.itf.comp.module.Module.payload_name abstractmethod property

The payload name for the module.

The name of the payload to execute for the module. Generally the entry point inside the module.

Returns:

Type Description
str

the name of the executable payload inside the module

xtc.itf.comp.module.Module.export() abstractmethod

Exports the module to a format suitable for execution.

This method handles the final step of making the compiled code available for execution, typically by writing it to a shared object file or similar executable format.

Source code in xtc/itf/comp/module.py, lines 74-82
@abstractmethod
def export(self) -> None:
    """Exports the module to a format suitable for execution.

    This method handles the final step of making the compiled code
    available for execution, typically by writing it to a shared
    object file or similar executable format.
    """
    ...
xtc.itf.comp.module.Module.get_evaluator(**kwargs) abstractmethod

Returns a suitable evaluator for the module.

Parameters:

Name Type Description Default
kwargs Any

evaluator configuration

{}

Returns:

Type Description
Evaluator

The evaluator for executing the module

Source code in xtc/itf/comp/module.py, lines 84-94
@abstractmethod
def get_evaluator(self, **kwargs: Any) -> Evaluator:
    """Returns a suitable evaluator for the module.

    Args:
        kwargs: evaluator configuration

    Returns:
        The evaluator for executing the module
    """
    ...
xtc.itf.comp.module.Module.get_executor(**kwargs) abstractmethod

Returns a suitable executor for the module.

Parameters:

Name Type Description Default
kwargs Any

executor configuration

{}

Returns:

Type Description
Executor

The executor for executing the module

Source code in xtc/itf/comp/module.py, lines 96-106
@abstractmethod
def get_executor(self, **kwargs: Any) -> Executor:
    """Returns a suitable executor for the module.

    Args:
        kwargs: executor configuration

    Returns:
        The executor for executing the module
    """
    ...
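
The naming and export contract can be sketched with a hypothetical module. `ToyModule` is illustrative only (the `get_evaluator`/`get_executor` accessors are elided for brevity), and the exported file is a placeholder, not a real shared object.

```python
from abc import ABC, abstractmethod
import os
import tempfile

class Module(ABC):
    @property
    @abstractmethod
    def file_type(self) -> str: ...

    @property
    @abstractmethod
    def name(self) -> str: ...

    @property
    @abstractmethod
    def payload_name(self) -> str: ...

    @property
    @abstractmethod
    def file_name(self) -> str: ...

    @abstractmethod
    def export(self) -> None: ...

class ToyModule(Module):
    """Writes a placeholder 'shared object' file on export."""

    def __init__(self, name: str, directory: str) -> None:
        self._name = name
        self._dir = directory

    @property
    def file_type(self) -> str:
        return "shlib"

    @property
    def name(self) -> str:
        return self._name

    @property
    def payload_name(self) -> str:
        # Entry point symbol inside the module (illustrative convention).
        return f"{self._name}_main"

    @property
    def file_name(self) -> str:
        # Extension matches the "shlib" file type.
        return os.path.join(self._dir, f"{self._name}.so")

    def export(self) -> None:
        with open(self.file_name, "wb") as f:
            f.write(b"placeholder payload")  # not a real shared object

with tempfile.TemporaryDirectory() as d:
    m = ToyModule("matmul", d)
    m.export()
    assert os.path.exists(m.file_name)
```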

xtc.itf.data

xtc.itf.data.tensor

xtc.itf.data.tensor.ConstantTensorType
Source code in xtc/itf/data/tensor.py, lines 57-78
class ConstantTensorType(TensorType):
    @property
    @abstractmethod
    @override
    def shape(self) -> ConstantShapeType:
        """Returns the tensor's constant shape as a tuple of dimension sizes.

        Returns:
            The size of each dimension in the tensor
        """
        ...

    @property
    @abstractmethod
    @override
    def dtype(self) -> ConstantDataType:
        """Returns the tensor's constant data type.

        Returns:
            The underlying data type of the tensor elements
        """
        ...
xtc.itf.data.tensor.ConstantTensorType.dtype abstractmethod property

Returns the tensor's constant data type.

Returns:

Type Description
ConstantDataType

The underlying data type of the tensor elements

xtc.itf.data.tensor.ConstantTensorType.shape abstractmethod property

Returns the tensor's constant shape as a tuple of dimension sizes.

Returns:

Type Description
ConstantShapeType

The size of each dimension in the tensor

xtc.itf.data.tensor.Tensor

An abstract representation of a multidimensional object.

A Tensor is a fundamental input/output type in the dataflow graph, representing multidimensional data with associated type information. Tensors are used as inputs and outputs for Node operations in the Graph, and their dimensions and types can be used for inference throughout the compilation process.

Source code in xtc/itf/data/tensor.py, lines 81-118
class Tensor(ABC):
    """An abstract representation of a multidimensional object.

    A Tensor is a fundamental input/output type in the dataflow graph,
    representing multidimensional data with associated type information.
    Tensors are used as inputs and outputs for Node operations in the Graph,
    and their dimensions and types can be used for inference throughout
    the compilation process.
    """

    @property
    @abstractmethod
    def type(self) -> TensorType:
        """Returns the tensor's type information.

        Returns:
            The type descriptor containing shape and dtype information
        """
        ...

    @property
    @abstractmethod
    def data(self) -> Any:
        """Returns the tensor's linearized data.

        Returns:
            any: The tensor's data
        """
        ...

    @abstractmethod
    def numpy(self) -> numpy.typing.NDArray:
        """Convert the tensor to a numpy array.

        Returns:
            The tensor's data as a numpy array
        """
        ...
xtc.itf.data.tensor.Tensor.data abstractmethod property

Returns the tensor's linearized data.

Returns:

Name Type Description
any Any

The tensor's data

xtc.itf.data.tensor.Tensor.type abstractmethod property

Returns the tensor's type information.

Returns:

Type Description
TensorType

The type descriptor containing shape and dtype information

xtc.itf.data.tensor.Tensor.numpy() abstractmethod

Convert the tensor to a numpy array.

Returns:

Type Description
NDArray

The tensor's data as a numpy array

Source code in xtc/itf/data/tensor.py, lines 111-118
@abstractmethod
def numpy(self) -> numpy.typing.NDArray:
    """Convert the tensor to a numpy array.

    Returns:
        The tensor's data as a numpy array
    """
    ...
xtc.itf.data.tensor.TensorType

An abstract representation of a tensor's type information.

TensorType defines the shape and data type characteristics of a tensor, providing the necessary information for type inference and validation during graph operations. This includes the tensor's dimensionality, size along each dimension, and the underlying data type.

Source code in xtc/itf/data/tensor.py, lines 17-54
class TensorType(ABC):
    """An abstract representation of a tensor's type information.

    TensorType defines the shape and data type characteristics of a tensor,
    providing the necessary information for type inference and validation
    during graph operations. This includes the tensor's dimensionality,
    size along each dimension, and the underlying data type.
    """

    @property
    @abstractmethod
    def shape(self) -> ShapeType:
        """Returns the tensor's shape as a tuple of dimension sizes.

        Returns:
            The size of each dimension in the tensor
        """
        ...

    @property
    @abstractmethod
    def dtype(self) -> DataType:
        """Returns the tensor's data type.

        Returns:
            The underlying data type of the tensor elements
        """
        ...

    @property
    @abstractmethod
    def ndim(self) -> int:
        """Returns the number of dimensions in the tensor.

        Returns:
            The tensor's dimensionality
        """
        ...
xtc.itf.data.tensor.TensorType.dtype abstractmethod property

Returns the tensor's data type.

Returns:

Type Description
DataType

The underlying data type of the tensor elements

xtc.itf.data.tensor.TensorType.ndim abstractmethod property

Returns the number of dimensions in the tensor.

Returns:

Type Description
int

The tensor's dimensionality

xtc.itf.data.tensor.TensorType.shape abstractmethod property

Returns the tensor's shape as a tuple of dimension sizes.

Returns:

Type Description
ShapeType

The size of each dimension in the tensor
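A minimal concrete `TensorType` shows how shape, dtype, and ndim fit together. `SimpleTensorType` is a hypothetical stand-in, with `ShapeType` and `DataType` simplified to a tuple of ints and a string; the real xtc type aliases may differ.

```python
from abc import ABC, abstractmethod

# Simplified stand-ins for the real ShapeType/DataType aliases.
ShapeType = tuple[int, ...]
DataType = str

class TensorType(ABC):
    @property
    @abstractmethod
    def shape(self) -> ShapeType: ...

    @property
    @abstractmethod
    def dtype(self) -> DataType: ...

    @property
    @abstractmethod
    def ndim(self) -> int: ...

class SimpleTensorType(TensorType):
    """An immutable (shape, dtype) pair implementing the abstract interface."""

    def __init__(self, shape: ShapeType, dtype: DataType) -> None:
        self._shape = shape
        self._dtype = dtype

    @property
    def shape(self) -> ShapeType:
        return self._shape

    @property
    def dtype(self) -> DataType:
        return self._dtype

    @property
    def ndim(self) -> int:
        # Dimensionality falls out of the shape tuple length.
        return len(self._shape)

t = SimpleTensorType((4, 8, 16), "float32")
assert t.ndim == 3
assert t.shape == (4, 8, 16)
```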

xtc.itf.exec

xtc.itf.exec.evaluator

xtc.itf.exec.evaluator.Evaluator

An abstract implementation of a Module performance evaluator.

An Evaluator measures and validates the performance of compiled Modules. It works alongside Executors to provide performance metrics and correctness validation of the compiled code. This is crucial for assessing the effectiveness of different compilation strategies and optimizations.

Evaluators can measure metrics like execution time, throughput, and validate output correctness against reference implementations.

Source code in xtc/itf/exec/evaluator.py, lines 10-42
class Evaluator(ABC):
    """An abstract implementation of a Module performance evaluator.

    An Evaluator measures and validates the performance of compiled Modules.
    It works alongside Executors to provide performance metrics and correctness
    validation of the compiled code. This is crucial for assessing the
    effectiveness of different compilation strategies and optimizations.

    Evaluators can measure metrics like execution time, throughput, and
    validate output correctness against reference implementations.
    """

    @abstractmethod
    def evaluate(self) -> tuple[list[float], int, str]:
        """Evaluates the performance of the associated Module.

        Executes the Module multiple times to gather performance metrics,
        potentially validating correctness against reference implementations.

        Returns:
            List of performance measurements (typically execution times in seconds), an error code, and an error message
        """
        ...

    @property
    @abstractmethod
    def module(self) -> "xtc.itf.comp.Module":
        """Returns the Module being evaluated.

        Returns:
            The compiled Module this evaluator is measuring
        """
        ...
xtc.itf.exec.evaluator.Evaluator.module abstractmethod property

Returns the Module being evaluated.

Returns:

Type Description
Module

The compiled Module this evaluator is measuring

xtc.itf.exec.evaluator.Evaluator.evaluate() abstractmethod

Evaluates the performance of the associated Module.

Executes the Module multiple times to gather performance metrics, potentially validating correctness against reference implementations.

Returns:

Type Description
tuple[list[float], int, str]

List of performance measurements (typically execution times in seconds), an error code, and an error message

Source code in xtc/itf/exec/evaluator.py, lines 22-32
@abstractmethod
def evaluate(self) -> tuple[list[float], int, str]:
    """Evaluates the performance of the associated Module.

    Executes the Module multiple times to gather performance metrics,
    potentially validating correctness against reference implementations.

    Returns:
        List of performance measurements (typically execution times in seconds), an error code, and an error message
    """
    ...
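
The `(measurements, error code, error message)` return shape can be illustrated with a hypothetical evaluator that times a callable payload. `TimingEvaluator` is illustrative only; the `module` property is elided and a plain callable stands in for module execution.

```python
from abc import ABC, abstractmethod
import time
from typing import Callable

class Evaluator(ABC):
    @abstractmethod
    def evaluate(self) -> tuple[list[float], int, str]: ...

class TimingEvaluator(Evaluator):
    """Times a callable payload over several repetitions."""

    def __init__(self, payload: Callable[[], object], repeats: int = 5) -> None:
        self._payload = payload
        self._repeats = repeats

    def evaluate(self) -> tuple[list[float], int, str]:
        times: list[float] = []
        try:
            for _ in range(self._repeats):
                start = time.perf_counter()
                self._payload()
                times.append(time.perf_counter() - start)
        except Exception as exc:
            # Non-zero error code plus a message on failure.
            return times, 1, str(exc)
        return times, 0, ""

times, code, msg = TimingEvaluator(lambda: sum(range(1000)), repeats=3).evaluate()
assert code == 0 and msg == ""
assert len(times) == 3
```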

xtc.itf.exec.executor

xtc.itf.exec.executor.Executor

An abstract implementation of a Module executor.

An Executor is responsible for running compiled Modules and validating their execution. It provides the runtime environment for executing the compiled code, typically in the form of shared objects generated by the compilation process.

Source code in xtc/itf/exec/executor.py, lines 10-38
class Executor(ABC):
    """An abstract implementation of a Module executor.

    An Executor is responsible for running compiled Modules and validating their
    execution. It provides the runtime environment for executing the compiled code,
    typically in the form of shared objects generated by the compilation process.
    """

    @abstractmethod
    def execute(self) -> int:
        """Executes the associated Module.

        Runs the compiled code contained in the Module, providing the necessary
        runtime environment and handling any required setup/teardown.

        Returns:
            Status code indicating execution success (0) or failure (non-zero)
        """
        ...

    @property
    @abstractmethod
    def module(self) -> "xtc.itf.comp.Module":
        """Returns the Module being executed.

        Returns:
            The compiled Module this executor is responsible for running
        """
        ...
xtc.itf.exec.executor.Executor.module abstractmethod property

Returns the Module being executed.

Returns:

Type Description
Module

The compiled Module this executor is responsible for running

xtc.itf.exec.executor.Executor.execute() abstractmethod

Executes the associated Module.

Runs the compiled code contained in the Module, providing the necessary runtime environment and handling any required setup/teardown.

Returns:

Type Description
int

Status code indicating execution success (0) or failure (non-zero)

Source code in xtc/itf/exec/executor.py, lines 18-28
@abstractmethod
def execute(self) -> int:
    """Executes the associated Module.

    Runs the compiled code contained in the Module, providing the necessary
    runtime environment and handling any required setup/teardown.

    Returns:
        Status code indicating execution success (0) or failure (non-zero)
    """
    ...
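
A hypothetical executor can demonstrate the status-code contract by running a subprocess, roughly how a compiled shared object or executable might be launched. `SubprocessExecutor` is illustrative, not the real xtc runner, and the `module` property is elided.

```python
from abc import ABC, abstractmethod
import subprocess
import sys

class Executor(ABC):
    @abstractmethod
    def execute(self) -> int: ...

class SubprocessExecutor(Executor):
    """Runs a command and reports its exit status: 0 on success, non-zero on failure."""

    def __init__(self, argv: list[str]) -> None:
        self._argv = argv

    def execute(self) -> int:
        return subprocess.run(self._argv).returncode

ok = SubprocessExecutor([sys.executable, "-c", "print('payload ran')"])
failing = SubprocessExecutor([sys.executable, "-c", "import sys; sys.exit(3)"])
assert ok.execute() == 0
assert failing.execute() == 3
```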

xtc.itf.graph

xtc.itf.graph.graph

xtc.itf.graph.graph.Graph

An abstract representation of a dataflow graph over Tensor types.

A Graph is a directed acyclic graph (DAG) over Node objects with input Tensors and output Tensors. Given the input Tensors' dimensions and types, the dimensions and types of all node inputs, node outputs, and graph output Tensors can be inferred.

A Graph can be evaluated by interpretation or compiled through various backends (mlir, tvm, jir) using corresponding Backend, Scheduler, and Compiler classes.

Nodes in the graph are keyed by their uid which is globally unique.

Source code in xtc/itf/graph/graph.py, lines 11-141
class Graph(ABC):
    """An abstract representation of a dataflow graph over Tensor types.

    A Graph is a directed acyclic graph (DAG) over Node objects with input Tensors
    and output Tensors. Given the input Tensors' dimensions and types, the
    dimensions and types of all node inputs, node outputs, and graph output
    Tensors can be inferred.

    A Graph can be evaluated by interpretation or compiled through various backends
    (mlir, tvm, jir) using corresponding Backend, Scheduler, and Compiler classes.

    Nodes in the graph are keyed by their uid which is globally unique.
    """

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the name of this graph. May be non-unique or empty.

        Returns:
            The graph's name
        """
        ...

    @property
    @abstractmethod
    def nodes(self) -> Mapping[str, Node]:
        """Returns a dictionary of all nodes in the graph, keyed by node uid.

        Returns:
            Dictionary mapping node uids to Node objects
        """
        ...

    @property
    @abstractmethod
    def inputs(self) -> Sequence[str]:
        """Returns the list of input tensor uids for this graph.

        Returns:
            List of input tensor uids
        """
        ...

    @property
    @abstractmethod
    def outputs(self) -> Sequence[str]:
        """Returns the list of output tensor uids for the graph.

        Returns:
            List of output tensor uids
        """
        ...

    @property
    @abstractmethod
    def inputs_types(self) -> Sequence[TensorType] | None:
        """Returns the list of inputs tensor types

        Returns None when no input tensor type was given
        and forward_types was not called.

        Returns:
            List of input tensor types or None if undef
        """
        ...

    @property
    def outputs_types(self) -> Sequence[TensorType] | None:
        """Returns the list of outputs tensor types

        Returns None when no input tensor type was given
        and forward_types was not called.

        Returns:
            List of output tensor types or None if undef
        """
        ...

    @property
    @abstractmethod
    def inputs_nodes(self) -> Sequence[Node]:
        """Returns the list of inputs nodes.

        The list is such that:
        - nodes appear in the same order as outputs
        - nodes appear only once in the list, hence
          the size may differe from the inputs size

        Returns:
            List of input nodes
        """
        ...

    @property
    @abstractmethod
    def outputs_nodes(self) -> Sequence[Node]:
        """Returns the list of output nodes.

        The list is such that:
        - nodes appear in the same order as outputs
        - nodes appear only once in the list, hence
          the size may differ from the outputs size

        Returns:
            List of output nodes
        """
        ...

    @abstractmethod
    def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
        """Infers output tensor types from input tensor types.

        Args:
            inputs_types: List of input tensor types

        Returns:
            List of inferred output tensor types
        """
        ...

    @abstractmethod
    def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
        """Evaluate the graph with input tensors to produce output tensors.

        Args:
            inputs: List of input tensors

        Returns:
            List of output tensors
        """
        ...
xtc.itf.graph.graph.Graph.inputs abstractmethod property

Returns the list of input tensor uids for this graph.

Returns:

Type Description
Sequence[str]

List of input tensor uids

xtc.itf.graph.graph.Graph.inputs_nodes abstractmethod property

Returns the list of input nodes.

The list is such that:

- nodes appear in the same order as the inputs
- nodes appear only once in the list, hence the size may differ from the inputs size

Returns:

Type Description
Sequence[Node]

List of input nodes

xtc.itf.graph.graph.Graph.inputs_types abstractmethod property

Returns the list of input tensor types.

Returns None when no input tensor type was given and forward_types was not called.

Returns:

Type Description
Sequence[TensorType] | None

List of input tensor types or None if undefined

xtc.itf.graph.graph.Graph.name abstractmethod property

Returns the name of this graph. May be non-unique or empty.

Returns:

Type Description
str

The graph's name

xtc.itf.graph.graph.Graph.nodes abstractmethod property

Returns a dictionary of all nodes in the graph, keyed by node uid.

Returns:

Type Description
Mapping[str, Node]

Dictionary mapping node uids to Node objects

xtc.itf.graph.graph.Graph.outputs abstractmethod property

Returns the list of output tensor uids for the graph.

Returns:

Type Description
Sequence[str]

List of output tensor uids

xtc.itf.graph.graph.Graph.outputs_nodes abstractmethod property

Returns the list of output nodes.

The list is such that:

- nodes appear in the same order as the outputs
- nodes appear only once in the list, hence the size may differ from the outputs size

Returns:

Type Description
Sequence[Node]

List of output nodes

xtc.itf.graph.graph.Graph.outputs_types property

Returns the list of output tensor types.

Returns None when no input tensor type was given and forward_types was not called.

Returns:

Type Description
Sequence[TensorType] | None

List of output tensor types or None if undefined

xtc.itf.graph.graph.Graph.forward(inputs) abstractmethod

Evaluate the graph with input tensors to produce output tensors.

Parameters:

Name Type Description Default
inputs Sequence[Tensor]

List of input tensors

required

Returns:

Type Description
Sequence[Tensor]

List of output tensors

Source code in xtc/itf/graph/graph.py, lines 131-141
@abstractmethod
def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
    """Evaluate the graph with input tensors to produce output tensors.

    Args:
        inputs: List of input tensors

    Returns:
        List of output tensors
    """
    ...
xtc.itf.graph.graph.Graph.forward_types(inputs_types) abstractmethod

Infers output tensor types from input tensor types.

Parameters:

Name Type Description Default
inputs_types Sequence[TensorType]

List of input tensor types

required

Returns:

Type Description
Sequence[TensorType]

List of inferred output tensor types

Source code in xtc/itf/graph/graph.py, lines 119-129
@abstractmethod
def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
    """Infers output tensor types from input tensor types.

    Args:
        inputs_types: List of input tensor types

    Returns:
        List of inferred output tensor types
    """
    ...
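
The relationship between `forward_types` (type inference) and `forward` (evaluation) can be sketched with a hypothetical one-node graph. `AddGraph` is illustrative only, with tensors simplified to flat lists and tensor types to element counts; the uid-keyed node machinery is elided.

```python
from abc import ABC, abstractmethod
from collections.abc import Sequence

# Simplified stand-ins: a tensor is a flat list, a type is its element count.
Tensor = list[float]
TensorType = int

class Graph(ABC):
    @abstractmethod
    def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]: ...

    @abstractmethod
    def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]: ...

class AddGraph(Graph):
    """A single-node graph computing elementwise a + b."""

    def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
        a, b = inputs_types
        assert a == b, "operands must have matching types"
        # The output type is inferred from the input types.
        return [a]

    def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
        a, b = inputs
        return [[x + y for x, y in zip(a, b)]]

g = AddGraph()
assert g.forward_types([3, 3]) == [3]
assert g.forward([[1.0, 2.0, 3.0], [10.0, 20.0, 30.0]]) == [[11.0, 22.0, 33.0]]
```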

xtc.itf.graph.node

xtc.itf.graph.node.Node

An abstract representation of a node in a dataflow graph.

A Node represents a pure operation on input Tensor objects, resulting in output Tensor objects. Each node has a unique global uid, a set of input and output tensors, and an associated Operator that defines its semantic behavior.

Source code in xtc/itf/graph/node.py, lines 12-164
class Node(ABC):
    """An abstract representation of a node in a dataflow graph.

    A Node represents a pure operation on input Tensor objects, resulting in output
    Tensor objects. Each node has a unique global uid, a set of input
    and output tensors, and an associated Operator that defines its semantic behavior.
    """

    @property
    @abstractmethod
    def uid(self) -> str:
        """Returns the globally unique id over all created nodes.

        Returns:
            The node's globally unique id
        """
        ...

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the name of this node. Can be non-unique or empty.

        Returns:
            The node's name
        """
        ...

    @property
    @abstractmethod
    def inputs(self) -> list[str]:
        """Returns the list of input tensor uids for this node.

        As of now nodes can only have one output, hence a
        tensor uid is the same as the producing node uid.

        Returns:
            List of input tensor uids
        """
        ...

    @property
    @abstractmethod
    def outputs(self) -> list[str]:
        """Returns the list of output tensor uids for this node.

        As of now nodes can only have one output, hence the
        outputs list contains only the node uid itself.

        Returns:
            List of output tensor uids
        """
        ...

    @property
    @abstractmethod
    def inputs_types(self) -> Sequence[TensorType] | None:
        """Returns the list of input tensor types.

        Returns None when no input tensor type was given
        and forward_types was not called.

        Returns:
            List of input tensor types or None if undefined
        """
        ...

    @property
    def outputs_types(self) -> Sequence[TensorType] | None:
        """Returns the list of output tensor types.

        Returns None when no input tensor type was given
        and forward_types was not called.

        Returns:
            List of output tensor types or None if undefined
        """
        ...

    @property
    @abstractmethod
    def preds(self) -> Sequence[str]:
        """Returns the list of predecessor node uids.

        The list is such that:
        - nodes appear in the same order as the inputs
        - nodes appear only once, hence the number of preds
          may not be equal to the number of inputs

        Returns:
            List of predecessor node uids
        """
        ...

    @property
    @abstractmethod
    def preds_nodes(self) -> Sequence["Node"]:
        """Returns the list of predecessor nodes.

        The list is the same as for preds, but contains the nodes
        instead of the nodes' uids.

        Returns:
            List of predecessor nodes
        """
        ...

    @property
    @abstractmethod
    def operator(self) -> Operator:
        """Returns the operator that defines this node's behavior.

        Returns:
            The algebraic operator associated with this node
        """
        ...

    @property
    @abstractmethod
    def operation(self) -> Operation:
        """Returns the operation that defines this node's behavior.

        The operation specifies the operator behavior and the instantiated
        operator dimensions.

        Returns:
            The algebraic operation associated with this node
        """
        ...

    @abstractmethod
    def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
        """Infers output tensor types from input tensor types.

        Args:
            inputs_types: List of input tensor types

        Returns:
            List of inferred output tensor types
        """
        ...

    @abstractmethod
    def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
        """Evaluate the node with input tensors to produce output tensors.

        Args:
            inputs: List of input tensors

        Returns:
            List of output tensors
        """
        ...
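The preds contract above (input order preserved, duplicates removed) can be sketched as a small helper. preds_from_inputs is a hypothetical illustration, not part of the xtc API:

```python
# Sketch of the preds contract: predecessors follow the input order but
# appear only once, so len(preds) may be smaller than len(inputs).
def preds_from_inputs(inputs: list) -> list:
    seen = set()
    preds = []
    for uid in inputs:
        if uid not in seen:
            seen.add(uid)
            preds.append(uid)
    return preds

# A node reading tensor n0 twice still has a single predecessor n0.
preds = preds_from_inputs(["n0", "n1", "n0"])
```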
xtc.itf.graph.node.Node.inputs abstractmethod property

Returns the list of input tensor uids for this node.

As of now nodes can only have one output, hence a tensor uid is the same as the producing node uid.

Returns:

Type Description
list[str]

List of input tensor uids

xtc.itf.graph.node.Node.inputs_types abstractmethod property

Returns the list of input tensor types.

Returns None when no input tensor type was given and forward_types was not called.

Returns:

Type Description
Sequence[TensorType] | None

List of input tensor types or None if undefined

xtc.itf.graph.node.Node.name abstractmethod property

Returns the name of this node. Can be non-unique or empty.

Returns:

Type Description
str

The node's name

xtc.itf.graph.node.Node.operation abstractmethod property

Returns the operation that defines this node's behavior.

The operation specifies the operator behavior and the instantiated operator dimensions.

Returns:

Type Description
Operation

The algebraic operation associated with this node

xtc.itf.graph.node.Node.operator abstractmethod property

Returns the operator that defines this node's behavior.

Returns:

Type Description
Operator

The algebraic operator associated with this node

xtc.itf.graph.node.Node.outputs abstractmethod property

Returns the list of output tensor uids for this node.

As of now nodes can only have one output, hence the outputs list contains only the node uid itself.

Returns:

Type Description
list[str]

List of output tensor uids

xtc.itf.graph.node.Node.outputs_types property

Returns the list of output tensor types.

Returns None when no input tensor type was given and forward_types was not called.

Returns:

Type Description
Sequence[TensorType] | None

List of output tensor types or None if undefined

xtc.itf.graph.node.Node.preds abstractmethod property

Returns the list of predecessor node uids.

The list is such that:
- nodes appear in the same order as the inputs
- nodes appear only once, hence the number of preds may not be equal to the number of inputs

Returns:

Type Description
Sequence[str]

List of predecessor node uids

xtc.itf.graph.node.Node.preds_nodes abstractmethod property

Returns the list of predecessor nodes.

The list is the same as for preds, but contains the nodes instead of the nodes' uids.

Returns:

Type Description
Sequence[Node]

List of predecessor nodes

xtc.itf.graph.node.Node.uid abstractmethod property

Returns the globally unique id over all created nodes.

Returns:

Type Description
str

The node's globally unique id

xtc.itf.graph.node.Node.forward(inputs) abstractmethod

Evaluate the node with input tensors to produce output tensors.

Parameters:

Name Type Description Default
inputs Sequence[Tensor]

List of input tensors

required

Returns:

Type Description
Sequence[Tensor]

List of output tensors

Source code in xtc/itf/graph/node.py
@abstractmethod
def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
    """Evaluate the node with input tensors to produce output tensors.

    Args:
        inputs: List of input tensors

    Returns:
        List of output tensors
    """
    ...
xtc.itf.graph.node.Node.forward_types(inputs_types) abstractmethod

Infers output tensor types from input tensor types.

Parameters:

Name Type Description Default
inputs_types Sequence[TensorType]

List of input tensor types

required

Returns:

Type Description
Sequence[TensorType]

List of inferred output tensor types

Source code in xtc/itf/graph/node.py
@abstractmethod
def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
    """Infers output tensor types from input tensor types.

    Args:
        inputs_types: List of input tensor types

    Returns:
        List of inferred output tensor types
    """
    ...

xtc.itf.graph.operation

xtc.itf.graph.operation.Operation

An abstract representation of an Operation, itself a specialized Operator.

An Operation represents the computation performed by a Node, i.e. an Operator specification and instantiated dimensions and types for the inputs and outputs.

The Operation computation is currently internal, though the input and output access functions are available through the accesses_maps property.

Source code in xtc/itf/graph/operation.py
class Operation(ABC):
    """An abstract representation of an Operation, itself a specialized Operator.

    An Operation represents the computation performed by a Node, i.e. an Operator
    specification and instantiated dimensions and types for the inputs and
    outputs.

    The Operation computation is currently internal,
    though the input and output access functions are available through the
    accesses_maps property.
    """

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the unique name of this operation's operator.

        Returns:
            The operation's operator unique name
        """
        ...

    @property
    @abstractmethod
    def attrs(self) -> OperationAttrs:
        """Returns the dict of attributes for this operation.

        Returns:
            Dict of attributes per name
        """
        ...

    @property
    @abstractmethod
    def inputs_types(self) -> Sequence[TensorType]:
        """Returns the list of input tensors types for this operation.

        Returns:
            List of input tensors types
        """
        ...

    @property
    @abstractmethod
    def outputs_types(self) -> Sequence[TensorType]:
        """Returns the list of output tensors types for this operation.

        Returns:
            List of output tensors types
        """
        ...

    @property
    @abstractmethod
    def dims(self) -> Mapping[DimSpec, DimSize]:
        """Returns the dict of dimension sizes for this operation.

        A dimension size may be resolved (int) or symbolic (str).

        Returns:
            Dict mapping dim name to dim size
        """
        ...

    @abstractmethod
    def dims_kind(self, kind: str) -> Sequence[DimSpec]:
        """Returns the list of dimensions of the given kind.

        The kind argument is currently one of:
        - "P" for parallel dims
        - "R" for reduction axes

        Returns:
            List of dims names
        """
        ...

    @property
    @abstractmethod
    def accesses_maps(self) -> AccessesMaps:
        """Returns the accesses map for this operation.

        Accesses maps are a 3-tuple with:
        - operation dimensions names tuple,
        - tuple of inputs accesses tuples for each input,
        - tuple of outputs accesses tuples for each output,

        Returns:
            Accesses map for this operation
        """
        ...
xtc.itf.graph.operation.Operation.accesses_maps abstractmethod property

Returns the accesses map for this operation.

Accesses maps are a 3-tuple with:
- operation dimensions names tuple,
- tuple of inputs accesses tuples for each input,
- tuple of outputs accesses tuples for each output

Returns:

Type Description
AccessesMaps

Accesses map for this operation
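The 3-tuple layout described above can be illustrated for a matmul C[i, j] += A[i, k] * B[k, j]. The exact nesting shown here is an assumption for illustration; consult the AccessesMaps type for the real structure:

```python
# Hypothetical accesses map for a matmul C[i, j] += A[i, k] * B[k, j],
# following the 3-tuple layout: (dims, inputs accesses, outputs accesses).
dims = ("i", "j", "k")
inputs_accesses = (("i", "k"), ("k", "j"))  # A accessed by (i, k), B by (k, j)
outputs_accesses = (("i", "j"),)            # C accessed by (i, j)
accesses_maps = (dims, inputs_accesses, outputs_accesses)
```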

xtc.itf.graph.operation.Operation.attrs abstractmethod property

Returns the dict of attributes for this operation.

Returns:

Type Description
OperationAttrs

Dict of attributes per name

xtc.itf.graph.operation.Operation.dims abstractmethod property

Returns the dict of dimension sizes for this operation.

A dimension size may be resolved (int) or symbolic (str).

Returns:

Type Description
Mapping[DimSpec, DimSize]

Dict mapping dim name to dim size

xtc.itf.graph.operation.Operation.inputs_types abstractmethod property

Returns the list of input tensors types for this operation.

Returns:

Type Description
Sequence[TensorType]

List of input tensors types

xtc.itf.graph.operation.Operation.name abstractmethod property

Returns the unique name of this operation's operator.

Returns:

Type Description
str

The operation's operator unique name

xtc.itf.graph.operation.Operation.outputs_types abstractmethod property

Returns the list of output tensors types for this operation.

Returns:

Type Description
Sequence[TensorType]

List of output tensors types

xtc.itf.graph.operation.Operation.dims_kind(kind) abstractmethod

Returns the list of dimensions of the given kind.

The kind argument is currently one of:
- "P" for parallel dims
- "R" for reduction axes

Returns:

Type Description
Sequence[DimSpec]

List of dims names

Source code in xtc/itf/graph/operation.py
@abstractmethod
def dims_kind(self, kind: str) -> Sequence[DimSpec]:
    """Returns the list of dimensions of the given kind.

    The kind argument is currently one of:
    - "P" for parallel dims
    - "R" for reduction axes

    Returns:
        List of dims names
    """
    ...
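For a matmul-like operation, dims_kind would report i and j as parallel ("P") and k as reduction ("R"). A toy stand-in (hypothetical, not the real implementation) makes the contract concrete:

```python
# Toy dims_kind for a matmul-like operation with dims i, j (parallel)
# and k (reduction); illustrates the "P"/"R" kinds described above.
def dims_kind(kind: str) -> list:
    table = {"P": ["i", "j"], "R": ["k"]}
    return table[kind]

parallel = dims_kind("P")
reduction = dims_kind("R")
```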

xtc.itf.operator

xtc.itf.operator.operator

xtc.itf.operator.operator.Operator

An abstract representation of the algebraic operation for a node.

An Operator defines the semantic behavior of operations in the graph, including how it transforms input tensor types and how it processes tensor data. It provides both type inference capabilities and concrete implementations of the operation.

Source code in xtc/itf/operator/operator.py
class Operator(ABC):
    """An abstract representation of the algebraic operation for a node.

    An Operator defines the semantic behavior of operations in the graph, including
    how it transforms input tensor types and how it processes tensor data. It provides
    both type inference capabilities and concrete implementations of the operation.
    """

    @property
    @abstractmethod
    def name(self) -> str:
        """Returns the unique identifier for this operator type.

        Returns:
            The operator's name
        """
        ...

    @abstractmethod
    def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
        """Infers output tensor types from input tensor types.

        Args:
            inputs_types: List of input tensor types

        Returns:
            List of inferred output tensor types
        """
        ...

    @abstractmethod
    def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
        """Evaluate the operator with input tensors to produce output tensors.

        Args:
            inputs: List of input tensors

        Returns:
            List of output tensors
        """
        ...
xtc.itf.operator.operator.Operator.name abstractmethod property

Returns the unique identifier for this operator type.

Returns:

Type Description
str

The operator's name

xtc.itf.operator.operator.Operator.forward(inputs) abstractmethod

Evaluate the operator with input tensors to produce output tensors.

Parameters:

Name Type Description Default
inputs Sequence[Tensor]

List of input tensors

required

Returns:

Type Description
Sequence[Tensor]

List of output tensors

Source code in xtc/itf/operator/operator.py
@abstractmethod
def forward(self, inputs: Sequence[Tensor]) -> Sequence[Tensor]:
    """Evaluate the operator with input tensors to produce output tensors.

    Args:
        inputs: List of input tensors

    Returns:
        List of output tensors
    """
    ...
xtc.itf.operator.operator.Operator.forward_types(inputs_types) abstractmethod

Infers output tensor types from input tensor types.

Parameters:

Name Type Description Default
inputs_types Sequence[TensorType]

List of input tensor types

required

Returns:

Type Description
Sequence[TensorType]

List of inferred output tensor types

Source code in xtc/itf/operator/operator.py
@abstractmethod
def forward_types(self, inputs_types: Sequence[TensorType]) -> Sequence[TensorType]:
    """Infers output tensor types from input tensor types.

    Args:
        inputs_types: List of input tensor types

    Returns:
        List of inferred output tensor types
    """
    ...
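A minimal operator following this contract pairs forward_types (type inference) with forward (evaluation). AddOperator below is a hypothetical example, not a class shipped with xtc, and tensors are modeled as plain lists of numbers for the sketch:

```python
from typing import Sequence

# Minimal illustrative operator following the Operator contract above.
class AddOperator:
    @property
    def name(self) -> str:
        return "add"

    def forward_types(self, inputs_types: Sequence) -> Sequence:
        # Elementwise add: both operand types must match; one output.
        a, b = inputs_types
        assert a == b, "elementwise add requires matching types"
        return [a]

    def forward(self, inputs: Sequence) -> Sequence:
        # Evaluate on toy "tensors" (plain lists of numbers).
        a, b = inputs
        return [[x + y for x, y in zip(a, b)]]

op = AddOperator()
outputs = op.forward([[1, 2, 3], [10, 20, 30]])
```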

xtc.itf.schd

xtc.itf.schd.schedule

xtc.itf.schd.schedule.Schedule

An abstract representation of the result of transformations from a scheduler.

A Schedule captures all the transformations and optimizations that have been applied to an implementation by its associated Scheduler. It serves as an intermediate representation between scheduling operations and code generation, allowing Compilers to generate optimized executable code based on the specified transformations.

Schedules are backend-specific and contain the necessary information for their associated Compiler to generate code optimized for the target platform and runtime. Common transformations captured in a Schedule include:
- Tiling
- Loop interchange
- Vectorization
- Parallelization
- Loop unrolling

Source code in xtc/itf/schd/schedule.py
class Schedule(ABC):
    """An abstract representation of the result of transformations from a scheduler.

    A Schedule captures all the transformations and optimizations that have been
    applied to an implementation by its associated Scheduler. It serves as an
    intermediate representation between scheduling operations and code generation,
    allowing Compilers to generate optimized executable code based on the
    specified transformations.

    Schedules are backend-specific and contain the necessary information for
    their associated Compiler to generate code optimized for the target
    platform and runtime. Common transformations captured in a Schedule include:
    - Tiling
    - Loop interchange
    - Vectorization
    - Parallelization
    - Loop unrolling
    """

    @property
    @abstractmethod
    def scheduler(self) -> "xtc.itf.schd.Scheduler":
        """Returns the scheduler that generated this schedule.

        Returns:
            The scheduler that generated this schedule
        """
        ...
xtc.itf.schd.schedule.Schedule.scheduler abstractmethod property

Returns the scheduler that generated this schedule.

Returns:

The scheduler that generated this schedule

xtc.itf.schd.scheduler

xtc.itf.schd.scheduler.Scheduler

An abstract implementation of the backend scheduler.

A Scheduler is constructed from a given Backend and is responsible for applying primitive scheduling operations and transformations to the implementation. It generates a Schedule that captures these transformations, which can then be used by a Compiler to generate optimized executable code.

Schedulers are backend-specific and work with their associated Backend to provide optimization capabilities appropriate for the target platform and runtime.

Source code in xtc/itf/schd/scheduler.py
class Scheduler(ABC):
    """An abstract implementation of the backend scheduler.

    A Scheduler is constructed from a given Backend and is responsible for
    applying primitive scheduling operations and transformations to the implementation.
    It generates a Schedule that captures these transformations, which can then be
    used by a Compiler to generate optimized executable code.

    Schedulers are backend-specific and work with their associated Backend
    to provide optimization capabilities appropriate for the target platform
    and runtime.
    """

    @abstractmethod
    def schedule(self) -> Schedule:
        """Creates a Schedule from the applied transformations.

        Returns a Schedule object that captures all the transformations and
        optimizations that have been applied to the implementation. This
        Schedule can then be used by a Compiler to generate executable code.

        Returns:
            Schedule: The resulting schedule containing all applied transformations
        """
        ...

    @property
    @abstractmethod
    def backend(self) -> "xtc.itf.back.Backend":
        """Returns the backend associated with this scheduler.

        Returns:
            Backend: The backend-specific implementation this scheduler
                     applies transformations to
        """
        ...

    @abstractmethod
    def set_dims(self, dims: list[str]) -> None:
        """Redefines dimension names.

        Use the provided abstract dimension names for the scheduler
        transformations instead of the default operation dimension names.

        This should be set before applying any transformations.

        Args:
            dims: list of dimension names
        """
        ...

    @abstractmethod
    def split(
        self, dim: str, segments: dict[str, int], root: str = DEFAULT_ROOT
    ) -> None:
        """Split a dimension into `len(segments)` segments.

        Each segment is characterized by a starting/cutting point,
        which is also the endpoint of the previous segment, and by
        the name of the new axis created by the cut. The segments
        items must be provided in ascending order of the cut points
        on the axis.

        Args:
            dim: name of the dimension to split
            segments: ordered dict of new root name and segment
                      starting point
            root: the parent split (or the operator's absolute root)
        """
        ...

    def strip_mine(
        self, dim: str, tiles: dict[str, int], root: str = DEFAULT_ROOT
    ) -> None:
        """Apply a multi-level strip mining transformation on the given dimension.

        Strip mining can be seen as a multi-level 1D tiling where the
        given tile sizes are interpreted outer to inner.
        After this transformation, the number of axes for the given initial
        dimension is `1 + len(tiles)`, where the first axis inherits
        the name of the dimension and the remaining axis names are
        given by the tiles keys.
        Each 1D tile size must be greater than or equal to the inner tile sizes.
        Some backends may not support non-divisible tile sizes, in which
        case an assertion is raised.

        Args:
            dim: name of the dimension to strip mine
            tiles: dict outer to inner of axis name and tile size
            root: the parent split (or the operator's absolute root)
        """
        self.tile(dim=dim, tiles=tiles, root=root)

    @abstractmethod
    def tile(self, dim: str, tiles: dict[str, int], root: str = DEFAULT_ROOT) -> None:
        """Apply a multi-level tiling operation.

        As of now the interface is limited to single-dimension tiling,
        hence it is equivalent to strip mining the given dimension.

        In order to create multi-dimensional tiles, strip mine each dimension
        with tile or strip_mine and use interchange to reorder the generated
        axes accordingly.

        Args:
            dim: name of the dimension to tile
            tiles: dict outer to inner of axis name and tile size
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def interchange(self, permutation: list[str], root: str = DEFAULT_ROOT) -> None:
        """Apply interchange over all axes.

        The given permutation of axes names is interpreted
        outer to inner and must have the same size as the
        number of axes after tiling.

        Args:
            permutation: outer to inner axes names permutation
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def vectorize(self, axes: list[str], root: str = DEFAULT_ROOT) -> None:
        """Apply vectorizations on the given axes names.

        The given axes names must all be inner, parallel axes; full
        unrolling and vectorization of all given axes is implied.

        Args:
            axes: axes names to vectorize
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def parallelize(self, axes: list[str], root: str = DEFAULT_ROOT) -> None:
        """Apply parallelization on the given axes names.

        The given axes names must all be outer, parallel axes.

        Args:
            axes: axes names to parallelize
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def unroll(self, unrolls: dict[str, int], root: str = DEFAULT_ROOT) -> None:
        """Apply unrolling on the given axes names.

        Each given axis name is unrolled with the specified unroll
        factor. The unroll factors must be greater than or equal to 1.

        Args:
            unrolls: dict of axes names and unroll factor
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def buffer_at(
        self, axis: str, mtype: str | None = None, root: str = DEFAULT_ROOT
    ) -> None:
        """Create a write buffer at a given level.

        A write buffer is created for the output under the given
        axis. The buffer memory type can be specified or defaults
        to the local memory at this level.

        Args:
            axis: location of the write buffer
            mtype: buffer memory type for the allocation
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def pack_at(
        self,
        axis: str,
        input_idx: int,
        mtype: str | None = None,
        pad: bool = False,
        root: str = DEFAULT_ROOT,
    ) -> None:
        """Create a packed read buffer at a given level.

        A packed read buffer is created for the given input buffer index.
        The buffer memory type can be specified or defaults
        to the local memory at this level.
        When pad is true, a padding strategy is applied in order to reduce
        set/bank conflicts.

        Args:
            axis: location of the packed read buffer
            input_idx: input buffer index for the scheduled computation
            mtype: buffer memory type for the allocation
            pad: whether to add padding or not
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def fuse_producer_at(
        self, axis: str, input_idx: int, root: str = DEFAULT_ROOT
    ) -> None:
        """Fuse producer computation at the given consumer location.

        Given the input index identifying the producer of the input buffer,
        fuse the computation at the given scheduled consumer axis.
        The necessary input slice reads and computations will be inserted
        for computing the output tile at the given axis location.

        Args:
            axis: location of the fusion in the consumer
            input_idx: input index of the consumer
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def define_memory_mesh(self, axes: dict[str, int]) -> None:
        """Define a memory mesh.

        Args:
            axes: dictionary where keys are axes names and values are the number of memories along each axis
        """
        ...

    @abstractmethod
    def define_processor_mesh(self, axes: dict[str, int]) -> None:
        """Define a processor mesh. It must be a superset of the memory mesh.

        Args:
            axes: dictionary where keys are axes names and values are the number of processors along each axis
        """
        ...

    @abstractmethod
    def distribute(
        self, axis: str, processor_axis: str, root: str = DEFAULT_ROOT
    ) -> None:
        """Distribute computation across processors along a given axis.

        This method distributes the computation of the specified axis across
        multiple processors or cores. The processor_axis parameter defines
        the axis that represents the processor dimension for this distribution.

        Args:
            axis: the axis to distribute across processors
            processor_axis: the axis representing the processor dimension
            root: the parent split (or the operator's absolute root)
        """
        ...

    @abstractmethod
    def distributed_buffer_at(
        self,
        axis: str,
        input_idx: int,
        memory_axes: list[str],
        root: str = DEFAULT_ROOT,
    ) -> None:
        """Create a distributed buffer at a given level across multiple memory axes.

        This method creates a distributed buffer for the given input buffer index
        at the specified axis level. The buffer is distributed across the provided
        memory axes, enabling distributed memory management and access patterns
        for improved performance in distributed computing environments.

        Args:
            axis: the axis level where the distributed buffer should be created
            input_idx: input buffer index for the scheduled computation
            memory_axes: list of memory axes across which to distribute the buffer
            root: the parent split (or the operator's absolute root)
        """
        ...
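A typical scheduling recipe applies tile, interchange, and vectorize in sequence before calling schedule(). The toy recorder below is hypothetical (real schedulers are obtained from Backend.get_scheduler()); it only logs the calls to show the call pattern and argument shapes:

```python
# Toy recorder illustrating a typical scheduling call sequence.
# RecordingScheduler is a hypothetical stand-in, not the xtc Scheduler.
class RecordingScheduler:
    def __init__(self):
        self.ops = []

    def tile(self, dim, tiles, root="root"):
        self.ops.append(("tile", dim, tuple(tiles.items())))

    def interchange(self, permutation, root="root"):
        self.ops.append(("interchange", tuple(permutation)))

    def vectorize(self, axes, root="root"):
        self.ops.append(("vectorize", tuple(axes)))

s = RecordingScheduler()
# 2D tiling of an (i, j, k) matmul-like op: strip-mine each parallel
# dimension, reorder axes outer to inner, then vectorize the innermost.
s.tile("i", {"i1": 32})
s.tile("j", {"j1": 32})
s.interchange(["i", "j", "k", "i1", "j1"])
s.vectorize(["j1"])
```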
xtc.itf.schd.scheduler.Scheduler.backend abstractmethod property

Returns the backend associated with this scheduler.

Returns:

Name Type Description
Backend Backend

The backend-specific implementation this scheduler applies transformations to

xtc.itf.schd.scheduler.Scheduler.buffer_at(axis, mtype=None, root=DEFAULT_ROOT) abstractmethod

Create a write buffer at a given level.

A write buffer is created for the output under the given axis. The buffer memory type can be specified or defaults to the local memory at this level.

Parameters:

Name Type Description Default
axis str

location of the write buffer

required
mtype str | None

buffer memory type for the allocation

None
root str

the parent split (or the operator's absolute root)

DEFAULT_ROOT
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def buffer_at(
    self, axis: str, mtype: str | None = None, root: str = DEFAULT_ROOT
) -> None:
    """Create a write buffer at a given level.

    A write buffer is created for the output under the given
    axis. The buffer memory type can be specified or defaults
    to the local memory at this level.

    Args:
        axis: location of the write buffer
        mtype: buffer memory type for the allocation
        root: the parent split (or the operator's absolute root)
    """
    ...
xtc.itf.schd.scheduler.Scheduler.define_memory_mesh(axes) abstractmethod

Define a memory mesh.

Parameters:

- axes (dict[str, int], required): dictionary where keys are axes names and values are the number of memories along each axis
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def define_memory_mesh(self, axes: dict[str, int]) -> None:
    """Define a memory mesh.

    Args:
        axes: dictionary where keys are axes names and values are the number of memories along each axis
    """
    ...
xtc.itf.schd.scheduler.Scheduler.define_processor_mesh(axes) abstractmethod

Define a processor mesh. It must be a superset of the memory mesh.

Parameters:

- axes (dict[str, int], required): dictionary where keys are axes names and values are the number of processors along each axis
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def define_processor_mesh(self, axes: dict[str, int]) -> None:
    """Define a processor mesh. It must be a superset of the memory mesh.

    Args:
        axes: dictionary where keys are axes names and values are the number of processors along each axis
    """
    ...
xtc.itf.schd.scheduler.Scheduler.distribute(axis, processor_axis, root=DEFAULT_ROOT) abstractmethod

Distribute computation across processors along a given axis.

This method distributes the computation of the specified axis across multiple processors or cores. The processor_axis parameter defines the axis that represents the processor dimension for this distribution.

Parameters:

- axis (str, required): the axis to distribute across processors
- processor_axis (str, required): the axis representing the processor dimension
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def distribute(
    self, axis: str, processor_axis: str, root: str = DEFAULT_ROOT
) -> None:
    """Distribute computation across processors along a given axis.

    This method distributes the computation of the specified axis across
    multiple processors or cores. The processor_axis parameter defines
    the axis that represents the processor dimension for this distribution.

    Args:
        axis: the axis to distribute across processors
        processor_axis: the axis representing the processor dimension
        root: the parent split (or the operator's absolute root)
    """
    ...
xtc.itf.schd.scheduler.Scheduler.distributed_buffer_at(axis, input_idx, memory_axes, root=DEFAULT_ROOT) abstractmethod

Create a distributed buffer at a given level across multiple memory axes.

This method creates a distributed buffer for the given input buffer index at the specified axis level. The buffer is distributed across the provided memory axes, enabling distributed memory management and access patterns for improved performance in distributed computing environments.

Parameters:

- axis (str, required): the axis level where the distributed buffer should be created
- input_idx (int, required): input buffer index for the scheduled computation
- memory_axes (list[str], required): list of memory axes across which to distribute the buffer
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def distributed_buffer_at(
    self,
    axis: str,
    input_idx: int,
    memory_axes: list[str],
    root: str = DEFAULT_ROOT,
) -> None:
    """Create a distributed buffer at a given level across multiple memory axes.

    This method creates a distributed buffer for the given input buffer index
    at the specified axis level. The buffer is distributed across the provided
    memory axes, enabling distributed memory management and access patterns
    for improved performance in distributed computing environments.

    Args:
        axis: the axis level where the distributed buffer should be created
        input_idx: input buffer index for the scheduled computation
        memory_axes: list of memory axes across which to distribute the buffer
        root: the parent split (or the operator's absolute root)
    """
    ...
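The mesh and distribution calls are typically used together. The sequence below is a hedged sketch of one plausible ordering: `RecordingScheduler` is a minimal stand-in for a concrete `Scheduler`, and all axis names and sizes are illustrative.

```python
# Hypothetical ordering of mesh definition, tiling, distribution and
# distributed buffer creation. A minimal call recorder stands in for a
# real backend-specific Scheduler so the snippet runs standalone.
class RecordingScheduler:
    def __init__(self):
        self.calls = []

    def __getattr__(self, name):
        def record(*args, **kwargs):
            self.calls.append(name)
        return record

sched = RecordingScheduler()
sched.define_memory_mesh({"mx": 2, "my": 2})             # 2x2 memory mesh
sched.define_processor_mesh({"mx": 2, "my": 2, "p": 4})  # superset of the memory mesh
sched.tile(dim="i", tiles={"i1": 64})
sched.distribute("i", processor_axis="p")                # spread axis "i" over processors
sched.distributed_buffer_at("i1", input_idx=0, memory_axes=["mx", "my"])
```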
xtc.itf.schd.scheduler.Scheduler.fuse_producer_at(axis, input_idx, root=DEFAULT_ROOT) abstractmethod

Fuse producer computation at the given consumer location.

Given the input index identifying the producer of the input buffer, fuse the computation at the given scheduled consumer axis. The necessary input slice reads and computations will be inserted for computing the output tile at the given axis location.

Parameters:

- axis (str, required): localisation of the fusion in the consumer
- input_idx (int, required): input index of the consumer
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def fuse_producer_at(
    self, axis: str, input_idx: int, root: str = DEFAULT_ROOT
) -> None:
    """Fuse producer computation at the given consumer location.

    Given the input index identifying the producer of the input buffer,
    fuse the computation at the given scheduled consumer axis.
    The necessary input slice reads and computations will be inserted
    for computing the output tile at the given axis location.

    Args:
        axis: localisation of the fusion in the consumer
        input_idx: input index of the consumer
        root: the parent split (or the operator's absolute root)
    """
    ...
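The effect of fusing a producer can be sketched in plain Python; the element-wise producer/consumer pair below is invented purely for illustration:

```python
# Before fusion the producer materializes its whole output; after
# fuse_producer_at("i", input_idx=0) only the producer slice needed for
# the current consumer tile is computed, under the consumer's axis "i".
def unfused(a):
    b = [x * 2.0 for x in a]             # producer buffer, full size
    return [x + 1.0 for x in b]          # consumer

def fused(a, tile=4):
    n = len(a)
    out = [0.0] * n
    for i in range(0, n, tile):          # consumer axis "i"
        b_tile = [x * 2.0 for x in a[i:i + tile]]   # fused producer slice
        out[i:i + tile] = [x + 1.0 for x in b_tile]
    return out
```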
xtc.itf.schd.scheduler.Scheduler.interchange(permutation, root=DEFAULT_ROOT) abstractmethod

Apply interchange over all axes.

The given permutation of axes names is interpreted outer to inner and must have the same size as the number of axes after tiling.

Parameters:

- permutation (list[str], required): outer to inner axes names permutation
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def interchange(self, permutation: list[str], root: str = DEFAULT_ROOT) -> None:
    """Apply interchange over all axes.

    The given permutation of axes names is interpreted
    outer to inner and must have the same size as the
    number of axes after tiling.

    Args:
        permutation: outer to inner axes names permutation
        root: the parent split (or the operator's absolute root)
    """
    ...
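A minimal sketch of the interchange contract, assuming a tiny two-axis nest; the helper below is hypothetical and only materializes the iteration order implied by a permutation:

```python
# interchange takes a full outer-to-inner permutation of the axis names;
# here we show how the permutation changes the visiting order of a nest.
def iteration_order(permutation, extents):
    assert sorted(permutation) == sorted(extents), \
        "permutation must name every axis exactly once"
    order = []
    def rec(axes, idx):
        if not axes:
            order.append(dict(idx))
            return
        name = axes[0]
        for v in range(extents[name]):
            rec(axes[1:], idx + [(name, v)])
    rec(permutation, [])
    return order

# ["j", "i"] makes "j" the outer loop and "i" the inner loop.
swapped = iteration_order(["j", "i"], {"i": 2, "j": 2})
```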
xtc.itf.schd.scheduler.Scheduler.pack_at(axis, input_idx, mtype=None, pad=False, root=DEFAULT_ROOT) abstractmethod

Create a packed read buffer at a given level.

A packed read buffer is created for the given input buffer index. The buffer memory type can be specified, or defaults to the local memory at this level. When pad is true, a padding strategy is applied in order to reduce set/bank conflicts.

Parameters:

- axis (str, required): localisation of the read buffer
- input_idx (int, required): input buffer index for the scheduled computation
- mtype (str | None, default None): buffer memory type for the allocation
- pad (bool, default False): whether to add padding or not
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def pack_at(
    self,
    axis: str,
    input_idx: int,
    mtype: str | None = None,
    pad: bool = False,
    root: str = DEFAULT_ROOT,
) -> None:
    """Create a packed read buffer at a given level.

    A packed read buffer is created for the given input buffer index.
    The buffer memory type can be specified or defaults
    to the local memory at this level.
    When pad is true, a padding strategy is applied in order to reduce
    set/bank conflicts.

    Args:
        axis: localisation of the read buffer
        input_idx: input buffer index for the scheduled computation
        mtype: buffer memory type for the allocation
        pad: whether to add padding or not
        root: the parent split (or the operator's absolute root)
    """
    ...
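The packing effect can be sketched as follows; the computation, tile size, and padding amount are invented for illustration:

```python
# Sketch of pack_at: the slice of the input read under the chosen axis
# is first copied into a contiguous buffer, with optional trailing
# padding to reduce set/bank conflicts, then used by the computation.
def scale_with_packing(a, scale, tile=4, pad=1):
    n = len(a)
    out = [0.0] * n
    for j in range(0, n, tile):              # axis holding the packed buffer
        packed = [0.0] * (tile + pad)        # padded, contiguous read buffer
        packed[:tile] = a[j:j + tile]        # pack the input slice
        out[j:j + tile] = [x * scale for x in packed[:tile]]
    return out
```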
xtc.itf.schd.scheduler.Scheduler.parallelize(axes, root=DEFAULT_ROOT) abstractmethod

Apply parallelization on the given axes names.

The axes names given must all be outer axes and parallel axes.

Parameters:

- axes (list[str], required): axes names to parallelize
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def parallelize(self, axes: list[str], root: str = DEFAULT_ROOT) -> None:
    """Apply parallelization on the given axes names.

    The axes names given must all be outer axes and parallel axes.

    Args:
        axes: axes names to parallelize
        root: the parent split (or the operator's absolute root)
    """
    ...
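One plausible lowering of `parallelize(["i"])` is to turn each iteration of the outer, parallel axis into an independent task. The computation and tile size below are illustrative only:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: each outer "i" iteration is independent (parallel axis), so
# the tiles can be dispatched to a worker pool.
def parallel_add(a, b, tile=4):
    n = len(a)
    out = [0.0] * n
    def body(i):                              # body of outer axis "i"
        for i1 in range(tile):
            out[i + i1] = a[i + i1] + b[i + i1]
    with ThreadPoolExecutor() as pool:
        list(pool.map(body, range(0, n, tile)))
    return out
```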
xtc.itf.schd.scheduler.Scheduler.schedule() abstractmethod

Creates a Schedule from the applied transformations.

Returns a Schedule object that captures all the transformations and optimizations that have been applied to the implementation. This Schedule can then be used by a Compiler to generate executable code.

Returns:

- Schedule: The resulting schedule containing all applied transformations

Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def schedule(self) -> Schedule:
    """Creates a Schedule from the applied transformations.

    Returns a Schedule object that captures all the transformations and
    optimizations that have been applied to the implementation. This
    Schedule can then be used by a Compiler to generate executable code.

    Returns:
        Schedule: The resulting schedule containing all applied transformations
    """
    ...
xtc.itf.schd.scheduler.Scheduler.set_dims(dims) abstractmethod

Redefines dimension names.

Use the provided abstract dimension names for the scheduler transformations instead of the default operation dimension names.

This should be set before applying the transformations.

Parameters:

- dims (list[str], required): list of dimension names
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def set_dims(self, dims: list[str]) -> None:
    """Redefines dimensions names.

    Use provided abstract dimensions names for the scheduler
    transformations instead of the default operation dimensions names.

    This should be set before applying the transformations.

    Args:
        dims: list of dimensions names
    """
    ...
xtc.itf.schd.scheduler.Scheduler.split(dim, segments, root=DEFAULT_ROOT) abstractmethod

Split a dimension into len(segments) segments.

Each segment is characterized by a starting/cutting point, which is also the endpoint of the previous segment, and by the name of the new axis created by the cut. The segments items must be provided in ascending order of the cut points on the axis.

Parameters:

- dim (str, required): name of the dimension to split
- segments (dict[str, int], required): ordered dict of new root name and segment starting point
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def split(
    self, dim: str, segments: dict[str, int], root: str = DEFAULT_ROOT
) -> None:
    """Split a dimension into `len(segments)` segments.

    Each segment is characterized by a starting/cutting point,
    which is also the endpoint of the previous segment, and by
    the name of the new axis created by the cut. The segments
    items must be provided in ascending order of the cut points
    on the axis.

    Args:
        dim: name of the dimension to split
        segments: ordered dict of new root name and segment
                  starting point
    """
    ...
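How the `segments` dict is interpreted can be sketched with a small helper; the helper and the axis names are hypothetical, not part of the API:

```python
# Each value in `segments` is the starting point of a new segment and
# the endpoint of the previous one; the last segment runs to the
# dimension's extent, and cut points must be given in ascending order.
def segment_ranges(segments, extent):
    starts = list(segments.values())
    assert starts == sorted(starts), "cut points must be ascending"
    ends = starts[1:] + [extent]
    return {name: (s, e) for name, s, e in zip(segments, starts, ends)}

# split("i", {"i_lo": 0, "i_hi": 96}) on an extent of 128 yields two
# segment roots covering [0, 96) and [96, 128).
ranges = segment_ranges({"i_lo": 0, "i_hi": 96}, extent=128)
```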
xtc.itf.schd.scheduler.Scheduler.strip_mine(dim, tiles, root=DEFAULT_ROOT)

Apply a multi level strip mining transformation on the given dimension.

The strip mining can be seen as a multi level 1D tiling where the given tile sizes are interpreted outer to inner. After this transformation, the number of axes for the given initial dimension is 1 + len(tiles), where the first axis inherits the name of the dimension and the remaining axis names are given by the tiles keys. Each 1D tile size must be greater than or equal to the inner tile sizes. Some backends may not support non-divisible tile sizes, in which case an assertion is raised.

Parameters:

- dim (str, required): name of the dimension to strip mine
- tiles (dict[str, int], required): dict outer to inner of axis name and tile size
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
def strip_mine(
    self, dim: str, tiles: dict[str, int], root: str = DEFAULT_ROOT
) -> None:
    """Apply a multi level strip mining transformation on the given dimension.

    The strip mining can be seen as a multi level 1D tiling where the
    given tile sizes are interpreted outer to inner.
    After this transformation, the number of axes for the given initial
    dimension is `1 + len(tiles)` where the first axis inherits
    the name of the dimension, and the remaining axis names are
    given by the given tiles keys.
    Each 1D tile size must be greater than or equal to the inner tile sizes.
    Some backends may not support non-divisible tile sizes, in which
    case an assertion is raised.

    Args:
        dim: name of the dimension to strip mine
        tiles: dict outer to inner of axis name and tile size
        root: the parent split (or the operator's absolute root)
    """
    self.tile(dim=dim, tiles=tiles, root=root)
xtc.itf.schd.scheduler.Scheduler.tile(dim, tiles, root=DEFAULT_ROOT) abstractmethod

Apply a multi level tiling operation.

As of now the interface is limited to a single dimension tiling, hence it is equivalent to strip mining the given dimension.

In order to create multi dimensional tiles, strip mine each dimension with tile or strip_mine and use interchange to reorder the generated axes accordingly.

Parameters:

- dim (str, required): name of the dimension to tile
- tiles (dict[str, int], required): dict outer to inner of axis name and tile size
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def tile(self, dim: str, tiles: dict[str, int], root: str = DEFAULT_ROOT) -> None:
    """Apply a multi level tiling operation.

    As of now the interface is limited to a single dimension tiling,
    hence it is equivalent to strip mining the given dimension.

    In order to create multi dimensional tiles, strip mine each dimension
    with tile or strip_mine and use interchange to reorder generated axes
    accordingly.

    Args:
        dim: name of the dimension to tile
        tiles: dict outer to inner of axis name and tile size
        root: the parent split (or the operator's absolute root)
    """
    ...
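The multi dimensional tiling recipe from the docstring can be written out as the loop nest it produces. For a small matmul, `tile("i", {"i1": 4})`, `tile("j", {"j1": 4})`, then `interchange(["i", "j", "i1", "j1", "k"])` would, in one plausible lowering, yield the nest below; sizes and axis names are illustrative, and divisible tiles are assumed:

```python
# Hypothetical lowering of a 2D-tiled matmul: outer tile loops "i", "j",
# intra-tile loops "i1", "j1", reduction "k" innermost.
def tiled_matmul(A, B, ti=4, tj=4):
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for i in range(0, n, ti):                # axis "i"
        for j in range(0, m, tj):            # axis "j"
            for i1 in range(ti):             # axis "i1"
                for j1 in range(tj):         # axis "j1"
                    for kk in range(k):      # axis "k" (innermost)
                        C[i + i1][j + j1] += A[i + i1][kk] * B[kk][j + j1]
    return C
```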
xtc.itf.schd.scheduler.Scheduler.unroll(unrolls, root=DEFAULT_ROOT) abstractmethod

Apply unrolling on the given axes names.

Each given axis name is unrolled with the specified unroll factor. The unroll factors must be greater than or equal to 1.

Parameters:

- unrolls (dict[str, int], required): dict of axes names and unroll factor
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
@abstractmethod
def unroll(self, unrolls: dict[str, int], root: str = DEFAULT_ROOT) -> None:
    """Apply unrolling on the given axes names.

    Each given axis name is unrolled with the specified unroll
    factor. The unroll factors must be greater than or equal to 1.

    Args:
        unrolls: dict of axes names and unroll factor
        root: the parent split (or the operator's absolute root)
    """
xtc.itf.schd.scheduler.Scheduler.vectorize(axes, root=DEFAULT_ROOT) abstractmethod

Apply vectorizations on the given axes names.

The axes names given must all be inner, parallel axes; full unrolling and vectorization of all given axes is implied.

Parameters:

- axes (list[str], required): axes names to vectorize
- root (str, default DEFAULT_ROOT): the parent split (or the operator's absolute root)
Source code in xtc/itf/schd/scheduler.py
136
137
138
139
140
141
142
143
144
145
146
147
@abstractmethod
def vectorize(self, axes: list[str], root: str = DEFAULT_ROOT) -> None:
    """Apply vectorizations on the given axes names.

    The axes names given must all be inner, parallel axes; full
    unrolling and vectorization of all given axes is implied.

    Args:
        axes: axes names to vectorize
        root: the parent split (or the operator's absolute root)
    """
    ...
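The effect of vectorizing an inner parallel axis can be sketched in plain Python, modeling the vector operation with a slice assignment; the computation and width are illustrative:

```python
# Sketch of vectorize(["i1"]) on out[i] = a[i] + b[i] with "i" tiled by
# 4: the inner parallel axis "i1" is fully unrolled and mapped to one
# vector-width operation instead of 4 scalar iterations.
def vectorized_add(a, b, width=4):
    n = len(a)
    out = [0.0] * n
    for i in range(0, n, width):             # outer axis "i"
        # axis "i1" vectorized: one width-wide operation per tile
        out[i:i + width] = [x + y for x, y in zip(a[i:i + width], b[i:i + width])]
    return out
```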

xtc.itf.search

xtc.itf.search.optimizer

xtc.itf.search.optimizer.Optimizer

Base abstract class for implementing an optimizer

An Optimizer is used in iterative evaluation during loop-explore to suggest samples for each batch using observations from the previous batch.

Source code in xtc/itf/search/optimizer.py
class Optimizer(ABC):
    """Base abstract class for implementing an optimizer

    An Optimizer is used in iterative evaluation during loop-explore to
    suggest samples for each batch using observations from the previous batch.
    """

    @abstractmethod
    def suggest(self) -> list[VecSample]:
        """Suggests a new batch of samples to be evaluated.

        It gets a large sample of size batch_candidates, and then from that
        either returns random choices or uses a model to pick the predicted best samples.

        Returns:
            A list of samples representing a new batch of samples.
        """
        ...

    @abstractmethod
    def observe(self, x: list[VecSample], y: list[float]):
        """Observes the result of the batch evaluation and updates the model.

        The model is first fit after update_first samples are observed
        and is subsequently refit every update_period additional samples.

        Args:
            x: the batch of samples that were evaluated
            y: the evaluation result for each sample in the batch
        """
        ...

    @abstractmethod
    def finished(self):
        """Gets called when the evaluation for all iterations has been completed

        Used for cleaner logging
        """
        ...

    @abstractmethod
    def _sample_batch(self) -> list[VecSample]:
        """Uses the sampler to get a large sample from the strategy sampler.

        Used by suggest() to get a sample of candidates to choose from.

        Returns:
            A list of samples of size batch_candidates
        """
        ...
xtc.itf.search.optimizer.Optimizer.finished() abstractmethod

Gets called when the evaluation for all iterations has been completed

Used for cleaner logging

Source code in xtc/itf/search/optimizer.py
@abstractmethod
def finished(self):
    """Gets called when the evaluation for all iterations has been completed

    Used for cleaner logging
    """
    ...
xtc.itf.search.optimizer.Optimizer.observe(x, y) abstractmethod

Observes the result of the batch evaluation and updates the model.

The model is first fit after update_first samples are observed and is subsequently refit every update_period additional samples.

Parameters:

- x (list[VecSample], required): the batch of samples that were evaluated
- y (list[float], required): the evaluation result for each sample in the batch
Source code in xtc/itf/search/optimizer.py
@abstractmethod
def observe(self, x: list[VecSample], y: list[float]):
    """Observes the result of the batch evaluation and updates the model.

    The model is first fit after update_first samples are observed
    and is subsequently refit every update_period additional samples.

    Args:
        x: the batch of samples that were evaluated
        y: the evaluation result for each sample in the batch
    """
    ...
xtc.itf.search.optimizer.Optimizer.suggest() abstractmethod

Suggests a new batch of samples to be evaluated.

It gets a large sample of size batch_candidates, and then from that either returns random choices or uses a model to pick the predicted best samples.

Returns:

- list[VecSample]: A list of samples representing a new batch of samples.

Source code in xtc/itf/search/optimizer.py
@abstractmethod
def suggest(self) -> list[VecSample]:
    """Suggests a new batch of samples to be evaluated.

    It gets a large sample of size batch_candidates, and then from that
    either returns random choices or uses a model to pick the predicted best samples.

    Returns:
        A list of samples representing a new batch of samples.
    """
    ...
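The suggest/observe cycle an Optimizer participates in can be sketched with a toy concrete implementation; plain tuples stand in for VecSample, the cost function is fake, and nothing here is part of the real API beyond the method names:

```python
import random

# RandomOptimizer: a toy Optimizer that suggests random batches and just
# records observations (a real implementation would refit a model after
# update_first / update_period samples).
class RandomOptimizer:
    def __init__(self, space, batch_size, seed=0):
        self.space = space
        self.batch_size = batch_size
        self.rng = random.Random(seed)
        self.history = []

    def _sample_batch(self):
        return [self.rng.choice(self.space) for _ in range(self.batch_size)]

    def suggest(self):
        return self._sample_batch()          # random choices; no model here

    def observe(self, x, y):
        self.history.extend(zip(x, y))

    def finished(self):
        pass                                 # logging/cleanup hook

opt = RandomOptimizer(space=[(1,), (2,), (4,), (8,)], batch_size=2)
for _ in range(3):                           # three explore iterations
    batch = opt.suggest()
    costs = [1.0 / s[0] for s in batch]      # stand-in for real measurements
    opt.observe(batch, costs)
opt.finished()
best_sample, best_cost = min(opt.history, key=lambda p: p[1])
```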

xtc.itf.search.strategy

xtc.itf.search.strategy.Strategy

Base abstract class for implementing a strategy.

A strategy provides a predefined template for scheduling some operations embedded in a Graph object.

From a strategy, one can:

- generate an exhaustive list of samples in the search space
- randomly sample the search space
- get a default sample depending on some optimization level
- actually schedule a Scheduler object for the given graph.

Source code in xtc/itf/search/strategy.py
class Strategy(ABC):
    """Base abstract class for implementing a strategy.

    A strategy provides a predefined template for scheduling some operations
    embedded in a Graph object.

    From a strategy, one can:
    - generate an exhaustive list of samples in the search space
    - randomly sample the search space
    - get a default sample depending on some optimization level
    - actually schedule a Scheduler object for the given graph.
    """

    @property
    @abstractmethod
    def graph(self) -> Graph:
        """The graph associated with this strategy.

        This graph must be the same object as the scheduler's graph
        when calling the generate method.

        Returns:
            The graph object
        """
        ...

    @abstractmethod
    def generate(self, scheduler: Scheduler, sample: Sample) -> None:
        """Generate and execute scheduling operations for the sample.

        This method applies the passed sample from the strategy sample
        space to the given scheduler.

        Note that the scheduler state is changed.
        In order to get the final schedule, after calling this method,
        call sch.schedule().

        Args:
            scheduler: The Scheduler object
            sample: The sample to apply
        """
        ...

    @abstractmethod
    def exhaustive(self) -> Iterator[Sample]:
        """Generates the exhaustive space of samples for this strategy.

        The actual space size may be huge, hence it is not recommended
        to convert this output to a list, for instance, without knowing
        an upper bound on the space size.

        Note that the returned samples are not randomized, hence the
        order is deterministic, though probably not suitable for
        random exploration unless all samples are retrieved.

        Returns:
            An iterator to the generated samples
        """
        ...

    @abstractmethod
    def sample(self, num: int, seed: int | None = 0) -> Iterator[Sample]:
        """Generates unique random samples from this strategy.

        The implementation should ensure that the search space
        is sampled uniformly, i.e. each distinct point in the
        search space should be equally probable.

        The number of requested samples must be greater than 0.

        If the seed provided is None, the generated sample list
        is not deterministic.

        Note that the returned number of samples may be less than
        the requested number of samples, either because:
        - the search space is smaller than the requested number
        - the stop condition for sampling distinct samples is reached

        Args:
            num: number of samples requested
            seed: optional fixed seed, defaults to 0

        Returns:
            An iterator to the generated samples
        """
        ...

    @abstractmethod
    def default_schedule(self, opt_level: int = 2) -> Sample:
        """Generates a default sample for some optimization level.

        The returned sample should be a reasonable schedule given the
        strategy and the passed opt_level. There is no strict rule, though;
        typically vectorization and tilings are done at opt_level >= 3.

        Args:
            opt_level: The optimization level in [0, 3]

        Returns:
            The selected sample
        """
        ...

    @property
    @abstractmethod
    def sample_names(self) -> list[str]:
        """The names of the sample variables associated with the strategy.

        The order of the names must correspond to the order of the values
        in a sample.

        Returns:
            The list of the names of the sample variables.
        """
        ...

    @abstractmethod
    def dict_to_sample(self, sample: dict[str, Any]) -> Sample:
        """Generates a VecSample from a given Sample.

        The variables in the VecSample are in the order given by self.sample_names.

        Args:
            sample: The Sample to convert

        Returns:
            The equivalent VecSample
        """
        ...

    @abstractmethod
    def sample_to_dict(self, sample: Sample) -> dict[str, int]:
        """Generates a Sample from a given VecSample.

        The variables in the VecSample must be in the order given by self.sample_names.

        Args:
            sample: The VecSample to convert

        Returns:
            The equivalent Sample
        """
        ...
xtc.itf.search.strategy.Strategy.graph abstractmethod property

The graph associated with this strategy.

This graph must be the same object as the scheduler's graph when calling the generate method.

Returns:

- Graph: The graph object

xtc.itf.search.strategy.Strategy.sample_names abstractmethod property

The names of the sample variables associated with the strategy.

The order of the names must correspond to the order of the values in a sample.

Returns:

- list[str]: The list of the names of the sample variables.

xtc.itf.search.strategy.Strategy.default_schedule(opt_level=2) abstractmethod

Generates a default sample for some optimization level.

The returned sample should be a reasonable schedule given the strategy and the passed opt_level. There is no strict rule, though; typically vectorization and tilings are done at opt_level >= 3.

Parameters:

- opt_level (int, default 2): The optimization level in [0, 3]

Returns:

- Sample: The selected sample

Source code in xtc/itf/search/strategy.py
@abstractmethod
def default_schedule(self, opt_level: int = 2) -> Sample:
    """Generates a default sample for some optimization level.

    The returned sample should be a reasonable schedule given the
    strategy and the passed opt_level. There is no strict rule, though;
    typically vectorization and tilings are done at opt_level >= 3.

    Args:
        opt_level: The optimization level in [0, 3]

    Returns:
        The selected sample
    """
    ...
xtc.itf.search.strategy.Strategy.dict_to_sample(sample) abstractmethod

Generates a VecSample from a given Sample.

The variables in the VecSample are in the order given by self.sample_names.

Parameters:

- sample (dict[str, Any], required): The Sample to convert

Returns:

- Sample: The equivalent VecSample

Source code in xtc/itf/search/strategy.py
@abstractmethod
def dict_to_sample(self, sample: dict[str, Any]) -> Sample:
    """Generates a VecSample from a given Sample.

    The variables in the VecSample are in the order given by self.sample_names.

    Args:
        sample: The Sample to convert

    Returns:
        The equivalent VecSample
    """
    ...
xtc.itf.search.strategy.Strategy.exhaustive() abstractmethod

Generates the exhaustive space of samples for this strategy.

The actual space size may be huge, hence it is not recommended to convert this output to a list, for instance, without knowing an upper bound on the space size.

Note that the returned samples are not randomized, hence the order is deterministic, though probably not suitable for random exploration unless all samples are retrieved.

Returns:

- Iterator[Sample]: An iterator to the generated samples

Source code in xtc/itf/search/strategy.py
@abstractmethod
def exhaustive(self) -> Iterator[Sample]:
    """Generates the exhaustive space of samples for this strategy.

    The actual space size may be huge, hence it is not recommended
    to convert this output to a list, for instance, without knowing
    an upper bound on the space size.

    Note that the returned samples are not randomized, hence the
    order is deterministic, though probably not suitable for
    random exploration unless all samples are retrieved.

    Returns:
        An iterator to the generated samples
    """
    ...
xtc.itf.search.strategy.Strategy.generate(scheduler, sample) abstractmethod

Generate and execute scheduling operations for the sample.

This method applies the passed sample from the strategy sample space to the given scheduler.

Note that the scheduler state is changed. In order to get the final schedule, after calling this method, call sch.schedule().

Parameters:

- scheduler (Scheduler, required): The Scheduler object
- sample (Sample, required): The sample to apply
Source code in xtc/itf/search/strategy.py
@abstractmethod
def generate(self, scheduler: Scheduler, sample: Sample) -> None:
    """Generate and execute scheduling operations for the sample.

    This method applies the passed sample from the strategy sample
    space to the given scheduler.

    Note that the scheduler state is changed.
    In order to get the final schedule, after calling this method,
    call sch.schedule().

    Args:
        scheduler: The Scheduler object
        sample: The sample to apply
    """
    ...
xtc.itf.search.strategy.Strategy.sample(num, seed=0) abstractmethod

Generates unique random samples from this strategy.

The implementation should ensure that the search space is sampled uniformly, i.e. each distinct point in the search space should be equally probable.

The number of requested samples must be greater than 0.

If the seed provided is None, the generated sample list is not deterministic.

Note that the number of samples returned may be less than the requested number, either because:
- the search space is smaller than the requested number
- the stop condition for sampling distinct samples is reached

Parameters:

Name  Type        Description                         Default
num   int         number of samples requested         required
seed  int | None  optional fixed seed, defaults to 0  0

Returns:

Type              Description
Iterator[Sample]  An iterator to the generated samples

Source code in xtc/itf/search/strategy.py
@abstractmethod
def sample(self, num: int, seed: int | None = 0) -> Iterator[Sample]:
    """Generates unique random samples from this strategy.

    The implementation should ensure that the search space
    is sampled uniformly, i.e. each distinct point in the
    search space should be equally probable.

    The number of requested samples must be greater than 0.

    If the seed provided is None, the generated sample list
    is not deterministic.

    Note that the number of samples returned may be less than
    the requested number, either because:
    - the search space is smaller than the requested number
    - the stop condition for sampling distinct samples is reached

    Args:
        num: number of samples requested
        seed: optional fixed seed, defaults to 0

    Returns:
        An iterator to the generated samples
    """
    ...
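A minimal sketch of these guarantees (unique draws, determinism for a fixed seed, possibly fewer results than requested) over a small hypothetical search space; this is not xtc's implementation:

```python
import itertools
import random

# Hypothetical sketch of sample(): uniform, unique draws from a finite
# space, deterministic for a fixed integer seed, and capped at the space
# size when fewer distinct points exist than requested.
def sample(num, seed=0):
    assert num > 0  # the requested number of samples must be positive
    space = [{"unroll": u, "vector": v}
             for u, v in itertools.product([1, 2, 4], [4, 8])]
    rng = random.Random(seed)  # seed=None would give non-deterministic draws
    rng.shuffle(space)
    # Truncate: at most the whole space (6 points here) is returned.
    yield from space[:num]

few = list(sample(3, seed=42))
capped = list(sample(100, seed=42))  # space has only 6 distinct points
```

Shuffling the enumerated space and truncating is one simple way to get unique, uniformly distributed samples from a small finite space; real implementations over huge spaces would sample lazily instead.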
xtc.itf.search.strategy.Strategy.sample_to_dict(sample) abstractmethod

Generates a Sample from a given VecSample.

The variables in the VecSample must be in the order given by self.sample_names.

Parameters:

Name    Type    Description               Default
sample  Sample  The VecSample to convert  required

Returns:

Type            Description
dict[str, int]  The equivalent Sample

Source code in xtc/itf/search/strategy.py
@abstractmethod
def sample_to_dict(self, sample: Sample) -> dict[str, int]:
    """Generates a Sample from a given VecSample.

    The variables in the VecSample must be in the order given by self.sample_names.

    Args:
        sample: The VecSample to convert

    Returns:
        The equivalent Sample
    """
    ...
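The ordering requirement can be sketched as a simple zip of the vector entries against the strategy's variable names. `sample_names` here is a hypothetical stand-in for the real interface's `self.sample_names`:

```python
# Hypothetical sketch of sample_to_dict(): pair a vector-form sample with
# the strategy's variable names, in the order given by sample_names.
sample_names = ["tile_i", "tile_j", "unroll"]

def sample_to_dict(vec):
    # The vector entries must follow the order of sample_names.
    return dict(zip(sample_names, vec))

d = sample_to_dict([32, 8, 4])
```

If the vector were in a different order, the resulting dictionary would silently bind values to the wrong names, which is why the ordering contract matters.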