Benchmarks Reference

This reference describes the benchmarks available in the top-level construe package.

Basic Benchmark

Benchmarks basic dot product torch operators.

See: https://pytorch.org/tutorials/recipes/recipes/benchmark.html

class construe.basic.BasicBenchmark(env=None, saveto=None, num_threads=None, fuzz=False, seed=None)[source]

Bases: object

Methods

fuzzer()

Generates random tensors with 128 to 10000000 elements and sizes k0 and k1 chosen from a loguniform distribution in [1, 10000], 40% of which will be discontiguous on average.

run()

static()

fuzzer()[source]

Generates random tensors with 128 to 10000000 elements and sizes k0 and k1 chosen from a loguniform distribution in [1, 10000], 40% of which will be discontiguous on average.

run()[source]

static()[source]
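
The fuzzing strategy described above follows the tensor fuzzing recipe in the linked PyTorch tutorial. A sketch of what such a fuzzer looks like using torch.utils.benchmark is shown below; the exact construction inside construe.basic may differ:

    from torch.utils.benchmark import Fuzzer, FuzzedParameter, FuzzedTensor

    # Sizes k0 and k1 are drawn loguniform from [1, 10000]; tensors have
    # 128 to 10000000 elements and are contiguous with probability 0.6,
    # i.e. ~40% discontiguous on average, matching the description above.
    fuzzer = Fuzzer(
        parameters=[
            FuzzedParameter("k0", minval=1, maxval=10000, distribution="loguniform"),
            FuzzedParameter("k1", minval=1, maxval=10000, distribution="loguniform"),
        ],
        tensors=[
            FuzzedTensor(
                "x", size=("k0", "k1"), min_elements=128,
                max_elements=10_000_000, probability_contiguous=0.6,
            ),
        ],
        seed=0,
    )

    # Each draw yields the tensors plus the parameters used to build them.
    for tensors, tensor_params, params in fuzzer.take(3):
        print(tensors["x"].shape, params["k0"], params["k1"])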
construe.basic.batched_dot_bmm(a, b)[source]

Computes batched dot by reducing to bmm

construe.basic.batched_dot_mul_sum(a, b)[source]

Computes batched dot by multiplying and summing
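
Both implementations come from the PyTorch benchmarking tutorial linked above. A sketch of the two functions, with a Timer comparison on a shared input (the input shape here is illustrative):

    import torch
    from torch.utils.benchmark import Timer

    def batched_dot_mul_sum(a, b):
        """Computes batched dot by multiplying and summing."""
        return a.mul(b).sum(-1)

    def batched_dot_bmm(a, b):
        """Computes batched dot by reducing to bmm."""
        a = a.reshape(-1, 1, a.shape[-1])
        b = b.reshape(-1, b.shape[-1], 1)
        return torch.bmm(a, b).flatten(-3)

    x = torch.randn(10000, 64)

    # Time each implementation on the same input.
    t0 = Timer(stmt="batched_dot_mul_sum(x, x)",
               globals={"x": x, "batched_dot_mul_sum": batched_dot_mul_sum})
    t1 = Timer(stmt="batched_dot_bmm(x, x)",
               globals={"x": x, "batched_dot_bmm": batched_dot_bmm})
    print(t0.timeit(100))
    print(t1.timeit(100))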

GLiNER Benchmark

GLiNER named entity discovery benchmark runner

class construe.gliner.GLiNER(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.
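
Every benchmark in this reference shares the lifecycle documented above, so one toy example suffices for all of them. The following is a minimal sketch of the interface; the class name and corpus are invented for illustration, and a real benchmark would subclass construe's Benchmark base class (whose constructor and properties such as data_home and model_home are not shown here):

    class WordCount:
        """A toy benchmark mirroring the lifecycle documented above.
        Standalone for illustration only; a real benchmark subclasses
        construe's Benchmark base class and loads actual datasets and
        models."""

        CORPUS = ["hello world", "benchmarks measure latency", "construe runs models"]

        def before(self):
            # Set up any datasets and models needed for the run.
            self.data = list(self.CORPUS)

        def instances(self, limit=None):
            # Yield all instances in the dataset at least once.
            yield from self.data[:limit]

        def preprocess(self, instance):
            # Per-instance preprocessing; measured for latency and memory usage.
            return instance.split()

        def inference(self, tokens):
            # The primary inference step; measured for latency and memory usage.
            return len(tokens)

        @staticmethod
        def total(**kwargs):
            # For progress bar purposes: the number of instances in one run.
            return 3

        def after(self, cleanup=True):
            # Delete any cached datasets or models when cleanup is True.
            if cleanup:
                self.data = None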

Lowlight Benchmark

LowLight image enhancement benchmark runner

class construe.lowlight.LowLight(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

MobileNet Benchmark

MobileNet benchmark runner

class construe.mobilenet.MobileNet(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

MobileViT Benchmark

MobileViT benchmark runner

class construe.mobilevit.MobileViT(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

Moondream Benchmark

Moondream is a computer vision model (image to text) that is optimized for use on embedded devices. It serves as an example model for content moderation use cases where the image is captioned and the caption is then moderated, as sketched below.
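
A minimal sketch of that caption-then-moderate pattern follows. The moderate_caption helper, its blocklist, and the stand-in captioner are purely illustrative assumptions, not part of the construe API:

    from typing import Callable

    BLOCKLIST = {"weapon", "violence"}  # illustrative terms only

    def moderate_caption(caption: str) -> bool:
        """Hypothetical moderation step: flag captions containing blocked terms."""
        return bool(set(caption.lower().split()) & BLOCKLIST)

    def moderate_image(image, captioner: Callable[[object], str]) -> bool:
        """Caption the image (e.g. with a Moondream model), then moderate the caption."""
        return moderate_caption(captioner(image))

    # Example with a stand-in captioner:
    print(moderate_image(None, lambda img: "a person holding a weapon"))  # True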

class construe.moondream.MoonDream(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

NSFW Benchmark

NSFW Image Classification benchmark runner

class construe.nsfw.NSFW(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

Offensive Benchmark

Offensive speech benchmark runner

class construe.offensive.Offensive(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.

Whisper Benchmark

Whisper benchmark runner

class construe.whisper.Whisper(**kwargs)[source]

Bases: Benchmark

Attributes:
data_home
description
metadata
model_home
options
use_sample

Methods

after([cleanup])

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

inference(instance)

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances([limit])

This method should yield all instances in the dataset at least once.

preprocess(instance)

Any preprocessing that must be performed on an instance is handled by this method.

total(**kwargs)

For progress bar purposes, this should report the total number of instances in one run of the benchmark.

after(cleanup=True)[source]

This method is called after the benchmark is run; if cleanup is True, the class should delete any cached datasets or models.

before()[source]

This method is called before the benchmark runs and should set up any datasets and models needed for the benchmark to run.

property description

inference(instance)[source]

This is the primary inference step of the benchmark; it is measured for latency and memory usage and added to the metrics.

instances(limit=None)[source]

This method should yield all instances in the dataset at least once.

preprocess(instance)[source]

Any preprocessing that must be performed on an instance is handled by this method. This method is measured for latency and memory usage as well.

static total(**kwargs)[source]

For progress bar purposes, this should report the total number of instances in one run of the benchmark. Generally this should be hard-coded but can also be computed if necessary.
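
Taken together, the lifecycle methods compose into a run loop roughly like the following sketch. The driver below is an illustrative assumption, not construe's actual runner, which also tracks memory usage and aggregates the results into metrics:

    import time

    def run_benchmark(benchmark, limit=None):
        benchmark.before()
        try:
            for instance in benchmark.instances(limit=limit):
                start = time.perf_counter()
                batch = benchmark.preprocess(instance)   # measured step
                benchmark.inference(batch)               # measured step
                latency = time.perf_counter() - start
                print(f"latency: {latency * 1000:.3f} ms")
        finally:
            benchmark.after(cleanup=True)

With the WordCount sketch from the GLiNER section, run_benchmark(WordCount()) prints one latency line per instance.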