fractopo.analysis package

Submodules

fractopo.analysis.anisotropy module

Anisotropy of connectivity determination utilities.

fractopo.analysis.anisotropy.determine_anisotropy_classification(branch_classification: str) int

Return value based on branch classification.

Only C - C branches are given a non-zero value, though this can be changed here. The classification may also differ from ‘C - C’, ‘C - I’ and ‘I - I’ (e.g. ‘C - E’), in which case a value of 0 is still returned.

Parameters:

branch_classification – Branch classification string.

Returns:

Classification encoded as integer.

E.g.

>>> determine_anisotropy_classification("C - C")
1
>>> determine_anisotropy_classification("C - E")
0
fractopo.analysis.anisotropy.determine_anisotropy_sum(azimuth_array: ndarray, branch_types: ndarray, length_array: ndarray, sample_intervals: ndarray = array([0, 30, 60, 90, 120, 150])) tuple[ndarray, ndarray]

Determine the sums of branch anisotropies.

Parameters:
  • azimuth_array – Array of branch azimuth values.

  • branch_types – Array of branch type classification strings.

  • length_array – Array of branch lengths.

  • sample_intervals – Array of the sampling intervals.

Returns:

Sums of branch anisotropies.

E.g.

>>> from pprint import pprint
>>> azimuths = np.array([20, 50, 60, 70])
>>> lengths = np.array([2, 5, 6, 7])
>>> branch_types = np.array(["C - C", "C - C", "C - C", "C - I"])
>>> pprint(determine_anisotropy_sum(azimuths, branch_types, lengths))
(array([ 8.09332329, 11.86423103, 12.45612765,  9.71041492,  5.05739707,
        2.15381611]),
 array([  0,  30,  60,  90, 120, 150]))
fractopo.analysis.anisotropy.determine_anisotropy_value(azimuth: Number, branch_type: str, length: Number, sample_intervals: ndarray = array([0, 30, 60, 90, 120, 150])) ndarray

Calculate anisotropy of connectivity for a branch.

The value is based on azimuth, branch_type and length, and is calculated for preset angles (sample_intervals = np.arange(0, 179, 30) by default).

E.g.

Anisotropy for a C-C classified branch:

>>> determine_anisotropy_value(50, "C - C", 1)
array([0.64278761, 0.93969262, 0.98480775, 0.76604444, 0.34202014,
       0.17364818])

Other classification for branch:

>>> determine_anisotropy_value(50, "C - I", 1)
array([0, 0, 0, 0, 0, 0])
fractopo.analysis.anisotropy.plot_anisotropy_ax(anisotropy_sum: ndarray, ax: PolarAxes, sample_intervals: ndarray = array([0, 30, 60, 90, 120, 150]))

Plot a styled anisotropy of connectivity to a given PolarAxes.

Spline done with: https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.CubicSpline.html

fractopo.analysis.anisotropy.plot_anisotropy_plot(anisotropy_sum: ndarray, sample_intervals: ndarray) tuple[Figure, Axes]

Plot anisotropy values to new figure.
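
A minimal sketch that passes the sums from determine_anisotropy_sum directly to plot_anisotropy_plot; the input arrays are illustrative:

>>> import numpy as np
>>> from fractopo.analysis.anisotropy import (
...     determine_anisotropy_sum,
...     plot_anisotropy_plot,
... )
>>> azimuths = np.array([20, 50, 60, 70])
>>> lengths = np.array([2, 5, 6, 7])
>>> branch_types = np.array(["C - C", "C - C", "C - C", "C - I"])
>>> anisotropy_sum, sample_intervals = determine_anisotropy_sum(
...     azimuths, branch_types, lengths
... )
>>> fig, ax = plot_anisotropy_plot(anisotropy_sum, sample_intervals)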

fractopo.analysis.azimuth module

Functions for plotting rose plots.

class fractopo.analysis.azimuth.AzimuthBins(bin_width: float, bin_locs: ndarray, bin_heights: ndarray)

Bases: object

Dataclass for azimuth rose plot bin data.

bin_heights: ndarray
bin_locs: ndarray
bin_width: float
fractopo.analysis.azimuth.decorate_azimuth_ax(ax: PolarAxes, label: str, length_array: ndarray, set_array: ndarray, set_names: Sequence[str], set_ranges: Sequence[tuple[float | int | Real, float | int | Real]], axial: bool, visualize_sets: bool, append_azimuth_set_text: bool = False, add_abundance_order: bool = False)

Decorate azimuth rose plot ax.

fractopo.analysis.azimuth.determine_azimuth_bins(azimuth_array: ndarray, length_array: ndarray | None = None, bin_multiplier: float | int | Real = 1.0, axial: bool = True) AzimuthBins

Calculate azimuth bins for plotting azimuth rose plots.

E.g.

>>> azimuth_array = np.array([25, 50, 145, 160])
>>> length_array = np.array([5, 5, 10, 60])
>>> azi_bins = determine_azimuth_bins(azimuth_array, length_array)
>>> azi_bins.bin_heights
array([ 5,  5,  0, 70])
>>> azi_bins.bin_locs
array([ 22.5,  67.5, 112.5, 157.5])
>>> azi_bins.bin_width
45.0
fractopo.analysis.azimuth.plot_azimuth_ax(bin_width: float, bin_locs: ndarray, bin_heights: ndarray, bar_color: str, ax: PolarAxes, axial: bool = True) PolarAxes

Plot azimuth rose plot to ax.

fractopo.analysis.azimuth.plot_azimuth_plot(azimuth_array: ndarray, length_array: ndarray, azimuth_set_array: ndarray, azimuth_set_names: Sequence[str], azimuth_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]], label: str, plain: bool, append_azimuth_set_text: bool = False, add_abundance_order: bool = False, axial: bool = True, visualize_sets: bool = False, bar_color: str = 'darkgrey') tuple[AzimuthBins, Figure, PolarAxes]

Plot azimuth rose plot to its own figure.

Returns rose plot bin parameters, figure, ax
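
A minimal sketch of plotting a rose plot with plot_azimuth_plot; the arrays and azimuth sets are illustrative:

>>> import numpy as np
>>> from fractopo.analysis.azimuth import plot_azimuth_plot
>>> azimuths = np.array([25, 50, 145, 160])
>>> lengths = np.array([5, 5, 10, 60])
>>> set_array = np.array(["1", "1", "2", "2"])
>>> azi_bins, fig, ax = plot_azimuth_plot(
...     azimuth_array=azimuths,
...     length_array=lengths,
...     azimuth_set_array=set_array,
...     azimuth_set_names=("1", "2"),
...     azimuth_set_ranges=((0, 90), (90, 180)),
...     label="example",
...     plain=False,
... )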

fractopo.analysis.contour_grid module

Scripts for creating sample grids for fracture trace, branch and node data.

fractopo.analysis.contour_grid.create_grid(cell_width: float, lines: GeoDataFrame | GeoSeries) GeoDataFrame

Create an empty polygon grid for sampling fracture line data.

Grid is created to always contain all given lines.

E.g.

>>> lines = gpd.GeoSeries(
...     [
...         LineString([(1, 1), (2, 2)]),
...         LineString([(2, 2), (3, 3)]),
...         LineString([(3, 0), (2, 2)]),
...         LineString([(2, 2), (-2, 5)]),
...     ]
... )
>>> create_grid(cell_width=0.1, lines=lines).head(5)
                                            geometry
0   POLYGON ((-2 5, -1.9 5, -1.9 4.9, -2 4.9, -2 5))
1  POLYGON ((-2 4.9, -1.9 4.9, -1.9 4.8, -2 4.8, ...
2  POLYGON ((-2 4.8, -1.9 4.8, -1.9 4.7, -2 4.7, ...
3  POLYGON ((-2 4.7, -1.9 4.7, -1.9 4.6, -2 4.6, ...
4  POLYGON ((-2 4.6, -1.9 4.6, -1.9 4.5, -2 4.5, ...
fractopo.analysis.contour_grid.populate_sample_cell(sample_cell: Polygon, sample_cell_area: float, traces: GeoDataFrame, nodes: GeoDataFrame, branches: GeoDataFrame, snap_threshold: float, resolve_branches_and_nodes: bool, traces_sindex: Any | None = None) dict[str, float]

Take a single grid polygon and populate it with parameters.

Mauldon determination requires that E-nodes are defined for every single sample circle. If correct Mauldon values are wanted, resolve_branches_and_nodes must be passed as True. This results in a much longer analysis time.

fractopo.analysis.contour_grid.sample_grid(grid: GeoDataFrame, traces: GeoDataFrame, nodes: GeoDataFrame, branches: GeoDataFrame, snap_threshold: float, resolve_branches_and_nodes: bool = False) GeoDataFrame

Populate a sample polygon grid with geometrical and topological parameters.
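
A minimal sketch combining create_grid and sample_grid, assuming a Network named network has already been created with determine_branches_nodes=True; the cell width and snap threshold below are illustrative:

>>> grid = create_grid(cell_width=0.5, lines=network.branch_gdf)
>>> sampled = sample_grid(
...     grid,
...     traces=network.trace_gdf,
...     nodes=network.node_gdf,
...     branches=network.branch_gdf,
...     snap_threshold=0.001,
... )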

fractopo.analysis.length_distributions module

Utilities for analyzing and plotting length distributions for line data.

class fractopo.analysis.length_distributions.Dist(*values)

Bases: Enum

Enums of powerlaw model types.

EXPONENTIAL = 'exponential'
LOGNORMAL = 'lognormal'
POWERLAW = 'power_law'
TRUNCATED_POWERLAW = 'truncated_power_law'
class fractopo.analysis.length_distributions.LengthDistribution(lengths: ndarray, area_value: float, using_branches: bool, name: str = '', _automatic_fit: Fit | None = None)

Bases: object

Dataclass for length distributions.

area_value: float
property automatic_fit: Fit | None

Get automatic powerlaw Fit.

generate_distributions(cut_off: float = 1e-18)

Generate ccdf and truncated length data with cut_off.

lengths: ndarray
manual_fit(cut_off: float)

Get manual powerlaw Fit.

name: str = ''
using_branches: bool
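
A minimal construction sketch with illustrative values; a LengthDistribution pairs a length array with the area value of the sampled target:

>>> import numpy as np
>>> from fractopo.analysis.length_distributions import LengthDistribution
>>> length_distribution = LengthDistribution(
...     lengths=np.array([0.5, 1.2, 3.4, 8.8, 16.3]),
...     area_value=25.0,
...     using_branches=False,
...     name="example-traces",
... )
>>> fit = length_distribution.automatic_fit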
class fractopo.analysis.length_distributions.MultiLengthDistribution(distributions: list[~fractopo.analysis.length_distributions.LengthDistribution], using_branches: bool, fitter: ~collections.abc.Callable[[~numpy.ndarray, ~numpy.ndarray], tuple[float, float]] = <function numpy_polyfit>, cut_offs: list[float] | None = None, _fit_to_multi_scale_lengths: tuple[~numpy.ndarray, float, float] | None = None, _normalized_distributions: tuple[list[~numpy.ndarray], list[~numpy.ndarray]] | None = None, _optimized: bool = False)

Bases: object

Multi length distribution.

cut_offs: list[float] | None = None
distributions: list[LengthDistribution]
fitter(log_ccm: ndarray) tuple[float, float]

Fit numpy polyfit to data.

property names: list[str]

Get length distribution names.

normalized_distributions(automatic_cut_offs: bool) tuple[list[ndarray], list[ndarray], list[ndarray], list[ndarray], list[float]]

Create normalized and truncated lengths and ccms.

optimize_cut_offs(shgo_kwargs: dict[str, ~typing.Any] | None = None, scorer: ~collections.abc.Callable[[~numpy.ndarray, ~numpy.ndarray], float] = <function mean_squared_log_error>) tuple[MultiScaleOptimizationResult, MultiLengthDistribution]

Get cut-off optimized MultiLengthDistribution.

optimized_multi_scale_fit(scorer: Callable[[ndarray, ndarray], float], shgo_kwargs: dict[str, Any]) MultiScaleOptimizationResult

Use scipy.optimize.shgo to optimize fit.

plot_multi_length_distributions(automatic_cut_offs: bool, plot_truncated_data: bool, scorer: ~collections.abc.Callable[[~numpy.ndarray, ~numpy.ndarray], float] = <function mean_squared_log_error>) tuple[Polyfit, Figure, Axes]

Plot multi-scale length distribution.

using_branches: bool
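
A minimal multi-scale sketch, assuming two LengthDistribution instances named dist_a and dist_b (for example from two mapping scales) already exist:

>>> from fractopo.analysis.length_distributions import MultiLengthDistribution
>>> mld = MultiLengthDistribution(
...     distributions=[dist_a, dist_b],
...     using_branches=False,
... )
>>> polyfit, fig, ax = mld.plot_multi_length_distributions(
...     automatic_cut_offs=True, plot_truncated_data=True
... )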
class fractopo.analysis.length_distributions.MultiScaleOptimizationResult(polyfit: Polyfit, cut_offs: ndarray, optimize_result: OptimizeResult, x0: ndarray, bounds: ndarray, proportions_of_data: list[float])

Bases: NamedTuple

Results of scipy.optimize.shgo on length data.

bounds: ndarray

Alias for field number 4

cut_offs: ndarray

Alias for field number 1

optimize_result: OptimizeResult

Alias for field number 2

polyfit: Polyfit

Alias for field number 0

proportions_of_data: list[float]

Alias for field number 5

x0: ndarray

Alias for field number 3

class fractopo.analysis.length_distributions.Polyfit(y_fit: ndarray, m_value: float, constant: float, score: float, scorer: Callable[[ndarray, ndarray], float])

Bases: NamedTuple

Results of a polyfit to length data.

constant: float

Alias for field number 2

m_value: float

Alias for field number 1

score: float

Alias for field number 3

scorer: Callable[[ndarray, ndarray], float]

Alias for field number 4

y_fit: ndarray

Alias for field number 0

class fractopo.analysis.length_distributions.SilentFit(data: ndarray, discrete=False, xmin=None, xmax=None, verbose=True, fit_method='Likelihood', estimate_discrete=True, discrete_approximation='round', sigma_threshold=None, parameter_range=None, fit_optimizer=None, xmin_distance='D', xmin_distribution='power_law', **kwargs)

Bases: Fit

Wrap powerlaw.Fit for the singular purpose of silencing output.

Silences output both to stdout and stderr.

fractopo.analysis.length_distributions.all_fit_attributes_dict(fit: Fit) dict[str, float]

Collect ‘all’ fit attributes into a dict.

fractopo.analysis.length_distributions.apply_cut_off(lengths: ndarray, ccm: ndarray, cut_off: float = 1e-18) tuple[ndarray, ndarray]

Apply cut-off to length data and associated ccm.

>>> lengths = np.array([2, 4, 8, 16, 32])
>>> ccm = np.array([1. , 0.8, 0.6, 0.4, 0.2])
>>> cut_off = 4.5
>>> apply_cut_off(lengths, ccm, cut_off)
(array([ 8, 16, 32]), array([0.6, 0.4, 0.2]))
fractopo.analysis.length_distributions.calculate_critical_distance_value(data_length: int, data_length_minimum: int = 51)

Calculate approximate critical distance value for large (n>50) sample counts.

Assumes a significance level of 0.05. If the Kolmogorov-Smirnov distance value is smaller than the critical value, the null hypothesis, i.e. that the distributions are the same, cannot be rejected.

fractopo.analysis.length_distributions.calculate_exponent(alpha: float)

Calculate exponent from powerlaw.alpha.

fractopo.analysis.length_distributions.calculate_fitted_values(log_lengths: ndarray, m_value: float, constant: float) ndarray

Calculate fitted values of y.

fractopo.analysis.length_distributions.cut_off_proportion_of_data(fit: Fit, length_array: ndarray) float

Get the proportion of data cut off by powerlaw cut off.

If no fit is passed, the cut off is the one used in automatic_fit.

fractopo.analysis.length_distributions.describe_powerlaw_fit(fit: Fit, length_array: ndarray, label: str | None = None) dict[str, float]

Compose dict of fit powerlaw attributes and comparisons between fits.

fractopo.analysis.length_distributions.distribution_compare_dict(fit: Fit) dict[str, float]

Compose a dict of length distribution fit comparisons.

fractopo.analysis.length_distributions.fit_to_multi_scale_lengths(ccm: ~numpy.ndarray, lengths: ~numpy.ndarray, fitter: ~collections.abc.Callable[[~numpy.ndarray, ~numpy.ndarray], tuple[float, float]] = <function numpy_polyfit>, scorer: ~collections.abc.Callable[[~numpy.ndarray, ~numpy.ndarray], float] = <function mean_squared_log_error>) Polyfit

Fit np.polyfit to multiscale length distributions.

Returns the fitted values, exponent and constant of fit within a Polyfit instance.

fractopo.analysis.length_distributions.numpy_polyfit(log_lengths: ndarray, log_ccm: ndarray) tuple[float, float]

Fit numpy polyfit to data.

fractopo.analysis.length_distributions.optimize_cut_offs(cut_offs: ndarray, distributions: list[LengthDistribution], fitter: Callable[[ndarray, ndarray], tuple[float, float]], scorer: Callable[[ndarray, ndarray], float], *_) float

Optimize multiscale fit.

The optimization function must take a single argument (a 1-d array) and return a single float. It can additionally take static arguments (distributions, fitter).

fractopo.analysis.length_distributions.plot_distribution_fits(length_array: ndarray, label: str, using_branches: bool, use_probability_density_function: bool, cut_off: float | None = None, fit: Fit | None = None, fig: Figure | None = None, ax: Axes | None = None, fits_to_plot: tuple[Dist, ...] = (Dist.POWERLAW, Dist.LOGNORMAL, Dist.EXPONENTIAL), plain: bool = False) tuple[Fit | None, Figure, Axes]

Plot length distribution and powerlaw fits.

If a powerlaw.Fit is not given it will be automatically determined (using the optionally given cut_off).
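
A minimal sketch with made-up lengths; because no powerlaw.Fit is passed, the fit is determined automatically:

>>> import numpy as np
>>> from fractopo.analysis.length_distributions import plot_distribution_fits
>>> lengths = np.array([0.8, 1.1, 1.9, 3.5, 6.0, 11.5, 22.0])
>>> fit, fig, ax = plot_distribution_fits(
...     length_array=lengths,
...     label="example",
...     using_branches=False,
...     use_probability_density_function=False,
... )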

fractopo.analysis.length_distributions.plot_fit_on_ax(ax: Axes, fit: Fit, fit_distribution: Dist, use_probability_density_function: bool) None

Plot powerlaw model to ax.

fractopo.analysis.length_distributions.plot_multi_distributions_and_fit(truncated_length_array_all: list[ndarray], ccm_array_normed_all: list[ndarray], full_length_array_all: list[ndarray], full_ccm_array_normed_all: list[ndarray], cut_offs: list[float], names: list[str], polyfit: Polyfit, using_branches: bool, plot_truncated_data: bool) tuple[Figure, Axes]

Plot multi-scale length distribution.

fractopo.analysis.length_distributions.scikit_linear_regression(log_lengths: ndarray, log_ccm: ndarray) tuple[float, float]

Fit using scikit LinearRegression.

fractopo.analysis.length_distributions.setup_ax_for_ld(ax: Axes, using_branches: bool, indiv_fit: bool, use_probability_density_function: bool, plain: bool = False)

Configure ax for length distribution plots.

Parameters:
  • ax – Ax to setup.

  • using_branches – Are the lines in the axis branches or traces.

  • indiv_fit – Is the plot single-scale or multi-scale.

  • use_probability_density_function – Whether to use complementary cumulative distribution function

  • plain – Should the stylizing be kept to a minimum.

fractopo.analysis.length_distributions.setup_length_dist_legend(ax: Axes)

Set up legend for length distribution plots.

Used for both single and multi distribution plots.

fractopo.analysis.length_distributions.sort_and_log_lengths_and_ccm(lengths: ndarray, ccm: ndarray) tuple[ndarray, ndarray]

Preprocess lengths and ccm.

Sorts them and calculates their natural logarithms.

fractopo.analysis.length_distributions.sorted_lengths_and_ccm(lengths: ndarray, area_value: float | None) tuple[ndarray, ndarray]

Get (normalized) complementary cumulative number array.

Give area_value as None to not normalize.

>>> lengths = np.array([2, 4, 8, 16, 32])
>>> area_value = 10.0
>>> sorted_lengths_and_ccm(lengths, area_value)
(array([ 2,  4,  8, 16, 32]), array([0.1 , 0.08, 0.06, 0.04, 0.02]))
>>> lengths = np.array([2, 4, 8, 16, 32])
>>> area_value = None
>>> sorted_lengths_and_ccm(lengths, area_value)
(array([ 2,  4,  8, 16, 32]), array([1. , 0.8, 0.6, 0.4, 0.2]))

fractopo.analysis.line_data module

Trace and branch data analysis with LineData class abstraction.

class fractopo.analysis.line_data.LineData(_line_gdf: ~geopandas.geodataframe.GeoDataFrame, using_branches: bool, azimuth_set_ranges: ~collections.abc.Sequence[tuple[float | int | ~numbers.Real, float | int | ~numbers.Real]], azimuth_set_names: ~collections.abc.Sequence[str], length_set_ranges: ~collections.abc.Sequence[tuple[float | int | ~numbers.Real, float | int | ~numbers.Real]] = (), length_set_names: ~collections.abc.Sequence[str] = (), area_boundary_intersects: ~numpy.ndarray = <factory>, _automatic_fit: ~powerlaw.Fit | None = None)

Bases: object

Wrapper around the given GeoDataFrame with trace or branch data.

The line_gdf is passed by reference and LineData modifies the input line_gdf instead of copying it. This means that columns added by LineData are accessible upstream through the passed reference.

area_boundary_intersects: ndarray
property automatic_fit: Fit | None

Get automatic powerlaw Fit.

property azimuth_array: ndarray

Array of trace or branch azimuths.

property azimuth_set_array: ndarray

Array of trace or branch azimuth set ids.

property azimuth_set_counts: dict[str, int]

Get dictionary of azimuth set counts.

property azimuth_set_length_arrays: dict[str, ndarray]

Get length arrays of each azimuth set.

azimuth_set_names: Sequence[str]
azimuth_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]]
property boundary_intersect_count: dict[str, int]

Get counts of line intersects with boundary.

boundary_intersect_count_desc(label: str) dict[str, int]

Get counts of line intersects with boundary.

describe_fit(label: str | None = None, cut_off: float | None = None)

Return short description of automatic powerlaw fit.

determine_manual_fit(cut_off: float) Fit | None

Get manually determined Fit with set cut off.

property geometry: GeoSeries

Get line geometries.

property length_array: ndarray

Array of trace or branch lengths.

Note: lengths can be 0.0 due to boundary weighting.

property length_array_non_weighted: ndarray

Array of trace or branch lengths not weighted by boundary conditions.

property length_boundary_weights

Array of weights for lines based on intersection count with boundary.

property length_set_array: ndarray

Array of trace or branch length set ids.

property length_set_counts: dict[str, int]

Get dictionary of length set counts.

length_set_names: Sequence[str] = ()
length_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]] = ()
plot_azimuth(label: str, append_azimuth_set_text: bool = False, add_abundance_order: bool = False, visualize_sets: bool = False, bar_color: str = 'darkgrey', plain: bool = False) tuple[AzimuthBins, Figure, PolarAxes]

Plot azimuth data in rose plot.

plot_azimuth_set_count(label: str) tuple[Figure, Axes]

Plot azimuth set counts.

plot_azimuth_set_lengths(use_probability_density_function: bool = False) tuple[list[Fit | None], list[Figure], list[Axes]]

Plot azimuth set length distributions with fits.

plot_length_set_count(label: str) tuple[Figure, Axes]

Plot length set counts.

plot_lengths(label: str, use_probability_density_function: bool, fit: Fit | None = None, plain: bool = False, fits_to_plot: tuple[Dist, ...] = (Dist.POWERLAW, Dist.LOGNORMAL, Dist.EXPONENTIAL)) tuple[Fit | None, Figure, Axes]

Plot length data with powerlaw fit.

using_branches: bool
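
A minimal construction sketch following the documented field order; the GeoDataFrame and azimuth sets below are illustrative:

>>> import geopandas as gpd
>>> from shapely.geometry import LineString
>>> from fractopo.analysis.line_data import LineData
>>> line_gdf = gpd.GeoDataFrame(
...     geometry=[LineString([(0, 0), (1, 1)]), LineString([(0, 1), (1, 0)])]
... )
>>> line_data = LineData(
...     line_gdf,
...     using_branches=False,
...     azimuth_set_ranges=((0, 90), (90, 180)),
...     azimuth_set_names=("1", "2"),
... )
>>> azimuths = line_data.azimuth_array
>>> set_ids = line_data.azimuth_set_array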

fractopo.analysis.multi_network module

MultiNetwork implementation for handling multiple network analysis.

class fractopo.analysis.multi_network.MultiNetwork(networks: tuple[Network, ...])

Bases: NamedTuple

Multiple Network analysis.

basic_network_descriptions_df(columns: dict[str, tuple[str | None, type]])

Create DataFrame useful for basic Network characterization.

columns should contain key-value pairs where the key is a column name in the numerical_network_description dict. The value is a tuple whose first member is a new name for the column, or None if the column name should not be changed, and whose second member is the type of the column, typically str, int or float.
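
A minimal sketch, assuming an existing MultiNetwork named multi_network; the column keys below are hypothetical and must match keys present in the numerical_network_description dict:

>>> df = multi_network.basic_network_descriptions_df(
...     columns={
...         "Number of Traces": (None, int),  # keep the column name, cast to int
...         "Fracture Intensity P21": ("P21", float),  # rename the column to "P21"
...     }
... )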

collective_azimuth_sets() tuple[tuple[str, ...], Sequence[tuple[float | int | Real, float | int | Real]]]

Get collective azimuth set names.

Checks that all Networks have the same azimuth sets.

multi_length_distributions(using_branches: bool) MultiLengthDistribution

Get MultiLengthDistribution of all networks.

network_length_distributions(using_branches: bool, using_azimuth_sets: bool) dict[str, dict[str, LengthDistribution]]

Get length distributions of Networks.

networks: tuple[Network, ...]

Alias for field number 0

plot_branch(colors: list[str | None] | None = None)

Plot multi-network ternary branch type plot.

plot_branch_azimuth_set_lengths(automatic_cut_offs: bool, plot_truncated_data: bool) tuple[dict[str, MultiLengthDistribution], list[Polyfit], list[Figure], list[Axes]]

Plot multi-network trace azimuths set lengths with fits.

plot_multi_length_distribution(using_branches: bool, automatic_cut_offs: bool, plot_truncated_data: bool, multi_distribution: MultiLengthDistribution | None = None) tuple[MultiLengthDistribution, Polyfit, Figure, Axes]

Plot multi-length distribution fit.

Use multi_length_distributions() to get most parameters used in fitting the multi-scale distribution.

plot_trace_azimuth_set_lengths(automatic_cut_offs: bool, plot_truncated_data: bool) tuple[dict[str, MultiLengthDistribution], list[Polyfit], list[Figure], list[Axes]]

Plot multi-network trace azimuths set lengths with fits.

plot_xyi(colors: list[str | None] | None = None)

Plot multi-network ternary XYI plot.

set_multi_length_distributions(using_branches: bool) dict[str, MultiLengthDistribution]

Get set-wise multi-length distributions.

subsample(min_radii: float | dict[str, float], random_choice: RandomChoice = RandomChoice.radius, samples: int = 1) list[dict[str, float | int | Real | str] | None]

Subsample all Networks in MultiNetwork.

Parameters:
  • min_radii – The minimum radius for subsamples can be either given as static or as a mapping that maps network names to specific radii.

  • random_choice – Whether to use radius or area as the random choice parameter.

  • samples – How many subsamples to conduct per network.

Returns:

Subsamples
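
A minimal sketch, assuming two analyzed Network instances named network_a and network_b:

>>> from fractopo.analysis.multi_network import MultiNetwork
>>> multi_network = MultiNetwork(networks=(network_a, network_b))
>>> mld, polyfit, fig, ax = multi_network.plot_multi_length_distribution(
...     using_branches=False,
...     automatic_cut_offs=True,
...     plot_truncated_data=True,
... )
>>> subsamples = multi_network.subsample(min_radii=5.0, samples=10)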

fractopo.analysis.network module

Analyse and plot trace map data with Network.

class fractopo.analysis.network.Network(trace_gdf: ~geopandas.geodataframe.GeoDataFrame, area_gdf: ~geopandas.geodataframe.GeoDataFrame, name: str = 'Network', determine_branches_nodes: bool = False, snap_threshold: float = 0.001, truncate_traces: bool = True, circular_target_area: bool = False, azimuth_set_names: ~typing.Sequence[str] = ('1', '2', '3'), azimuth_set_ranges: ~collections.abc.Sequence[tuple[float | int | ~numbers.Real, float | int | ~numbers.Real]] = ((0, 60), (60, 120), (120, 180)), trace_length_set_names: ~typing.Sequence[str] = (), trace_length_set_ranges: ~collections.abc.Sequence[tuple[float | int | ~numbers.Real, float | int | ~numbers.Real]] = (), branch_length_set_names: ~typing.Sequence[str] = (), branch_length_set_ranges: ~collections.abc.Sequence[tuple[float | int | ~numbers.Real, float | int | ~numbers.Real]] = (), branch_gdf: ~geopandas.geodataframe.GeoDataFrame = <factory>, node_gdf: ~geopandas.geodataframe.GeoDataFrame = <factory>, censoring_area: ~geopandas.geodataframe.GeoDataFrame = <factory>, cache_results: bool = True, remove_z_coordinates_from_inputs: bool = True, _anisotropy: ~typing.Tuple[~numpy.ndarray, ~numpy.ndarray] | None = None, _parameters: ~typing.Dict[str, float] | None = None, _azimuth_set_relationships: ~pandas.core.frame.DataFrame | None = None, _trace_length_set_relationships: ~pandas.core.frame.DataFrame | None = None, _trace_intersects_target_area_boundary: ~numpy.ndarray | None = None, _branch_intersects_target_area_boundary: ~numpy.ndarray | None = None)

Bases: object

Trace network.

At its simplest, consists of validated traces and a target area that delineates the traces, i.e., only the trace_gdf and area_gdf parameters are required to run the network analysis. The results might not be correct or match your expectations, e.g., traces are truncated to the target area by default. A minimal construction sketch follows the parameter list below.

Parameters:
  • trace_gdf – GeoDataFrame containing trace data i.e. shapely.geometry.LineString geometries.

  • area_gdf – GeoDataFrame containing target area data i.e. (Multi)Polygon geometries.

  • name – Name the Network.

  • determine_branches_nodes – Whether to determine branches and nodes.

  • snap_threshold – The snapping distance threshold to identify snapped traces.

  • truncate_traces – Whether to crop the traces at the target area boundary.

  • circular_target_area – Is the target area a circle.

  • azimuth_set_names – Names of each azimuth set.

  • azimuth_set_ranges – Ranges of each azimuth set.

  • trace_length_set_names – Names of each trace length set.

  • trace_length_set_ranges – Ranges of each trace length set.

  • branch_length_set_names – Names of each branch length set.

  • branch_length_set_ranges – Ranges of each branch length set.

  • branch_gdf – GeoDataFrame containing branch data. It is recommended to let fractopo.Network determine both branches and nodes instead of passing them here.

  • node_gdf – GeoDataFrame containing node data. It is recommended to let fractopo.Network determine both branches and nodes instead of passing them here.

  • censoring_area – Polygon(s) in GeoDataFrame that delineate(s) the area in which trace digitization was uncertain due to censoring caused by e.g. vegetation.

  • cache_results – Whether to use joblib memoization to disk-cache computationally expensive results.
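
A minimal construction sketch with made-up toy data; real use would typically read validated traces and a target area with geopandas:

>>> import geopandas as gpd
>>> from shapely.geometry import LineString, Polygon
>>> from fractopo.analysis.network import Network
>>> trace_gdf = gpd.GeoDataFrame(
...     geometry=[LineString([(0, 0), (2, 2)]), LineString([(0, 2), (2, 0)])]
... )
>>> area_gdf = gpd.GeoDataFrame(
...     geometry=[Polygon([(-1, -1), (3, -1), (3, 3), (-1, 3)])]
... )
>>> network = Network(
...     trace_gdf=trace_gdf,
...     area_gdf=area_gdf,
...     name="example",
...     determine_branches_nodes=True,
...     snap_threshold=0.001,
... )
>>> node_counts = network.node_counts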

property anisotropy: Tuple[ndarray, ndarray]

Determine anisotropy of connectivity.

area_gdf: GeoDataFrame
assign_branches_nodes(branches: GeoDataFrame | None = None, nodes: GeoDataFrame | None = None)

Determine and assign branches and nodes as attributes.

azimuth_set_names: Sequence[str] = ('1', '2', '3')
azimuth_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]] = ((0, 60), (60, 120), (120, 180))
property azimuth_set_relationships: DataFrame

Determine azimuth set relationships.

property branch_azimuth_array: ndarray

Get branch azimuths as array.

property branch_azimuth_set_array: ndarray

Get azimuth set for each branch.

property branch_azimuth_set_counts: Dict[str, int]

Get branch azimuth set counts.

property branch_counts: Dict[str, int]

Get branch counts.

branch_gdf: GeoDataFrame
property branch_intersects_target_area_boundary: ndarray

Get array of E-component count.

property branch_length_array: ndarray

Get branch lengths as array.

property branch_length_array_non_weighted: ndarray

Get non-boundary-weighted branch lengths as array.

branch_length_distribution(azimuth_set: str | None) LengthDistribution

Create structured LengthDistribution instance of branch length data.

property branch_length_set_array: ndarray

Get length set for each branch.

property branch_length_set_counts: Dict[str, int]

Get branch length set counts.

branch_length_set_names: Sequence[str] = ()
branch_length_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]] = ()
branch_lengths_powerlaw_fit(cut_off: float | None = None) Fit | None

Determine powerlaw fit for branch lengths.

property branch_lengths_powerlaw_fit_description: Dict[str, float]

Short numerical description dict of branch length powerlaw fit.

property branch_series: GeoSeries

Get branch geometries as GeoSeries.

property branch_types: ndarray

Get branch type of each branch.

cache_results: bool = True
censoring_area: GeoDataFrame
circular_target_area: bool = False
contour_grid(cell_width: float | None = None, bounds_divider: float = 20.0, precursor_grid: GeoDataFrame | None = None, resolve_branches_nodes: bool = False)

Sample the network with a contour grid.

If cell_width is passed, it is used as the cell width. Otherwise a cell width is calculated from the network branch bounds using the passed bounds_divider or its default value.

If precursor_grid is passed it is used as the grid in which each Polygon cell is filled with calculated network parameter values.

determine_branches_nodes: bool = False
estimate_censoring() float

Estimate the amount of censoring as area float value.

Censoring is caused by e.g. vegetation.

Returns np.nan if no censoring_area is passed by the user into Network creation or if the passed GeoDataFrame is empty.

export_network_analysis(output_path: Path, include_contour_grid: bool = True, contour_grid_cell_size: float | None = None, fits_to_plot: Tuple[Dist, ...] = (Dist.EXPONENTIAL, Dist.LOGNORMAL, Dist.POWERLAW))

Export pre-selected Network analysis results to a directory.

The chosen analyses are opinionated but should contain at least the basic results of fracture network analysis.

output_path should be a path to an existing directory, or to a non-existing directory which will then be created.
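
A minimal usage sketch, assuming an already analyzed Network named network; the output directory name is illustrative:

>>> from pathlib import Path
>>> network.export_network_analysis(output_path=Path("network-analysis-results"))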

export_network_analysis_topology(save_fig_to_export_path: Callable, write_geodataframe_to_export_path: Callable, include_contour_grid: bool, contour_grid_cell_size: float | None, fits_to_plot: Tuple[Dist, ...])

Export topological network analysis results.

property length_set_relationships: DataFrame

Determine length set relationships.

name: str = 'Network'
property node_counts: Dict[str, int]

Get node counts.

node_gdf: GeoDataFrame
property node_series: GeoSeries

Get node geometries as GeoSeries.

property node_types: ndarray

Get node type of each node.

numerical_network_description(trace_lengths_cut_off: float | None = None, branch_lengths_cut_off: float | None = None) Dict[str, float | int | Real | str]

Collect numerical network attributes and return them as a dictionary.

property parameters: Dict[str, float]

Get numerical geometric and topological parameters.

property plain_name

Get filename friendly name for Network based on name attribute.

plot_anisotropy(label: str | None = None, color: str | None = None) Tuple[Figure, Axes] | None

Plot anisotropy of connectivity plot.

plot_azimuth_crosscut_abutting_relationships() Tuple[List[Figure], List[ndarray]]

Plot azimuth set crosscutting and abutting relationships.

plot_branch(label: str | None = None) Tuple[Figure, Axes, TernaryAxesSubplot]

Plot ternary plot of branch types.

plot_branch_azimuth(label: str | None = None, append_azimuth_set_text: bool = False, add_abundance_order: bool = False, visualize_sets: bool = False, bar_color: str = 'darkgrey', plain: bool = False) Tuple[AzimuthBins, Figure, PolarAxes]

Plot branch azimuth rose plot.

plot_branch_azimuth_set_count(label: str | None = None) Tuple[Figure, Axes]

Plot branch azimuth set counts.

plot_branch_azimuth_set_lengths() Tuple[List[Fit | None], List[Figure], List[Axes]]

Plot branch azimuth set lengths with fits.

plot_branch_length_set_count(label: str | None = None) Tuple[Figure, Axes]

Plot branch length set counts.

plot_branch_lengths(label: str | None = None, fit: Fit | None = None, use_probability_density_function: bool = False, plain: bool = False, fits_to_plot: Tuple[Dist, ...] = (Dist.POWERLAW, Dist.LOGNORMAL, Dist.EXPONENTIAL)) Tuple[Fit | None, Figure, Axes]

Plot branch length distribution with powerlaw fits.

plot_contour(parameter: str, sampled_grid: GeoDataFrame) Tuple[Figure, Axes]

Plot contour plot of a geometric or topological parameter.

Creating the contour grid is expensive, so sampled_grid must first be created with the Network.contour_grid method and then passed here for plotting.
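
A minimal sketch, assuming an existing Network named network with determined topology and that contour_grid returns the sampled GeoDataFrame; the parameter name passed to plot_contour is hypothetical and must match a column in the sampled grid:

>>> sampled_grid = network.contour_grid(cell_width=0.5)
>>> fig, ax = network.plot_contour(
...     parameter="Fracture Intensity P21", sampled_grid=sampled_grid
... )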

plot_parameters(label: str | None = None, color: str | None = None) Tuple[Figure, Axes] | None

Plot geometric and topological parameters.

plot_trace_azimuth(label: str | None = None, append_azimuth_set_text: bool = False, add_abundance_order: bool = False, visualize_sets: bool = False, bar_color: str = 'darkgrey', plain: bool = False) Tuple[AzimuthBins, Figure, PolarAxes]

Plot trace azimuth rose plot.

plot_trace_azimuth_set_count(label: str | None = None) Tuple[Figure, Axes]

Plot trace azimuth set counts.

plot_trace_azimuth_set_lengths() Tuple[List[Fit | None], List[Figure], List[Axes]]

Plot trace azimuth set lengths with fits.

plot_trace_length_crosscut_abutting_relationships() Tuple[List[Figure], List[ndarray]]

Plot length set crosscutting and abutting relationships.

plot_trace_length_set_count(label: str | None = None) Tuple[Figure, Axes]

Plot trace length set counts.

plot_trace_lengths(label: str | None = None, fit: Fit | None = None, use_probability_density_function: bool = False, plain: bool = False, fits_to_plot: Tuple[Dist, ...] = (Dist.POWERLAW, Dist.LOGNORMAL, Dist.EXPONENTIAL)) Tuple[Fit | None, Figure, Axes]

Plot trace length distribution with powerlaw fits.

plot_xyi(label: str | None = None) Tuple[Figure, Axes, TernaryAxesSubplot]

Plot ternary plot of node types.

remove_z_coordinates_from_inputs: bool = True
representative_points() List[Point]

Get representative point(s) of target area(s).

reset_length_data()

Reset LineData attributes.

WARNING: Mostly untested.

snap_threshold: float = 0.001
property target_areas: List[Polygon | MultiPolygon]

Get all target areas from area_gdf.

property total_area: float

Get total area.

property trace_azimuth_array: ndarray

Get trace azimuths as array.

property trace_azimuth_set_array: ndarray

Get azimuth set for each trace.

property trace_azimuth_set_counts: Dict[str, int]

Get trace azimuth set counts.

trace_gdf: GeoDataFrame
property trace_intersects_target_area_boundary: ndarray

Check traces for intersection with target area boundaries.

Results are in integers:

  • 0 == No intersections

  • 1 == One intersection

  • 2 == Two intersections

Does not discriminate between which target area (if multiple) the trace intersects. Intersection detection is based on snap_threshold.

property trace_length_array: ndarray

Get trace lengths as array.

property trace_length_array_non_weighted: ndarray

Get non-boundary-weighted trace lengths as array.

trace_length_distribution(azimuth_set: str | None) LengthDistribution

Create structured LengthDistribution instance of trace length data.

property trace_length_set_array: ndarray

Get length set for each trace.

property trace_length_set_counts: Dict[str, int]

Get trace length set counts.

trace_length_set_names: Sequence[str] = ()
trace_length_set_ranges: Sequence[tuple[float | int | Real, float | int | Real]] = ()
trace_lengths_powerlaw_fit(cut_off: float | None = None) Fit | None

Determine powerlaw fit for trace lengths.

property trace_lengths_powerlaw_fit_description: Dict[str, float]

Short numerical description dict of trace length powerlaw fit.

property trace_series: GeoSeries

Get trace geometries as GeoSeries.

truncate_traces: bool = True
write_branches_and_nodes(output_dir_path: Path, branches_name: str | None = None, nodes_name: str | None = None)

Write branches and nodes to disk.

Enables reuse of the same data in later analyses, skipping the computationally expensive topology determination.

Writes only with the GeoJSON driver, as there are differences between spatial file types; only GeoJSON is supported to avoid unexpected errors.

fractopo.analysis.network.requires_topology(func: Callable) Callable

Wrap methods that require determined topology.

Raises an error if trying to call them without determined topology.

fractopo.analysis.parameters module

Analysis and plotting of geometric and topological parameters.

fractopo.analysis.parameters.branches_intersect_boundary(branch_types: ndarray) ndarray

Get a boolean array of whether each branch has an E-component (i.e., intersects the target area boundary).

fractopo.analysis.parameters.convert_counts(counts: dict[str, float | int | Real]) dict[str, int]

Convert float and int values in counts to ints only.

Used by branches and nodes count calculators.

fractopo.analysis.parameters.counts_to_point(counts: dict[str, int], is_nodes: bool, scale: int = 100) tuple[float, float, float] | None

Create ternary point from node_counts.

The order is important: for nodes: X, I, Y and for branches: CC, II, CI.

fractopo.analysis.parameters.decorate_branch_ax(ax: Axes, tax: TernaryAxesSubplot, counts: dict[str, int])

Decorate ternary branch plot.

fractopo.analysis.parameters.decorate_count_ax(ax: Axes, tax: TernaryAxesSubplot, label_counts: dict[str, int], is_nodes: bool)

Decorate ternary count plot.

fractopo.analysis.parameters.decorate_xyi_ax(ax: Axes, tax: TernaryAxesSubplot, counts: dict[str, int])

Decorate xyi plot.

fractopo.analysis.parameters.determine_branch_type_counts(branch_types: ndarray, branches_defined: bool) dict[str, float | int | Real]

Determine branch type counts.

fractopo.analysis.parameters.determine_node_type_counts(node_types: ndarray, branches_defined: bool) dict[str, float | int | Real]

Determine node type counts.

fractopo.analysis.parameters.determine_set_counts(set_names: Sequence[str], set_array: ndarray) dict[str, int]

Determine counts for each set.

fractopo.analysis.parameters.determine_topology_parameters(trace_length_array: ndarray, area: float, branches_defined: bool, correct_mauldon: bool, node_counts: dict[str, float | int | Real] | None, branch_length_array: ndarray | None) dict[str, float]

Determine geometric (and topological) parameters.

Number of traces (and branches) are determined by node counting.

The passed trace_length_array should be non-weighted.

fractopo.analysis.parameters.initialize_ternary_ax(ax: Axes, tax: TernaryAxesSubplot) dict

Decorate ternary ax for both XYI and branch types.

Returns a font dict for use in plotting text.

fractopo.analysis.parameters.initialize_ternary_branches_points(ax, tax)

Initialize ternary branches plot ax and tax.

fractopo.analysis.parameters.initialize_ternary_points(ax: Axes, tax: TernaryAxesSubplot)

Initialize ternary points figure ax and tax.

fractopo.analysis.parameters.plot_branch_plot_ax(counts: dict[str, int], label: str, tax: TernaryAxesSubplot, color: str | None = None)

Plot ternary branch plot to tax.

fractopo.analysis.parameters.plot_parameters_plot(topology_parameters_list: list[dict[str, float]], labels: list[str], colors: list[str] | None = None)

Plot topological parameters.

fractopo.analysis.parameters.plot_set_count(set_counts: dict[str, int], label: str) tuple[Figure, Axes]

Plot set counts.

fractopo.analysis.parameters.plot_ternary_plot(counts_list: list[dict[str, int]], labels: list[str], is_nodes: bool, colors: list[str | None] | None = None) tuple[Figure, Axes, TernaryAxesSubplot]

Plot ternary plot.

Same function is used to plot both XYI and branch type ternary plots.

fractopo.analysis.parameters.plot_xyi_plot_ax(counts: dict[str, int], label: str, tax: TernaryAxesSubplot, color: str | None = None)

Plot XYI points to given ternary axis (tax).

fractopo.analysis.parameters.tern_plot_branch_lines(tax)

Plot line of random assignment of nodes to branches to a branch ternary tax.

Line positions taken from NetworkGT open source code. Credit to: https://github.com/BjornNyberg/NetworkGT

Parameters:

tax (ternary.TernaryAxesSubplot) – Ternary axis to plot to

fractopo.analysis.parameters.tern_plot_the_fing_lines(tax: TernaryAxesSubplot, cs_locs: tuple[float, ...] = (1.3, 1.5, 1.7, 1.9))

Plot connections per branch parameter to XYI-plot.

If the pre-determined locations are not used, the texts will not be correctly placed, as they use absolute positions and labels.

Parameters:
  • tax – Ternary axis to plot to.

  • cs_locs – Pre-determined locations for the lines.

fractopo.analysis.parameters.tern_yi_func(c, x)

Plot Connections per branch threshold line to branch ternary plot.

Uses absolute values.

fractopo.analysis.parameters.ternary_heatmapping(x_values: ndarray, y_values: ndarray, i_values: ndarray, number_of_bins: int, scale_divider: float = 1.0, ax: Axes | None = None) tuple[Figure, TernaryAxesSubplot]

Plot ternary heatmap.

Modified from: https://github.com/marcharper/python-ternary/issues/81

fractopo.analysis.parameters.ternary_point_kwargs(alpha=1.0, zorder=4, s: float = 25, marker='X')

Plot point to a ternary figure.

fractopo.analysis.parameters.ternary_text(text: str, ax: Axes)

Add ternary text about counts.

fractopo.analysis.random_sampling module

Utilities for randomly sampling Network traces.

class fractopo.analysis.random_sampling.NetworkRandomSampler(trace_gdf: GeoDataFrame, area_gdf: GeoDataFrame, min_radius: float, snap_threshold: float, random_choice: RandomChoice | str, name: str)

Bases: object

Randomly sample traces inside given target area.

area_gdf: GeoDataFrame
static area_gdf_should_contain_polygon(area_gdf: GeoDataFrame) GeoDataFrame

Check that area_gdf contains one Polygon.

property max_area: float

Calculate maximum area from max_radius.

property max_radius: float

Calculate max radius from given area_gdf.

property min_area: float

Calculate minimum area from min_radius.

min_radius: float
name: str
random_area() float

Calculate random area in area range.

Range is calculated from [min_radius, max_radius[.

random_choice: RandomChoice | str
static random_choice_should_be_enum(random_choice: RandomChoice | str) RandomChoice

Check that random_choice is valid.

random_network_sample(determine_branches_nodes: bool = True) RandomSample

Get random Network sample with a random target area.

Returns the network, the sample circle centroid and circle radius.
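
A minimal sketch, assuming an existing Network named network whose target area is a single circular Polygon (see random_network_sampler below):

>>> from fractopo.analysis.random_sampling import NetworkRandomSampler, RandomChoice
>>> sampler = NetworkRandomSampler.random_network_sampler(
...     network=network, min_radius=5.0, random_choice=RandomChoice.radius
... )
>>> random_sample = sampler.random_network_sample()
>>> sub_network = random_sample.network_maybe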

classmethod random_network_sampler(network: Network, min_radius: float, random_choice: RandomChoice = RandomChoice.radius)

Initialize NetworkRandomSampler for random sampling.

Assumes that Network target area is a single Polygon circle.

random_radius() float

Calculate random radius in range [min_radius, max_radius[.

random_target_circle() tuple[Polygon, Point, float]

Get random target area and its centroid and radius.

The target area is always within the original target area.

snap_threshold: float
property target_area_centroid: Point

Get target area centroid.

property target_circle: Polygon

Target circle Polygon from area_gdf.

trace_gdf: GeoDataFrame
static trace_gdf_should_contain_traces(trace_gdf: GeoDataFrame) GeoDataFrame

Check that trace_gdf contains LineString traces.

static value_should_be_positive(min_radius: float | int | Real) float | int | Real

Check that value is positive.

class fractopo.analysis.random_sampling.RandomChoice(*values)

Bases: Enum

Choose between random area or radius.

The choice is relevant because area is polynomially correlated with radius.

area = 'area'
radius = 'radius'
class fractopo.analysis.random_sampling.RandomSample(network_maybe: Network | None, target_centroid: Point, radius: float, name: str)

Bases: object

Dataclass for sampling results.

name: str
network_maybe: Network | None
radius: float
target_centroid: Point

fractopo.analysis.relationships module

Functions for plotting cross-cutting and abutting relationships.

fractopo.analysis.relationships.determine_crosscut_abutting_relationships(trace_series: GeoSeries, node_series: GeoSeries, node_types: ndarray, set_array: ndarray, set_names: Sequence[str], buffer_value: float | int | Real, label: str) DataFrame

Determine cross-cutting and abutting relationships between trace sets.

Determines relationships between all input sets using spatial intersects between node and trace data.

E.g.

>>> trace_series = gpd.GeoSeries(
...     [LineString([(0, 0), (1, 0)]), LineString([(0, 1), (0, -1)])]
... )
>>> node_series = gpd.GeoSeries(
...     [Point(0, 0), Point(1, 0), Point(0, 1), Point(0, -1)]
... )
>>> node_types = np.array(["Y", "I", "I", "I"])
>>> set_array = np.array(["1", "2"])
>>> set_names = ("1", "2")
>>> buffer_value = 0.001
>>> label = "title"
>>> determine_crosscut_abutting_relationships(
...     trace_series,
...     node_series,
...     node_types,
...     set_array,
...     set_names,
...     buffer_value,
...     label,
... )
    name    sets  x  y y-reverse error-count
0  title  (1, 2)  0  1         0           0

TODO: No within set relations…..yet… Problem?

fractopo.analysis.relationships.determine_intersect(node: Point, node_class: str, l1: bool, l2: bool, first_set: str, second_set: str, first_setpointtree: PreparedGeometry, buffer_value: float | int | Real) dict[str, Point | str | tuple[str, str] | bool]

Determine what intersection the node represents.

TODO: R0912: Too many branches.

fractopo.analysis.relationships.determine_intersects(trace_series_two_sets: tuple[GeoSeries, GeoSeries], set_names_two_sets: tuple[str, str], node_series_xy_intersects: GeoSeries, node_types_xy_intersects: ndarray, buffer_value: float | int | Real) DataFrame

Determine how abutments and crosscuts occur between two sets.

E.g.

>>> traces = gpd.GeoSeries([LineString([(0, 0), (1, 1)])]), gpd.GeoSeries(
...     [LineString([(0, 1), (0, -1)])]
... )
>>> set_names_two_sets = ("1", "2")
>>> node_series_xy_intersects = gpd.GeoSeries([Point(0, 0)])
>>> node_types_xy_intersects = np.array(["Y"])
>>> buffer_value = 0.001
>>> determine_intersects(
...     traces,
...     set_names_two_sets,
...     node_series_xy_intersects,
...     node_types_xy_intersects,
...     buffer_value,
... )
          node nodeclass    sets  error
0  POINT (0 0)         Y  (1, 2)  False
fractopo.analysis.relationships.determine_nodes_intersecting_sets(trace_series_two_sets: tuple[GeoSeries, GeoSeries], set_names_two_sets: tuple[str, str], node_series_xy: GeoSeries, buffer_value: float | int | Real) list[bool]

Conduct a spatial intersect between nodes and traces.

The node GeoDataFrame contains only X- and Y-nodes and the trace GeoDataFrame only two sets. Returns a boolean list based on intersections.

E.g.

>>> traces = gpd.GeoSeries([LineString([(0, 0), (1, 1)])]), gpd.GeoSeries(
...     [LineString([(0, 1), (0, -1)])]
... )
>>> set_names_two_sets = ("1", "2")
>>> nodes_xy = gpd.GeoSeries([Point(0, 0), Point(1, 1), Point(0, 1), Point(0, -1)])
>>> buffer_value = 0.001
>>> determine_nodes_intersecting_sets(
...     traces, set_names_two_sets, nodes_xy, buffer_value
... )
[True, False, False, False]
fractopo.analysis.relationships.plot_crosscut_abutting_relationships_plot(relations_df: DataFrame, set_array: ndarray, set_names: tuple[str, ...]) tuple[list[Figure], list[ndarray]]

Plot cross-cutting and abutting relationships.

fractopo.analysis.subsampling module

Utilities for Network subsampling.

fractopo.analysis.subsampling.aggregate_chosen(chosen: list[dict[str, float | int | ~numbers.Real | str]], default_aggregator: ~collections.abc.Callable = <function mean_aggregation>) dict[str, Any]

Aggregate a collection of subsampled circles for params.

Weights averages by the area of each subsampled circle.

fractopo.analysis.subsampling.area_weighted_index_choice(idxs: list[int], areas: list[float | int | Real], compressor: list[bool]) int

Make an area-weighted choice from a list of indexes.

fractopo.analysis.subsampling.choose_sample_from_group(group: list[dict[str, float | int | Real | str]]) dict[str, float | int | Real | str]

Choose a single sample from a group of samples.

fractopo.analysis.subsampling.collect_indexes_of_base_circles(idxs: list[int], how_many: int, areas: list[float | int | Real]) list[int]

Collect indexes of base circles, area-weighted and randomly.

fractopo.analysis.subsampling.create_sample(sampler: NetworkRandomSampler) dict[str, float | int | Real | str] | None

Sample with NetworkRandomSampler and return Network description.

fractopo.analysis.subsampling.gather_subsample_descriptions(subsample_results: list[dict[str, float | int | Real | str] | None]) list[dict[str, float | int | Real | str]]

Gather results from a list of subsampling ProcessResults.

fractopo.analysis.subsampling.group_gathered_subsamples(subsamples: list[dict[str, float | int | Real | str]], groupby_column: str = 'Name') dict[str, list[dict[str, float | int | Real | str]]]

Group gathered subsamples.

By default groups by Name column.

>>> subsamples = [
...     {"param": 2.0, "Name": "myname"},
...     {"param": 2.0, "Name": "myname"},
... ]
>>> group_gathered_subsamples(subsamples)
{'myname': [{'param': 2.0, 'Name': 'myname'}, {'param': 2.0, 'Name': 'myname'}]}
fractopo.analysis.subsampling.groupby_keyfunc(item: dict[str, float | int | Real | str], groupby_column: str = 'Name') str

Use groupby_column to group values.

fractopo.analysis.subsampling.random_sample_of_circles(grouped: dict[str, list[dict[str, float | int | Real | str]]], circle_names_with_diameter: dict[str, float | int | Real], min_circles: int = 1, max_circles: int | None = None) list[dict[str, float | int | Real | str]]

Get a random sample of circles from grouped subsampled data.

Both the overall number of circles and which circles are chosen within each group are random. Data is grouped by target area name.

fractopo.analysis.subsampling.subsample_networks(networks: Sequence[Network], min_radii: float | int | Real | dict[str, float | int | Real], random_choice: RandomChoice = RandomChoice.radius, samples: int = 1) list[dict[str, float | int | Real | str] | None]

Subsample given Sequence of Networks.
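
A minimal end-to-end sketch of the subsampling utilities, assuming existing Network instances network_a and network_b with circular target areas; the name key and diameter in circle_names_with_diameter are hypothetical and must correspond to the subsampled target area names:

>>> subsample_results = subsample_networks(
...     networks=[network_a, network_b], min_radii=5.0, samples=10
... )
>>> gathered = gather_subsample_descriptions(subsample_results)
>>> grouped = group_gathered_subsamples(gathered)
>>> aggregated = aggregate_chosen(
...     random_sample_of_circles(
...         grouped, circle_names_with_diameter={"example-name": 10.0}
...     )
... )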

Module contents

Contains most analysis utilities and abstractions.