This page provides detailed technical information about the functions available in the Pyplan API. Each function is crafted to perform specific operations essential for managing and interacting with Pyplan services.
To access and utilize these functions, prefix them with pp. in your code. For example, to initiate a file upload, use pp.upload().
The documentation below outlines the purpose of each function, describes the parameters and return types, and provides usage examples. This will help you integrate and leverage Pyplan functionalities efficiently in your workflows.
This section outlines the functions related to different node types in Pyplan. These functions help in creating and configuring various types of nodes, such as selectors, forms, and input cubes.
Creates a UI Pyplan selector for decision nodes.
Parameters:
- options (Union[List[Any], pd.Index]): List or pandas Index with values that can be selected.
- selected (Union[int, List[int], List[Union[str, int, float]]]): Current selected value or values.
- multiselect (bool, optional): Whether to allow multiple selection. Defaults to False.
- save_selected_labels (bool, optional): If False (default), saves selected indexes in the selected parameter. Otherwise, saves the actual selected value (string, integer, float).
- node_identifier (str, optional): ID of the node that holds the definition for the selector. Mandatory if save_selected_labels is True. The best option is passing self.identifier for this parameter (without quotes).
>>> options = ["option1", "option2", "option3"]
>>> selected = 1
>>> selector = pp.selector(options, selected)
>>> options = ["option1", "option2", "option3"]
>>> selected = ["option2", "option3"]
>>> selector = pp.selector(options, selected, multiselect=True, save_selected_labels=True, node_identifier=self.identifier)
Creates a UI Pyplan form with the given table name, columns, and settings.
>>> pp.form(
table_name='node1',
columns=[
FormColumn(field='column1', title='column1', column_type=ColumnType.input, data_type=DataType.string, nested_headers=['header1', 'header2']),
FormColumn(field='column2', title='column2', column_type=ColumnType.input, data_type=DataType.string),
FormColumn(field='column3', title='column3', column_type=ColumnType.input, data_type=DataType.string)
],
settings=FormSettings(database_engine=DatabaseConnectionString.sqlite, database_connection=SQLiteConnection(database_path='forms_data/forms.db'))
)
Dataclass with all accepted parameters for a form column.
>>> from typing import Any, Union
QUANTITIES_THRESHOLD = 300
def _quantities_change_validation_fn(
value: Any, column_name: str, row_id: Union[int, str], df: pd.DataFrame
) -> tuple[bool, str]:
"""Validates the given value against a predefined maximum threshold.
Args:
value (Any): The value to be validated.
column_name (str): The name of the column where the value is located.
row_id (Union[int, str]): The identifier of the row containing the value.
df (pd.DataFrame): The DataFrame with the current state of the form.
Returns:
tuple[bool, str]: A tuple where the first element is a boolean indicating
whether the value is valid, and the second element is a message to display after the validation.
"""
if value > QUANTITIES_THRESHOLD:
return (
False,
f'Invalid value {value}. The maximum value is {QUANTITIES_THRESHOLD}',
)
return True, ''
- formula: For ColumnType.calc columns only. If formula is a string, it evaluates a string describing operations on form columns (see pandas.DataFrame.eval). If formula is a function, the form DataFrame is passed to the function as an argument.
- values: For ColumnType.selector columns only. List of values to display in the selector for the entire column.
- (ColumnType.selector only) When True, displayed rows of the form will be filtered by the available elements in the values list parameter.
- related_map: For ColumnType.related_selector columns only. DataFrame with related columns allocation.
- related_map_column_values: For ColumnType.related_selector columns only. Name of the column in related_map that has the values to be displayed in the selector.
- Columns of related_map that will filter the results of the related_map_column_values column. These columns should be present in the form as field names and in related_map as columns.
- nested_headers: For example, ['header1', 'header2'] will display a column with header1 as a child and header2 as a parent. Note that this is a nested structure, so children cannot be wider than parents; otherwise, headers and nested_headers will not be rendered.
- For ColumnType.selector and ColumnType.related_selector columns only. When True, it only allows values that are defined as options in the selector.
Dataclass with all accepted parameters for form settings.
- database_engine: One of 'sqlite', 'postgresql', or 'custom'.
>>> QUANTITY_TOTAL_SUM_THRESHOLD = 1200
PRICE_MEAN_THRESHOLD = 15.0
def _confirm_validation_fn(df: pd.DataFrame) -> tuple[bool, str]:
"""Validates whether the total sum of the 'quantity' column and the average of
the 'price' column meet the defined constraints.
Args:
df (pd.DataFrame): The DataFrame with the current state of the form.
Returns:
tuple[bool, str]: A tuple where the first element is a boolean indicating
whether the DataFrame meets the validation criteria, and the second element
is an error message if the validation fails.
"""
quantity_total_sum = df['quantity'].sum().item()
if not quantity_total_sum <= QUANTITY_TOTAL_SUM_THRESHOLD:
return (
False,
f'The total sum of Quantity must be {QUANTITY_TOTAL_SUM_THRESHOLD} or less. Current total: {quantity_total_sum}',
)
price_mean = df['price'].mean().item()
if not price_mean <= PRICE_MEAN_THRESHOLD:
return (
False,
f'The average Price must be {PRICE_MEAN_THRESHOLD} or less. Current average: {price_mean}',
)
return True, ''
- When True, creates new columns in the table if a new column is found after the form was created. Otherwise, raises a ValueError.
- from_dataframe: When True, every row in the form originates from a given pandas DataFrame object.
- source_dataframe_node: Used when from_dataframe is True only. Pandas DataFrame object that will be the source of every row in the form.
- Used when from_dataframe is True only. Column name in the source_dataframe_node DataFrame that will be used to uniquely identify rows in the form.
- Used when from_dataframe is True only. When set to True, deletes rows whose ID column value was previously stored in the database but is not currently present in the source pandas DataFrame.
Creates a UI Pyplan InputCube with the given table name, dimensions, and settings.
>>> pp.input_cube(
table_name='sales',
dimensions=[
InputCubeDimension(field='regions', name=regions.node.identifier),
InputCubeDimension(field='item_types', name=item_types.node.identifier)
],
settings=InputCubeSettings(
data_type=DataType.float,
default_value_type=DefaultValueType.scalar,
default_value=0.0,
database_engine=DatabaseConnectionString.sqlite,
database_connection=SQLiteConnection(database_path='inputs_data/input_cubes.db')
)
)
Dataclass with all accepted parameters for an InputCube dimension.
Dataclass with all accepted parameters for InputCube settings.
- database_engine: One of 'sqlite', 'postgresql', or 'custom'.
- default_value_type: One of 'scalar' or 'app_node'.
- default_value: If default_value_type is 'scalar', it can be a float, int, or str. If it is 'app_node', it must be a str with the node identifier from which its result will be taken as the default value.
- When True, removes any combination of dimensions that is not present in the actual set of combinations from the given dimensions.
- When False, raises an error when a new dimension is added or removed.
Functions related to process management within Pyplan allow you to retrieve and manage processes, including their statuses and updates.
Returns a list of all processes available in the current application.
- (bool, optional): Whether to include "In progress" processes only (True) or all process statuses ("Stopped", "In progress", and "Completed"). Defaults to True.
Example output:
[
{
"id": 1,
"name": "Process A",
"description": "",
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"start_date": "2023-08-01T14:10:50+0000",
"end_date": None,
"subscribers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"groups": [
{
"id": 2,
"name": "Demand Planning",
"position": 1,
"tasks": [
{
"id": 2,
"name": "Historical Analysis",
"description": "",
"action_type": "open_interface",
"action_data": "d64d9487-83bb-46ce-9dfb-d7e48144fb07",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "2",
"name": "since_blocking_task_completed"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [1],
"reviewers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"comments": [],
"finished": None,
"reviewer_interface": "d64d9487-83bb-46ce-9dfb-d7e48144fb07"
},
{
"id": 4,
"name": "Baseline Forecast",
"description": "",
"action_type": "open_interface",
"action_data": "99721db0-b967-4f99-a39c-40a78e3ed19a",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 5,
"name": "Not ready to start",
"color": "#e6e6fa",
"status_type": "not_ready_to_start"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "2",
"name": "since_blocking_task_completed"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [2],
"reviewers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"comments": [],
"finished": None,
"reviewer_interface": "5a61034d-98f2-4716-84cb-1ce6eb110982"
},
]
}
],
"is_delayed": False
},
{
"id": 2,
"name": "Process B",
"description": "",
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"start_date": "2023-08-16T14:53:07+0000",
"end_date": None,
"subscribers": [
{
"id": "4107a468-ba9a-4843-b3bb-1961ce665313",
"username": "app_administrator2",
"first_name": "App",
"last_name": "Administrator 2"
}
],
"groups": [
{
"id": 3,
"name": "Supply Planning",
"position": 0,
"tasks": [
{
"id": 7,
"name": "Production Plan",
"description": "",
"action_type": "open_interface",
"action_data": "3abf4d4a-ab24-4b29-8e92-2add9bdef422",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 4,
"name": "Completed",
"color": "#127517",
"status_type": "completed"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "1",
"name": "since_process_starts"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [],
"reviewers": [],
"comments": [],
"finished": "2023-08-23T15:10:50+0000",
"reviewer_interface": ""
},
],
}
],
"is_delayed": False
}
]
result = pp.get_all_processes()
Returns a list of all processes in which the current user is involved. A user is considered involved in a
process if any of the following conditions are met:
a) The user is a subscriber to the process.
b) The user is responsible for a task within the process.
c) The user is a subscriber or reviewer for a task within the process.
If condition a) is met, the function will return all tasks associated with the process. If either condition b) or c) is met, the function will only return tasks for which the user is responsible, a subscriber, or a reviewer.
- (bool, optional): Whether to include "In progress" processes only (True) or all process statuses ("Stopped", "In progress", and "Completed"). Defaults to True.
Example output:
[
{
"id": 1,
"name": "Process A",
"description": "",
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"start_date": "2023-08-01T14:10:50+0000",
"end_date": None,
"subscribers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"groups": [
{
"id": 2,
"name": "Demand Planning",
"position": 1,
"tasks": [
{
"id": 2,
"name": "Historical Analysis",
"description": "",
"action_type": "open_interface",
"action_data": "d64d9487-83bb-46ce-9dfb-d7e48144fb07",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "2",
"name": "since_blocking_task_completed"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [1],
"reviewers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"comments": [],
"finished": None,
"reviewer_interface": "d64d9487-83bb-46ce-9dfb-d7e48144fb07"
},
{
"id": 4,
"name": "Baseline Forecast",
"description": "",
"action_type": "open_interface",
"action_data": "99721db0-b967-4f99-a39c-40a78e3ed19a",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 5,
"name": "Not ready to start",
"color": "#e6e6fa",
"status_type": "not_ready_to_start"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "2",
"name": "since_blocking_task_completed"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [2],
"reviewers": [
{
"id": "1b23133b-a005-4a28-9768-7eb3a0ec2555",
"username": "app_administrator",
"first_name": "App",
"last_name": "Administrator"
}
],
"comments": [],
"finished": None,
"reviewer_interface": "5a61034d-98f2-4716-84cb-1ce6eb110982"
},
]
}
],
"is_delayed": False
},
{
"id": 2,
"name": "Process B",
"description": "",
"status": {
"id": 2,
"name": "In progress",
"color": "#ff7f0e",
"status_type": "in_progress"
},
"start_date": "2023-08-16T14:53:07+0000",
"end_date": None,
"subscribers": [
{
"id": "4107a468-ba9a-4843-b3bb-1961ce665313",
"username": "app_administrator2",
"first_name": "App",
"last_name": "Administrator 2"
}
],
"groups": [
{
"id": 3,
"name": "Supply Planning",
"position": 0,
"tasks": [
{
"id": 7,
"name": "Production Plan",
"description": "",
"action_type": "open_interface",
"action_data": "3abf4d4a-ab24-4b29-8e92-2add9bdef422",
"responsible": {
"id": "dbc28680-0a42-4aeb-a2cc-ddf5829f3b2c",
"username": "viewer_user",
"first_name": "Viewer",
"last_name": "User"
},
"status": {
"id": 4,
"name": "Completed",
"color": "#127517",
"status_type": "completed"
},
"due_days": 2,
"due_hours": 0,
"due_minutes": 0,
"due_type": {
"id": "1",
"name": "since_process_starts"
},
"is_delayed": False,
"subscribers": [],
"is_blocked_by": [],
"reviewers": [],
"comments": [],
"finished": "2023-08-23T15:10:50+0000",
"reviewer_interface": ""
},
],
}
],
"is_delayed": False
}
]
result = pp.get_my_processes()
Returns a list of dictionaries containing all available statuses.
[
{'id': 1, 'name': 'Not started', 'color': '#a7a7a7', 'status_type': 'not_started'},
{'id': 2, 'name': 'In progress', 'color': '#ff7f0e', 'status_type': 'in_progress'},
{'id': 3, 'name': 'Pending review', 'color': '#FFC400', 'status_type': 'pending_review'},
{'id': 4, 'name': 'Completed', 'color': '#127517', 'status_type': 'completed'},
{'id': 5, 'name': 'Not ready to start', 'color': '#e6e6fa', 'status_type': 'not_ready_to_start'},
]
result = pp.get_task_statuses()
Updates the status of a given task.
Parameters: the task_id of the task to update and the new_task_status_id of the new status (see pp.get_task_statuses() for available statuses):
result = pp.change_task_status(task_id=3, new_task_status_id=2)
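As a sketch of combining the two functions above (status names as returned by pp.get_task_statuses()), you can look up the status id by its status_type instead of hard-coding it:
# Hedged sketch: find the id of the "Completed" status and apply it to a task.
statuses = pp.get_task_statuses()
completed_id = next(s['id'] for s in statuses if s['status_type'] == 'completed')
result = pp.change_task_status(task_id=3, new_task_status_id=completed_id)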
Updates the properties of a given process.
import datetime
new_data = {
'end_date': datetime.datetime.now(),
'status': {
'id': 3,
'name': 'Completed',
'color': '#127517',
'status_type': 'completed'
}
}
result = pp.update_process(process_id=2, data=new_data)
This subsection contains functions related to the management and monitoring of scheduled tasks within Pyplan. These functions allow you to log task activity, initiate the execution of scheduled tasks, and retrieve logs to monitor task progress and outcomes.
Generates a log entry. Used for scheduled tasks.
- task_state (str): State of the task, e.g. 'PROGRESS', 'INFO', 'WARNING', 'FAILURE'.
- task_description (str): Short description of the task. Example: 'start process'.
- task_activity (str): Additional short description.
- task_info (dict): JSON containing more information about the task. Keys should be strings and values should be JSON-serializable.
>>> pp.log_task(task_state='SUCCESS', task_description='Process finished', task_activity='Data import',
...             task_info={'imported_rows': 100})
Initiates the execution of a scheduled task.
- task_id (int): ID of the scheduled task to run. Raises an error if task_id is not a positive integer.
result = pp.run_scheduled_task(task_id=3)
Retrieves the logs/messages of a scheduled task.
Structure with task_details=True:
[
{
"id": "32ed40e6-d98b-43f7-a192-581107892f3b",
"state": "INFO",
"params": null,
"task_logs": [
{
"id": "b704722b-77d8-4bf6-8687-46074a201225",
"state": "RECEIVED",
"is_running": false,
"description": "Start Job",
"activity": null,
"created_at": "2024-03-26T17:41:45+0000",
"updated_at": "2024-03-26T17:41:45+0000",
"info": {
"params": {
"param1": "0"
},
"node_id": "a1millonfilas",
"version": null,
"app_path": "pyplan/admin/Analisis de datos 1"
},
"periodic_task_log": "c3d70913-d6a9-4d12-9327-a8a0e2c191cd"
},
{
"id": "06c27e3c-a850-4954-bf88-e6fe26a2eeae",
"state": "PROGRESS",
"is_running": false,
"description": "Application successfully opened",
"activity": null,
"created_at": "2024-03-26T17:41:47+0000",
"updated_at": "2024-03-26T17:41:47+0000",
"info": {},
"periodic_task_log": "c3d70913-d6a9-4d12-9327-a8a0e2c191cd"
},
{
"id": "021d79b5-fb69-43b5-bee7-3b1f56a52caf",
"state": "INFO",
"is_running": false,
"description": "Checking Existence",
"activity": null,
"created_at": "2024-03-26T17:41:47+0000",
"updated_at": "2024-03-26T17:41:47+0000",
"info": {
"Existence": "Node a1millonfilas does exists"
},
"periodic_task_log": "c3d70913-d6a9-4d12-9327-a8a0e2c191cd"
},
{
"id": "060fc7e7-d90b-48aa-af5a-3b9c710e945b",
"state": "INFO",
"is_running": false,
"description": "Node run started",
"activity": null,
"created_at": "2024-03-26T17:41:47+0000",
"updated_at": "2024-03-26T17:41:47+0000",
"info": {},
"periodic_task_log": "c3d70913-d6a9-4d12-9327-a8a0e2c191cd"
}
],
"created_at": "2024-03-26T12:58:42+0000",
"task_id": "0ecea047-bc4b-4129-adf7-802091b5a969",
"periodic_task": 11
},
{
"id": "8e8b5287-057b-4041-a8fe-4ba6a69bfcf0",
"state": "INFO",
"params": {
"param1": "12"
},
"created_at": "2024-03-26T13:06:16+0000",
"task_id": "b8274630-f4fd-41bb-b59e-2c75e579dc8a",
"periodic_task": 11
},
]
Structure with task_details=False:
[
{
"id": "8e8b5287-057b-4041-a8fe-4ba6a69bfcf0",
"state": "INFO",
"params": {
"param1": "12"
},
"created_at": "2024-03-26T13:06:16+0000",
"task_id": "b8274630-f4fd-41bb-b59e-2c75e579dc8a",
"periodic_task": 11
},
]
logs = pp.get_scheduled_task_logs(task_id=3, result_size=20, task_details=True)
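As a minimal sketch based on the structure shown above, you can scan the detailed logs for failures:
# Hedged sketch: print the description of any nested task log in a FAILURE state.
logs = pp.get_scheduled_task_logs(task_id=3, result_size=20, task_details=True)
for entry in logs:
    for task_log in entry.get('task_logs', []):
        if task_log['state'] == 'FAILURE':
            print(task_log['created_at'], task_log['description'])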
The Assistant Bot functions allow you to interact with and configure a virtual assistant within your application. You can update the assistant's parameters, such as its name, description, and behavior, or initiate conversations by sending messages to it. This functionality is designed to enhance the user experience by providing customizable AI-driven assistance within your applications.
Updates the parameters of the currently selected assistant bot for the application.
result = pp.update_application_assistant(
assistant_id='asst_i4....',
engine_type='openai_assistant',
reindex_documents=True
)
Chat with an assistant.
[
{'role': 'user', 'content': 'Hello'},
{'role': 'assistant', 'content': 'Hello, how can I help you?'},
{'role': 'user', 'content': 'I need help with...'}
]
result = pp.assistant_chat(
assistant_id='asst_i4...',
engine_type='openai_assistant',
message='Analyze the following data and...'
)
The Encryption functions provide secure methods for managing sensitive data within your application. These functions enable you to retrieve encrypted secrets, manage your secret keys, and decrypt encrypted SQLite databases. This ensures that your data remains protected while allowing you to access and manage it as needed.
Returns the value of a secret key.
- secret_key (str): Name of the secret key.
- department_code (str, optional): Defaults to None.
result = pp.get_secret_value(secret_key='my_secret_key', department_code='my_department_code')
Returns a list of all the secret keys the user has access to.
result = pp.get_my_secret_keys()
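A minimal sketch combining both functions, assuming pp.get_my_secret_keys() returns a plain list of key names:
# Hedged sketch: only fetch the secret if the current user has access to it.
available_keys = pp.get_my_secret_keys()
value = pp.get_secret_value(secret_key='my_secret_key') if 'my_secret_key' in available_keys else None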
Decrypts an existing encrypted SQLite database and creates a new SQLite database without encryption containing its original data.
import os
result = pp.decrypt_sqlite_database(
encrypted_db_path=os.path.join(current_forms_data_path, 'encrypted_forms.db'),
decrypted_db_path=os.path.join(current_forms_data_path, 'decrypted_forms.db'),
password=pp.get_secret('MY_FORMS_DB_PASSWORD'),
remove_encrypted_db_when_done=True
)
This section covers functions related to handling files and managing messages within the Pyplan environment.
Downloads a file from the Pyplan UI.
- (str): The path of the file to download. It can be absolute, relative, or from temp.
# absolute
pp.download('/company/Public/data/sample.csv')
# relative to current application
pp.download('output.csv')
# from temp directory
pp.download('/temp/uuid.csv')
# download folder as .zip file
pp.download('folder1')
Displays the upload wizard in the Pyplan UI.
- target_path (str): Upload target folder.
- target_filename (Union[str, None], optional): Always assign this name to the uploaded file. Generates a copy if the file already exists.
- allowed_extensions (List[str], optional): File extensions allowed for upload.
- validation_node_id (str, optional): Identifier of the node that will execute the validation. The node must return a function (see the examples).
- callback_node_id (str, optional): Identifier of the node to invoke once the file has been uploaded. The node must return a function (see the examples).
- list_of_node_ids_to_invalidate (List[str], optional): List of node identifiers to be invalidated after file uploading.
- (str, optional): Text to be displayed in the upload dialog box.
- multiple (bool, optional): If True, allows uploading more than one file at a time. The target_filename parameter is not used when multiple is True.
# absolute
pp.upload(target_path='/company/Public/data')
# relative to current application
pp.upload(target_path='data')
# fixed file name
pp.upload(target_path='data', target_filename='input_template.xlsx')
# filter file extensions
pp.upload(target_path='data', allowed_extensions=['.xls','.xlsx','.xlsm'])
# with validation node
pp.upload(target_path='data', validation_node_id='validation_upload')
# example definition of validation_upload node:
def _fn(filename: str) -> bool:
# your validation code here
...
pp.send_message('your message here')
return True
result = _fn
# using callback
pp.upload(target_path='data', callback_node_id='upload_callback')
# example definition of upload_callback node:
def _fn(filename: str):
# your code here
result = _fn
# using invalidate nodes:
pp.upload(target_path='data', list_of_node_ids_to_invalidate=['connection_node_1', 'connection_node_2'])
# upload multiple files
pp.upload(target_path='/company/Public/data', allowed_extensions=['.xls','.xlsx','.xlsm'], multiple=True)
Sends a message to the UI. Only used with Pyplan UI.
- (str): The message text to be displayed.
- (Optional[str], optional): The message title to be displayed. Defaults to None.
- (str, optional): The level of the message. Can be "info", "success", "warning", or "error". Defaults to "info".
- (int, optional): The time in milliseconds to automatically hide the message.
>>> pp.send_message("The process has been completed", "Process complete!", "success", 3000)
Creates a new app notification for a given user.
message = 'The process ended <span style="color: green">SUCCESSFULLY</span>. Click to see the results...'
pp.send_notification(
target_usernames=['analyst_user_1', 'analyst_user_2'],
message=message,
notification_type='interface',
extra_params={'interface_id': '444ef11c-fec3-4ec8-9297-2a7034ec48b2'}
)
Creates and updates a progress bar. Only used with Pyplan UI.
- (int): A value between 0 and 100 representing the current progress of the task.
- (str, optional): A message to display along with the progress bar. Defaults to an empty string.
- (str, optional): The level of the notification. Can be one of "info", "success", "warning", or "error". Defaults to "info".
- (bool, optional): Whether to close the progress bar when the task is finished. Defaults to False.
>>> pp.progressbar(20, "Step 1", "info")
>>> pp.progressbar(100, "Complete!", "success")
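A typical usage sketch is updating the bar from inside a loop; the step names and count below are illustrative:
# Hedged sketch: report progress while processing a list of steps.
steps = ['Load data', 'Transform', 'Export']  # illustrative step names
for i, step in enumerate(steps, start=1):
    pp.progressbar(int(i / len(steps) * 100), step, 'info')
    # ... do the actual work for this step here ...
pp.progressbar(100, 'Complete!', 'success')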
This collection of functions provides tools for manipulating and managing DataArray and DataFrame objects. They include methods for setting domains, creating new arrays, subscript and index management, data slicing, and filling missing values. Additionally, functions for concatenating rows and converting between various data formats and structures are included. These utilities facilitate effective data handling and transformation within Pyplan.
Reindexes the dataArray by applying the indices of the domainDic parameter. This function takes an xarray DataArray and a dictionary as inputs. The dictionary should have the dimension names as keys and Pandas indices as values. The function reindexes the dataArray along the dimensions specified in the dictionary using the values provided, and renames the dimensions to the names provided in the dictionary. If a defaultValue is provided, any missing values in the reindexed DataArray are filled with it. The function then returns the reindexed and renamed DataArray with the new domain set.
- dataArray (xr.DataArray): The DataArray for which the domain is to be set.
- domainDic (Dict[str, pd.Index]): A dictionary with the dimensions of the DataArray as keys and Pandas indexes representing the new domain as values. The Pandas indexes must have a name.
- defaultValue (Optional[Union[float, int]], optional): A default value to fill in any missing values in the DataArray after reindexing. If not provided, missing values will not be filled.
Returns: the DataArray with the new domain set.
pp.set_domain(da, {"time": time_index, "products": product_index})
Creates a DataArray using an atomic value distributed along all dimensions.
- (Union[float, int]): The value to fill the DataArray with.
- (List[List[Any]]): A list of lists representing the coordinates of the DataArray. Each inner list contains the values for a single dimension.
- (Optional[np.dtype], optional): The data type of the DataArray. If not provided, the data type will be inferred from the value. Default is None.
Returns: the new DataArray.
>>> data_array = create_dataarray(5, [[1, 2, 3], [4, 5, 6]], np.int64)
>>> print(data_array)
<xarray.DataArray (dim_0: 3, dim_1: 3)>
array([[5, 5, 5],
[5, 5, 5],
[5, 5, 5]], dtype=int64)
Coordinates:
* dim_0 (dim_0) int64 1 2 3
* dim_1 (dim_1) int64 4 5 6
Filters a DataArray using the specified indexes and values.
- (xr.DataArray): The DataArray to be filtered.
- (Union[Index, List[Index]]): The index or list of indexes to filter by. The indexes must be an index node or have a name.
- (Union[List[str], str]): The value or list of values to filter by.
>>> data = xr.DataArray(np.random.rand(3,3), coords=[("x",[1,2,3]),("y",[4,5,6])])
>>> data
<xarray.DataArray (x: 3, y: 3)>
array([[0.90468629, 0.83137329, 0.16108201],
[0.41570222, 0.52236005, 0.83363554],
[0.70427948, 0.49809961, 0.06619599]])
Coordinates:
* x (x) int64 1 2 3
* y (y) int64 4 5 6
>>> indexes = pd.Index(["x"],name='x')
>>> values = [2]
>>> pp.subscript(data, indexes, values)
<xarray.DataArray (y: 3)>
array([0.41570222, 0.52236005, 0.83363554])
Coordinates:
* y (y) int64 4 5 6
Returns a Pandas index with the elements of the index for which the condition cube is true. The function is used to create a new index that is a subset of an existing index.
- cube (xr.DataArray): The data array used as a condition to select elements of the index. The condition should be in the form of a boolean DataArray.
>>> cube = xr.DataArray([1, 2, 3, 4, 5]) > 2
>>> pp.subset(cube)
Int64Index([2, 3, 4], dtype='int64')
Changes the index of a DataArray object.
- (xr.DataArray): The DataArray to change the index of.
- (pd.Index): The old index to be replaced.
- (pd.Index): The new index to replace the old one.
- (int, optional): Compare mode. 1: by value (default); 2: by position.
- (float, optional): The default value to fill NaN values with. Defaults to None.
>>> pp.change_index(dataArray, oldIndex, newIndex)
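A more concrete sketch (index names and values are illustrative):
# Hedged sketch: replace a quarter index with descriptive labels, matching by position (mode 2).
import pandas as pd
import xarray as xr
oldIndex = pd.Index(['Q1', 'Q2'], name='quarter')
newIndex = pd.Index(['Jan-Mar', 'Apr-Jun'], name='quarter_label')
dataArray = xr.DataArray([10.0, 20.0], coords=[('quarter', oldIndex)])
result = pp.change_index(dataArray, oldIndex, newIndex, 2)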
Filters a DataArray by integer position along the specified index.
- (xr.DataArray): The DataArray to be filtered.
- (pd.Index): The index used to filter the DataArray. Must have a name.
- (int): Integer position along the specified index.
Fills np.inf values in an xarray.DataArray with a specified value.
- (optional): Value to replace np.inf with. Defaults to 0.
Returns: the DataArray without inf values.
>>> pp.fill_inf(dataArray, 0)
Fills np.inf and np.nan values in a DataArray with the specified value.
- (xr.DataArray): The DataArray to fill.
- (Union[float, int]): Value to replace np.inf and np.nan with. Default is 0.
Returns: the DataArray with np.inf and np.nan replaced by the specified value.
Flattens array_param by creating a new index that includes all combinations of values from index_param. The new index will have the same name as the original index.
- (xr.DataArray): The data array to flatten.
- (str): The name of the index dimension of array_param to use to create the new index.
>>> import xarray as xr
>>> dataArray = xr.DataArray([[1, 2], [3, 4]], coords=[('x', ['A', 'B']), ('y', [1, 2])])
>>> new_index = pp.concat_rows(dataArray, 'x')
>>> new_index
Index(['A', 'B'], dtype='object')
Returns a pandas DataFrame from an xlsb file.
- (str): The path of the xlsb file to be opened.
Returns None if the file cannot be read.
>>> _df = pp.pandas_from_xlsb_file('path/to/xlsb/file')
Creates a pandas DataFrame from an xarray DataArray with n dimensions.
- (xr.DataArray): The DataArray to convert to a DataFrame.
>>> import xarray as xr
>>> data = xr.DataArray([1, 2, 3], dims=["x"], coords={"x": [1, 2, 3]})
>>> pp.pandas_from_dataarray(data)
x value
0 1 1
1 2 2
2 3 3
Class to manage access databases.
Returns a Pandas Index from a column of a Pandas DataFrame.
- (pd.DataFrame): Pandas DataFrame.
- (Optional[str], optional): DataFrame column name used to create the pd.Index. By default, it is created using the first column.
- (bool, optional): True to remove empty rows.
>>> import pandas as pd
>>> df = pd.DataFrame({'A':[1,2,3], 'B':[4,5,6]})
>>> pp.index_from_pandas(df)
Int64Index([1, 2, 3], dtype='int64')
>>> pp.index_from_pandas(df, "B")
Int64Index([4, 5, 6], dtype='int64')
Returns a pandas.Index from an Excel file.
- (Union[str, Workbook]): Excel object.
- (Optional[str], optional): Sheet name to be read.
- (Optional[str], optional): Name of the range to be read.
- (Optional[str], optional): Used with sheetName, for reading from a simple range.
- (Optional[str], optional): DataFrame column name used to create the index. By default, it is created using the first column.
- (bool, optional): True to remove empty rows.
>>> pp.index_from_excel(excelNode, "Sheet 1")
>>> pp.index_from_excel(excelNode, namedRange="name_range")
>>> pp.index_from_excel(excelNode, "Sheet 1", cellRange="A1:H10", columnName="column10")
Converts a Pandas DataFrame into an xr.DataArray or xr.Dataset by applying the set_domain function.
- (pd.DataFrame): Pandas DataFrame with no index columns.
- (Dict[str, pd.Index]): A dictionary where keys are the dimensions of the resulting DataArray and values are Pandas indices representing the new domain. Example: {'Column Name': index_name}.
- (Union[str, List[str], pd.Index]): DataFrame's value columns.
- (Optional[Any]): Default value when applying the set_domain function.
- (bool): If True, valueColumns becomes a dimension of the resulting DataArray. If False, each value column becomes a variable of the resulting Dataset.
- (bool): If True, sums identical rows. Otherwise, removes duplicates (except the first one).
Returns: a DataArray or Dataset, depending on valueColumns and valueColumnsAsDim.
>>> pp.dataarray_from_pandas(sales_dataframe, {'Sales Channel': sales_channels, 'Month': time}, 'Sales', 0.0)
Creates an xr.DataArray from an Excel file by reading the specified sheet, named range, or cell range.
- (Union[str, Workbook]): excel_connection object.
- (Optional[str], optional): Sheet name to be read.
- (Optional[str], optional): Name of the range to be read.
- (Optional[str], optional): Used with sheetName to read from a simple range.
- (Optional[List[pd.Index]], optional): List of pd.Index objects to perform a change_index operation.
- (Optional[Union[str, pd.Index]], optional): String with the column name of the DataFrame that contains the values, or a pd.Index with column names to convert columns to an index.
- (Optional[List[str]], optional): Column names in Pandas to parse with indexes. Used if the header on the DataFrame is not equal to index identifiers.
- (Optional[pd.Index], optional): Replaces the index used in valueColumns with this index (using change_index).
- (float, optional): Default value for the created DataArray. Defaults to 0.
>>> pp.dataarray_from_excel(excelNode, "Sheet 1", indexes=[indicadores], valueColumns="descuentos")
>>> pp.dataarray_from_excel(excelNode, namedRange="nombre_rango", indexes=[indicadores], valueColumns=time)
Converts an index into a DataArray indexed by the input index and holding its values.
- (pd.Index): Index to be used as the DataArray's index.
>>> pp.to_dataarray(time_index)
Converts dataArray, originally indexed by sourceIndex, to a dataArray indexed by targetIndex, aggregating according to mapInfo's allocation of targetIndex to sourceIndex.
- (xr.DataArray): The DataArray to be aggregated.
- (Union[xr.DataArray, List[xr.DataArray]]): DataArray or list of DataArrays that gives the value of targetIndex for each element of sourceIndex. If the map does not match, the element will not be set into the target index and information will be lost.
- (pd.Index): The original index of dataArray.
- (Union[pd.Index, List[pd.Index]]): Index or list of pd.Index objects that will be used as the new index of dataArray.
- (str, optional): Specifies the function to be used when grouping data (sum, mean, min, max, median). Default is 'sum'.
- (Optional[bool], optional): numeric_only parameter for the aggregation function.
Returns: the DataArray indexed by targetIndex, aggregated according to mapInfo.
# Example: Aggregating time information into an annual index
pp.aggregate(dataArray, timeToYearsMap, time, years)
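A fuller sketch of the annual aggregation above, assuming the map is a DataArray that assigns a year to each month of the source index:
# Hedged sketch: aggregate monthly values into yearly totals (sum by default).
import pandas as pd
import xarray as xr
time = pd.Index(['2023.01', '2023.02', '2024.01'], name='time')
years = pd.Index(['2023', '2024'], name='years')
monthly = xr.DataArray([10.0, 20.0, 5.0], coords=[('time', time)])
timeToYearsMap = xr.DataArray(['2023', '2023', '2024'], coords=[('time', time)])
yearly = pp.aggregate(monthly, timeToYearsMap, time, years)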
Performs cyclic calculations between nodes.
- (xr.DataArray): The DataArray to perform the cyclic dependency calculation on.
- (pd.Index): Index from dataArray to shift. Must have a name.
- (int): Number of elements to shift. Can be positive or negative.
- (Union[xr.DataArray, float, int], optional): Initial values to apply to the first "shift" elements. Defaults to None.
- (bool, optional): Whether to apply subscript to every input in the dynamic loop. If one of the nodes in the dynamic loop is set to True, it applies to every other node in the loop. Defaults to True.
Returns the value of dataArray indexed by the index of dataMap.
- (xr.DataArray): DataArray indexed by sharedIndex whose values correspond to elements of sharedIndex.
- (xr.DataArray): DataArray containing the index to look up in dataArray.
- (pd.Index): The shared index between dataArray and dataMap.
- (Union[int, float], optional): The value to fill when the index lookup fails. Defaults to 0.
Returns: dataArray indexed by dataMap.
# Example: Let's say you have a dataArray with an estimated inflation rate by Country ("inflation_rate"
# is the name of the dataArray; "country" is the name of the index) and you want to assign it to the
# corresponding Company depending on its location. On the other hand, there's a many-to-one map where
# each Company is allocated to a single Country ("country_to_company_allocation"). The sharedIndex,
# in this case, is Country ("country").
pp.lookup(inflation_rate, country_to_company_allocation, country)
# This will return the estimated inflation rate by Company.
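The same scenario as a runnable sketch (names and rates are illustrative):
# Hedged sketch: assign each company the inflation rate of the country where it operates.
import pandas as pd
import xarray as xr
country = pd.Index(['AR', 'CL'], name='country')
company = pd.Index(['comp1', 'comp2', 'comp3'], name='company')
inflation_rate = xr.DataArray([0.50, 0.04], coords=[('country', country)])
# Many-to-one map: the country assigned to each company.
country_to_company_allocation = xr.DataArray(['AR', 'AR', 'CL'], coords=[('company', company)])
result = pp.lookup(inflation_rate, country_to_company_allocation, country)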
Returns elements chosen from x or y depending on the condition. This function is a wrapper for xarray's where function that provides extended condition options and passes additional positional and keyword arguments to the underlying where function.
# Example with Pandas Index
countries = pd.Index(["Latvia", "Argentina", "Chile"])
result1 = pp.where(countries, '!=', ['Latvia', 'Argentina'], 1, 0, some_arg)
result2 = pp.where(countries, '!=', 'Latvia', 1, 0, another_arg, yet_another_arg)
# Example with xarray DataArray
time = xr.DataArray(['2023', '2024', '2025'])
result3 = pp.where(time, '>', '2024', '2023', '2022', some_kwarg=some_value)
result4 = pp.where(time, '=', '2024', '2023', '2022')
Calculates the maximum between two xarray DataArrays, pandas DataFrames, or pandas Index objects. Internally uses np.maximum.
Returns: an object with the same structure as x and y containing the maximum values.
data_array_1 = xr.DataArray([1, 2, 3, 4], dims=['x'])
data_array_2 = xr.DataArray([3, 4, 2, 5], dims=['x'])
max_value = pp.maximum(data_array_1, data_array_2)
print(max_value)
# Output: [3 4 3 5]
Calculates the minimum between two xarray DataArrays, pandas DataFrames, or pandas Index objects. Internally uses np.minimum.
Returns: an object with the same structure as x and y containing the minimum values.
data_array_1 = xr.DataArray([1, 2, 3, 4], dims=['x'])
data_array_2 = xr.DataArray([3, 4, 2, 5], dims=['x'])
min_value = pp.minimum(data_array_1, data_array_2)
print(min_value)
# Output: [1 2 2 4]
Generates a pd.Index with the unique values of the dataArray.
- (xr.DataArray): The DataArray from which the index will be copied.
- (bool, optional): Whether to sort the unique values in ascending order. Defaults to True.
Returns: a pd.Index with the unique values of the dataArray.
>>> data_array = xr.DataArray([1, 2, 2, 3, 4], dims=['x'])
>>> pp.copy_index(data_array, sortValues=True)
Index([1, 2, 3, 4], dtype='int64')
Returns a pd.Index with the sequence between the _start and _end parameters. Both limits are inclusive. Values are converted to string.
- (int): The start of the sequence.
- (int): The end of the sequence.
- (int, optional): The step between each element of the sequence. Defaults to 1.
Returns: a pd.Index with the sequence between _start and _end.
>>> pp.sequence_index(0, 10)
Index(['0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '10'], dtype='object')
Returns a dataArray containing the value of targetIndex for which dataArray (indexed by targetIndex) is equal to targetValue.
- (xr.DataArray): Xarray DataArray.
- (Union[int, float, str]): Target value to search for in dataArray.
- (pd.Index): Pandas Index.
- (str, optional): Method to select the index. "Last" returns the last occurrence of targetIndex for which dataArray is equal to targetValue; "First" returns the first occurrence. Defaults to "Last".
Returns: a DataArray containing the value of targetIndex for which dataArray is equal to targetValue.
>>> import xarray as xr
>>> import pandas as pd
>>> dataArray = xr.DataArray([[1, 2, 3], [4, 5, 6], [7, 8, 9]], coords=[['A', 'B', 'C'], [1, 2, 3]], dims=['x', 'y'])
>>> targetIndex = pd.Index([1, 2, 3])
>>> pp.subindex(dataArray, 8, targetIndex)
<DataArray (y: 1)>
array([2])
Coordinates:
* y (y) int64 2
Concatenates the reportItems data arrays along the reportIndex dimension.
- (Union[Dict[str, xr.DataArray], List[xr.DataArray]]): A dictionary or list of data arrays to concatenate. All data arrays must have the same structure.
- (Optional[str]): The name of the new ReportIndex dimension that will be created. Defaults to "Report index".
- (Optional[pd.Index]): The index to overwrite the reportIndex dimension. Defaults to None.
Returns: a DataArray concatenated along the reportIndex dimension.
>>> reportItems = {"Demand": demand, "Product Stock": stock}
>>> pp.create_report(reportItems, reportIndexName="New Report")
Concatenates Xarray DataArrays along one or two new dimensions, broadcasting over all possible dimensions.
Returns: a DataArray that concatenates all elements of valuesList.
>>> pp.concat_dataarrays(valuesList=[node1, node2, node3], dim=three_items_index)
<xarray.DataArray (dim: 3, ...)>
array([...])
Coordinates:
* dim (dim) object ...
>>> pp.concat_dataarrays(valuesList=['String Example', node2, 0], dim=three_items_index)
<xarray.DataArray (dim: 3, ...)>
array([...])
Coordinates:
* dim (dim) object ...
>>> pp.concat_dataarrays(valuesList=[[node1, node2, node3], [node4, node5, node6]], dim=[two_items_index, three_items_index])
<xarray.DataArray (dim1: 2, dim2: 3, ...)>
array([...])
Coordinates:
* dim1 (dim1) object ...
* dim2 (dim2) object ...
This section includes specialized functions designed for operations on DataFrames, DataArrays, and indexes, along with a selection of mathematical operations. These functions are optimized for handling and manipulating data structures efficiently, providing essential tools for data analysis and transformation.
Finds the occurrences of param1 in param2 using the specified comparison type.
- param1 (Union[str, int, pd.Index]): The value or index to compare.
- param2 (Union[str, int, pd.Index]): The value or index to compare against.
- (int, optional): The type of comparison to use (e.g. cp.end_with, cp.contain, as in the examples below).
- (bool, optional): Whether the comparison should be case-sensitive. Default is True.
Returns: a DataArray with the comparison results.
# If param1 is a scalar (numeric or str) and param2 is an index:
# return a DataArray indexed by param2 with True on occurrences of param2.
>>> pp.find("te", region, cp.end_with)
# If param1 is an index and param2 is an index too:
# return a DataArray indexed by param1 and param2 with True on occurrences of param1 on param2.
>>> pp.find(subregion, region, cp.contain)
Applies a function to a DataArray or Index.
- (Union[xr.DataArray, pd.Index]): The DataArray or Index to which the function should be applied.
- (Callable): The function to be applied.
Returns: the DataArray or Index after the function has been applied.
Raises an error if obj is not of type xr.DataArray or pd.Index.
>>> data_array = xr.DataArray(np.random.rand(3, 3), dims=('x', 'y'))
>>> new_array = apply_fn(data_array, np.square)
>>> print(new_array)
<xarray.DataArray (x: 3, y: 3)>
array([[0.19290767, 0.10126844, 0.09544723],
[0.17547571, 0.01341797, 0.1707827 ],
[0.15894374, 0.07128388, 0.16331167]])
Coordinates:
* x (x) int64 0 1 2
* y (y) int64 0 1 2
Returns a DataArray object with text values formed by splitting the elements of param1's text values at each occurrence of the separator. The DataArray will have the original dimension plus a new dimension 'Parts' of length (number of separators + 1). All text values must have the same number of separators.
- (xr.DataArray): The DataArray containing the text values to be split.
- (str): The separator used to split the text values.
- (int, optional): The part of the split text to return. If not provided, all parts will be returned.
Returns: a DataArray with the split text values, with a new dimension 'Parts' of length (number of separators + 1).
>>> data_array = xr.DataArray(["hello,world", "hi,world", "hey,world"], dims=('x'))
>>> new_array = split_text(data_array, ",")
>>> print(new_array)
<xarray.DataArray (x: 3, Parts: 2)>
array([['hello', 'world'],
['hi', 'world'],
['hey', 'world']], dtype=object)
Coordinates:
* x (x) int64 0 1 2
* Parts (Parts) object 'Part 1' 'Part 2'
Returns a DataArray with the positions of an index as values, indexed by the given index.
- (pd.Index): The index for which the positions are to be returned.
Returns: a DataArray with the positions of the index as values, indexed by the given index.
>>> index = pd.Index(['a', 'b', 'c'])
>>> pos_array = get_pos(index)
>>> print(pos_array)
<xarray.DataArray (index: 3)>
array([0, 1, 2])
Coordinates:
* index (index) object 'a' 'b' 'c'
Concatenates two or more indexes and/or atomic values and returns a single new index.
- (pd.Index or atomic value): Two or more indexes and/or atomic values to be concatenated.
>>> index1 = pd.Index(['a', 'b', 'c'])
>>> index2 = pd.Index(['d', 'e', 'f'])
>>> pp.concat_index(index1, index2, 'g', 'h')
Index(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'], dtype='object')
Returns the straight-line depreciation of the given investments DataArray over its useful life.
- (xr.DataArray): A DataArray containing the investments.
- (xr.DataArray or int): A DataArray or integer with the number of years of life expectancy.
- (pd.Index): The time dimension of the DataArray. Must be a Pandas Index from an Index node.
- (bool, optional): Whether to start depreciating in the current month (t) or the next month (t+1). Default is False.
- (str, optional): Format of the time index, e.g. '%Y.%m' for '2016.01'. Default is '%Y.%m'.
Returns: a DataArray with the straight-line depreciation of the investments over their useful life.
>>> product = pd.Index(['prod1','prod2'])
>>> time_list = pd.Index(pp.create_time('2021.01', '2021.06', freq='M', format='%Y.%m').values)
>>> data = xr.DataArray(np.random.rand(2, 2, len(time_list)),
dims=('fabrica', 'product', 'time_list'),
coords={'fabrica': ['fab1', 'fab2'], 'product': ['prod1', 'prod2'], 'time_list': time_list})
>>> vida_util = xr.DataArray([4, 10],
dims=('product'),
coords={'product': product})
>>> pp.linear_depreciation(investments=data, usefulLife=vida_util, timeIndex=time_list, includeInCurrentMonth=False)
<xarray.DataArray (fabrica: 2, product: 2, time_list: 6)>
array([[[0. , 0.00729475, 0.00947053, 0.01165097, 0.03016374,
0.0456418 ],
[0. , 0.00751886, 0.01090068, 0.01772 , 0.01858931,
0.01974675]],
[[0. , 0.01736181, 0.02019148, 0.02518252, 0.0285973 ,
0.04491098],
[0. , 0.00784025, 0.01253238, 0.02069702, 0.02174192,
0.02660816]]])
Coordinates:
* fabrica (fabrica) <U4 'fab1' 'fab2'
* product (product) <U5 'prod1' 'prod2'
* time_list (time_list) object '2021.01' '2021.02' ... '2021.05' '2021.06'
Returns the Internal Rate of Return (IRR) of a series of periodic payments (negative values) and inflows (positive values). The IRR is the discount rate at which the Net Present Value (NPV) of the flows equals zero. The variable flow must be indexed by time_index. If the cash flow never changes sign, pp.irr() has no solution and returns NaN (Not a Number).
- (xr.DataArray): A DataArray containing the cash flows.
- (pd.Index): The index that corresponds to the time dimension of the cash flows. The time_index must be defined in another node.
Returns: a DataArray with the IRR value.
>>> # The time index must be defined in another node.
>>> time = pd.date_range(start='1/1/2020', end='12/31/2020', freq='M')
>>> flow = xr.DataArray([-1000, 200, 300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200], [('time', time)])
>>> pp.irr(flow, time)
0.4089653663721178
Copies the values of an object source into a new object with id targetId. This function alters the definition of the object with the targetId identifier.
- source (xr.DataArray, pd.DataFrame, pd.Series, pd.Index, float, int, or str): DataArray/DataFrame/Index to copy values from. Can also be a string with the id of a node.
- targetId (str): Identifier of the target node.
Returns True if successful.
>>> source = xr.DataArray([1, 2, 3], dims=['x'])
>>> targetId = 'node1'
>>> pp.copy_as_values(source, targetId)
True
Creates an Excel object from the given filepath.
- (str): Path to the Excel file.
- (bool, optional): Whether to use openpyxl to open the file. Defaults to False.
- (bool, optional): Whether to only get values, not the formulas. Defaults to True.
- (bool, optional): Whether to open the Excel file in read-only mode. Defaults to True.
>>> excel_obj = pp.excel_connection("/path/to/the/excelfile.xlsx")
Converts a Numpy dtype "kind" to a human-readable string representation.
- (str): A single character code indicating the general kind of data.
>>> pp.kind_to_string('U')
'string'
Finds the value of nodeIdX that makes nodeIdObjective equal to goal. This function implements the scipy.optimize.newton function. Returns a dictionary if matrixIndex is not None; otherwise, returns the root of the problem.
- (str): Identifier of node X.
- (str): Identifier of node Objective.
- (Union[int, float]): Goal value to be reached. Defaults to 0.
- (Union[int, float]): An initial estimate of the zero that should be somewhere near the actual zero. Defaults to 1.
- (Union[pd.Index, List[pd.Index]], optional): For multidimensional goal seek. Defaults to None.
- (int): Maximum number of iterations. Defaults to 50.
- (float): The allowable error of the zero value. Defaults to 1.48e-08.
- (bool): If False (default), the root is returned. If True and startValue is scalar, the return value is (x, r), where x is the root and r is a RootResults object. If True and startValue is non-scalar, the return value is (x, converged, zero_der). Defaults to False.
Returns (Union[float, dict]): If matrixIndex is None, returns the root of the problem. Otherwise, returns a dictionary containing the results of each combination of matrixIndex values.
# To find the value of node1 that makes node2 equal to 2, we can use the goal_seek function as follows:
>>> node1 = 0
>>> node2 = node1 ** 2
>>> goal = 2
>>> pp.goal_seek('node1', 'node2', goal)
1.414213562373095
Creates a time index using start and end dates and a specified frequency. The resulting index is formatted with the provided format.
- (str): The starting date of the index.
- (str): The end date of the index.
- (str, optional): The frequency of the index. Defaults to 'M' (monthly).
- (str, optional): The format to use for the resulting index. Defaults to '%Y.%m'.
pp.create_time('2021.01', '2021.06')
# Output: Index(['2021.01', '2021.02', '2021.03', '2021.04', '2021.05', '2021.06'], dtype='object')
pp.create_time('2021.01.01', '2021.01.10', freq='D', format='%d/%m/%Y')
# Output: Index(['01/01/2021', '02/01/2021', '03/01/2021', '04/01/2021', '05/01/2021',
# '06/01/2021', '07/01/2021', '08/01/2021', '09/01/2021', '10/01/2021'], dtype='object')
Adds a given number of periods to a starting date. Allows setting the frequency and output format of the resulting date.
- (str): Starting date in the format 'yyyy.mm' or 'yyyy-mm'.
- (int): Number of periods to add. Can be positive or negative.
- (str, optional): Frequency of the periods. Defaults to 'M' for months.
- (str, optional): Output format for the resulting date. Defaults to '%Y.%m'.
>>> pp.add_periods('2016.01', 6)
'2016.07'
>>> pp.add_periods('2016.01', -6)
'2015.07'
>>> pp.add_periods('2016.01', 6, freq='D', format='%d/%m/%Y')
'07/01/2016'
Returns the Net Present Value (NPV) of a cash flow with equally spaced periods. The flow parameter must contain a series of periodic payments (negative values) and inflows (positive values), indexed by time_index. The optional offset parameter specifies the offset of the first value relative to the current time period. By default, offset is set to 1, indicating that the first value is discounted as if it were one step in the future.
- (float): The rate used to discount the cash flows.
- (xr.DataArray): The cash flows.
- (pd.Index): The time index used to calculate the NPV. Must have a name.
- (int, optional): The offset of the first value relative to the current time period. Defaults to 1.
>>> rate = 0.03
>>> cash_flows = xr.DataArray([-1000, 200, 300, 400], coords=[('time', ['2020','2021','2022','2023'])], dims=['time'])
>>> time_index = pd.Index(['2020', '2021', '2022', '2023'], name='time') # Example time index
>>> pp.npv(rate, cash_flows, time_index)
<xarray.DataArray ()>
array(-156.98980623705643)
Computes the net present value (NPV) of a non-periodic cash flow with a constant discount rate.
- (Union[float, int, xr.DataArray]): The annual discount rate for a 365-day year.
- (xr.DataArray): Cash flow with time_index as the dimension.
- (pd.Index): The time index of the cash flow.
- (Union[str, xr.DataArray]): The initial date of the cash flow.
- (xr.DataArray): Number of days for each period of time_index.
>>> rate = 0.05
>>> cash_flows = xr.DataArray([-1000, 200, 300, 400], coords=[('time', ['2020-01-01', '2020-06-01', '2021-01-01', '2021-06-01'])], dims=['time'])
>>> time_index = pd.Index(['2020-01-01', '2020-06-01', '2021-01-01', '2021-06-01'], name='time')
>>> initial_dates = '2020-01-01'
>>> calendar = xr.DataArray([182, 183, 182, 182], coords=[('time', time_index)], dims=['time'])
>>> pp.xnpv(rate, cash_flows, time_index, initial_dates, calendar)
-132.99
Returns a tuple with the shape of nested lists, similar to numpy's shape.
>>> nested_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
>>> pp.get_nested_lists_shape(nested_list)
(3, 3)
Creates a UI Pyplan Report with the given parameters.
>>> pp.report(
items_allocation=[
ReportItem(name='Sales', item_source=ReportItemSource.app_node_id, value=fin_sales.node.identifier),
ReportItem(name='Gross Margin', item_source=ReportItemSource.app_node_id, value=fin_gross_margin.node.identifier),
ReportItem(name='Depreciation', item_source=ReportItemSource.input, value=-100.)
],
settings=ReportSettings(
output_type=ReportOutputType.dataarray,
items_node_id=report_concepts.node.identifier,
default_value=0
)
)
>>> pp.report(
items_allocation=[
ReportItem(name='Sales', item_source=ReportItemSource.app_node_id, value=fin_sales.node.identifier),
ReportItem(name='Costs', item_source=ReportItemSource.app_node_id, value=fin_costs.node.identifier, as_negative=True),
ReportItem(name='Gross Margin', item_source=ReportItemSource.app_node_id, value=fin_gross_margin.node.identifier),
ReportItem(name='Depreciation', item_source=ReportItemSource.input, value=-100.)
],
settings=ReportSettings(
output_type=ReportOutputType.dataframe,
dimension_name='report_concepts',
default_value=0
)
)
Gets a specific range from an Excel file with defined named ranges.
Returns a dictionary with usecols, skiprows, and nrows keys:
- usecols is a string with the range of columns to use.
- skiprows is an int with the number of rows to skip before reading.
- nrows is an int with the number of rows to read.
>>> pp.get_specific_range(specific_range='MyRange', file_path='my_file.xlsx')
{'usecols': 'A:C', 'skiprows': 1, 'nrows': 10}
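The returned dictionary is shaped so it can be passed straight to pandas.read_excel, for example:
# Hedged sketch: feed the range information into pandas.
import pandas as pd
rng = pp.get_specific_range(specific_range='MyRange', file_path='my_file.xlsx')
df = pd.read_excel('my_file.xlsx', usecols=rng['usecols'], skiprows=rng['skiprows'], nrows=rng['nrows'])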
Builds the range information for reading an Excel file from the given range_value.
- range_value (str): The range of the cells to be read.
Returns a dictionary with the keys usecols, skiprows, and nrows containing the information for the range of cells to be read.
>>> range_value = 'A2:C11'
>>> pp.get_new_range(range_value)
{'usecols': 'A:C', 'skiprows': 1, 'nrows': 10}
Removes duplicate words from a string.
result = pp.remove_duplicates_words("hello hello world world")
print(result)
# "hello world"
This section includes general-purpose functions that support various operations within the application. These functions provide useful information about Pyplan users, help navigate different sections of the interface, and handle system dependencies, providing essential utilities for seamless interaction with the platform.
Gets the list of all users in the company.
[
{
'id': 'abc',
'username': 'user_name',
'first_name': 'User',
'last_name': 'Lastname',
'email': 'user@company.com',
'departments': [
{
'id': 1,
'code': 'pyplan-default',
'name': 'Default'
}
],
'is_active': True,
'last_login': '2023-07-11T21:27:52+0000'
},
{
'id': 'xyz',
'username': 'user_name',
'first_name': 'User firstname',
'last_name': 'User lastname',
'email': 'user@company.com',
'departments': [
{
'id': 1,
'code': 'pyplan-default',
'name': 'Default'
}
],
'is_active': True,
'last_login': '2023-07-10T20:12:31+0000'
}
]
result = pp.get_user_list()
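Since the result is a plain list of dictionaries, it can be filtered with ordinary Python, for example to keep only active users:
# Hedged sketch: extract the usernames of active users from the list above.
users = pp.get_user_list()
active_usernames = [u['username'] for u in users if u['is_active']]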
Sends a message commanding the UI to open an application.
pp.open_app(
folder='my_company/Public/Demand Forecast',
version='v2.3',
)
pp.open_app(
folder='my_company/Public/Demand Forecast',
read_only=True,
show_versions_on_open=True
)
pp.open_app(
folder='my_company/Public/Demand Forecast',
open_on_new_instance=True,
resources={
'metadata': {'cpu_architecture': 'x86'},
'pod_definition': {'spec': {'nodeSelector': {'env': 'PyplanEngines'}}},
'container_params': {'resources': {'limits': {'cpu': '0.95', 'memory': '7.4Gi'}, 'requests': {'cpu': '0.95', 'memory': '7.4Gi'}}}
}
)
Sends a message commanding the UI to go to the selected section.
pp.navigate_to_pyplan_section(
section='code',
params={
'node_id': 'cdm_demand',
'code_layout_id': 1,
'run_node': True
}
)
pp.navigate_to_pyplan_section(
section='interface',
params={
'interface_id': '0026f30b-b167-4da4-b19c-48b0ba9a37ff'
}
)
pp.navigate_to_pyplan_section(
section='files',
params={
'folder_path': 'pyplan/Public/My app/data'
}
)
pp.navigate_to_pyplan_section(
section='versions'
)
DEPRECATED. Use the Lib manager instead.
- (str): The name of the package to be installed from PyPI.
- (Optional[str]): The name used to import the package. Defaults to pypi_name.
Returns True if the installation is successful.
# Example usage is not recommended as this function is deprecated.