Adaptive AI-Driven Workflow Execution
Overview of Using ai_execute for Dynamic Workflow Automation
Fiscus introduces ai_execute, a powerful feature designed to transform natural language inputs into actionable workflows by orchestrating API integrations through user-defined connectors. Acting as an AI Integration Engineer, ai_execute dynamically interprets user requests and autonomously manages workflows, routing tasks and API calls based on real-time input. This approach is ideal for applications that demand adaptable, intelligent integrations.
What is ai_execute?
ai_execute empowers AI agents to interpret complex instructions, making decisions about which APIs to call and when, all without hardcoded logic. It supports a variety of language models, allowing developers to achieve flexible and dynamic orchestration.
Core Benefits:
- Dynamic AI Orchestration: Converts natural language inputs into actionable workflows, eliminating the need for manual API routing and enhancing adaptability.
- Multi-LLM Compatibility: Works with multiple language models (OpenAI, Anthropic, Gemini, and more) to provide flexibility based on specific task requirements.
- Stateful and Stateless Memory Management: Configures memory retrieval and storage, allowing context-aware responses across sessions.
- Callback-Driven Monitoring: Integrates deep customizability at each stage of the AI process, enabling fine-grained control over event handling, from task selection to error management.
- Flexible Execution Modes: Supports sequential and conditional execution flows, accommodating both structured and adaptive workflows.
Essential Parameters and Configurations
Each parameter in ai_execute offers distinct options for customizing the behavior, format, and output of AI-driven tasks. Below are the key parameters to tailor ai_execute to meet specific needs:
- llm_type (FiscusLLMType): Specifies the language model used for interpreting the input. Options include popular models such as OPENAI, GEMINI, and LLAMA, allowing developers to select based on language processing capabilities or specific NLP requirements.
- memory: Allows storage and retrieval of conversation history or context, which can be passed between interactions for stateful memory management. If omitted, stateless memory is used, meaning each call to ai_execute operates independently of prior interactions.
- callbacks: A dictionary of callback functions triggered at various stages in the AI process. These include callbacks for success (on_success), error (on_error), and intermediate steps such as AI-driven task creation, connector selection, or conditional evaluation. The Fiscus SDK provides pre-defined FiscusCallbackType values to identify and assign specific functions, enabling extensive customization.
- execution_mode (FiscusExecutionType): Determines the flow of task execution. The SEQUENTIAL mode executes tasks in order, while decision_logic_override (if specified) can introduce custom logic for non-linear, conditional task execution.
- custom_prompt_template: A template string that adjusts the format of the user prompt before it is sent to the LLM. This is useful for ensuring that certain context or task requirements are included in the prompt, enhancing the relevance of the generated responses.
- decision_logic_override: A custom function that determines the flow or branching of tasks based on the input or prior responses, allowing conditional or complex decision paths within the workflow. This parameter enables advanced, rule-based orchestration within AI-generated workflows.
- few_shot_examples: A dictionary of labeled examples provided as context to the LLM. These examples improve the accuracy and contextual relevance of the responses by demonstrating expected input-output pairs to the model.
- embedding_model: An optional parameter to specify a custom embedding model for tasks involving similarity comparisons or semantic matching, often beneficial in memory retrieval or advanced query handling.
- retrieval_strategy (FiscusMemoryRetrievalType): Controls how the system retrieves memory or historical context, with options such as SEMANTIC_SEARCH for similarity-based retrieval or EXACT_MATCH for strict matching.
- storage_strategy (FiscusMemoryStorageType): Configures how new information or responses are stored, such as APPEND to add to the existing memory or OVERWRITE to replace it.
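To make two of these parameters concrete, the sketch below assembles a few_shot_examples dictionary and a custom_prompt_template by hand before they would be passed to ai_execute. The dictionary layout and the {user_input} placeholder name are illustrative assumptions, not a documented schema; only the parameter names come from the list above.

```python
# Hypothetical few-shot pairs demonstrating an expected input -> output mapping.
few_shot_examples = {
    'example_1': {
        'input': 'Book a meeting room for Friday at 2 PM.',
        'output': 'Create a calendar event via the CalendarAPI connector.',
    },
    'example_2': {
        'input': 'Remind the team about the deadline.',
        'output': 'Send a notification via the EmailService connector.',
    },
}

# A prompt template with a {user_input} placeholder (placeholder name assumed).
custom_prompt_template = (
    "You are an AI Integration Engineer. Convert the request into workflow tasks.\n"
    "Request: {user_input}"
)

# The SDK would perform this substitution internally; shown here for illustration.
prompt = custom_prompt_template.format(user_input="Schedule a review for Monday.")
print(prompt)
```

Both values would then be passed as the few_shot_examples and custom_prompt_template arguments of ai_execute.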
Usage Example: Task Orchestration with Custom Callbacks and Execution Logic
The following example demonstrates ai_execute in a scenario where a user requests to create a scheduled task and send a confirmation email. The workflow includes a callback to monitor task creation and custom error handling:
```python
from fiscus import FiscusClient, FiscusUser, FiscusLLMType, FiscusExecutionType

# Initialize the Fiscus client and user instance
client = FiscusClient(api_key='YOUR_FISCUS_API_KEY')
user = FiscusUser(user_id='user_123', client=client)

# Define custom callbacks for various stages
def on_task_creation(info):
    print("Task creation process:", info)

def on_error(info):
    print("Error encountered:", info)

callbacks = {
    'AI_TASK_CREATION': on_task_creation,
    'ON_ERROR': on_error
}

# Execute the AI-driven workflow
response = client.ai_execute(
    input="Schedule a project review meeting for Monday at 10 AM and send an email to participants.",
    llm_type=FiscusLLMType.OPENAI,
    user=user,
    callbacks=callbacks,
    execution_mode=FiscusExecutionType.SEQUENTIAL
)

# Print the response
if response.success:
    print("Workflow executed successfully:", response.result)
else:
    print("Execution failed with error:", response.error_message)
```
In this example:
- The AI_TASK_CREATION callback tracks each task creation, which is useful for real-time logging or UI updates.
- The ON_ERROR callback handles errors during task execution, enabling customized error processing.
Advanced Configuration Options
Memory Management and Retrieval Logic
Memory settings in ai_execute allow the SDK to handle user context dynamically, accommodating both short-term and long-term contextual data.
Example: Custom Memory Retrieval Logic
```python
def custom_memory_retrieval(query):
    # Custom logic to retrieve relevant memory for the current task
    return "Previous interaction context relevant to the current task."

response = client.ai_execute(
    input="Please summarize recent activities for my team review.",
    llm_type=FiscusLLMType.LLAMA,
    memory_retrieval_logic=custom_memory_retrieval,
    user=user
)
```
Conditional Logic with decision_logic_override
The decision_logic_override parameter allows the insertion of conditional branching within the task sequence, based on the results or status of prior tasks.
```python
def custom_decision_logic(input_text):
    # Define decision logic based on conditions or input patterns
    return [
        {'connector': 'CalendarAPI', 'operation': 'create_event'},
        {'connector': 'EmailService', 'operation': 'send_notification'}
    ]

response = client.ai_execute(
    input="Arrange a follow-up call and send a confirmation email.",
    llm_type=FiscusLLMType.COHERE,
    decision_logic_override=custom_decision_logic,
    user=user
)
```
In this setup:
- custom_decision_logic dynamically decides task order and dependencies based on input conditions, adding flexibility to handle complex, branching workflows.
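The override above always returns the same two tasks. To show actual branching, the sketch below adds the email step only when the request mentions one; the connector and operation names are the same illustrative placeholders used in the example above.

```python
def custom_decision_logic(input_text):
    # Always schedule the calendar event; add the email step only
    # when the request actually asks for an email or confirmation.
    tasks = [{'connector': 'CalendarAPI', 'operation': 'create_event'}]
    lowered = input_text.lower()
    if 'email' in lowered or 'confirmation' in lowered:
        tasks.append({'connector': 'EmailService', 'operation': 'send_notification'})
    return tasks
```

Passed as decision_logic_override, this function yields a one-task or two-task plan depending on the input, rather than a fixed sequence.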
Best Practices for Using ai_execute
- Optimize Task Execution Mode: Use SEQUENTIAL mode for predictable workflows or custom decision logic for more adaptive flows.
- Define callbacks Thoughtfully: Set up appropriate callbacks, especially for error handling (ON_ERROR) and task-specific processes (e.g., AI_TASK_CREATION), to monitor and respond to workflow progress.
- Leverage few_shot_examples for Accuracy: Use few-shot examples to guide the LLM towards specific outcomes, especially in tasks that require precise interpretations.
- Consider Memory Retrieval and Storage Strategies: Tailor the memory strategy (SEMANTIC_SEARCH, APPEND, etc.) based on how the AI should contextualize and recall past interactions.
- Use custom_prompt_template for Customization: Adjust the prompt template to control language or style, especially for applications that require specific formatting or tone.
Handling Asynchronous AI Execution with ai_execute_async
For tasks that benefit from non-blocking behavior, ai_execute_async provides asynchronous execution, allowing the application to continue running while waiting for results. This is beneficial for high-traffic environments or when managing multiple concurrent workflows.
Example: Asynchronous Workflow Execution
```python
import asyncio

async def main():
    # Reuses the client and user initialized earlier
    response = await client.ai_execute_async(
        input="Generate a weekly summary of all completed tasks and email it to the team.",
        llm_type=FiscusLLMType.ANTHROPIC,
        user=user
    )
    if response.success:
        print("Asynchronous workflow completed:", response.result)
    else:
        print("Execution failed with error:", response.error_message)

# Run the async function
asyncio.run(main())
```
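Because ai_execute_async returns an awaitable, several workflows can also be awaited together with asyncio.gather. The sketch below is self-contained: fake_execute is a stub standing in for client.ai_execute_async so the fan-out pattern can run on its own.

```python
import asyncio

async def run_workflows(execute, inputs):
    # Fan out one ai_execute_async-style call per input and await them together.
    return await asyncio.gather(*(execute(input=text) for text in inputs))

# Stub standing in for client.ai_execute_async, so the sketch runs on its own.
async def fake_execute(input):
    await asyncio.sleep(0)
    return f"done: {input}"

results = asyncio.run(run_workflows(fake_execute, ["weekly summary", "task digest"]))
print(results)
```

Swapping fake_execute for client.ai_execute_async (with the llm_type and user arguments shown above) would run the real workflows concurrently.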
By leveraging ai_execute, Fiscus SDK users can build intelligent, self-guided workflows that respond dynamically to user input, reducing the need for manual orchestration and enhancing the adaptability of API integrations. The function's extensive configurability makes it suitable for applications requiring both structure and flexibility, allowing Fiscus to serve as the backbone of advanced AI integration and orchestration.