This page details the process of integrating the Conva.AI Assistant into an app in headless mode.
Conva.AI provides SDKs for the following platforms:
Android
Python
iOS
Flutter
Typescript
Rust - Coming soon
Let's walk through the integration process.
Setting up the build environment
For Android, first add the Maven repository URL to your project-level build.gradle file. Open build.gradle and add the repository inside the allprojects > repositories block.
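The exact repository URL comes from your Conva.AI integration details (likely the Integration tab in Magic Studio), so the snippet below is only a minimal sketch with a placeholder URL.

allprojects {
    repositories {
        google()
        mavenCentral()
        // Placeholder: replace with the Conva.AI Maven repository URL from your integration details
        maven { url '<conva-ai-maven-repository-url>' }
    }
}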
For iOS, add the CocoaPods source to your Podfile:
# Add this to your Podfile
source 'https://github.com/CocoaPods/Specs.git'
Next, open your podspec file and add the ConvaAICore dependency.
s.dependency 'ConvaAICore', '1.0.3'
Once done, run the command 'pod install'
For the Typescript SDK, install the package from npm:
npm install conva-ai
Initializing the Assistant
First, initialize Conva.AI using the credentials of the Assistant created via Magic Studio.
ConvaAI.init(
    id = "replace this string with your_assistant_id",
    key = "replace this string with your_api_key",
    version = "LATEST", // special tag that indicates the latest version of the Assistant
    application = applicationContext
)
client = AsyncConvaAI(
    assistant_id="replace this string with your_assistant_id",
    assistant_version="LATEST",  # special tag that indicates the latest version of the Assistant
    api_key="replace this string with your_api_key"
)
Open the "Integration" tab of your Assistant to get the keys.
Invoking an Agent
Next, pass unstructured text input to Conva.AI and let it invoke the appropriate Agent and return the response.
// Pass an input string to Conva.AI and let it determine the
// relevant Agent to invoke and return the output params
// after that Agent is executed
val response = ConvaAI.invokeCapability(
    input = "book a bus ticket from Bangalore to Chennai tomorrow at 2pm"
)
// response = {
//   "capability_name": "ticket_booking",
//   "message": "Showing you buses from Bangalore to Chennai for tomorrow around 2pm",
//   "params": {
//     "source": "BLR",
//     "destination": "MAS",
//     "date": "6/7/2024",
//     "time": "14:00",
//     "mode": "bus"
//   }
// }
# Pass an input string to Conva.AI and let it determine the
# relevant Capability to invoke and return the output params
# after that Capability is executed
response = await client.invoke_capability("book a bus ticket from Bangalore to Chennai tomorrow at 2pm")
return response
# response = {
#   "capability_name": "ticket_booking",
#   "message": "Showing you buses from Bangalore to Chennai for tomorrow around 2pm",
#   "params": {
#     "source": "BLR",
#     "destination": "MAS",
#     "date": "6/7/2024",
#     "time": "14:00",
#     "mode": "bus"
#   }
# }
// Pass an input string to Conva.AI and let it determine the
// relevant Capability to invoke and return the output params
// after that Capability is executed
Response completion = await ConvaAI.invokeCapability(input: "<input_query>");
// completion = {
//   "capability_name": "ticket_booking",
//   "message": "Showing you buses from Bangalore to Chennai for tomorrow around 2pm",
//   "params": {
//     "source": "BLR",
//     "destination": "MAS",
//     "date": "6/7/2024",
//     "time": "14:00",
//     "mode": "bus"
//   }
// }
Note
Proper error handling should be implemented to manage any potential issues during the invocation.
// Pass an input string to Conva.AI and let it determine the
// relevant Capability to invoke and return the output params
// after that Capability is executed.
// Invoke a capability (non-streaming)
let response: ConvaAICapability? = try await ConvaAI.invokeCapability(
    with: "<input_query>"
)
Notes
Ensure that this method is called from an async context.
Proper error handling should be implemented to manage any potential issues during the invocation.
// Pass an input string to Conva.AI and let it determine the
// capability name and the respective parameters
client.invokeCapability({ query: 'Hi, how are you doing today?', stream: false })
  .then(response => {
    console.log('ConvaAI Response:', response);
    // TODO: Add application logic here
  })
  .catch(error => {
    console.error('Error:', error);
  });

// Example response:
// {
//   inputQuery: 'Hi, how are you doing today?',
//   message: "Hi there! I'm doing well, thank you for asking. How about you? How can I help you today?",
//   messageType: 'question',
//   parameters: { interaction_type: 'greeting' },
//   relatedQueries: [
//     'Ask for a joke',
//     'Get fashion tips',
//     'Inquire about the latest trends'
//   ],
//   toolName: 'small_talk'
// }
Response object
The response object contains the following key fields
capability_name: The name of the Agent that was triggered.
message: The string containing either the answer to the original query or a status message. Agent creators can update this via Magic Studio.
related_queries: An array of strings with queries related to the current query and app context. Agent creators can update this field's description if needed.
params: This object contains custom parameters specific to the invoked Agent. For example, if an Agent is created with task_name, date, and time parameters, these fields will appear in the params object.
history: The current conversation history. This can be used by the app to continue the current conversation further (more on this later).
Developers should check whether a specific custom parameter exists before accessing its value, as it might not be marked as required in Magic Studio (see the sketch below).
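As a minimal illustration (assuming the Kotlin Response object exposes params as a Map, which may differ from the actual SDK types), a safe parameter lookup could look like this:

// Sketch only: field and type names here are assumptions, not the exact SDK API
val response = ConvaAI.invokeCapability(
    input = "remind me to pay rent tomorrow at 9am"
)
// Parameters not marked as required in Magic Studio may be absent,
// so check for their presence before using them
val time = response.params["time"] as? String
if (time != null) {
    // Use the time value
} else {
    // Fall back or ask the user for the missing detail
}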
Invoking a specific Agent
To invoke a specific Agent, you can refer to it by name and pass the input string. You can use this in streaming (async) or non-streaming (sync) mode.
// Invoke a specific Agent
val response = ConvaAI.invokeCapabilityWithName(
    input = "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability = "ticket_booking"
)
# Invoke a specific Capability
response = await client.invoke_capability_with_name(
    "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability_name="ticket_booking"
)
return response
// Invoke a specific capability
Response completion = await ConvaAI.invokeCapabilityWithName(
    input: "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability: "ticket_booking");
Note
Proper error handling should be implemented to manage any potential issues during the invocation.
// Invoke a capability with name (non-streaming)
let completion: ConvaAICapability? = try await ConvaAI.invokeCapability(
    with: <"input_query">,
    capabilityName: <"capability_name">
)
Notes
Ensure that this method is called from an async context.
Proper error handling should be implemented to manage any potential issues during the invocation.
client.invokeCapabilityName({ query: 'What are the latest shopping trends?', stream: false, capabilityName: 'domain_faq' })
  .then(response => {
    console.log('ConvaAI Response:', response);
    // TODO: Add application logic here
  })
  .catch(error => {
    console.error('Error:', error);
  });

// Example response:
// {
//   inputQuery: 'What are the latest shopping trends?',
//   message: 'The latest shopping trends include increased online shopping, personalized shopping experiences, and the use of AI in retail.',
//   messageType: 'statement',
//   parameters: { query: 'What are the latest shopping trends?' },
//   relatedQueries: [
//     'Explore popular products this season',
//     'Learn about AI in retail',
//     'Discover personalized shopping experiences'
//   ],
//   toolName: 'domain_faq'
// }
Handling Streaming Responses
Conva.AI supports a streaming version of these APIs. The response is sent back incrementally, with the listener invoked multiple times; use the is_final flag in the Response object to check whether the final response has been received.
// Invoke an Agent asynchronously with a streaming response
ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    listener = object : ResponseListener {
        override fun onResponse(response: Response, isFinal: Boolean) {
            // Check for is_final
            if (isFinal) {
                // Handle the final response
            }
        }

        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
)
# Streaming version
response = await client.invoke_capability_stream(query)
out = ""
async for res in response:
    out = res
# At this point the "is_final" flag in the out object will be set
return out
// Invoke a capability asynchronously with a streaming response
Stream<Response> completionStream = ConvaAI.invokeCapabilityStream(input: "Hello, how are you?");
completionStream.listen(
  (event) {
    // Use the response object
  },
  onError: (error) {
    // Handle error
  },
);
// Invoke a capability (streaming)
for try await response in ConvaAI.invokeCapabilityStream(with: "<input_query>") {
    // Handle the streaming response here
}
Notes
Ensure that this method is called from an async context.
Proper error handling should be implemented to manage any potential issues during the invocation.
Invoking an Agent Group
When you create Agents via Magic Studio, they are all placed in the "default" group, but you can create your own groups and assign specific Agents to them. If you want to limit Agent invocation to specific groups, pass the group name to the APIs.
// Invoke an Agent group synchronously (non-streaming)
val response = ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    capabilityGroup = "default"
)

// Invoke an Agent group asynchronously to get a streaming response
ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    capabilityGroup = "general_conversation",
    listener = object : ResponseListener {
        override fun onResponse(response: Response, isFinal: Boolean) {
            // Handle the response
        }

        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
)
# 'client' is the initialized ConvaAI object

# Streaming
response = await client.invoke_capability_stream(query, capability_group="default")
out = ""
async for res in response:
    out = res
return out

# Non-streaming
response = await client.invoke_capability(query, capability_group="default")
return response
// Invoke a capability group synchronously (non-streaming)
Response completion = await ConvaAI.invokeCapability(
    input: "Hello, how are you?",
    capabilityGroup: "default");

// Invoke a capability group asynchronously to get a streaming response
Stream<Response> completionStream = ConvaAI.invokeCapabilityStream(
    input: "Hello, how are you?",
    capabilityGroup: "general_conversation");
completionStream.listen(
  (event) {
    // Use the response object
  },
  onError: (error) {
    // Handle error
  },
);
Note
Proper error handling should be implemented to manage any potential issues during the invocation.
// Invoke a capability with a group (non-streaming)
let response: ConvaAICapability? = try await ConvaAI.invokeCapability(
    with: <"input_query">,
    capabilityGroup: <"capability_group">
)
Notes
Ensure that this method is called from an async context.
Proper error handling should be implemented to manage any potential issues during the invocation.
Handling Conversational Context
Conva.AI supports maintaining context across different conversations, allowing the AI to build upon prior interactions for more relevant responses.
Parameters
history (Optional): Available after the first response. Pass this from the previous response if you want to maintain conversational context in subsequent calls.
capabilityContext (Optional): Context related to a specific Agent, refining AI responses for that particular Agent.
Usage
On the first invocation, call the API without history.
On subsequent invocations, pass the history from the previous response if you want to maintain the conversation's context.
You can also pass the capabilityGroup or capabilityName along with the context to refine the AI’s responses further.
// First response
val response1 = ConvaAI.invokeCapability(
    input = "Blue jeans"
)

// Second response. Pass the history from the first response
// to the second one to maintain conversational context
val response2 = ConvaAI.invokeCapability(
    input = "show me in red",
    context = ConvaAIContext(
        history = response1.history,
        capabilityContext = <"Map of String and Any">
    )
)
(Any): The values can only include the following types:
String: For textual data.
Int: For integer numbers.
Double: For floating-point numbers.
Boolean: For true/false values.
Array: For lists of items (e.g., Array<String>, Array<Int>, etc.).
Map: For nested maps, allowing for hierarchical data structures.
val items = listOf("green tea", "freshly squeezed juice", "herbal tea", "smoothies", "coconut water", "coffee")

// Example for passing capability context
val capabilityContext = mapOf(
    "product_search" to mapOf("best beverages" to items)
)
// Note: Here <product_search> is the Agent name.
// Use your own Agent name and describe Agent data according to it.
// Optional Context: capabilityContext is optional.
// If omitted, the AI will function based on the default
// or previously provided context.
# First response
response1 = await client.invoke_capability("blue jeans")

# Second response. Pass the history from the first response
# to the second one to maintain conversational context
response2 = await client.invoke_capability(
    "show me in red",
    history=response1.conversation_history,
    capability_context={<capability_name>: <knowledge>}
)
return response2
(Knowledge): The values can only include the following types:
String: For textual data.
Int: For integer numbers.
Double: For floating-point numbers.
Boolean: For true/false values.
Array: For lists of items (e.g., Array<String>, Array<Int>, etc.).
Map: For nested maps, allowing for hierarchical data structures.
items = ["green tea", "freshly squeezed juice", "herbal tea", "smoothies", "coconut water", "coffee"]

# Example for passing capability context
capability_context = {"product_search": {"best beverages": items}}

# Note: Here <product_search> is the capability name.
# Use your own capability name and describe capability data according to it.
// First response
var firstResponse = await ConvaAI.invokeCapability(
    input: "Blue jeans",
);

// Second response. Pass the history from the first response
// to the second one to maintain conversational context
var response = await ConvaAI.invokeCapability(
    input: "show me in red",
    context: ConvaAIContext(
        history: firstResponse.history,
        capabilityContext: <"Map of String and Any">,
    ),
);
(Any): The values can only include the following types:
String: For textual data.
Int: For integer numbers.
Double: For floating-point numbers.
Boolean: For true/false values.
Array: For lists of items (e.g., Array<String>, Array<Int>, etc.).
Map: For nested maps, allowing for hierarchical data structures.
var items = ["green tea", "freshly squeezed juice", "herbal tea", "smoothies", "coconut water", "coffee"];

// Example for passing capability context
var capabilityContext = {
  "product_search": {"best beverages": items}
};
// Note: Here <product_search> is the capability name.
// Use your own capability name and describe capability data according to it.
// Optional Context: capabilityContext is optional.
// If omitted, the AI will function based on the default
// or previously provided context.
// First Response
let firstResponse: ConvaAICapability? = try await ConvaAI.invokeCapability(
    with: <"input_query">
)
let response: ConvaAICapability? = try await ConvaAI.invokeCapability(
    with: <"input_query">,
    context: ConvaAIContext(
        history: firstResponse.history,
        capabilityContext: <"Map of String and Any">
    )
)
(Any): The values can only include the following types:
String: For textual data.
Int: For integer numbers.
Double: For floating-point numbers.
Boolean: For true/false values.
Array: For lists of items (e.g., Array<String>, Array<Int>, etc.).
Map: For nested maps, allowing for hierarchical data structures.
let items = ["green tea", "freshly squeezed juice", "herbal tea", "smoothies", "coconut water", "coffee"]

// Example for passing capability context
let capabilityContext: [String: Any] = ["product_search": ["best beverages": items]]

// Note: Here <product_search> is the capability name.
// Use your own capability name and describe capability data according to it.
// Optional Context: capabilityContext is optional.
// If omitted, the AI will function based on the default
// or previously provided context.
Notes
Ensure that this method is called from an async context.
Proper error handling should be implemented to manage any potential issues during the invocation.
let conversationHistory = '';  // assumed to start out empty before the first call

client.invokeCapability({ query: 'What can I wear for my graduation party?', stream: false, history: conversationHistory })
  .then(response => {
    console.log('ConvaAI Response (First Query):', response);
    if (response && 'conversationHistory' in response) {
      conversationHistory = JSON.stringify(response.conversationHistory);
    }
    return client.invokeCapability({ query: 'Can you tell me what you do?', stream: false, history: conversationHistory });
  })
  .then(response => {
    console.log('ConvaAI Response (Second Query):', response);
  });

// ConvaAI Response (First Query): ConvaAIResponse {
//   inputQuery: 'What can I wear for my graduation party?',
//   message: 'Searching for graduation party outfit ideas',
//   parameters: {
//     search_term: 'graduation party outfit',
//     category: 'clothing',
//     filter_price_range: ''
//   },
//   relatedQueries: [
//     'Explore formal dresses', 'Check out party accessories', 'View trending outfits for events',
//     'Browse new arrivals in clothing', 'Filter by color or style'
//   ],
//   conversationHistory: { items: [ { user_input: 'What can I wear for my graduation party?',
//     assistant_response: { thought: "Let's think about this step by step. The user is looking for outfit ideas for a graduation party",
//       category: 'clothing'
//       ... } }
//     ... ] },
//   toolName: 'product_search' }
//
// ConvaAI Response (Second Query): ConvaAIResponse {
//   inputQuery: 'Can you tell me what you do?',
//   message: "I am here to help you explore the latest fashion trends and find stylish clothing and beauty products on Myntra! How can I assist you today?",
//   parameters: { interaction_type: 'greeting' },
//   relatedQueries: [
//     'Ask about the latest fashion trends', 'Inquire about beauty products', 'Get style advice'
//   ],
//   conversationHistory: { items: [ { user_input: 'Can you tell me what you do?',
//     assistant_response: { thought: "Let's think about this step by step. The user is asking about my role and what I do. I should provide a friendly and informative response without going into task-oriented details.",
//       message: "I'm here to help you explore the latest fashion trends and find stylish clothing and beauty products on Myntra! How can I assist you today?"
//       ... } }
//     ... ] },
//   toolName: 'small_talk' }
Using the Copilot UI
The Copilot experience includes a well-designed bottom sheet that appears on top of the app and provides the following elements:
Text box: For users to type their queries.
Mic icon: To trigger Conva.AI's highly accurate speech recognition feature
Message Area: Displays the message (the message field in the Response object)
Voice Feedback: Speaks the message back to the user; can be optionally muted.
Mute icon: To allow end-users to enable or disable the voice feedback
Feedback Icons: Automatically appear after each response.
Dynamic Suggestions: Buttons showing related queries (the related_queries field in the Response object) or custom suggestions provided by the app via an API.
Here are the high-level steps to use the Copilot (a rough sketch of the flow follows the list):
Set up the Copilot.
Theme the Copilot.
Register handlers for Agent handling and Suggestion handling
Agent Handling: Handle responses generated by the Copilot
Suggestion Handling: Handle user interactions with suggestion buttons directed to the app
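The exact Copilot APIs are platform-specific and not covered on this page; purely as an illustration of the flow above, a hypothetical Kotlin-style sketch might look like the following (all class and method names here are placeholders, not the actual SDK surface):

// Hypothetical sketch only: ConvaAICopilot, setup(), onResponse and onSuggestionSelected
// are placeholder names used to illustrate the steps, not real SDK symbols
ConvaAICopilot.setup(theme = copilotTheme)           // 1. Set up and 2. theme the Copilot

ConvaAICopilot.onResponse { response ->              // 3a. Agent handling
    // React to the Response object produced by the Copilot (message, params, ...)
}

ConvaAICopilot.onSuggestionSelected { suggestion ->  // 3b. Suggestion handling
    // Handle suggestion buttons that are directed to the app
}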