Integrating the Assistant

This page details the process of integrating a Conva.AI Assistant into an app.

Conva.AI provides SDKs for the following platforms:

  • Native Android (Kotlin)

  • Python

  • iOS - Coming soon

  • Flutter (Android only for now) - Coming soon

  • Rust - Coming soon

Let's walk through the integration process.

Setting up the build environment

First, you need to add the Maven repository URL to your project-level build.gradle file. Open the build.gradle file and add the following inside the allprojects > repositories block:

allprojects {
    repositories {
        maven {
            url "https://gitlab.com/api/v4/projects/25890120/packages/maven"
        }
    }
}

Next, open your app-level build.gradle file and add the conva-ai-core dependency inside the dependencies block:

dependencies {
    implementation 'in.slanglabs.conva:conva-ai-core:2.3.2-beta'
}

Initializing the Assistant

Inside the application, the first thing to do is initialize Conva.AI using the credentials of the Assistant created via Magic Studio:

ConvaAI.init(
    id = "replace this string with your_assistant_id",
    key = "replace this string with your_api_key",
    version = "LATEST", // a special tag that refers to
                        // the latest version of the Assistant
    application = applicationContext
)

Open the "Integration" tab of your Assistant to get the keys

Invoking a Capability

Next, you can pass unstructured text input to Conva.AI and ask it to invoke the appropriate Capability and return the response:

// Pass an input string to Conva.AI and let it determine the 
// relevant Capability to invoke and return the output params
// after that Capability is executed
val response = ConvaAI.invokeCapability(
    input = "book a bus ticket from bangalore to chennai tomorrow at 2pm",
)

// response = {
//    "capability_name" : "ticket_booking",
//    "message" : "Showing you buses from Bangalore to Chennai for tomorrow around 2pm",
//    "params" : {
//         "source": "BLR",
//         "destination" : "MAS",
//         "date" : "6/7/2024",
//         "time" : "14:00",
//         "mode" : "bus"
//     }
// }
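
A typical pattern is to branch on the Capability name. Here is a minimal sketch, assuming the fields in the commented response above are exposed as properties on the returned object (check the SDK's Response type for the exact accessors; showMessage is a hypothetical app helper):

// Dispatch on the Capability that Conva.AI chose to invoke
when (response.capability_name) {
    "ticket_booking" -> {
        // Read the booking params and open the booking flow
    }
    else -> {
        // Fall back to simply showing the message to the user
        showMessage(response.message) // hypothetical app helper
    }
}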

Response object

The response object contains the following key fields:

  • capability_name - The name of the Capability that was triggered

  • message - A string that contains either the answer to the original query or a status message. Capability creators can update the description of this message via Magic Studio

  • related_queries - An array of strings containing queries that are related to the current query and relevant in the context of the app. Again, the Capability creator can update the description of this field if needed

  • params - This object contains the custom parameters specific to the Capability that was invoked. For example, if a Capability is created as shown in the image below, the params object will contain 3 fields - task_name, date and time

Note that developers should check whether a specific custom param exists before reading its value, since it might not be marked as required in Magic Studio.
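
Here is a minimal sketch of that check, assuming the params field is exposed as a standard Kotlin Map (the exact type comes from the SDK; the field names are from the ticket_booking example above):

val response = ConvaAI.invokeCapability(
    input = "book a bus ticket from bangalore to chennai tomorrow at 2pm",
)

// "time" may be absent if the user never mentioned one and the
// param is not marked as required in Magic Studio
val time = response.params["time"] as? String
if (time != null) {
    // Use the time to pre-fill the booking flow
} else {
    // Fall back to a default or prompt the user for a time
}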

Invoking a specific Capability

To invoke a specific Capability, you can refer to it by name and pass the input string. You can use this in streaming (async) or non-streaming (sync) mode.

// Invoke a specific capability (non-streaming)
val response = ConvaAI.invokeCapabilityWithName(
    input = "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability_name = "ticket_booking"
)
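
For streaming mode, here is a hedged sketch. It assumes invokeCapabilityWithName accepts the same ResponseListener as the streaming invokeCapability API shown in the next section; verify the exact signature against the SDK.

// Invoke a specific capability with a streaming response
// (listener shape taken from the invokeCapability streaming example below)
ConvaAI.invokeCapabilityWithName(
    input = "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability_name = "ticket_booking",
    listener = object : ResponseListener {
        override fun onResponse(isFinal: Boolean, response: Response) {
            if (isFinal) {
                // Handle the final response
            }
        }
        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
)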

Handling Streaming Responses

Conva.AI supports streaming versions of these APIs. The response is sent back in streaming mode, i.e., the listener is invoked multiple times, each time with incrementally more information.

To check whether the full response has been received, use the isFinal flag that is passed to the listener along with the Response object:

// Invoke a capability asynchronously with a streaming response
ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    listener = object : ResponseListener {
        override fun onResponse(isFinal: Boolean, response: Response) {
            // Check for isFinal
            if (isFinal) {
                // Handle the final response
            }
        }
        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
)

Invoking a Capability Group

When you create Capabilities via Magic Studio, it places them all into the "default" group, but users can create their own groups and put specific Capabilities into them. If you want to limit Capability invocation to specific groups, pass the group name to the APIs:

// Invoke a capability group synchronously (non-streaming)
val response = ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    capabilityGroup = "default"
)

// Invoke a capability group asynchronously to get a streaming response
ConvaAI.invokeCapabilityGroup(
    input = "Hello, how are you?",
    capabilityGroup = "general_conversation",
    listener = object : ResponseListener {
        override fun onResponse(isFinal: Boolean, response: Response) {
            // Handle the response
        }
        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
)

Handling Conversational Context

Conva.AI allows users to maintain context across conversation turns.

To enable this, pass the conversation history from the previous response to the next invoke call:

// First response
val response1 = ConvaAI.invokeCapability(
    input = "Blue jeans",
)

// Second response. Pass the history from the first response
// to the second one to maintain conversational context
val response2 = ConvaAI.invokeCapability(
    input = "show me in red",
    context = ConvaAIContext(response1.history)
)

Using the Copilot UI

Note that the Copilot UI is only supported on mobile platforms, and currently only on Android.

If you want to use the Conva.AI Assistant in all its glory, we recommend the built-in Copilot experience.

The Copilot experience includes a well-designed bottom sheet that appears on top of the app and provides the following elements:

  • A text box for users to type their queries into

  • A mic icon to trigger Conva.AI's highly accurate speech recognition experience

  • The message area which shows the response (the message field in the Response object)

  • Ability to speak back the message

  • A mute icon to allow end-users to disable or enable speaking out by the Assistant

  • Feedback icons that show up automatically after every response

  • Dynamic Suggestion buttons that appear above the input text box and below the message area. By default, these show the strings that come in the related_queries field of the Response object, but developers can use setSuggestionStrings on the InteractionData object (which is passed to the Capability handler) to modify these Suggestions.

Here are the high-level steps to use the Copilot; a combined sketch follows the list.

  • Set up the Copilot.

    • You can theme the Copilot

    • You can register handlers for Capability handling and Suggestion handling

      • Capability handling deals with the Capability response that is generated based on what the user says to the Copilot

      • Suggestion handling is for when the user clicks on a Suggestion that is meant to be handled by the app

  • Attach the Copilot to an Activity

  • Start the Copilot

  • Handle the Capabilities and Suggestion clicks
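
Putting these steps together, here is a minimal end-to-end sketch; each API is covered in detail in the sections below.

// 1. Set up the Copilot with Capability and Suggestion handlers
val options = ConvaAIOptions.Builder()
    .setCapabilityHandler(object : ConvaAIHandler {
        override fun onCapability(
            response: ConvaAIResponse,
            interactionData: ConvaAIInteraction,
            isFinal: Boolean) {
            // Handle Capability responses here
        }
    })
    .setSuggestionHandler(object : ConvaAISuggestionHandler {
        override fun onSuggestion(suggestion: ConvaAISuggestion) {
            // Handle app-directed Suggestion clicks here
        }
    })
    .build()
ConvaAICopilot.setup(options)

// 2. Attach the Copilot to the current Activity
ConvaAICopilot.attach(getActivity())

// 3. Bring up the Copilot UI
ConvaAICopilot.startConversation()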

Copilot Build Dependencies

To use the Copilot UI, you first need to add the right dependencies to your app-level build.gradle file:

dependencies {
    implementation 'in.slanglabs.conva:conva-ai-copilot:2.3.0-beta'
}

Set up the Copilot

To use the Copilot UI, you need to do the following:

  • Register handlers for Capabilities that will be triggered as part of the user talking to the Copilot

  • Register handlers for handling clicks on the Suggestions that appear as a response from the Capability. Note that there are two types of Suggestions:

    • Suggestions that are meant to be sent back to the Assistant (e.g. follow-on queries). When the user clicks on these Suggestions, the Suggestion handler will NOT be invoked.

    • Suggestions that are meant to be handled by the app (e.g. suggestions that are typically search terms inside the app, so clicking one should open a search listing page). Clicks on these Suggestions will trigger the handler.

To begin using ConvaAICopilot, you need to set it up by calling the setup API. This method takes a ConvaAIOptions object as a parameter, which allows you to configure various aspects of the Assistant.

val options = ConvaAIOptions.Builder()
    .setCapabilityHandler(object : ConvaAIHandler {
        override fun onCapability(
            response: ConvaAIResponse, 
            interactionData: ConvaAIInteraction, 
            isFinal: Boolean) {
            // Handle the response from the assistant
        }
    })
    .setSuggestionHandler(object : ConvaAISuggestionHandler {
        override fun onSuggestion(suggestion: ConvaAISuggestion) {
            // Handle the selected suggestion
        }
    })
    .build()
    
ConvaAICopilot.setup(options)

Attach the Copilot to the current Activity

To prepare the Copilot UI, you need to call the attach API of the ConvaAICopilot, passing the current activity as a parameter. This prepares the Copilot to show up on top of the current activity but is still not visible. To bring up the Copilot, you need to call "startConversation".

Note that Conva.AI starts tracking Activities after this initial attach, so you don't need to call "attach" for every activity (unless the Copilot is detached programmatically or closed by the user).

Even if the activity behind it changes, the Copilot will stick around.

ConvaAICopilot.attach(getActivity())

Start the Copilot

To bring up the Copilot experience, call the startConversation method of the ConvaAICopilot. This will display the Copilot on top of the current Activity, allowing the user to interact with it.

ConvaAICopilot.startConversation()

Handle Capability Responses

When the Copilot is visible and the user either types or speaks to it, the Copilot sends the request to the Conva.AI backend orchestration service, which figures out the right Capability to fire and returns the result to the SDK.

To handle the Capability, you need to register a handler (as shown in the setup step above).

Let's see what the handler looks like:

val options = ConvaAIOptions.Builder()
    .setCapabilityHandler(object : ConvaAIHandler {
        override fun onCapability(
            response: ConvaAIResponse, 
            interactionData: ConvaAIInteraction, 
            isFinal: Boolean) {
            // Handle the response from the assistant
        }
    })
    .build()

When using the Copilot, the handler receives streaming responses. That means the handler will be called multiple times until the isFinal parameter is set to true.
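
As a minimal sketch, the handler can gate side effects on isFinal, on the assumption that intermediate callbacks may carry partially populated fields (check the SDK's streaming semantics for your version):

val options = ConvaAIOptions.Builder()
    .setCapabilityHandler(object : ConvaAIHandler {
        override fun onCapability(
            response: ConvaAIResponse,
            interactionData: ConvaAIInteraction,
            isFinal: Boolean) {
            if (!isFinal) {
                // Intermediate chunk: fields may still be partial, so
                // avoid navigation or other side effects here
                return
            }
            // Final callback: the response is complete, so it is safe
            // to act on message and params now
        }
    })
    .build()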

The "ConvaAIResponse" object contains three main fields -

  • message - The field that contains either a status message or the answer to the question. This string is displayed in the message area and also spoken out (unless speak-back is disabled by the user)

  • related_queries - This is the field that controls the list of Suggestions that show up after every user query.

  • params - A HashMap that contains the custom parameters configured in the Capability Blueprint

The ConvaAIInteraction object contains an array of Suggestion objects. By default, this includes the suggestions that are automatically generated by the Capability (i.e., the related_queries field). Developers can override these Suggestions as shown below.

Modifying Interaction Data

Inside the onCapability callback, the app can use the ConvaAIInteraction instance for more advanced functionality. Using this instance, the app can customize:

  • the message that is displayed to the user

  • the suggestions that are shown on the Copilot surface

val options = ConvaAIOptions.Builder()
    .setCapabilityHandler(object : ConvaAIHandler {
        override fun onCapability(response: ConvaAIResponse, interactionData: ConvaAIInteraction, isFinal: Boolean) {
            // Update the message that is displayed to the user
            interactionData.setMessage("Display this message to the user")

            // Update the list of suggestions that are shown to the user
            // NOTE: By default, these suggestions will always be handled by the app
            // via the suggestion handler provided to the Copilot during setup()
            interactionData.setSuggestionStrings(mutableListOf("Potato", "Tomato", "Onion"))

            // Set the suggestion index that should be highlighted
            // on the Copilot surface UI (here, index 0)
            interactionData.setSelectedSuggestionIndex(0)
        }
    })
    .build()
