Headless Mode

This page details how to integrate the Conva.AI Assistant into an app in headless mode.

Conva.AI provides SDKs for the following platforms:

  • Android

  • Python

  • iOS

  • Flutter

  • Rust - Coming soon

Let's walk through the integration process.

Setting up the build environment

First, add the Maven repository URL to your project-level build.gradle file. Open the build.gradle file and add the following inside the allprojects > repositories block:

allprojects {
    repositories {
        maven {
            url "https://gitlab.com/api/v4/projects/25706948/packages/maven"
        }
    }
}

Next, open your app-level build.gradle file and add the conva-ai-core dependency inside the dependencies block:

dependencies {
    implementation 'in.slanglabs.conva:conva-ai-core:1.0.1-beta'
}

Initializing the Assistant

First, initialize Conva.AI using the credentials of the Assistant created via Magic Studio.

ConvaAI.init(
    id = "replace this string with your_assistant_id",
    key = "replace this string with your_api_key",
    version = "LATEST", // this is a special tag to indicate 
                        // the latest version of the Assistant
    application = applicationContext
);

Open the "Integration" tab of your Assistant to get the keys.

Invoking a Capability

Next, pass unstructured text input to Conva.AI and let it invoke the appropriate Capability and return the response.

// Pass an input string to Conva.AI and let it determine the 
// relevant Capability to invoke and return the output params
// after that Capability is executed
val response = ConvaAI.invokeCapability(
    input = "book a bus ticket from bangalore to chennai tomorrow at 2pm",
)

// response = {
//    "capability_name": "ticket_booking",
//    "message": "Showing you buses from Bangalore to Chennai for tomorrow around 2pm",
//    "params": {
//         "source": "BLR",
//         "destination": "MAS",
//         "date": "6/7/2024",
//         "time": "14:00",
//         "mode": "bus"
//     }
// }

Response object

The response object contains the following key fields

  • capability_name: The name of the Capability that was triggered.

  • message: The string containing either the answer to the original query or a status message. Capability creators can update this via Magic Studio.

  • related_queries: An array of strings with queries related to the current query and app context. Capability creators can update this field's description if needed.

  • params: This object contains custom parameters specific to the invoked Capability. For example, if a Capability is created with task_name, date, and time (as shown in the image below), these fields will be in the params object.

  • history: The current conversation history. This can be used by the app to continue the current conversation further (more on this later).

Developers should check if a specific custom param exists before accessing its value, as it might not be marked as required in Magic Studio.
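For instance, assuming params is surfaced to the app as a Map<String, Any> (the exact type may differ by SDK version), a null-safe cast avoids crashes when an optional param is absent. The map below is a stand-in for response.params:

```kotlin
// Stand-in for response.params, shaped like the example response above;
// "time" is deliberately absent, as non-required params can be.
val params: Map<String, Any> = mapOf(
    "source" to "BLR",
    "destination" to "MAS",
    "mode" to "bus"
)

// as? returns null instead of throwing when the key is missing
// or the value has an unexpected type.
val source = params["source"] as? String  // "BLR"
val time = params["time"] as? String      // null: fall back gracefully,
                                          // e.g. ask the user for a time
```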

Invoking a specific Capability

To invoke a specific Capability, you can refer to it by name and pass the input string. You can use this in streaming (async) or non-streaming (sync) mode.

// Invoke a specific capability
val response = ConvaAI.invokeCapabilityWithName(
    input = "book a bus ticket from Bangalore to Chennai tomorrow at 2pm",
    capability = "ticket_booking"
);

Handling Streaming Response

Conva.AI supports streaming versions of these APIs. The response is sent back incrementally, with the listener invoked multiple times. Use the isFinal flag passed to the listener to check whether the full response has been received.

// Invoke a capability asynchronously with a streaming response
ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    listener = object : ResponseListener {
        override fun onResponse(response: Response, isFinal: Boolean) {
            // Check for is_final
            if (isFinal) {
                // Handle the final response
            }
        }
        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
);

Invoking a Capability Group

When Capabilities are created via Magic Studio, they are all placed into the "default" group. Users can also create their own groups and assign specific Capabilities to them. To limit Capability invocation to specific groups, pass the group name to the APIs.

// Invoke a capability group synchronously (non-streaming)
val response = ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    capabilityGroup = "default"
);

// Invoke a capability group asynchronously to get a streaming response
ConvaAI.invokeCapability(
    input = "Hello, how are you?",
    capabilityGroup = "general_conversation",
    listener = object : ResponseListener {
        override fun onResponse(response: Response, isFinal: Boolean) {
            // Handle the response
        }
        override fun onError(error: Throwable) {
            // Handle the error
        }
    }
);

Handling Conversational Context

Conva.AI supports maintaining context across different conversations, allowing the AI to build upon prior interactions for more relevant responses.

Parameters

  • history (Optional): Available after the first response. Pass this from the previous response if you want to maintain conversational context in subsequent calls.

  • capabilityContext (Optional): Context related to a specific capability, refining AI responses for that particular capability.

Usage

  • On the first invocation, call the API without history.

  • On subsequent invocations, pass the history from the previous response if you want to maintain the conversation's context.

  • You can also pass the capabilityGroup or capabilityName along with the context to refine the AI’s responses further.

// First Response
val response1 = ConvaAI.invokeCapability(
    input = "Blue jeans",
);

// Second response. Pass the history from the first response
// to the second one to maintain conversational context
val response2 = ConvaAI.invokeCapability(
    input = "show me in red",
    context = ConvaAIContext(
        history = response1.history,
        capabilityContext = capabilityContext // a Map<String, Any>; see the example below
    )
);

The Any values in capabilityContext can only include the following types:

  • String: For textual data.

  • Int: For integer numbers.

  • Double: For floating-point numbers.

  • Boolean: For true/false values.

  • Array: For lists of items (e.g., Array<String>, Array<Int>, etc.).

  • Map: For nested maps, allowing hierarchical data structures.

val items = listOf("green tea", "freshly squeezed juice", "herbal tea", "smoothies", "coconut water", "coffee")

// Example for passing capability context
val capabilityContext = mapOf(
    "product_search" to mapOf(
        "best beverages" to items
    )
)

// Note: Here "product_search" is the capability name.
// Use your own capability name and structure the context data accordingly.

// Optional Context: CapabilityContext is optional.
// If omitted, the AI will function based on the default 
// or previously provided context.

Using the Copilot UI

The Copilot experience includes a well-designed bottom sheet that appears on top of the app and provides the following elements:

  • Text box: For users to type their queries.

  • Mic icon: To trigger Conva.AI's highly accurate speech recognition feature.

  • Message Area: Displays the message (the message field in the Response object).

  • Voice Feedback: Speaks the message back; the user can optionally mute it.

  • Mute icon: To allow end-users to enable or disable the voice feedback.

  • Feedback Icons: Automatically appear after each response.

  • Dynamic Suggestions: Buttons showing related queries (the related_queries field in the Response object) or custom suggestions provided by the app via an API.

Here are the high level steps to use the Copilot:

  • Set up the Copilot.

    • Theme the Copilot

    • Register handlers for Capability handling and Suggestion handling

      • Capability Handling: Handle responses generated by the Copilot

      • Suggestion Handling: Handle user interactions with suggestion buttons directed to the app

  • Attach the Copilot to an Activity

  • Start the Copilot

  • Handle the Capability responses and Suggestion clicks.
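
The Copilot APIs themselves are not shown on this page, so as an illustrative sketch only, the steps above might map to code like the following. Every identifier here (ConvaAICopilot, setup, attachToActivity, start, the handler parameters) is a hypothetical placeholder; consult the SDK reference for the actual API surface.

```kotlin
// Hypothetical sketch of the Copilot lifecycle; all names below
// are placeholders, not the confirmed SDK API.
class MainActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        // 1. Set up the Copilot: theme it and register handlers
        ConvaAICopilot.setup(
            theme = CopilotTheme(primaryColor = "#1A73E8"),
            capabilityHandler = { response ->
                // Handle responses generated by the Copilot
            },
            suggestionHandler = { suggestion ->
                // Handle taps on suggestion buttons directed to the app
            }
        )

        // 2. Attach the Copilot to this Activity
        ConvaAICopilot.attachToActivity(this)

        // 3. Start the Copilot (shows the bottom sheet)
        ConvaAICopilot.start()
    }
}
```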
