CONVA Plus
Adding advanced multi-turn Voice capabilities into your app
By now, you should have configured and published your Assistant via the Slang Console, and perhaps customized it as required. Congratulations! :) If you have not done that already, you can do so by following the instructions here.
Let's start coding!
For testing, we recommend using a physical Android device instead of an emulator because most emulators don't work well with microphones.
1. Configure the build system
The first step is to update the app's build system to include Slang's Retail Assistant SDK.
Add the Slang dependency to your Gradle files
Add the path to the Slang maven repository to your top-level Gradle file
# Add this to your top-level Gradle file
allprojects {
    repositories {
        …
        maven { url "http://maven.slanglabs.in:8080/artifactory/gradle-release" }
    }
}

Add the Slang Retail Assistant dependency to your app's Gradle file
# Add this to your app's Gradle file
dependencies {
    …
    implementation 'in.slanglabs.assistants:slang-retail-assistant:5.0.5'
}

Install the Slang Retail Assistant package
The next step is to install the required packages inside your code repo.

yarn setup

If you use yarn for installing packages, run the below command

yarn add @slanglabs/slang-conva-react-native-retail-assistant

The default package is built and packaged with the androidx framework. If your app does not support androidx, please use the below

yarn add @slanglabs/slang-conva-react-native-retail-assistant@non-androidx

npm setup

If you use npm for managing your packages, run the below command

npm install @slanglabs/slang-conva-react-native-retail-assistant --save

Because Slang uses native libraries, you need to link the package to your codebase to run the automatic linking steps

react-native link @slanglabs/slang-conva-react-native-retail-assistant

Finally, add the path to the Slang maven repository (to download native library dependencies) to your top-level Gradle file
# Add this to your top-level Gradle file
allprojects {
    repositories {
        …
        maven { url "http://maven.slanglabs.in:8080/artifactory/gradle-release" }
    }
}

2. Code integration
2.1 Initialization
The next step is to initialize the SDK with the keys you obtained after creating the Assistant in the Slang console.
The recommendation is to perform the initialization in the onCreate method of the Application class. If the app does not use an Application class, the next best place would be the onCreate method of the primary Activity class.
In a React Native app, this should ideally be done in the componentDidMount of your main app component; the React Native SDK supports both the async/await and the Promise callback styles. In frameworks with a single entry point, perform the initialization inside the main method.
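As an illustration, here is a minimal sketch of the Android initialization in Java. The SlangRetailAssistant and AssistantConfiguration names and their builder methods are assumptions for illustration; use the exact identifiers from the SDK documentation.

// A minimal sketch of initializing the Assistant in the Application class.
// Class and method names below are assumed for illustration.
import android.app.Application;

public class MyApplication extends Application {
    @Override
    public void onCreate() {
        super.onCreate();

        // Keys obtained from the Slang Console after creating the Assistant
        AssistantConfiguration configuration = new AssistantConfiguration.Builder()
                .setAPIKey("<your-api-key>")
                .setAssistantId("<your-assistant-id>")
                .build();

        SlangRetailAssistant.initialize(this, configuration);
    }
}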
2.2 Show the Assistant Trigger (microphone icon)
Once the Assistant is initialized, the next step is to show the Assistant Trigger (i.e. the microphone button) that the app's users can tap to invoke the Assistant and speak to it.
Add the call to the onResume method of the Activities where you want the Assistant to be enabled, as in the sketch below.
Depending on the SDK, these are exposed as the "show" and "hide" methods or as the "showTrigger" and "hideTrigger" APIs; call them as required to control the visibility of the Assistant.
By default, the trigger is sticky, which means that it will show up on all Activities after it is made visible. To prevent the trigger from showing up on specific Activities, you will need to call the hide method, as in the sketch below.
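A minimal sketch (the class the "hide" method is called on is again an assumed name for illustration):

@Override
protected void onResume() {
    super.onResume();
    // Prevent the sticky trigger from appearing on this Activity
    SlangRetailAssistant.hide(this);
}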
2.3 Implement Actions
If the Actions (basically the visual changes that the app should perform) corresponding to the various User Journeys have not already been defined in the console, the app needs to define and implement them in code. This can be done as shown in the sketch at the end of this section.
The following user journeys are currently supported by the Slang Retail Assistant:
Voice Search
Voice Order Management
Voice Offers
Voice Checkout
Voice Navigation
The Action Handler interface has an explicit callback for each of the supported user journeys. Whenever the Assistant detects the user's journey (based on what they spoke), it invokes the callback associated with that user journey.
When these callbacks are invoked, the Assistant also passes the parametric data corresponding to the user journey that the Assistant was able to gather. The app is then expected to:
Consume the parametric data as needed
Optionally launch appropriate UI actions
Set appropriate conditions in the Assistant based on the app's internal state
Return the AppState that the app transitioned to
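Here is a minimal sketch of an Action Handler for the Voice Search journey, in Java. The names setAction, SearchInfo, SearchUserJourney, and the condition setters are assumptions based on the description above; consult the SDK for the exact interfaces.

// A sketch of handling the Voice Search journey.
// All Slang-specific names below are assumed for illustration.
SlangRetailAssistant.setAction(new SlangRetailAssistant.Action() {
    @Override
    public SearchUserJourney.AppState onSearch(SearchInfo searchInfo,
                                               SearchUserJourney searchJourney) {
        // 1. Consume the parametric data gathered by the Assistant
        String query = searchInfo.getSearchString();

        // 2. Launch the appropriate UI action, e.g. a results screen
        List<Item> results = performSearch(query);

        // 3. Set a condition based on the app's internal state
        if (results.isEmpty()) {
            searchJourney.setItemNotFound();
        } else {
            searchJourney.setSuccess();
        }

        // 4. Return the AppState the app transitioned to
        return SearchUserJourney.AppState.SEARCH_RESULTS;
    }
});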
2.4 Return the AppState and Condition
An AppState indicates which state the app transitioned to, based on the user journey and parametric data that was passed to the app. The list of AppStates that are supported depends on the user journey.
Conditions represent more detailed states of the app within a particular AppState. For example, the search might have failed, or the items might be out of stock. The app can use Conditions to indicate the exact situation to the Assistant. The condition controls the message that the Assistant speaks after the callback returns.
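Continuing the earlier sketch, the app might map its internal search outcome to a condition like this (the setter names are again assumptions for illustration):

// Map the app's internal state to an Assistant condition.
if (isOutOfStock(results)) {
    searchJourney.setItemOutOfStock();   // items exist but are unavailable
} else if (results.isEmpty()) {
    searchJourney.setItemNotFound();     // the search found nothing
} else {
    searchJourney.setSuccess();          // the normal success path
}
// The condition chosen here determines the message the Assistant speaks
// after the callback returns.
return SearchUserJourney.AppState.SEARCH_RESULTS;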
2.4.1 Assistant Prompts
Based on the AppState returned and the conditions that were set, the Assistant will speak out an appropriate message to the user.
That's it! These are the basic set of steps required to add Slang's In-App Voice Assistant into your app.
Beyond this integration, Slang Voice Assistants provide a lot more power and flexibility to cater to the more advanced needs of the apps. Please refer to the Advanced Concepts section for more details.