
Introduction

API Endpoint

https://api.rammer.ai/v1

Rammer.ai's Language Insights Platform is aimed at addressing the need of understanding and analyzing human conversations. Based on such conversations, actionable insights are generated that can further be used to generate business outcomes. The purpose is to enable conversational intelligence on communication or collaboration platforms, such that these platforms can focus on providing an enhanced user experience using the technology provided by rammer.ai.

You can contact us for volume pricing or any inquiries.

You can also join our Slack channel here!

Getting Started

This tutorial demonstrates how to add voice integration to an existing application.

Explore sample apps (5 minutes)
Browse our demo library and look at sample code showing how to integrate voice intelligence into existing applications.


System requirements: NodeJS 7+

To play around with our APIs, simply tap the button below to import a pre-made collection of requests.

Run in Postman

Authentication

If you don't already have your appId or appSecret, log in to the platform to get your credentials.

To invoke any API call, you must have a valid Access Token generated using the valid application credentials.

To generate the token using the appId and appSecret, make an HTTP POST request with these details.

POST https://api.rammer.ai/oauth2/token:generate
{
  "type": "application",
  "appId": "your_appId",
  "appSecret": "your_appSecret"
}


curl -k -X POST "https://api.rammer.ai/oauth2/token:generate" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -d "{ \"type\": \"application\", \"appId\": \"<appId>\", \"appSecret\": \"<appSecret>\"}"
 const request = require('request');

 const authOptions = {
   method: 'post',
   url: "https://api.rammer.ai/oauth2/token:generate",
   body: {
       type: "application",
       appId: "<appId>",
       appSecret: "<appSecret>"
   },
   json: true
 };

 request(authOptions, (err, res, body) => {
   if (err) {
     console.error('error posting json: ', err);
     throw err
   }

   console.log(JSON.stringify(body, null, 2));
 });

JavaScript code to generate the Access Token. The code should work with NodeJS 7+ and browsers. You will need to install the request package for this sample code.

npm i request

For a valid appId and appSecret combination, the success response will be returned like this.

 {
   "accessToken": "your_accessToken",
   "expiresIn": 3600
 }


For an invalid appId and appSecret combination, an HTTP 401 Unauthorized response will be returned.
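Once generated, the accessToken is passed on subsequent API calls. A minimal sketch, assuming the x-api-key header used by the API examples later in this document (the conversation endpoint here is just an illustration):

const request = require('request');

// Pass the generated token in the x-api-key header on subsequent calls.
request.get({
  url: 'https://api.rammer.ai/v1/conversations/{conversationId}', // illustrative endpoint
  headers: { 'x-api-key': '<accessToken>' }, // token generated above
  json: true
}, (err, response, body) => {
  if (err) {
    return console.error('Request failed.', err);
  }
  console.log(JSON.stringify(body, null, 2));
});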

Initialize the Client SDK

To initialize with the default API endpoints:

 sdk.init({
   appId: 'yourAppId',
   appSecret: 'yourAppSecret'
 })
 .then(() => console.log('SDK Initialized.'))
 .catch(err => console.error('Error in initialization.', err));

If you have a custom API domain, use the basePath option in init(). If not, you can omit this field; by default, basePath is set to https://api.rammer.ai.

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.rammer.ai'
})
.then(() => console.log('SDK Initialized.'))
.catch(err => console.error('Error in initialization.', err));

ES5

JavaScript code referencing the SDK the ES5 way.

var sdk = require('@rammerai/language-insights-client-sdk').sdk;

ES6

JavaScript code referencing the SDK the ES6 way.

import {sdk} from '@rammerai/language-insights-client-sdk';

Connect to Endpoints

This SDK supports dialing through PSTN and SIP endpoints:

PSTN (Public Switched Telephone Networks)

The code snippet below dials in using PSTN and hangs up after 60 seconds.

const { sdk } = require('@rammerai/language-insights-client-sdk');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret'
}).then(() => {
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn', // This can be pstn or sip
      phoneNumber: '<number_to_call>',
      dtmf: '<code>'
    }
  }).then(connection => {
    console.log('Successfully connected.', connection.connectionId);

    // Scheduling stop endpoint call after 60 seconds for demonstration purposes

    // In real adoption, sdk.stopEndpoint() should be called when the meeting or call actually ends
    setTimeout(() => {
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
      }).catch(err => console.error('Error while stopping the connection', err));
    }, 60000);
  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

We recommend using SIP over PSTN whenever possible, as it provides higher audio quality options. The SIP endpoint also accepts an optional audio configuration. Contact us for your specific requirements.

The Public Switched Telephone Network (PSTN) is the network that carries your calls when you dial in from a landline or cell phone. It refers to the worldwide network of voice-carrying telephone infrastructure, including privately-owned and government-owned infrastructure.

endpoint: {
  type: 'pstn',
  phoneNumber: '14083380682', // Phone number to dial in
  dtmf: '6155774313#' // Joining code
}

SIP (Session Initiation Protocol)

Session Initiation Protocol (SIP) is a standardized communications protocol that has been widely adopted for managing multimedia communication sessions for voice and video calls. SIP may be used to establish connectivity between your communications infrastructures and Rammer's communications platform.

endpoint: {
  type: 'sip',
  uri: 'sip:555@<your_sip_domain>', // SIP URI to dial in
  audioConfig: { // Optionally any audio configuration
    sampleRate: 16000,
    encoding: 'PCMU',
    sampleSize: '16'
  }
}

Push Events

The example below shows how to connect to a PSTN endpoint, create a speakerEvent instance, and push events on the connection.

const { sdk, SpeakerEvent } = require('@rammerai/language-insights-client-sdk');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret'
}).then(() => {

  sdk.startEndpoint({
    endpoint: {
      type: 'pstn',
      phoneNumber: '<number_to_call>',
      dtmf: '<code>'
    }
  }).then(connection => {
    const connectionId = connection.connectionId;
    console.log('Successfully connected.', connectionId);
    const speakerEvent = new SpeakerEvent();
    speakerEvent.type = SpeakerEvent.types.startedSpeaking;
    speakerEvent.user = {
      userId: 'john@example.com',
      name: 'John'
    };
    speakerEvent.timestamp = new Date().toISOString();
    sdk.pushEventOnConnection(
      connectionId,
      speakerEvent.toJSON(),
      (err) => {
        if (err) {
          console.error('Error during push event.', err);
        } else {
          console.log('Event pushed!');
        }
      }
    );
    // Scheduling stop endpoint call after 60 seconds for demonstration purposes
    // In real adoption, sdk.stopEndpoint() should be called when the meeting or call actually ends
    setTimeout(() => {
      sdk.stopEndpoint(
        { connectionId: connection.connectionId }
      ).then(() => {
        console.log('Stopped the connection');
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 60000);
  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

Events can be pushed to an on-going connection to have them processed. The code snippet above shows a simple example.

Every event must have a type that defines its purpose at a more granular level, usually to indicate different activities associated with the event resource. For example, a "speaker" event can have the type started_speaking. An event may have additional fields specific to the event.

Currently, Rammer only supports the speaker event, which is described below.

Speaker Event

The speaker event is associated with different individual attendees in the meeting or session. An example of a speaker event is shown below.

In the code example, the user object needs a userId field to uniquely identify the user.

Speaker Event has the following types:

started_speaking

This event contains the details of the user who started speaking, along with an ISO 8601 timestamp of when they started speaking.

const speakerEvent = new SpeakerEvent({
  type: SpeakerEvent.types.startedSpeaking,
  timestamp: new Date().toISOString(),
  user: {
    userId: 'john@example.com',
    name: 'John'
  }
});

stopped_speaking

This event contains the details of the user who stopped speaking, along with an ISO 8601 timestamp of when they stopped speaking.

const speakerEvent = new SpeakerEvent({
  type: SpeakerEvent.types.stoppedSpeaking,
  timestamp: new Date().toISOString(),
  user: {
    userId: 'john@example.com',
    name: 'John'
  }
});


As shown in the above examples, it's fine to reuse the same speakerEvent instance per user by changing the event's type; this reduces the number of SpeakerEvent instances you need to create.

A startedSpeaking event is pushed on the on-going connection. You can use the pushEventOnConnection() method from the SDK to push events.
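For example, a minimal sketch reusing the speakerEvent instance created in the example above to push the corresponding stoppedSpeaking event:

// Reuse the same SpeakerEvent instance: update its type and timestamp, then push again.
speakerEvent.type = SpeakerEvent.types.stoppedSpeaking;
speakerEvent.timestamp = new Date().toISOString();

sdk.pushEventOnConnection(connectionId, speakerEvent.toJSON(), (err) => {
  if (err) {
    console.error('Error during push event.', err);
  } else {
    console.log('Event pushed!');
  }
});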

Outbound Integrations

Rammer.ai currently offers email and calendar as out-of-the-box integrations. However, this can be extended to any work tool where actionable insights need to be pushed to enhance productivity and reduce the time users take to manually enter information from conversations.

Some examples of these work tools could be:

Check out our blog post to learn how you can populate your Salesforce dashboard with action items from Rammer.

Complete Example

const { sdk, SpeakerEvent } = require('@rammerai/language-insights-client-sdk');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.rammer.ai'
}).then(() => {

  console.log('SDK Initialized');
  sdk.startEndpoint({
    endpoint: {
      type: 'pstn',
      phoneNumber: '14087407256',
      dtmf: '6327668#'
    }
  }).then(connection => {

    const connectionId = connection.connectionId;
    console.log('Successfully connected.', connectionId);
    const speakerEvent = new SpeakerEvent({
      type: SpeakerEvent.types.startedSpeaking,
      user: {
        userId: 'john@example.com',
        name: 'John'
      }
    });

    setTimeout(() => {
      speakerEvent.timestamp = new Date().toISOString();
      sdk.pushEventOnConnection(
        connectionId,
        speakerEvent.toJSON(),
        (err) => {
          if (err) {
            console.error('Error during push event.', err);
          } else {
            console.log('Event pushed!');
          }
        }
      );
    }, 2000);

    setTimeout(() => {
      speakerEvent.type = SpeakerEvent.types.stoppedSpeaking;
      speakerEvent.timestamp = new Date().toISOString();

      sdk.pushEventOnConnection(
        connectionId,
        speakerEvent.toJSON(),
        (err) => {
          if (err) {
            console.error('Error during push event.', err);
          } else {
            console.log('Event pushed!');
          }
        }
      );
    }, 12000);

    // Scheduling stop endpoint call after 90 seconds
    setTimeout(() => {
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 90000);

  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

The code above is a quick simulated speaker event example that:

  1. Initializes the SDK with custom basePath
  2. Initiates a connection with an endpoint
  3. Sends a speaker event of type startedSpeaking for user John
  4. Sends a speaker event of type stoppedSpeaking for user John
  5. Ends the connection with the endpoint

Strictly for illustration purposes, the code above pushes events periodically using setTimeout(); in real usage, events should be pushed as they occur.

Send Summary Email

const { sdk, SpeakerEvent } = require('@rammerai/language-insights-client-sdk');

sdk.init({
  appId: 'yourAppId',
  appSecret: 'yourAppSecret',
  basePath: 'https://api.rammer.ai'
}).then(() => {
  console.log('SDK Initialized');
  sdk.startEndpoint({
    endpoint: {
      type: 'sip',
      uri: 'sip:someuser@somedomain.com'
    },
    actions: [{
      "invokeOn": "stop",
      "name": "sendSummaryEmail",
      "parameters": {
        "emails": [
          "john@exmaple.com",
          "mary@example.com",
          "jennifer@example.com"
        ]
      }
    }],
    data: {
      session: {
        name: 'My Meeting Name' // Title of the meeting; this will be reflected in the summary email if applicable.
      },

An action sendSummaryEmail can be passed at the time of making the startEndpoint() call to send the summary email to the email addresses specified in the parameters.emails array. The email will be sent as soon as all pending processing finishes after stopEndpoint() is executed. The surrounding code snippet shows the use of actions to send a summary email on stop.

Optionally, you can send the title of the meeting and the list of participants, which will also appear in the summary email.

To send the title of the meeting, populate the data.session.name field with the meeting title.

To send the list of meeting attendees, populate the user objects in the data.session.users field as shown in the example. To indicate the organizer or host of the meeting, set the role field in the corresponding user object.

Setting the timestamp for speakerEvent is optional, but providing accurate timestamps for when events occurred is recommended for more precise results.

      users: [
        {
          user: {
            name: "John",
            userId: "john@example.com",
            role: "organizer"
          }
        },
        {
          user: {
            name: "Mary",
            userId: "mary@example.com"
          }
        },
        {
          user: {
            name: "Jennifer",
            userId: "jennifer@example.com"
          }
        }
      ]
    }
  }).then((connection) => {
    console.log('Successfully connected.');

    // Events pushed in between
    setTimeout(() => {
      // After a successful stopEndpoint call, a summary email will be sent to the email addresses specified above
      sdk.stopEndpoint({
        connectionId: connection.connectionId
      }).then(() => {
        console.log('Stopped the connection');
      }).catch(err => console.error('Error while stopping the connection.', err));
    }, 30000);

  }).catch(err => console.error('Error while starting the connection', err));

}).catch(err => console.error('Error in SDK initialization.', err));

This is an example of the summary page you can expect to receive at the end of your call:

Summary Page

Tuning your Summary Page

You can tune your Summary Page using query parameters to try different configurations and see how the results look.

Query Parameters

You can configure the summary page by passing in the configuration through query parameters in the summary page URL that gets generated at the end of your meeting. See the end of the URL in this example:

https://meetinginsights.rammer.ai/meeting/#/eyJ1...I0Nz?insights.minScore=0.95&topics.orderBy=position

Query Parameter | Default Value | Supported Values | Description
insights.minScore | 0.8 | 0.5 to 1.0 | Minimum score that the summary page should use to render the insights
insights.enableAssignee | false | true, false | Enable or disable rendering of the assignee and due date of the insight
insights.enableAddToCalendarSuggestion | true | true, false | Enable or disable the add-to-calendar suggestion, when applicable, on insights
insights.enableInsightTitle | true | true, false | Enable or disable the title of an insight. The title indicates the person the insight originated from and, if applicable, its assignee.
topics.enabled | true | true, false | Enable or disable the summary topics in the summary page
topics.orderBy | 'score' | 'score', 'position' | Ordering of the topics (see below)

score - orders topics by the topic importance score.

position - orders topics by the position in the transcript where they first surfaced.
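For instance, a small sketch of appending these parameters to a generated summary URL (the base URL is the illustrative one from above; note the query string goes after the URL fragment):

const { URLSearchParams } = require('url');

// Build the tuning query string and append it to the generated summary page URL.
const summaryUrl = 'https://meetinginsights.rammer.ai/meeting/#/eyJ1...I0Nz'; // illustrative
const params = new URLSearchParams({
  'insights.minScore': '0.95',
  'topics.orderBy': 'position'
});

console.log(`${summaryUrl}?${params.toString()}`);
// => https://meetinginsights.rammer.ai/meeting/#/eyJ1...I0Nz?insights.minScore=0.95&topics.orderBy=position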

Voice API

The Voice API provides the REST interface for adding Rammer to your call, processing audio, and generating actionable insights from your conversations.

POST Authentication

Authentication for the Voice API is identical to the flow described in the Authentication section above: generate an Access Token with your appId and appSecret via POST https://api.rammer.ai/oauth2/token:generate, then pass the token on subsequent calls.

POST Voice API: Telephony

Example API Call

curl -k -X POST "https://api.rammer.ai/v1/endpoint:connect" \
     -H "accept: application/json" \
     -H "Content-Type: application/json" \
     -H "x-api-key: <your_auth_token>" \
     -d @location_of_fileName_with_request_payload
  const request = require('request');

  const payload = {
    "operation": "start",
    "endpoint": {
      "type" : "pstn",
      "phoneNumber": "<number_to_call>",
      "dtmf": "<code>"
    },
    "actions": [{
      "invokeOn": "stop",
      "name": "sendSummaryEmail",
      "parameters": {
        "emails": [
          "joe.rammer@example.com"
        ]
      }
    }],
    "data" : {
        "session": {
            "name" : "My Meeting"
        }
    } 
  }

  request.post({
      url: 'https://api.rammer.ai/v1/endpoint:connect',
      headers: {'x-api-key': 'your_auth_token'},
      body: payload,
      method: 'POST',
      json: true
  }, (err, response, body) => {
    console.log(body);
  });


The above command returns an object structured like this:

{
    "eventUrl": "https://api.rammer.ai/v1/event/771a8757-eff8-4b6c-97cd-64132a7bfc6e",
    "resultWebSocketUrl": "wss://api.rammer.ai/events/771a8757-eff8-4b6c-97cd-64132a7bfc6e",
    "connectionId": "771a8757-eff8-4b6c-97cd-64132a7bfc6e"
}

The Telephony Voice API allows you to easily use Rammer's Language Insights capabilities.

It exposes functionality for Rammer to dial in to a conference. The supported endpoints are given below. Additionally, events can be passed for further processing. The supported types of events are discussed in detail in the section below.

POST https://api.rammer.ai/v1/endpoint:connect

Request Parameters

Parameter | Type | Description
operation | string | enum([start, stop]) - Start or stop the connection
endpoint | object | Object containing the type of the session (either pstn or sip), phoneNumber (the meeting number Rammer should call), and dtmf (the conference passcode)
actions | array | Actions that should be performed while this connection is active. Currently only one action is supported: sendSummaryEmail
data | object | Object containing a session object with a name field corresponding to the name of the meeting

Response Object

Field | Description
eventUrl | REST API endpoint for pushing speaker events while the conversation is in progress, to add speaker context to the conversation. For example, in an on-going meeting, you can push speaker events.
resultWebSocketUrl | Same as eventUrl, but over WebSocket. The latency of events is lower with a dedicated WebSocket connection.
connectionId | Ephemeral connection identifier of the request, used to uniquely identify the telephony connection. Once the connection is stopped using the "stop" operation, or is closed for some other reason, the connectionId is no longer valid.
conversationId | Represents the conversation; this is the ID that needs to be used with the Conversation API to access the conversation.
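For example, a hedged sketch of pushing a speaker event to the returned eventUrl; the payload shape here is an assumption that mirrors the SpeakerEvent JSON from the SDK sections (resultWebSocketUrl accepts the same events over a WebSocket connection):

const request = require('request');

// Push a speaker event to the eventUrl returned by endpoint:connect.
// Payload shape mirrors the SDK's SpeakerEvent JSON (assumption).
request.post({
  url: 'https://api.rammer.ai/v1/event/771a8757-eff8-4b6c-97cd-64132a7bfc6e', // eventUrl from the response
  headers: { 'x-api-key': '<your_auth_token>' },
  body: {
    type: 'started_speaking',
    user: { userId: 'john@example.com', name: 'John' },
    timestamp: new Date().toISOString()
  },
  json: true
}, (err, response, body) => {
  if (err) return console.error('Error pushing event.', err);
  console.log(body);
});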


To play around with a few examples, we recommend a REST client called Postman. Simply tap the button below to import a pre-made collection of examples.

Run in Postman

Try it out

When you have started the connection through the API, try speaking the following sentences and view the summary email that gets generated:

WS Voice API: Realtime WebSocket

In the example below, we've used the websocket npm package for the WebSocket client, and mic for getting raw audio from the microphone.

npm i websocket mic

For this example, we are using your microphone to stream audio data. You will most likely want to use other inbound sources for this.

const WebSocketClient = require('websocket').client;

const mic = require('mic');

const micInstance = mic({
  rate: '44100',
  channels: '1',
  debug: false,
  exitOnSilence: 6
});

// Get input stream from the microphone
const micInputStream = micInstance.getAudioStream();
let connection = undefined;

Create a WebSocket client instance:

const ws = new WebSocketClient();

ws.on('connectFailed', (err) => {
  console.error('Connection Failed.', err);
});

ws.on('connect', (connection) => {

  // Start the microphone
  micInstance.start();

  connection.on('close', () => {
    console.log('WebSocket closed.')
  });

  connection.on('error', (err) => {
    console.log('WebSocket error.', err)
  });

  connection.on('message', (data) => {
    if (data.type === 'utf8') {
      const { utf8Data } = data;
      console.log(utf8Data); // Print the data for illustration purposes
    }
  });

  console.log('Connection established.');

  connection.send(JSON.stringify({
    "type": "start_request",
    "insightTypes": ["question", "action_item"],
    "config": {
      "confidenceThreshold": 0.9,
      "timezoneOffset": 480, // Your timezone offset from UTC in minutes
      "languageCode": "en-US",
      "speechRecognition": {
        "encoding": "LINEAR16",
        "sampleRateHertz": 44100 // Make sure the correct sample rate is provided for best results
      },
      "meetingTitle": "Client Meeting"
    },
    "speaker": {
      "userId": "jane.doe@example.com",
      "name": "Jane"
    }
  }));

  micInputStream.on('data', (data) => {
    connection.send(data);
  });

For this example, we time out our call after 2 minutes, but you would most likely want to send the stop_request message when your meeting or call actually ends.

  // Schedule the stop of the client after 2 minutes (120 sec)
  setTimeout(() => {
    micInstance.stop();
    // Send stop request
    connection.sendUTF(JSON.stringify({
      "type": "stop_request"
    }));
    connection.close();
  }, 120000);
});

Generate the token and replace it in the placeholder <accessToken>. If you have a custom domain, replace api.rammer.ai in the last line of the code with your custom domain name. Once the code is running, start speaking and you should see the message_response and insight_response messages printed to the console.

ws.connect(
  'wss://api.rammer.ai/v1/realtime/insights/1',
  null,
  null,
  { 'X-API-KEY': '<accessToken>' }
);

Introduction

The WebSocket-based real-time API by Rammer provides the most direct, fastest, and most accurate interface to push an audio stream in real time and get results back as soon as they're available.

Connection Establishment

This is a WebSocket endpoint, and hence it starts as an HTTP request with headers that indicate the client's desire to upgrade the connection to a WebSocket instead of using HTTP semantics. The server indicates its willingness to participate in the WebSocket connection by returning an HTTP 101 Switching Protocols response. After this handshake, both client and service keep the socket open and begin using a message-based protocol to send and receive information. Please refer to the WebSocket specification, RFC 6455, for a more in-depth understanding of the handshake process.

Message Formats

Both the client and the server can send messages after the connection is established. According to RFC 6455, WebSocket messages can have either a text or a binary encoding. The two encodings use different on-the-wire formats; each format is optimized for efficient encoding, transmission, and decoding of the message payload.

Text Message

Text messages over WebSocket must use UTF-8 encoding. A text message is a serialized JSON message, and every text message has a type field that specifies the type, or purpose, of the message.

Binary Message

Binary WebSocket messages carry a binary payload. For the Real-time API, audio is transmitted to the service using binary messages; all other messages are text messages.

Client messages

This section describes the messages that originate from the client and are sent to the service. The types of messages sent by the client are start_request, stop_request, and binary messages containing audio.

Configuration

Main Message Body

Field | Required | Supported Values | Description
type | true | start_request, stop_request | Type of message
insightTypes | false | action_item, question | Types of insights to return. If not provided, no insights will be returned.
config | false | | Configuration for this request. See the config section below for more details.
speaker | false | | Speaker identity to use for audio in this WebSocket connection. If omitted, no speaker identification will be used for processing. See below.

config

Field | Required | Supported Values | Default Value | Description
confidenceThreshold | false | 0.0 - 1.0 | 0.5 | Minimum confidence score an insight must meet for the API to consider it valid; defaults to 0.5, i.e. 50% or more
timezoneOffset | false | | 0 | The number of minutes to add to or subtract from UTC time. Positive and negative numbers are accepted. For example, for the PST timezone the value should be -480, and for the IST timezone the value should be 330
languageCode | false | | en-US | The language code as per the BCP 47 specification
speechRecognition | false | | | Speech recognition configuration for the audio in this WebSocket connection. See the speechRecognition section below.
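If you want to derive timezoneOffset from the machine's local clock, note that JavaScript's getTimezoneOffset() returns minutes with the opposite sign of what the table above describes, so negate it. A small sketch:

// Date#getTimezoneOffset() returns UTC minus local time (PST: 480),
// while the API expects local minus UTC (PST: -480), so negate it.
const timezoneOffset = -new Date().getTimezoneOffset();
console.log(timezoneOffset); // -480 in PST, 330 in IST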

speechRecognition

Field | Required | Supported Values | Default Value | Description
encoding | false | LINEAR16, FLAC, MULAW | LINEAR16 | Audio encoding in which the audio will be sent over the WebSocket
sampleRateHertz | false | | 16000 | The sample rate of the incoming audio stream

speaker

Field | Required | Description
userId | false | Any user identifier for the user
name | false | Display name of the user

Messages

Start Request

{
  "type": "start_request",
  "insightTypes": ["question", "action_item"],
  "config": {
    "confidenceThreshold": 0.9,
    "timezoneOffset": 480,
    "languageCode": "en-US",
    "speechRecognition": {
      "encoding": "LINEAR16",
      "sampleRateHertz": 16000
    }
  },
  "speaker": {
    "userId": "jane.doe@example.com",
    "name": "Jane"
  }
}


This is a request to start processing after the connection is established. Right after this message has been sent, the audio should be streamed; any binary audio streamed before the receipt of this message will be ignored.

Stop Request

{
  "type": "stop_request"
}


This is a request to stop the processing. After the receipt of this message, the service will stop any processing and close the WebSocket connection.

Example of the message_response object

{
  "type": "message_response",
  "messages": [
    {
      "from": {
        "name": "Jane",
        "userId": "jane.doe@example.com"
      },
      "payload": {
        "content": "I was very impressed by your profile, and I am excited to know more about you.",
        "contentType": "text/plain"
      }
    },
    {
      "from": {
        "name": "Jane",
        "userId": "jane.doe@example.com"
      },
      "payload": {
        "content": "So tell me, what is the most important quality that you acquired over all of your professional career?",
        "contentType": "text/plain"
      }
    }
  ]
}

Sending Binary Messages with Audio

The client needs to send the audio to the service by converting the audio stream into a series of audio chunks. Each chunk carries a segment of audio to be processed. The maximum size of a single audio chunk is 8,192 bytes.
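For example, a minimal sketch of splitting a larger audio buffer into chunks within the 8,192-byte limit, sent as binary frames using the websocket package's connection object from the earlier example:

const MAX_CHUNK_SIZE = 8192; // maximum size of a single audio chunk in bytes

// Split an audio Buffer into chunks within the limit and send each as a binary message.
function sendAudio(connection, audioBuffer) {
  for (let offset = 0; offset < audioBuffer.length; offset += MAX_CHUNK_SIZE) {
    const chunk = audioBuffer.slice(offset, offset + MAX_CHUNK_SIZE);
    connection.sendBytes(chunk); // binary frame API of the 'websocket' package
  }
}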

Service Messages

This section describes the messages that originate from the service and are sent to the client.

The service sends mainly two types of messages (message_response, insight_response) to the client as soon as they're available.

Message Response

The message_response contains the processed messages as soon as they're ready and available during processing of the continuous audio stream. This message does not contain any insights.

Insight Response

Example of the insight_response object

{
  "type": "insight_response",
  "insights": [{
    "type": "question",
    "text": "So tell me, what is the most important quality that you acquired over all of your professional career?",
    "confidence": 0.9997962117195129,
    "hints": [],
    "tags": []
  },
  {
    "type": "action_item",
    "text": "Jane will look into the requirements on the hiring for coming financial year.",
    "confidence": 0.9972074778643447,
    "hints": [],
    "tags": [{
      "type": "person",
      "text": "Jane",
      "beginOffset": 0,
      "value": {
        "value": {
          "name": "Jane",
          "alias": "Jane",
          "userId": "jane.doe@rammer.ai"
        }
      }
    }]
  }]
}

The insight_response contains the insights from the ongoing conversation as soon as they are available. This message does not contain any messages.

Conversation API

The Conversation API provides the REST API interface for the management and processing of your conversations.

POST Authentication

Authentication for the Conversation API is identical to the flow described in the Authentication section above: generate an Access Token with your appId and appSecret via POST https://api.rammer.ai/oauth2/token:generate, then pass the token on subsequent calls.

GET conversation

Returns the full conversation details.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "id": "unique_conversation_id",
    "type": "meeting",
    "name": "Project Meeting #2",
    "startTime": "2020-02-12T11:32:08.000Z",
    "endTime": "2020-02-12T11:37:31.134Z",
    "transcriptId": "unique_transcript_id",
    "members": [
        {
            "name": "John",
            "email": "John@example.com",
        },
        {
            "name": "Mary",
            "email": "Mary@example.com",
        },
        {
            "name": "Roger",
            "email": "Roger@example.com",
        }
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}

GET messages in a conversation

Returns all the messages in a conversation.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/messages

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/messages',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "messages": [
        {
            "id": "5659996670918656",
            "text": "Sign something a little further.",
            "from": {
                "name": "Mary",
                "email": "Mary@example.com"
            },
            "startTime": "2020-02-12T11:32:21.383Z",
            "endTime": "2020-02-12T11:32:22.983Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        },
        {
            "id": "5732040452341760",
            "text": "I guess we won't.",
            "from": {
                "name": "Roger",
                "email": "Roger@example.com"
            },
            "startTime": "2020-02-12T11:32:23.883Z",
            "endTime": "2020-02-12T11:32:24.582Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        },
        {
            "id": "5630620503900160",
            "text": "Get too much more info on that.",
            "from": {
                "name": "John",
                "email": "John@example.com"
            },
            "startTime": "2020-02-12T11:32:24.582Z",
            "endTime": "2020-02-12T11:32:26.383Z",
            "transcriptId": "5694147767828480",
            "conversationId": "5708267674140672"
        },
        // ...more messages
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/messages

GET all members in a conversation

Returns all the members in a conversation.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/members

Example API call

const request = require('request');


request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/members',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "members": [
        {
            "name": "John",
            "email": "John@example.com"
        },
        {
            "name": "Mary",
            "email": "Mary@example.com"
        },
        {
            "name": "Roger",
            "email": "Roger@example.com"
        }
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/members

GET insights from a conversation

Returns all the insights in a conversation, including topics, questions, and action items.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/insights

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/insights',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "insights": [
        {
            "id": "5179649407582208",
            "text": "Push them for the two weeks delivery, right?",
            "type": "question",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": []
        },
        {
            "id": "5633940379402240",
            "text": "Mary thinks we need to go ahead with the TV in Bangalore.",
            "type": "action_item",
            "score": 0.8659442937321238,
            "messageIds": [
                "20c6b55a-4da6-45a5-bbea-b7c5053684c2"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com",
                "phone": ""
            }
        },
        {
            "id": "5642466493464576",
            "text": "I think what is the Bahamas?",
            "type": "question",
            "score": 0.9119608386876195,
            "messageIds": [
                "538f9cec-a495-42cf-8e94-5c95e54f6b7d"
            ],
            "entities": []
        },
        {
            "id": "5644121934921728",
            "text": "Think we need to have a call with UV.",
            "type": "follow_up",
            "score": 0.8660254121940272,
            "messageIds": [
                "c4611a85-5893-40f8-a2f3-22b1f7eadc63"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com",
            }
        },
        // ...more insights
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/insights

GET topics from a conversation

Returns all the topics generated from a conversation.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/topics

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/topics',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "topics": [
        {
            "id": "5179649407582208",
            "text": "speakers",
            "type": "topics",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": [
                {
                    "type": "rootWord",
                    "text": "speakers"
                }
            ]
        }
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/topics

GET questions from a conversation

Returns all the questions generated from the conversation.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/questions

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/questions',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "questions": [
        {
            "id": "5179649407582208",
            "text": "Push them for the two weeks delivery, right?",
            "type": "question",
            "score": 0.9730208796076476,
            "messageIds": [
                "e16d5c97-93ff-4ebf-aff7-8c6bba54747c"
            ],
            "entities": []
        },
        {
            "id": "5642466493464576",
            "text": "I think what is the Bahamas?",
            "type": "question",
            "score": 0.9119608386876195,
            "messageIds": [
                "538f9cec-a495-42cf-8e94-5c95e54f6b7d"
            ],
            "entities": []
        },
        {
            "id": "5756718797553664",
            "text": "Okay need be detained, or we can go there in person and support them?",
            "type": "question",
            "score": 0.893303149769215,
            "messageIds": [
                "d382c499-c44f-4459-99f9-d984db1b9058"
            ],
            "entities": []
        },
        {
            "id": "6235991715086336",
            "text": "Why is that holiday in US from 17?",
            "type": "question",
            "score": 0.9998053310511206,
            "messageIds": [
                "ab88b466-1378-4cad-af45-0050e8ef097a"
            ],
            "entities": []
        }
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/questions

GET action items from a conversation

Returns all the action items generated from the conversation.

API Endpoint

https://api.rammer.ai/v1/conversations/{conversationId}/action-items

Example API call

const request = require('request');

request.get({
    url: 'https://api.rammer.ai/v1/conversations/{conversationId}/action-items',
    headers: {'x-api-key': 'your_auth_token'},
    json: true
}, (err, response, body) => {
  console.log(body);
});

The above request returns a response structured like this:

{
    "actionItems": [
        {
            "id": "5633940379402240",
            "text": "Mary thinks we need to go ahead with the TV in Bangalore.",
            "type": "action_item",
            "score": 0.8659442937321238,
            "messageIds": [
                "20c6b55a-4da6-45a5-bbea-b7c5053684c2"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com"
            }
        },
        {
            "id": "5668855401676800",
            "text": "Call and Stephanie also brought up something to check against what Ison is given as so there's one more test that we want to do.",
            "type": "action_item",
            "score": 0.8660254037845785,
            "messageIds": [
                "fc31a51c-5e18-41ea-a868-fa5065ccfa92"
            ],
            "entities": [],
            "assignee": {
                "name": "John",
                "email": "John@example.com"
            }
        },
        {
            "id": "5690029162627072",
            "text": "Checking the nodes with Eisner to make sure we covered everything so that will be x.",
            "type": "action_item",
            "score": 0.8657734634985154,
            "messageIds": [
                "24239f56-b4b3-4244-96db-1943f5978659"
            ],
            "entities": [],
            "assignee": {
                "name": "John",
                "email": "John@example.com"
            }
        },
        {
            "id": "5707174000984064",
            "text": "Roger is going to work with the TV lab and make sure that test is also included, so we are checking to make sure not only with our complaints.",
            "type": "action_item",
            "score": 0.9999962500210938,
            "messageIds": [
                "6ecb11ea-b311-4fd2-b3b5-f0694c809cc3"
            ],
            "entities": [],
            "assignee": {
                "name": "Roger",
                "email": "Roger@example.com"
            }
        },
        {
            "id": "5757280188366848",
            "text": "Mary thinks it really needs to kick start this week which means the call with UV team and our us team needs to happen the next couple of days.",
            "type": "action_item",
            "score": 0.9999992500008438,
            "messageIds": [
                "262534fa-36a8-4645-8d0f-e4b78e608325"
            ],
            "entities": [],
            "assignee": {
                "name": "Mary",
                "email": "Mary@example.com"
            },
            "dueBy": "2020-02-10T07:00:00.000Z"
        }
    ]
}

HTTP REQUEST

GET https://api.rammer.ai/v1/conversations/{conversationId}/action-items

Errors


Rammer uses the following HTTP codes:

Error Code | Meaning
200 | OK -- Success.
400 | Bad Request -- Your request is invalid.
401 | Unauthorized -- Your API key is invalid.
403 | Forbidden -- You don't have permission to access the requested resource.
404 | Not Found -- The specified resource does not exist.
405 | Method Not Allowed -- You tried to access an API with an invalid method.
429 | Too Many Requests -- Too many requests hit the API too quickly.
500 | Internal Server Error -- We had a problem with our server. Try again later.
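A small sketch of checking these codes in the callback style used throughout these docs (the endpoint is illustrative):

const request = require('request');

request.get({
  url: 'https://api.rammer.ai/v1/conversations/{conversationId}', // illustrative endpoint
  headers: { 'x-api-key': 'your_auth_token' },
  json: true
}, (err, response, body) => {
  if (err) return console.error('Network error.', err);

  switch (response.statusCode) {
    case 200:
      console.log(body);
      break;
    case 401:
      console.error('Invalid API key; regenerate your access token.');
      break;
    case 429:
      console.error('Too many requests; back off and retry later.');
      break;
    default:
      console.error('Request failed with status code', response.statusCode);
  }
});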

Sample Flows

What are flows?

The "Flows" product is a visual editor for building conversational intelligence workflows and integrations. The initial release will be limited to the meetings insights flow which connects to your meeting platform and delivers topics, action items, and insights. Subsequent versions will offer customizable nodes, extensibility, and custom flows for any use case. We will also offer other sample flows such as Slack and Salesforce connectors.

Meeting Insight Flow

Reference implementation for connecting to meetings and conference calls. Integrate with common calendar and meeting platforms to generate meeting insights. Currently, the platform supports integration with Zoom, GoToMeeting, Cisco Webex, and Google Meet. With the meeting insights flow, you can have Rammer join all your Google Calendar meetings in a few easy steps.

Get Started

To have Rammer join all your calendar meetings, pre-register here to get notified when the platform is available. Once it's available, log in and navigate to the Flows tab:

Flows

Select Create which will take you through the process of connecting your Google Calendar and choosing the meeting platforms that Rammer can join.

Flows2

And that's it! Once you give Rammer access to your calendar, you can go ahead and create a meeting, and you'll see that Rammer joins that meeting and exits when the meeting ends. At the end of the meeting, you will receive an email summary of the meeting with the topics and action items that were discussed and identified.

As you continue to use Rammer in your day-to-day meetings, you can track your balance on the dashboard. If you want to add more minutes to your account, you can refill your balance from the billing tab:

Flows-Billing

To disconnect Rammer from your meetings, just click the Delete button and Rammer will stop joining any further meetings.

Flows-Delete

Salesforce Adapter Flow

Reference implementation for integrating Insights and Action items in Salesforce. Coming soon. Click here to get notified.

Slack App Flow

Reference implementation for integrating Insights and Action items in Slack. Coming soon. Click here to get notified.

Offline File Flow

Reference implementation for generating Insights and Action items from audio files. Coming soon. Click here to get notified.

Create Custom Flow

Reference implementation for generating Insights and Action items from your own custom flows. Coming soon. Click here to get notified.

Resources

Things to consider when integrating Rammer into your flows:

  1. Rammer will only join calls from a user's calendar when the user's email ID matches the email ID used to grant calendar permission.

    • Rammer doesn’t have control over selecting the associated calendar when requesting permission.
    • If the user selects a different account from the logged-in account for calendar permissions, Rammer won’t join calls from that calendar.
  2. Rammer won't join password/PIN protected meetings.

  3. Rammer will join the meeting only for the duration scheduled as per the calendar invitation.

  4. Rammer will leave the meeting if there is silence for more than 10 minutes.

  5. Currently, users can create only one meeting insights flow with one Google Calendar account. The user cannot connect multiple calendar accounts. However, the user can delete a flow and create it again to connect it to a different calendar account.

  6. Rammer requires a specific GoToMeeting meeting description in the email invite body, as below:

  7. Rammer requires a specific format for Webex in the meeting description, as below:

Overview

The Rammer.ai platform provides conversation intelligence as a service. The platform enables real-time intelligence in business conversations, recognizing action items, insights, questions, contextual topics, summaries, etc.

What is conversational intelligence?

In its pure form, conversation intelligence refers to the ability to communicate in ways that create a shared concept of reality. It begins with trust and transparency to remove biases in decisions, enable participants, such as knowledge workers, to be more effective at their core function, eradicate mundane and repetitive parts of their work and empower participants both at work and beyond.

Here at Rammer.ai, we use methods of artificial intelligence like machine learning and deep learning to augment human capability by analyzing conversations and surfacing the knowledge and actions that matter.

How is Rammer.ai different from chatbot platforms?

In short, very.

Also, in short: chatbots are intent-based, rule-based, often launched by ‘wake words’, and enable short conversations between humans and machines.

Rammer.ai is a developer platform and service capable of understanding context and meaning in natural conversations between humans. It can surface the things that matter in real-time, e.g. questions, action items, insights, contextual topics, signals, etc.

Slightly longer answer:

Chatbots or virtual assistants are commonly command-driven and often referred to as conversation AI systems. They add value to direct human-machine interaction via auditory or textual methods, and attempt to convincingly simulate how a human would behave in a conversation.

You can build chatbots by using existing intent-based systems like RASA, DialogFlow, Watson, Lex, etc. These systems identify intent based on the training data you provide, and these systems enable you to create rule-based conversation workflows between humans and machines.

We are building a platform that can contextually analyze natural conversations between two or more humans based on meaning, as opposed to keywords or wake words. We are also building it to require zero training, so you can analyze conversations on both audio and text channels and get recommended outcomes without needing to train a custom engine for every new intent.

Imagine embedding a passive intelligence in existing products or workflows, natively. Every bit of conversational data flowing through is parsed and used to surface real-time actions and outcomes.

Next: explore supported use cases

Use Cases

Working closely with early customers and their developers, we have received very positive feedback on several use cases. Among these, in highest demand are use cases for meetings, unified communication and collaboration, customer care, sales enablement, workflow management, and recruitment.

Meetings & UCaaS

Applying primarily to unified communication and collaboration platforms (UCaaS), you can add real-time recommendations of action items and next steps as part of your existing workflow. This would meaningfully improve meeting productivity by surfacing the things that matter, as the meeting occurs. Beyond real-time prompts, take advantage of automated meetings summaries delivered to your preferred channel, like email, chat, Slack, calendar, etc.

Use real-time contextual recommendations to enable participants to drive efficiencies in their note-taking, save time and focus more on the meeting itself. Action items are surfaced contextually and in real-time and can be automated to trigger your existing workflows.

Post-meeting summaries are helpful for users that like to get more involved in the conversation as it happens, and prefer re-visiting information and action items post-meeting.

Benefits:

Customer Care & CCaaS

As we understand it, customer care performance can be measured by 3 proxy metrics: customer satisfaction, time spent on call, and the number of calls serviced.

What if the introduction of a real-time passive conversation intelligence service into each call was to improve all 3 metrics at once? Real-time contextual understanding leads to suggested actions that a customer care agent can act upon during the call, enabling the agent to 1) focus on the human connection with the customer, 2) come to a swifter resolution thanks to task automation, and 3) serve more customers with higher satisfaction during a shift.

Further, the Rammer.ai platform is also capable of automating post-call data collection. This enables analysis of support conversations across time, agents, shifts, and groups, which leads to a better understanding of pain points, topics of customer support conversations, etc.

Benefits: Support Organization

Sales Enablement & CRM

Digital communication platforms used for sales engagements and customer interactions need to capture conversational data for benchmarking performance, improving net sales, and identifying and replicating the best-performing sales scripts.

Use Rammer.ai to identify top-performing pitches by leveraging real-time insights. Accelerate the sales cycle by automating suggested action items in real-time, such as scheduling tasks and follow-ups via outbound work tool integrations. Keep your CRM up to date by automating the post-call entry with useful summaries.

Benefits: Sales Agent

Benefits: Sales Enablement / VP of Sales

Next: Learn more about the capabilities of the platform

Capabilities

Transcript

The platform provides a searchable transcript with timecodes and speaker information. The transcript is a refined output of the speech-to-text conversion. Our platform does not carry its own speech-to-text capability; it is compatible with a range of ASR APIs including Google, Amazon, Microsoft, etc.

The transcript is one of the easiest ways to navigate through the entire conversation. It can be sorted using speaker-specific or topic-specific filters. Additionally, each insight or action item can also lead to related parts of the transcript.

Transcripts are available in real time for voice and video conversations. They can also be accessed through the post-conversation summary UI.

The post-conversation summary page enables editing, copying and sharing of transcripts from the conversation.

Speech To Text

Rammer is agnostic to Speech To Text (STT) APIs. Behind the scenes, we use Google STT for transcription but are compatible with others such as Amazon, Microsoft, etc. We recommend a set of best practices and "what to expect" from STT in our blog post here.

You can test out our STT capabilities with the audio samples provided below.

Speaker Separation

Audio streams and transcripts are partitioned into homogeneous segments based on speaker identity. This is done with the help of speaker events and the process of Speaker Diarization.

Timeline

The entire conversation is represented in a spatial format using the timeline feature. The timeline can show speakers, topics and specific events in the conversation like action items or insights. The timeline makes navigating around a conversation easier and very intuitive.

The timeline can also help in understanding how the sentiment/emotion has been changing throughout the conversation.

With appropriate work tool integrations, timelines can also help track follow-ups and tasks that were assigned and completed over time.

Summary Topics

Summary topics provide a quick overview of the key things that were talked about in the conversation. IMPORTANT: summary topics are not detected based on the frequency of their occurrences in the conversation; they are detected contextually, and each summary topic is an indication of one or more important topics of discussion in the conversation.

Each summary topic has a score that indicates the importance of that topic in the context of the entire meeting. It is not rare for less frequently mentioned topics to be of higher importance in the conversation, and this is reflected in a higher score for those topics, even when other summary topics have more mentions overall.

Contextual hierarchies

The summary topics have contextual hierarchies in them. High-level topics represent various concepts that the conversation is about, while lower-level topics are aspects of those high-level concepts, which provide a more contextual understanding of the high-level concepts discussed in the conversation.

For example, higher-level concepts could be Pricing, Revenue, or Production Issues, and lower-level aspects could be like these:

High Level (Concept) | Low Level (Aspect)
Pricing | Selling Price, Paying Capacity, Cost-based pricing
Revenue | Revenue Growth, Higher Margin, Revenue Model
Production Issues | Critical Issue, Downtime, Unstable Production

This table simply shows how the “Aspects” provide more information about one or more “Concepts” in the conversation. By way of this table, you can understand which different aspects of each high-level concept were discussed in any given conversation.

Action Items

An action item is a specific outcome recognized in the conversation that requires one or more people in the conversation to take a specific action, e.g. set up a meeting, share a file, complete a task, etc.

Action Item Features

There are various types of actionable parts in a conversation between people, and the platform can recognize their various connotations.

Definitive

A definitive connotation is used to indicate the importance, definitiveness, and predictability of a certain action. Usually, this type of action item indicates the commitment to the task.

Examples:

"We need to fix all the critical issues by tomorrow". Here, there is a definitive requirement for a group of people indicated by "we" to fix the critical issues by tomorrow. "Please make sure that the hall is booked for 25th". Here, even though the tone of the action item is not a command, still the request suggests that this task needs to be completed.

The platform can recognize these types of connotations on top of recognizing the actionable item itself and indicate it in the output.

Non-Definitives

There can be other actionable items that may not be definitive in nature but still indicate some future action. For example, an item can simply be someone's opinion that indicates a future action.

"I think we should spend more time reviewing the document". Here, to spend more time in review of the document is an opinion of this person but it's not something they are committing to.

Tasks

Definitive action items that are not follow-ups are categorized as tasks.

Example: "I will complete the presentation that needs to be presented to the management by the end of today". Here, a person is really committed to completing the presentation (task) by the end of today.

Follow Ups

The platform can recognize if an action item has a connotation that requires following up, either in general or by someone in particular.

Examples:

"I will talk to my manager and find out the agreed dates with the vendor". Here, a person needs to follow up with their manager in order to complete this action.

"Perhaps I can submit the report today". Here, the action of submitting the report is indicated, but the overall connotation of it doesn't indicate the commitment.

Follow-ups can also be non-definitive

Example:

“We’ll need to sync up with the design team to find out more details”. Here, it’s clear that there needs to be a follow-up, but the details on when and how are not defined.

 | Follow Up | Non-Follow Up
Definitive | Follow Up (defined data) | Task
Non-Definitive | Follow Up (non-defined data) | Idea/Opinion

Other Insight Types

Questions

Any explicit question or request for information that comes up during the conversation, whether answered or not, is recognized as a question.

Examples:

“What features are most relevant for our use case?” “How are we planning to design the systems?”

Metadata

The platform supports access to certain metadata about the conversation through the API. This metadata could be one of the following:

Suggestive Actions

For each action item identified in the conversation, certain suggestive actions are recommended based on the available work tool integrations.

Example:

Outbound Work Tool Integrations

The platform currently offers email and calendar as out-of-the-box integrations. However, this can be extended to any work tool where actionable insights need to be pushed to enhance productivity and reduce the time users take to manually enter information from conversations. The same integrations can be enabled as suggestive actions to make this even quicker.

Some examples of these work tools could be:

Sentiment Analysis

(Access to this feature is invite-only, contact sales for more information)

The platform has the built-in capability to identify sentiments based on contextual understanding of each conversation. Also, the conversation itself may have a generic sentiment that may vary during the conversation.

For example, with contextual understanding, the platform can analyze what topic/person added a positive or negative sentiment to the conversation.

Emotion Analysis

(Access to this feature is invite-only, contact sales for more information)

Through contextual understanding, the platform can help analyze the emotions of participants specific to topics or questions in the conversation. This can help reveal different emotions that the participant(s) may have had towards a topic as well as in parts of the conversation, e.g. happy, sad, angry, fearful, excited, bored, etc.

Reusable and Customizable UI Components

The pre-built UI components can be broadly divided into two areas:

  1. Real-time UI components
  2. Summary Page UI

Real-Time UI Components

Real-time UI Components help showcase the transcription, insights and action items during the conversation itself. These are customizable, embeddable components that can be directly used in any product.

Real-time UI components are available for

Summary Page UI

At the end of each conversation, a summary of the conversation is generated, and the page URL is shared via email with all (or selected) participants.

The Summary page UI includes the following components

The post-conversation summary page is also fully customizable per use case or product requirements.