Approach for Deploying Skills for Cognitive Agents Across Multiple Vendor Platforms
| Content Provider | The Lens |
|---|---|
| Abstract | A cognitive agent system provides a centralized capability for users to configure and deploy cognitive agents across multiple heterogeneous vendor platforms. The cognitive agent system provides a design environment that allows users to define skills, as well as a new conversation construct that supports more complex interactions with users. The cognitive agent system also includes a deployment environment that allows users to register users and cognitive agents, deploy skills and conversations, and monitor the activity of cognitive agents across multiple vendor platforms. These users may use the cognitive agent system to define skills and conversations once and then deploy the skills and conversations to multiple service endpoints across different vendor platforms. In addition, the cognitive agent system allows users to directly manage cognitive agents that are not specific to any particular vendor. |
| Related Links | https://www.lens.org/lens/patent/009-319-633-510-513/frontpage |
| Language | English |
| Publisher Date | 2019-11-21 |
| Access Restriction | Open |
| Content Type | Text |
| Resource Type | Patent |
| Jurisdiction | United States of America |
| Date Applied | 2018-05-16 |
| Applicant | Nelson Steven A; Kitada Hiroshi; Wong Lana; Ricoh Co Ltd |
| Application No. | 201815981062 |
| Claim | 1. An apparatus providing an improvement in cognitive agents in computer networks, the apparatus comprising: one or more processors, one or more memories communicatively coupled to the one or more processors, and a management application executing on the apparatus, the management application being configured to perform: receiving a first user selection of a particular skill to be deployed to a voice-activated service, receiving a second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed, wherein the second voice-activated service is different than the first voice-activated service, in response to the first user selection of a particular skill to be deployed to a voice-activated service and the second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed: retrieving first source code that implements the particular skill, wherein the first source code is in a first source code format, generating and transmitting, via one or more computer networks to a first computer system that hosts the first voice-activated service, a first set of one or more messages that provide the first source code to the first computer system that hosts the first voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the first voice-activated service, and generating and transmitting, via one or more computer networks to a second computer system that hosts the second voice-activated service, a second set of one or more messages that provide the first source code to the second computer system that hosts the second voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the second voice-activated service.
2. The apparatus as recited in claim 1, wherein providing the first source code to the first computer system that hosts the first voice-activated service includes translating the first source code into a format supported by the first voice-activated service.
3. The apparatus as recited in claim 1, wherein the user interface is further configured to deploy the particular skill to a cognitive agent that is not supported by the particular voice-activated service.
4. The apparatus as recited in claim 1, wherein the management application is further configured to: generate and provide to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of skills that are available for deployment to a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
5. The apparatus as recited in claim 1, wherein the management application is further configured to: generate and provide to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of cognitive agents that are available to be configured with a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, wherein the plurality of voice-activated services is selected on the basis that they all support the plurality of cognitive agents, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
6. The apparatus as recited in claim 1, wherein: the first set of one or more messages conform to a first application program interface supported by the first computer system, and the second set of one or more messages conform to a second application program interface that is both supported by the second computer system and is different than the first application program interface supported by the first computer system.
7. The apparatus as recited in claim 1, wherein: the first set of one or more messages comprise one or more first JavaScript Object Notation (JSON) files, and the second set of one or more messages comprise one or more second JSON files.
8. One or more non-transitory computer-readable media providing an improvement in cognitive agents in computer networks, the one or more non-transitory computer-readable media storing instructions which, when processed by one or more processors, cause a management application executing on an apparatus to perform: receiving a first user selection of a particular skill to be deployed to a voice-activated service, receiving a second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed, wherein the second voice-activated service is different than the first voice-activated service, in response to the first user selection of a particular skill to be deployed to a voice-activated service and the second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed: retrieving first source code that implements the particular skill, wherein the first source code is in a first source code format, generating and transmitting, via one or more computer networks to a first computer system that hosts the first voice-activated service, a first set of one or more messages that provide the first source code to the first computer system that hosts the first voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the first voice-activated service, and generating and transmitting, via one or more computer networks to a second computer system that hosts the second voice-activated service, a second set of one or more messages that provide the first source code to the second computer system that hosts the second voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the second voice-activated service.
9. The one or more non-transitory computer-readable media as recited in claim 8, wherein providing the first source code to the first computer system that hosts the first voice-activated service includes translating the first source code into a format supported by the first voice-activated service.
10. The one or more non-transitory computer-readable media as recited in claim 8, wherein the user interface is further configured to deploy the particular skill to a cognitive agent that is not supported by the particular voice-activated service.
11. The one or more non-transitory computer-readable media as recited in claim 8, further storing additional instructions which, when processed by the one or more processors, cause: generating and providing to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of skills that are available for deployment to a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
12. The one or more non-transitory computer-readable media as recited in claim 8, further storing additional instructions which, when processed by the one or more processors, cause the management application executing on the apparatus to perform: generating and providing to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of cognitive agents that are available to be configured with a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, wherein the plurality of voice-activated services is selected on the basis that they all support the plurality of cognitive agents, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
13. The one or more non-transitory computer-readable media as recited in claim 8, wherein: the first set of one or more messages conform to a first application program interface supported by the first computer system, and the second set of one or more messages conform to a second application program interface that is both supported by the second computer system and is different than the first application program interface supported by the first computer system.
14. The one or more non-transitory computer-readable media as recited in claim 8, wherein: the first set of one or more messages comprise one or more first JavaScript Object Notation (JSON) files, and the second set of one or more messages comprise one or more second JSON files.
15. A computer-implemented method providing an improvement in cognitive agents in computer networks, the computer-implemented method comprising causing a management application executing on an apparatus to perform: receiving a first user selection of a particular skill to be deployed to a voice-activated service, receiving a second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed, wherein the second voice-activated service is different than the first voice-activated service, in response to the first user selection of a particular skill to be deployed to a voice-activated service and the second user selection of a first voice-activated service and a second voice-activated service on which the particular skill is to be deployed: retrieving first source code that implements the particular skill, wherein the first source code is in a first source code format, generating and transmitting, via one or more computer networks to a first computer system that hosts the first voice-activated service, a first set of one or more messages that provide the first source code to the first computer system that hosts the first voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the first voice-activated service, and generating and transmitting, via one or more computer networks to a second computer system that hosts the second voice-activated service, a second set of one or more messages that provide the first source code to the second computer system that hosts the second voice-activated service and cause the particular skill to be made available to cognitive agents executing on service endpoints associated with the second voice-activated service.
16. The computer-implemented method as recited in claim 15, wherein providing the first source code to the first computer system that hosts the first voice-activated service includes translating the first source code into a format supported by the first voice-activated service.
17. The computer-implemented method as recited in claim 15, wherein the user interface is further configured to deploy the particular skill to a cognitive agent that is not supported by the particular voice-activated service.
18. The computer-implemented method as recited in claim 15, further comprising: generating and providing to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of skills that are available for deployment to a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
19. The computer-implemented method as recited in claim 15, further comprising: generating and providing to a client device, via the one or more computer networks, a user interface that includes: a first plurality of user interface objects that correspond to a plurality of cognitive agents that are available to be configured with a voice-activated service, a second plurality of user interface objects that correspond to a plurality of voice-activated services, wherein the plurality of voice-activated services is selected on the basis that they all support the plurality of cognitive agents, and user interface controls that allow a user of the client device to make the first user selection of the particular skill, from the plurality of skills, and the second user selection of the particular voice-activated service, from a plurality of voice-activated services.
20. The computer-implemented method as recited in claim 15, wherein: the first set of one or more messages conform to a first application program interface supported by the first computer system, and the second set of one or more messages conform to a second application program interface that is both supported by the second computer system and is different than the first application program interface supported by the first computer system. |
| CPC Classification | ELECTRIC DIGITAL DATA PROCESSING; SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING |
| Extended Family | 009-319-633-510-513 100-365-251-056-71X 107-735-076-616-934 |
| Patent ID | 20190355363 |
| Inventor/Author | Nelson Steven A; Kitada Hiroshi; Wong Lana |
| IPC | G10L15/26 G06F9/445 G06F9/451 |
| Status | Active |
| Owner | Ricoh Company Ltd |
| Simple Family | 009-319-633-510-513 100-365-251-056-71X 107-735-076-616-934 |
| CPC (with Group) | G06F3/167 G10L2015/088 G06F8/61 G06F16/685 G06F9/44505 G06F9/453 G06F16/243 G10L15/26 |
| Issuing Authority | United States Patent and Trademark Office (USPTO) |
| Kind | Patent Application Publication |
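
The independent claims (1, 8 and 15) all recite the same deployment flow: the management application receives one selection of a skill and one selection of two or more voice-activated services, retrieves the skill's source code once, and then generates and transmits a separate set of messages to the computer system hosting each selected service so that the skill becomes available to that service's cognitive agents. The sketch below illustrates that fan-out under stated assumptions; the names (`Skill`, `VendorAdapter`, `deploy_skill`) and payload fields are hypothetical and are not taken from the patent or from any vendor API.

```python
# Illustrative sketch only: hypothetical names and payloads, not the
# patent's implementation or any vendor SDK.
import json
from dataclasses import dataclass


@dataclass
class Skill:
    """A vendor-neutral skill definition kept in one source format."""
    name: str
    source_code: str          # "first source code", in the first source code format
    intents: list[str]


class VendorAdapter:
    """Translates the vendor-neutral skill into one vendor's expected payload."""

    def __init__(self, vendor_name: str, api_url: str):
        self.vendor_name = vendor_name
        self.api_url = api_url

    def translate(self, skill: Skill) -> dict:
        # Claims 2/9/16: translate the source code into a format supported
        # by the target voice-activated service (structure is illustrative).
        return {
            "skillName": skill.name,
            "targetService": self.vendor_name,
            "interactionModel": {"intents": skill.intents},
            "code": skill.source_code,
        }

    def deploy(self, skill: Skill) -> str:
        # Claims 6/7: each message set conforms to that vendor's own API and
        # is carried as JSON; a real implementation would transmit the payload
        # to the computer system hosting the voice-activated service.
        payload = json.dumps(self.translate(skill))
        print(f"POST {self.api_url} ({len(payload)} bytes)")
        return payload


def deploy_skill(skill: Skill, services: list[VendorAdapter]) -> None:
    """Claim 1 fan-out: one selected skill plus two or more selected
    voice-activated services yields one message set per service."""
    for service in services:
        service.deploy(skill)


if __name__ == "__main__":
    meeting_skill = Skill(
        name="StartMeeting",
        source_code="def handle(request): ...",
        intents=["StartMeetingIntent", "EndMeetingIntent"],
    )
    deploy_skill(
        meeting_skill,
        [
            VendorAdapter("vendor-a-voice-service", "https://vendor-a.example/skills"),
            VendorAdapter("vendor-b-voice-service", "https://vendor-b.example/skills"),
        ],
    )
```

Reading the translation step of claims 2/9/16 together with claims 6 and 7, one adapter per vendor is a natural way to keep a single source format while emitting per-vendor JSON message sets, though the patent does not prescribe this structure.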
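Claims 4, 11 and 18 add a user interface that the management application generates and provides to a client device, containing one set of user interface objects for the available skills, another for the available voice-activated services, and controls for making the two user selections. Below is a minimal sketch of the selection data such an interface might be driven by, assuming a hypothetical `build_selection_ui` helper and made-up field names.

```python
# Hypothetical sketch of the selection data behind the claimed user interface;
# field names and the build_selection_ui helper are assumptions for illustration.
import json


def build_selection_ui(skills: list[dict], services: list[dict]) -> str:
    """Return a JSON description a client device could render as two lists of
    user interface objects plus controls for the two user selections."""
    return json.dumps(
        {
            "skillObjects": [
                {"id": s["id"], "label": s["name"]} for s in skills
            ],
            "serviceObjects": [
                {"id": v["id"], "label": v["name"]} for v in services
            ],
            "controls": {
                "selectSkill": "single",       # first user selection: one skill
                "selectServices": "multiple",  # second selection: one or more services
            },
        },
        indent=2,
    )


if __name__ == "__main__":
    print(
        build_selection_ui(
            skills=[{"id": "skill-1", "name": "StartMeeting"}],
            services=[
                {"id": "svc-a", "name": "Vendor A Voice Service"},
                {"id": "svc-b", "name": "Vendor B Voice Service"},
            ],
        )
    )
```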