DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 10/03/2022 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-4, 6, 9-12, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito et al. (US 10,592,997).
Regarding Claim 1, Ito et al. discloses a computer-implemented method for visualizing viewpoints comprising: receiving, by a computing device, access to a multi-party discussion occurring via a telecommunication system (In step S301, the navigation device 111 acquires conversational speech by a plurality of passengers in the vehicle 110 via the microphone 201) (col. 6, lines 43-45); analyzing, by the computing device, the multi-party discussion using natural language processing (In step S303, the conversation situation analyzing unit 204 analyzes a situation of a conversation held by a plurality of persons. In the present embodiment, a conversation is analyzed to determine whether opinions are being coordinated by the conversation and to determine what kind of opinions are being expressed by each speaker) (col. 7, lines 6-11); extracting, by the computing device, a plurality of viewpoints of the multi-party discussion based on the analysis (In step S302, the server device 120 extracts respective utterances of each speaker from the conversational speech) (col. 6, lines 53-55); synthesizing, by the computing device, a subset of the plurality of viewpoints based on the content of each viewpoint of the plurality of viewpoints (Moreover, in a situation where a large number of speakers are present and are split into subgroups respectively engaged in conversations, a group of utterances related to a same conversation may be extracted as a series of a group of utterances based on the contents of the utterances and the relationship among the utterances, in which case a process of an intervention for bridging differences of opinion may be performed on each group of utterances) (col. 7, lines 15-22); and transmitting, by the computing device, a rendered synthesized visualization of the subset (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).
Regarding Claim 2, Ito et al. discloses the computer-implemented method, further comprising: generating, by the computing device, a consensus relating to a vote of participants of the multi-party discussion associated with the subset of the plurality of viewpoints (In the present embodiment, as an index representing satisfaction of a group in regards to decision making (group satisfaction), a score is introduced such that the larger the number of adopted opinions of all participants, the higher the score, and the smaller a variation in the numbers of adopted opinions of the respective participants, the higher the score) (col. 7, lines 30-36); and rendering, by the computing device, a viewpoint visualization of the plurality of viewpoints including the consensus for viewing by the participants (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).
Regarding Claim 3, Ito et al. discloses the computer-implemented method, wherein extracting the plurality of viewpoints comprises: identifying, by the computing device, a topic of dialogue associated with the plurality of viewpoints (In step S405, the conversation situation analyzing unit 204 estimates an intention and a conversation topic of each utterance from the contents (the text) of the utterance by referring to the vocabulary/intention understanding corpus/dictionary 206) (col. 9, lines 6-13); identifying, by the computing device, at least one viewpoint factor of each viewpoint relating to the topic (Examples of an utterance intention include starting a conversation, making a proposal, agreeing or disagreeing with a proposal, and consolidating opinions) (col. 9, lines 6-13); and assigning, by the computing device, a score to each viewpoint of the plurality of viewpoints based on the at least one viewpoint factor (Group satisfaction is calculated based on an opinion adoption score of each participant which is determined based on contents of expression of opinions and decided contents of each participant) (col. 12, lines 1-5).
Regarding Claim 4, Ito et al. discloses the computer-implemented method, wherein the viewpoint factor is one or more of a sentiment of the viewpoint, a statement order of the viewpoint, or a semantic of the viewpoint (In step S403, the conversation situation analyzing unit 204 obtains an emotion of a speaker for each utterance. Examples of emotions to be obtained include satisfaction, dissatisfaction, excitement, anger, sadness, anticipation, relief, and anxiety) (col. 8, lines 47-51).
Regarding Claim 6, Ito et al. discloses the computer-implemented method, wherein the rendered synthesized visualization comprises a clustering of the plurality of viewpoints based on the score (In step S407, the conversation situation analyzing unit 204 generates and outputs conversation situational data that integrates the analysis results) (col. 10, lines 27-38).
Regarding Claim 9, Ito et al. discloses a computer system for visualizing viewpoints, the computer system comprising: one or more processors, one or more computer-readable memories (Moreover, the navigation device 111 and the server device 120 are both computers including a processing device such as a CPU, a storage device such as a RAM and a ROM, an input device, an output device, a communication interface, and the like, and realize the respective functions described above as the processing device executes a program stored in the storage device) (col. 6, lines 23-35); program instructions stored on at least one of the one or more computer-readable memories for execution by at least one of the one or more processors (Moreover, the navigation device 111 and the server device 120 are both computers including a processing device such as a CPU, a storage device such as a RAM and a ROM, an input device, an output device, a communication interface, and the like, and realize the respective functions described above as the processing device executes a program stored in the storage device) (col. 6, lines 23-35), the program instructions comprising: program instructions to receive access to a multi-party discussion occurring via a telecommunication system (In step S301, the navigation device 111 acquires conversational speech by a plurality of passengers in the vehicle 110 via the microphone 201) (col. 6, lines 43-45); program instructions to analyze the multi-party discussion using natural language processing (In step S303, the conversation situation analyzing unit 204 analyzes a situation of a conversation held by a plurality of persons. In the present embodiment, a conversation is analyzed to determine whether opinions are being coordinated by the conversation and to determine what kind of opinions are being expressed by each speaker) (col. 7, lines 6-11); program instructions to extract a plurality of viewpoints of the multi-party discussion based on the analysis (In step S302, the server device 120 extracts respective utterances of each speaker from the conversational speech) (col. 6, lines 53-55); program instructions to synthesize a subset of the plurality of viewpoints based on the content of each viewpoint of the plurality of viewpoints (Moreover, in a situation where a large number of speakers are present and are split into subgroups respectively engaged in conversations, a group of utterances related to a same conversation may be extracted as a series of a group of utterances based on the contents of the utterances and the relationship among the utterances, in which case a process of an intervention for bridging differences of opinion may be performed on each group of utterances) (col. 7, lines 15-22); and program instructions to transmit a rendered synthesized visualization of the subset (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).
Claims 10 and 17 are rejected for the same reason as claim 2.
Claims 11 and 18 are rejected for the same reason as claim 3.
Claim 12 is rejected for the same reason as claim 4.
Claim 16 is rejected for the same reason as claim 1.
Allowable Subject Matter
Claims 5, 7, 8, 13-15, 19, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Cited Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Lauritsen (US 8,346,681) discloses supporting choice-making within a conceptual choicespace.
Kankipati (US 2021/0336918) discloses social media utilizing voice and audio information to enable custom, curated experiences for users.
Lee et al. (US 2024/0036705) discloses a multi-party video call.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH whose telephone number is (571)272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SATWANT K SINGH/Primary Examiner, Art Unit 2653