Prosecution Insights
Last updated: April 19, 2026
Application No. 17/937,638

Viewpoint Camp Visualization

Status: Non-Final OA (§102)
Filed: Oct 03, 2022
Examiner: SHAH, PARAS D
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 74% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 9m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 74% — above average (+11.5% vs TC avg; 474 granted / 645 resolved)
Interview Lift: strong, +31.1% among resolved cases with interview
Typical Timeline: 3y 9m average prosecution; 24 currently pending
Career History: 669 total applications across all art units

Statute-Specific Performance

§101: 20.3% (-19.7% vs TC avg)
§103: 44.9% (+4.9% vs TC avg)
§102: 13.8% (-26.2% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 645 resolved cases.

Office Action

§102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 10/03/2022 was filed in compliance with the provisions of 37 CFR 1.97 and 1.98. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 6, 9-12, and 16-18 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ito et al. (US 10,592,997).

Regarding Claim 1, Ito et al. discloses a computer-implemented method for visualizing viewpoints comprising: receiving, by a computing device, access to a multi-party discussion occurring via a telecommunication system (In step S301, the navigation device 111 acquires conversational speech by a plurality of passengers in the vehicle 110 via the microphone 201) (col. 6, lines 43-45); analyzing, by the computing device, the multi-party discussion using natural language processing (In step S303, the conversation situation analyzing unit 204 analyzes a situation of a conversation held by a plurality of persons. In the present embodiment, a conversation is analyzed to determine whether opinions are being coordinated by the conversation and to determine what kind of opinions are being expressed by each speaker) (col. 7, lines 6-11); extracting, by the computing device, a plurality of viewpoints of the multi-party discussion based on the analysis (In step S302, the server device 120 extracts respective utterances of each speaker from the conversational speech) (col. 6, lines 53-55); synthesizing, by the computing device, a subset of the plurality of viewpoints based on the content of each viewpoint of the plurality of viewpoints (Moreover, in a situation where a large number of speakers are present and are split into subgroups respectively engaged in conversations, a group of utterances related to a same conversation may be extracted as a series of a group of utterances based on the contents of the utterances and the relationship among the utterances, in which case a process of an intervention for bridging differences of opinion may be performed on each group of utterances) (col. 7, lines 15-22); and transmitting, by the computing device, a rendered synthesized visualization of the subset (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).

Regarding Claim 2, Ito et al. discloses the computer-implemented method, further comprising: generating, by the computing device, a consensus relating to a vote of participants of the multi-party discussion associated with the subset of the plurality of viewpoints (In the present embodiment, as an index representing satisfaction of a group in regards to decision making (group satisfaction), a score is introduced such that the larger the number of adopted opinions of all participants, the higher the score, and the smaller a variation in the numbers of adopted opinions of the respective participants, the higher the score) (col. 7, lines 30-36); and rendering, by the computing device, a viewpoint visualization of the plurality of viewpoints including the consensus for viewing by the participants (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).

Regarding Claim 3, Ito et al. discloses the computer-implemented method, wherein extracting the plurality of viewpoints comprises: identifying, by the computing device, a topic of dialogue associated with the plurality of viewpoints (In step S405, the conversation situation analyzing unit 204 estimates an intention and a conversation topic of each utterance from the contents (the text) of the utterance by referring to the vocabulary/intention understanding corpus/dictionary 206) (col. 9, lines 6-13); identifying, by the computing device, at least one viewpoint factor of each viewpoint relating to the topic (Examples of an utterance intention include starting a conversation, making a proposal, agreeing or disagreeing with a proposal, and consolidating opinions) (col. 9, lines 6-13); and assigning, by the computing device, a score to each viewpoint of the plurality of viewpoints based on the at least one viewpoint factor (Group satisfaction is calculated based on an opinion adoption score of each participant which is determined based on contents of expression of opinions and decided contents of each participant) (col. 12, lines 1-5).

Regarding Claim 4, Ito et al. discloses the computer-implemented method, wherein the viewpoint factor is one or more of a sentiment of the viewpoint, a statement order of the viewpoint, or a semantic of the viewpoint (In step S403, the conversation situation analyzing unit 204 obtains an emotion of a speaker for each utterance. Examples of emotions to be obtained include satisfaction, dissatisfaction, excitement, anger, sadness, anticipation, relief, and anxiety) (col. 8, lines 47-51).

Regarding Claim 6, Ito et al. discloses the computer-implemented method, wherein the rendered synthesized visualization comprises a clustering of the plurality of viewpoints based on the score (In step S407, the conversation situation analyzing unit 204 generates and outputs conversation situational data that integrates the analysis results) (col. 10, lines 27-38).

Regarding Claim 9, Ito et al. discloses a computer system for visualizing viewpoints, the computer system comprising: one or more processors, one or more computer-readable memories (Moreover, the navigation device 111 and the server device 120 are both computers including a processing device such as a CPU, a storage device such as a RAM and a ROM, an input device, an output device, a communication interface, and the like, and realize the respective functions described above as the processing device executes a program stored in the storage device) (col. 6, lines 23-35); program instructions stored on at least one of the one or more computer-readable memories for execution by at least one of the one or more processors (Moreover, the navigation device 111 and the server device 120 are both computers including a processing device such as a CPU, a storage device such as a RAM and a ROM, an input device, an output device, a communication interface, and the like, and realize the respective functions described above as the processing device executes a program stored in the storage device) (col. 6, lines 23-35), the program instructions comprising: program instructions to receive access to a multi-party discussion occurring via a telecommunication system (In step S301, the navigation device 111 acquires conversational speech by a plurality of passengers in the vehicle 110 via the microphone 201) (col. 6, lines 43-45); program instructions to analyze the multi-party discussion using natural language processing (In step S303, the conversation situation analyzing unit 204 analyzes a situation of a conversation held by a plurality of persons. In the present embodiment, a conversation is analyzed to determine whether opinions are being coordinated by the conversation and to determine what kind of opinions are being expressed by each speaker) (col. 7, lines 6-11); program instructions to extract a plurality of viewpoints of the multi-party discussion based on the analysis (In step S302, the server device 120 extracts respective utterances of each speaker from the conversational speech) (col. 6, lines 53-55); program instructions to synthesize a subset of the plurality of viewpoints based on the content of each viewpoint of the plurality of viewpoints (Moreover, in a situation where a large number of speakers are present and are split into subgroups respectively engaged in conversations, a group of utterances related to a same conversation may be extracted as a series of a group of utterances based on the contents of the utterances and the relationship among the utterances, in which case a process of an intervention for bridging differences of opinion may be performed on each group of utterances) (col. 7, lines 15-22); and program instructions to transmit a rendered synthesized visualization of the subset (In step S306, the intervening/arbitrating unit 209 generates an intervention instruction for eliciting an opinion regarding the selective element from the target person, and the output control unit 212 generates synthesized speech or a text to be output in accordance with the intervention instruction and reproduces the synthesized speech or the text using the speaker 213 or the display 214) (col. 7, lines 49-55).

Claims 10 and 17 are rejected for the same reason as claim 2. Claims 11 and 18 are rejected for the same reason as claim 3. Claim 12 is rejected for the same reason as claim 4. Claim 16 is rejected for the same reason as claim 1.

Allowable Subject Matter

Claims 5, 7, 8, 13-15, 19, and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Cited Art

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Lauritsen (US 8,346,681) discloses supporting choice-making within a conceptual choicespace. Kankipati (US 2021/0336918) discloses social media utilizing voice and audio information to enable custom, curated experiences for users. Lee et al. (US 2024/0036705) discloses a multi-party video call.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SATWANT K SINGH, whose telephone number is (571) 272-7468. The examiner can normally be reached Monday through Friday, 9:00 AM to 6:00 PM EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571) 270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SATWANT K SINGH/
Primary Examiner, Art Unit 2653

Prosecution Timeline

Oct 03, 2022: Application Filed
Oct 19, 2023: Response after Non-Final Action
Nov 01, 2025: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586591: SOUND SIGNAL DECODING METHOD, SOUND SIGNAL DECODER, PROGRAM, AND RECORDING MEDIUM (2y 5m to grant; granted Mar 24, 2026)
Patent 12579367: TWO-TOWER NEURAL NETWORK FOR CONTENT-AUDIENCE RELATIONSHIP PREDICTION (2y 5m to grant; granted Mar 17, 2026)
Patent 12579360: LEARNING SUPPORT APPARATUS FOR CREATING MULTIPLE-CHOICE QUIZ (2y 5m to grant; granted Mar 17, 2026)
Patent 12562173: WEARABLE DEVICE CONTROL BASED ON VOICE COMMAND OF VERIFIED USER (2y 5m to grant; granted Feb 24, 2026)
Patent 12559026: VEHICLE AND CONTROL METHOD THEREOF (2y 5m to grant; granted Feb 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 99% (+31.1%)
Median Time to Grant: 3y 9m
PTA Risk: Low
Based on 645 resolved cases by this examiner. Grant probability derived from career allow rate.
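The note above says the grant probability is derived from the career allow rate, and the underlying counts (474 granted of 645 resolved) are given earlier on the page. A minimal sketch of that arithmetic, assuming the allow rate is the plain ratio of granted to resolved cases (the dashboard's exact rounding and any recency weighting are not disclosed):

```python
# Career allow rate as a simple grants / resolved-cases ratio
granted = 474
resolved = 645

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~73.5%, shown as 74% on the page
```

The same counts also back the "+11.5% vs TC avg" comparison; the Tech Center baseline itself is labeled an estimate elsewhere on the page.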
