Prosecution Insights
Last updated: April 18, 2026
Application No. 18/701,437

TERMINAL, INFORMATION PROCESSING METHOD, PROGRAM, AND RECORDING MEDIUM

Final Rejection — §102, §103
Filed
Apr 15, 2024
Examiner
WELCH, DAVID T
Art Unit
2613
Tech Center
2600 — Communications
Assignee
Popopo Inc.
OA Round
2 (Final)
82%
Grant Probability
Favorable
3-4
OA Rounds
3y 2m
To Grant
99%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
247 granted / 303 resolved
+19.5% vs TC avg
Strong +27% interview lift
+27.2%
Interview Lift
resolved cases with vs without interview
Typical timeline
3y 2m
Avg Prosecution
29 currently pending
Career history
332
Total Applications
across all art units

Statute-Specific Performance

§101
11.6%
-28.4% vs TC avg
§103
47.4%
+7.4% vs TC avg
§102
20.6%
-19.4% vs TC avg
§112
12.2%
-27.8% vs TC avg
Black line = Tech Center average estimate • Based on career data from 303 resolved cases
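The per-statute deltas above are stated against the Tech Center baseline. Assuming the delta is a simple percentage-point difference (examiner rate minus TC average) — an illustrative assumption, not the tool's stated methodology — the implied baseline can be recovered with a few lines of arithmetic:

```python
# Examiner rate and "vs TC avg" delta per statute, in percentage points,
# copied from the panel above.
stats = {
    "101": (11.6, -28.4),
    "103": (47.4, 7.4),
    "102": (20.6, -19.4),
    "112": (12.2, -27.8),
}

# Assumed relationship: delta = examiner_rate - tc_average.
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # each statute implies a 40.0% Tech Center baseline
```

Under that assumption, all four deltas are consistent with the same 40.0% baseline, which matches the single black "Tech Center average estimate" line described in the chart caption.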

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 2 is objected to because of a minor typographical informality: on lines 4-5 of the claim, “wherein determining” should be amended to omit the “wherein.” Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6, and 8-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Oz et al. (U.S. Patent Application Publication No. 2021/0360199), referred to herein as Oz.
Regarding claim 1, Oz teaches a terminal for participating in a conference held in a virtual space in which an avatar of a participant is arranged (fig 5; paragraph 58; paragraphs 132-134), the terminal comprising: a memory storing instructions, and one or more processors configured to execute the instructions (paragraphs 42-44; paragraphs 120-122) to implement: collecting a voice of the participant (paragraph 145; paragraphs 278 and 279; voice information is collected from the participants); generating control data for controlling the avatar of the participant (paragraph 61; paragraphs 135-137; many types of “control data” are generated for controlling the avatar, just one example being gaze information); determining a state of the participant, and transmitting voice data, the control data, and a determination result of the participant (paragraph 64; paragraph 132; paragraphs 137 and 142; paragraphs 144-146; paragraphs 289 and 290; the state of the participant is determined using all of the gathered information, and user input can be provided to determine desired views of the conference); receiving voice data, control data, and a determination result of another participant (fig 5; paragraph 61; paragraphs 132-134; paragraphs 289 and 290; the above steps are performed for multiple, or all, users participating in the virtual conference); determining, from different display modes, a display mode of the conference based on the determination result of the participant and the determination result of the other participant, each of the different display modes being respective to one of different pre-set screens of the conference (fig 11, display modes 41, 42, and 43; paragraphs 132 and 144; paragraphs 154 and 155; paragraphs 286 and 291), and reproducing the voice data, controlling the avatar based on the control data, and displaying a screen of the conference according to the display mode, the screen is one of the different pre-set screens (figs 5 and 11; paragraphs 62 and 63; paragraphs 132, 144, and 145; paragraphs 154 and 155; paragraph 291; the determined results of the participants are used to reproduce the voice data and control the avatars to display particular view modes of the virtual conference).

Regarding claim 2, Oz teaches the terminal according to claim 1, wherein the one or more processors are further configured to execute the instructions to implement: obtaining a captured image of the participant, determining, from the captured image, whether the participant is looking at the screen (paragraphs 132 and 136; paragraphs 261 and 292), and totaling the determination results and determining the display mode of the conference based on a totaled result (paragraphs 61 and 62; paragraph 145; paragraphs 154 and 155; the aggregate determination results determine how the virtual conference is generated and updated).

Regarding claim 3, Oz teaches the terminal according to claim 2, wherein the one or more processors are further configured to execute the instructions to implement determining a viewpoint when rendering the virtual space or a division of the screen into frames, based on the totaled result, the viewpoint is the one of the plurality of different pre-set screens (figs 5 and 11; paragraph 61; paragraph 133; paragraphs 135 and 136; paragraph 261; paragraphs 286 and 291).

Regarding claim 4, Oz teaches the terminal according to claim 1, wherein the one or more processors are further configured to execute the instructions to implement storing a past shot breakdown in which an avatar is displayed, specifying a participant who is in a conversation, based on the determination result, and determining a shot breakdown of an avatar of the participant who is in the conversation, based on the past shot breakdown, the shot breakdown is the one of the plurality of different pre-set screens (figs 5 and 11; paragraphs 61-63; paragraph 138; paragraphs 154 and 155; paragraphs 160 and 195; paragraphs 286 and 291).
Regarding claim 6, the limitations of this claim substantially correspond to the limitations of claim 1; thus they are rejected on similar grounds.

Regarding claim 8, the limitations of this claim substantially correspond to the limitations of claim 1; thus they are rejected on similar grounds.

Regarding claim 9, Oz teaches the terminal according to claim 1, wherein each of the different pre-set screens are respective ones of different viewpoints of the conference (fig 11, viewpoints 41, 42, and 43; paragraphs 286 and 291).

Regarding claim 10, Oz teaches the terminal according to claim 1, wherein the different pre-set screens comprise: a first screen pre-set as indicating to display a view directed at and displaying at least a speaker in the conference (fig 11, view 42; paragraphs 286 and 291; paragraphs 318 and 323), and any of: a second screen pre-set as indicating to display an overlooking view of the virtual space in which the conference is held (fig 11, view 41; paragraphs 286 and 291; paragraph 323), and a third screen pre-set as indicating to display the avatar of the participant and an avatar of the other participant in respectively different shot breakdown frames (fig 11, view 43; paragraphs 286 and 291).

Regarding claim 11, Oz teaches the terminal according to claim 10, wherein the different pre-set screens comprise the first screen and the second screen, and the overlooking view of the virtual space is a view of the virtual space elevated, relative to each of the avatar of the participant and the avatar of the other participant, in the virtual space and displaying each of the avatar of the participant and the avatar of the other participant (fig 11, viewpoints 41 and 42; paragraphs 286 and 291; the angle of the table and position of the participants illustrates that the overlooking view is elevated relative to the avatars).
Regarding claim 12, Oz teaches the terminal according to claim 10, wherein the different pre-set screens comprise the first screen and the third screen, and the respectively different shot breakdown frames comprise a first shot breakdown frame and a second shot breakdown frame, and the third screen comprises the first shot breakdown frame illustrated side-by-side with the second shot breakdown frame with the avatar of the participant and the avatar of the other participant shown as facing each other across the respectively different shot breakdown frames (figs 5 and 11, viewpoints 41 and 43; paragraphs 134-136; paragraphs 286 and 291; hybrid view 43 shows first and second shot breakdown frames side-by-side each depicting an avatar of a participant, and the hybrid view also displays the avatars such that they face each other when deemed appropriate, based on participant interaction).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Oz, in view of Hahn (“Taming the wild west of VR: 5 technical lessons learned making the serious game Art Sort in VRChat”), December 15, 2020.
Regarding claim 5, Oz teaches the terminal according to claim 1, wherein the one or more processors are further configured to execute the instructions to implement, when the participant is in a conversation with the other participant, moving a position of the avatar of the participant to face an avatar of the other participant (paragraphs 61 and 64; paragraphs 134 and 136). Oz does not teach that the virtual avatar is moved closer to the other participant’s avatar according to a type of the terminal. However, in a similar field of endeavor, Hahn teaches a terminal for participating in a conference held in a virtual space in which an avatar of a participant is arranged (page 2, the second paragraph; page 5, fig 4 and the first paragraph), wherein virtual objects are moved closer to a participant’s avatar according to a type of the terminal (page 9, the first paragraph, lines 1-3 and 6-12; fig 8 and its caption). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the virtual object movement of Hahn with the avatar interaction of Oz because this helps to ensure that the virtual conference experience will work well for participants regardless of the device platform they are using, by making the location of virtual objects more suitable for each type of device (see, for example, Hahn, page 9, the first paragraph, lines 4-6 and 10-13).

Response to Arguments

Applicant’s argument with respect to the § 101 rejection has been fully considered, and is persuasive. The amendments to claim 8 have resolved this issue; thus the § 101 rejection is withdrawn. Applicant’s arguments with respect to the prior art rejections have been fully considered, but are not persuasive.
Regarding claim 1, Applicant argues, in summary, that Oz does not teach the new features regarding determining a display mode because Oz does not suggest “pre-set screens” determined based on the information, the determination result does not have an effect on determining the display mode, and Oz changes aspects of avatars within the panoramic view, but the view itself is not changed. (Remarks at 8-9).

The Examiner respectfully disagrees with these arguments. It is first noted that the specification does not use the term “pre-set” anywhere in its disclosure, and thus, consistent with the disclosure and Applicant’s Remarks, “pre-set screens” may simply be interpreted as various views of the conference that are available as display modes. Turning to the applied reference, Oz, it is respectfully submitted that figure 11 and its corresponding disclosure, cited above, clearly describe various views that are available as display modes (e.g. panoramic view 41, partial view 42, and hybrid view 43). Further, the claim term “a determination result” has a very broad range of reasonable interpretations, some of which are also clearly disclosed in Oz. For example, as shown in the above citations, Oz discloses that the different views are selectable and configurable based on determinations regarding the state of the participants, voice data, control data, whether they need/want to view all, or some, of the participants, or if they want to see a side-by-side of two participants along with the full view of all participants, etc., such that participants interact in a more natural fashion. The results of these determinations all contribute to the display mode selection shown to the user. Thus the Examiner respectfully submits that Oz teaches these limitations.

Regarding the dependent claims, Applicant argues that Oz does not teach these claims insomuch as they depend from claim 1, which is not taught. The Examiner respectfully disagrees, for the reasons discussed above.
Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAVID T WELCH whose telephone number is (571)270-5364. The examiner can normally be reached Monday-Thursday, 8:30-5:30 EST, and alternate Fridays, 9:00-2:30 EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

DAVID T. WELCH
Primary Examiner
Art Unit 2613

/DAVID T WELCH/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Apr 15, 2024
Application Filed
Sep 19, 2025
Non-Final Rejection — §102, §103
Dec 11, 2025
Response Filed
Jan 08, 2026
Final Rejection — §102, §103
Apr 13, 2026
Response after Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602742
IMAGE PROCESSING APPARATUS, BINARIZATION METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant • Granted Apr 14, 2026
Patent 12602842
TEXTURE GENERATION USING MULTIMODAL EMBEDDINGS
2y 5m to grant • Granted Apr 14, 2026
Patent 12592048
System and Method for Creating Anchors in Augmented or Mixed Reality
2y 5m to grant • Granted Mar 31, 2026
Patent 12579734
METHOD FOR RENDERING VIEWPOINTS AND ELECTRONIC DEVICE
2y 5m to grant • Granted Mar 17, 2026
Patent 12573119
APPARATUS AND METHOD FOR GENERATING SPEECH SYNTHESIS IMAGE
2y 5m to grant • Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
82%
Grant Probability
99%
With Interview (+27.2%)
3y 2m
Median Time to Grant
Moderate
PTA Risk
Based on 303 resolved cases by this examiner. Grant probability derived from career allow rate.
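As the footnote states, the headline grant probability is derived from the career allow rate; the arithmetic from the Examiner Intelligence panel (247 granted of 303 resolved) is a one-liner. This is an illustrative sketch of that derivation only — how the 99% with-interview figure is computed is not shown above, so it is not reproduced here:

```python
granted, resolved = 247, 303  # career figures from the Examiner Intelligence panel

allow_rate = granted / resolved
print(f"{allow_rate:.1%}")      # 81.5% career allow rate
print(round(allow_rate * 100))  # 82 — rounds to the headline grant probability
```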
