Prosecution Insights
Last updated: April 19, 2026
Application No. 18/031,123

DISPLAY METHOD AND APPARATUS OF A LIVE BROADCAST ROOM, ELECTRONIC DEVICE, AND STORAGE MEDIUM

Non-Final OA — §103, §112
Filed: Apr 10, 2023
Examiner: HUYNH, LINDA TANG
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)
Grant Probability: 36% (At Risk)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 68%

Examiner Intelligence

Career Allow Rate: 36% (100 granted / 274 resolved; -18.5% vs TC avg)
Interview Lift: +31.9% for resolved cases with an interview
Avg Prosecution: 3y 8m typical timeline (30 applications currently pending)
Total Applications: 304 across all art units
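
As a sanity check, these headline figures are mutually consistent. A minimal sketch, assuming the TC-average delta and interview lift are expressed in percentage points:

    # Recompute the dashboard's headline stats from the counts shown above.
    # Assumes deltas/lifts are in percentage points (an assumption).
    granted, resolved = 100, 274
    allow_rate = granted / resolved              # 0.365 -> displayed as 36%
    implied_tc_avg = allow_rate + 0.185          # "-18.5% vs TC avg" -> ~55%
    with_interview = allow_rate + 0.319          # "+31.9% interview lift" -> ~68%
    assert resolved + 30 == 304                  # resolved + pending = total applications
    print(f"{allow_rate:.1%} / {implied_tc_avg:.1%} / {with_interview:.1%}")
    # -> 36.5% / 55.0% / 68.4%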

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§103: 53.4% (+13.4% vs TC avg)
§112: 18.6% (-21.4% vs TC avg)
Tech Center averages are estimates, based on career data from 274 resolved cases.
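
Reading the deltas the other way recovers the implied Tech Center baselines. A sketch, again assuming each "vs TC avg" delta is in percentage points:

    # Back out the implied TC-average rate per statute from the deltas above.
    examiner_rate = {"§101": 9.8, "§102": 13.4, "§103": 53.4, "§112": 18.6}
    delta_vs_tc   = {"§101": -30.2, "§102": -26.6, "§103": 13.4, "§112": -21.4}
    for statute in examiner_rate:
        tc_avg = examiner_rate[statute] - delta_vs_tc[statute]
        print(f"{statute}: implied TC avg {tc_avg:.1f}%")
    # Every statute backs out to 40.0%, suggesting a single TC-wide baseline.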

Office Action

§103, §112
DETAILED ACTION

This Office Action is sent in response to Applicant's Communication received 11/26/2025 for 18/031,123. Claims 1, 3-12, 14-15, and 17-21 are presented.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/26/2025 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/26/2025 was filed before the mailing date of a first action. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

With respect to Applicant's comments that the page and line numbers cited in the Office Action do not appear to correspond to the cited content [pg. 12:3], Examiner notes that the Office Action references the numbered pages and full paragraphs as presented in the English machine translation of Jiang (CN 114501054 A) submitted 04/17/2025. Additional Office Action citations to Wei (CN 113965812 A) reference the numbered pages and full paragraphs as presented in the English machine translation of Wei submitted with the current Office Action.

Applicant's arguments with respect to the 102 rejection of claim 1 have been considered but are not persuasive in view of the newly cited Wei reference relied upon in the current 103 rejection over Jiang in view of Wei. In response to Applicant's argument that the references fail to show certain features of Applicant's invention, it is noted that the features upon which Applicant relies (i.e., where no such prerequisite of "both the anchor terminal and the audience terminal should be in the interactive mode" is needed [pgs. 12:6-13:2]; "the remote control point can independently, autonomously, and flexibly control the remote manipulation virtual object"; "the first client can truly display the interactive operation of the second client"; and "the remote control virtual object of the second client is truly displayed in the live stream of the first client according to the remote control information to maximize the restoration of the scene" [pg. 13:2]) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). As noted below, the Office Action presents evidence that stands in direct contrast to Applicant's arguments that the cited art, alone or in combination, does not teach the limitations of claim 1. Claim 1 remains rejected.
Claims 14 and 15 recite limitations similar to those recited in claim 1 and remain rejected on a similar basis as claim 1, as stated above. Dependent claims 3-12 and 17-21 remain rejected at least based on their dependence from independent claims 1 and 14.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

Claims 1, 3-12, 14-15, and 17-21 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, at the time the application was filed, had possession of the claimed invention.

Claims 1, 14, and 15 recite the newly amended limitation "to enable the remote control point to independently control the remote manipulation object without intervention from the first client". While the specification as originally filed discloses "remote manipulation virtual object displayed in the first client is controlled by the second client, so that the interaction information of the second client can be synchronized" with a result of "[a]n interaction scenario of multi-party watching in the live broadcast room is truly displayed in the first client, thereby maximizing the restoration of a scene and enriching the display form of the live broadcast room" [Specification, pgs. 12:28-13:8], any negative limitation or exclusionary proviso must have basis in the original disclosure [MPEP 2173.05(i)]. The specification does not appear to describe, in sufficient detail that one skilled in the art can reasonably conclude that the inventor had possession of it, the newly amended negative limitation wherein control of the remote manipulation object is performed without intervention from the first client. Dependent claims 3-12 and 17-21 are rejected under the written description requirement for failing to remedy the deficiencies of parent claims 1 and 14.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1, 3-12, 14-15, and 17-21 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Claims 1, 14, and 15 recite the term "without intervention", which is a relative term that renders the claim indefinite. The term "without intervention" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
The specification recites "the remote manipulation virtual object may be rendered based on the remote response information and the rendered remote manipulation virtual object is displayed in the live broadcast room displayed in the first client" [pg. 12:17-26]. It appears unclear how Applicant is defining "without intervention," given that the first client is involved in the control of the remote manipulation virtual object by rendering the animation data generated based on the remote response information acquired from the remote control point. Appropriate correction is required. Dependent claims 3-12 and 17-21 are rejected as being indefinite for failing to remedy the deficiencies of parent claims 1 and 14.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 4, 7-12, 14-15, 18, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang (CN 114501054 A, machine translation attached) in view of Wei (CN 113965812 A, machine translation attached).

As to claim 1, Jiang discloses a display method of a live stream [pg. 9:4-5, terminal displays live direct broadcasting room (read: stream)], comprising: displaying the live stream in a first client in response to a viewing trigger operation for the live stream of the first client [pg. 9:5-7, audience terminal (read: first client) displays interface presenting live broadcast room when live broadcast room content is transmitted (read: viewing trigger operation) to audience terminal]; and displaying a remote manipulation virtual object in the live stream displayed in the first client in response to determining that the remote manipulation virtual object created by a second client for the live stream exists [pgs. 9:7, 13:5, audience terminal displays presenter virtual object (read: remote manipulation virtual object) in live broadcast room when anchor presenter terminal (read: second client) enters and sets virtual object in viewed live broadcast room]; wherein displaying the remote manipulation virtual object in the live stream displayed in the first client comprises: acquiring remote response information of the remote manipulation virtual object [pgs. 9:9-10:3, 11:8-12:2, 14:1-2, obtain interactive action data (read: remote response information) for virtual object], rendering the remote manipulation virtual object based on the remote response information [pgs. 9:9-10:3, 11:8-12:2, 14:1-2, 15:1, render animation of virtual image performing interactive action], and displaying the rendered remote manipulation virtual object in the live stream displayed in the first client [pgs. 9:6-8, 10:2, 16:3, audience terminal displays live broadcast room including rendered virtual image], wherein the remote response information is determined according to remote manipulation information of a remote control point corresponding to the remote manipulation virtual object [pgs. 11:4-12:2, 14:1-2, match interactive action data to action form (read: remote manipulation information) collected by interaction operation (read: remote control point) of audience object viewing live broadcast room including virtual image].

While Jiang does not explicitly teach "to enable the remote control point to independently control the remote manipulation virtual object without intervention from the first client", one of ordinary skill in the art would recognize that the limitation "independently control the remote manipulation virtual object without intervention from the first client" is not given patentable weight, as the terms "to" and "enable" suggest or make optional and do not require the step to be performed; the limitation is an intended result of the "remote manipulation information" recited in the claim [see MPEP 2111.04]. Nevertheless, in an effort to advance compact prosecution, Wei teaches wherein the remote response information is determined according to remote manipulation information of a remote control point corresponding to the remote manipulation virtual object to enable the remote control point to independently control the remote manipulation virtual object without intervention from the first client [pgs. 9:3-10:3, determine animation (read: remote response information) of virtual image (read: remote manipulation virtual object) corresponding to audience user from received virtual image configuration information (read: remote manipulation information) input by audience user terminal (read: remote control point) with virtual image under control of audience user (read: independently controlled) in audience activity area; note presenter (read: first client) controls presenter in presenter activity area while audience activity area is under control of audience user].

Jiang and Wei are analogous art to the claimed invention, being from a similar field of endeavor of live broadcast systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the remote response information controlling a remote manipulation virtual object as disclosed by Jiang with enabling a remote control point to independently control a remote manipulation virtual object without intervention as disclosed by Wei, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Jiang as described above to enhance interaction authenticity and improve the user experience [Wei, pg. 9:5].

As to claim 4, Jiang discloses the method of claim 1, wherein acquiring the remote response information of the remote manipulation virtual object comprises: determining information acquisition time according to an information update frequency of the remote response information and an image update frequency of a live broadcast image in the live stream [pgs. 15:7-16:1, determine target drawing frame rate based on actual drawing frame rate (read: image update frequency) of live room virtual image and screen refresh rate (read: information update frequency) of live room interface]; and acquiring the remote response information of the remote manipulation virtual object according to the information acquisition time [pg. 17:1-3, obtain virtual image based on target drawing frame rate].

As to claim 7, Jiang discloses the method of claim 1, further comprising: receiving a creation trigger operation for the live stream of the first client, wherein the creation trigger operation is used for creating a local manipulation virtual object [pgs. 9:7, 10:2-5, audience terminal starts virtual image display function (read: creation trigger operation) in live broadcast room to set virtual image of audience object (read: local manipulation virtual object; note room is locally presented to audience terminal)]; creating the local manipulation virtual object according to the creation trigger operation [pgs. 9:7-10:5, render set virtual image of audience object]; and displaying the local manipulation virtual object in the live stream displayed in the first client [pgs. 9:7, 10:2-5, audience terminal displays virtual image in live broadcast room].

As to claim 8, Jiang discloses the method of claim 7, wherein displaying the local manipulation virtual object in the live stream displayed in the first client comprises: acquiring local manipulation information of a manipulation point of the local manipulation virtual object and determining local response information of the local manipulation virtual object according to the local manipulation information, wherein the local response information comprises one of object action information [pgs. 11:4-12:1, 13:6, receive interactive operation (read: local manipulation information) from audience object form (read: manipulation point) and match interactive operation to target standard interactive action (read: local response information) of presenter object; note strikethrough indicates non-selected alternatives]; and according to the local response information, displaying the local manipulation virtual object in the live stream displayed in the first client [pgs. 13:6-14:1, virtual image in live broadcast room forms action corresponding to interactive operation forwarded to audience terminal].

As to claim 9, Jiang discloses the method of claim 8, wherein determining the local response information of the local manipulation virtual object according to the local manipulation information comprises at least one of the following: according to a prebuilt scenario object, determining the scenario interaction information of the local manipulation virtual object corresponding to the local manipulation information [pg. 14:1-2, match performance of animation effect (read: scenario interaction information) by virtual image to received interactive operation of giving a gift based on preset library action (read: prebuilt scenario object)].

As to claim 10, Jiang discloses the method of claim 8, after determining the local response information of the local manipulation virtual object according to the local manipulation information, further comprising: pushing at least one of the object action information [pgs. 9:6-7, 11:5-7, 14:3-4, forward live broadcast room including virtual image performing action (read: object action information) to audience terminal when audience object enters live broadcast room with presenter virtual image].
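To make the frame-rate logic mapped for claims 4 and 5 concrete: the claims derive an information acquisition time from an information update frequency and an image update frequency. A minimal sketch of that arithmetic; the function name and the min-rate policy are illustrative assumptions, not the application's disclosed method:

    # Illustrative only: poll animation data no faster than the slower of the
    # data update rate and the display refresh rate.
    def acquisition_interval(info_update_hz: float, image_update_hz: float) -> float:
        return 1.0 / min(info_update_hz, image_update_hz)

    # Pose data arriving at 30 Hz into a 60 Hz live view: poll every 1/30 s.
    print(acquisition_interval(30.0, 60.0))  # 0.0333...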
As to claim 11, Jiang discloses the method of claim 7, further comprising at least one of the following: canceling display of the remote manipulation virtual object in the live stream in response to detecting that a second preset destruction condition for the remote manipulation virtual object is satisfied [Figs. 5-8, pgs. 10:2, 10:6-11:3, 13:2, replace displayed virtual image in live broadcast room with target virtual image object in response to receiving trigger operation (read: second preset destruction condition) for replacing virtual image function; note replacing displayed virtual image no longer displays (read: cancels) original virtual image].

As to claim 12, Jiang discloses the method of claim 1, wherein displaying the live broadcast room in the first client comprises: acquiring a virtual reality video corresponding to the live stream [pg. 9:7, collect video picture of presenter entering live broadcast room; note the live broadcast room realizing synchronization of live video content and interaction between a presenter and a viewer in real time is consistent with a virtual reality video enabling an audience to experience a scene in real time, consistent with Applicant's specification (pg. 7:18-24)] and displaying the virtual reality video in the live stream [pg. 9:7, present live video content in live broadcast room interface].

As to claim 14, Jiang and Wei are combined at least for the reasons above; Jiang discloses an electronic device, comprising: one or more processors, and a storage apparatus configured to store one or more programs, wherein when executing the one or more programs, the one or more processors are enabled to perform [pgs. 7:9-8:2, 8:6, 8:11-9:1, device includes processor and memory storing programs executed by processor] limitations substantially similar to those recited in claim 1, and the claim is rejected under a similar rationale.

As to claim 15, Jiang and Wei are combined at least for the reasons above; Jiang discloses a non-transitory storage medium comprising computer-executable instructions, wherein when executing the computer-executable instructions, a computer processor is configured to execute [pgs. 7:9-8:2, 8:6, 8:11-9:1, memory storing programs executed by device processor] limitations substantially similar to those recited in claim 1, and the claim is rejected under a similar rationale.

As to claims 18 and 21, Jiang and Wei, combined at least for the reasons above, disclose the electronic device of claim 14 comprising limitations substantially similar to those recited in claims 4 and 7, respectively, and the claims are rejected under a similar rationale.

Claims 3, 5-6, 17, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Jiang and Wei as applied to claims 1 and 14 above, and further in view of Vestergaard et al. (US 20170142430 A1).

As to claim 3, Jiang discloses the method of claim 1, wherein acquiring the remote response information of the remote manipulation virtual object comprises: receiving the remote response information of the remote manipulation virtual object pushed by the second client [pgs. 9:6-7, 11:8-12:2, 14:1-2, obtain interactive action data for virtual object transmitted from presenter terminal] and [storing] the remote response information of the remote manipulation virtual object in a target [storage] space [pgs. 13:9-14:2, library (see: target space) includes interactive action operation]; and in response to determining that a preset information acquisition condition is satisfied, acquiring the remote response information from the target [storage] space [pg. 14:2-3, obtain interactive operation action in library when interaction operation matches interactive action operation in library].

However, Jiang and Wei do not specifically disclose buffering the remote response information of the remote manipulation virtual object in a target buffer space; that is, they do not disclose wherein a "target [storage] space" is a "target buffer space". Vestergaard discloses: buffering the remote response information of the remote manipulation virtual object in a target buffer space [para 0065-0067, 0072, store frame images of received video data in frame image buffer]; and in response to determining that a preset information acquisition condition is satisfied, acquiring the remote response information from the target buffer space [para 0067, 0074-0076, 0092, frame renderer displays frame images stored in buffer at appropriate times (read: preset information acquisition condition) at a configurable rate of video frame display].

Jiang, Wei, and Vestergaard are analogous art to the claimed invention, being from a similar field of endeavor of video playback systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify storing and acquiring remote response information as disclosed by Jiang and Wei with the buffering and retrieving of information from a target buffer space as disclosed by Vestergaard, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Jiang and Wei as described above to facilitate rendering video frames [para 0072, 0075, 0092].

As to claim 5, Jiang discloses the method of claim 4, wherein determining the information acquisition time according to the information update frequency of the remote response information and the image update frequency of the live broadcast image in the live stream [pgs. 15:7-16:1, determine target drawing frame rate based on actual drawing frame rate of live room virtual image including virtual image and screen refresh rate of live room interface] comprises: in response to determining that the information update frequency of the remote response information is less than the image update frequency of the live broadcast image in the live stream [pgs. 15:5-8, determine screen refresh rate of live room interface is less than actual drawing frame rate of live room virtual image], determining a [] time according to an information update cycle of the remote response information [pgs. 15:8-9, 15:11-16:6, count number based on time period of obtaining (read: information update cycle) live room image frames] and determining the information acquisition time according to the [] time [pgs. 15:8-9, 15:11-16:6, determine target drawing frame rate based on count number]. However, Jiang and Wei do not specifically disclose wherein "a [] time" is "a delay time".
Vestergaard discloses: determining a delay time according to an information update cycle of the remote response information [para 0114, 0183-0184, count number of dropped frames not handled by device based on full frame rate (read: information update cycle) of received video data; note the broadest reasonable interpretation of delay includes any hindrance] and determining the information acquisition time according to the delay time [para 0114, 0168-0169, 0183-0184, determine lower video frame rate based on count number of dropped frames not handled by device; note the broadest reasonable interpretation of delay includes any hindrance].

Jiang, Wei, and Vestergaard are analogous art to the claimed invention, being from a similar field of endeavor of video playback systems. Thus, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the determined time and information acquisition time as disclosed by Jiang and Wei with determining information acquisition time according to a determined delay time as disclosed by Vestergaard, with a reasonable expectation of success. One of ordinary skill in the art would be motivated to modify Jiang and Wei as described above to render a desirable frame rate while reducing processor resource load [Vestergaard, para 0179].

As to claim 6, Jiang discloses the method of claim 1, wherein acquiring the remote response information of the remote manipulation virtual object [pgs. 9:9-10:3, 11:8-12:2, 14:1-2, obtain interactive action data (read: remote response information) for virtual object], rendering the remote manipulation virtual object based on the remote response information [pgs. 9:9-10:3, 11:8-12:2, 14:1-2, 15:1, render animation of virtual image performing interactive action], and displaying the rendered remote manipulation virtual object in the live stream displayed in the first client [pgs. 9:6-8, 10:2, 16:3, audience terminal displays live broadcast room including rendered virtual image] comprise: displaying the rendered remote manipulation virtual object in the live stream displayed in the first client [pgs. 9:6-8, 10:2, 17:3, audience terminal displays live broadcast room including virtual image performing interactive action operation].

Jiang teaches the acquiring, rendering, and displaying as recited in the instant claim, but not explicitly: in response to determining that an information update frequency of the remote response information is less than an image update frequency of a live broadcast image in the live stream, acquiring, from at least two frames of the remote response information of the remote manipulation virtual object, two adjacent frames of the remote response information which are adjacent in information generation time; performing interpolation processing according to the two adjacent frames of the remote response information to obtain interpolation response information between the two adjacent frames of the remote response information; and rendering the remote manipulation virtual object based on the remote response information and the interpolation response information.

However, Jiang teaches in response to determining … an information update frequency of the remote response information [pgs. 15:7-16:1, screen refresh rate of live room interface] [and] an image update frequency of a live broadcast image in the live stream [pgs. 15:7-16:1, determine actual drawing frame rate of live room interface including virtual image] …, acquiring, from at least two frames of the remote response information of the remote manipulation virtual object, two adjacent frames of the remote response information which are adjacent in information generation time [pg. 15:3-4, obtain two frames of virtual image performing animation action, where two frames are contiguous according to the time sequence]; performing [] processing according to the two adjacent frames of the remote response information to obtain [] response information between the two adjacent frames of the remote response information [pgs. 15:3-8, determine target frame rate (see: information) for rendering image frame sequence including frames based on obtaining (see: processing) drawing frame rate and screen refresh rate]; and rendering the remote manipulation virtual object based on the remote response information and the [] response information [pg. 17:1-2, render virtual object based on determined object animation and target drawing frame rate].

Vestergaard teaches in response to determining that an information update frequency of remote response information is less than an image update frequency of an image, acquiring, from at least two frames of the remote response information of the remote manipulation virtual object [para 0168-0170, 0184, anticipate device supports frame render rate (read: information update frequency) for transmitted video lower than frame rate (read: image update frequency) of original video frame image including content, and acquire frame images of video at consecutive times]; performing interpolation processing according to the two adjacent frames of the remote response information to obtain interpolation response information between the two adjacent frames of the remote response information [para 0168-0171, 0178-0179, interpolate between frame images at consecutive times to create frame image objects (read: interpolation response information)]; and rendering the remote manipulation virtual object based on the remote response information and the interpolation response information [para 0170, 0182-0183, render video frame images including content and interpolate between known image frames]. Jiang, Wei, and Vestergaard are analogous art to the claimed invention, being from a similar field of endeavor of video playback systems.
Thus, it would have been obvious to one skilled in the art before the effective filing date of the claimed invention to combine the teachings of Jiang (determining an information update frequency of remote response information and an image update frequency of a live broadcast image, acquiring and processing adjacent frames of remote response information, and rendering the remote manipulation virtual object) with the teachings of Vestergaard (determining an information update frequency is less than an image update frequency, performing interpolation processing according to adjacent frames to obtain interpolation response information, and rendering content based on video information and interpolation response information), with a reasonable expectation of success, to result in: in response to determining that an information update frequency of the remote response information is less than an image update frequency of a live broadcast image in the live stream, acquiring, from at least two frames of the remote response information of the remote manipulation virtual object, two adjacent frames of the remote response information which are adjacent in information generation time; performing interpolation processing according to the two adjacent frames of the remote response information to obtain interpolation response information between the two adjacent frames of the remote response information; and rendering the remote manipulation virtual object based on the remote response information and the interpolation response information [see MPEP 2143]. One of ordinary skill in the art would be motivated to apply this teaching to Jiang and Wei to render a desirable frame rate while reducing processor resource load [Vestergaard, para 0179].

As to claims 17, 19, and 20, Jiang, Wei, and Vestergaard, combined at least for the reasons above, disclose the electronic device of claim 14 comprising limitations substantially similar to those recited in claims 3, 5, and 6, respectively, and the claims are rejected under a similar rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Wang et al. (US 20230162451 A1) generally discloses presenting a live room with an audience avatar based on a live room entry instruction. Mah (US 20200151939 A1) generally discloses interpolating video frames. Garcia et al. (US 12069121 B1) and Lin et al. (US 20230113024 A1) generally disclose modifying video quality in a live video room environment.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDA HUYNH, whose telephone number is (571) 272-5240 and email is linda.huynh@uspto.gov. The examiner can normally be reached M-F between 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINDA HUYNH/
Primary Examiner, Art Unit 2172
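
For readers outside patent practice, the interpolation limitation at issue in claim 6 above (generating intermediate response information between two adjacent frames when the data rate lags the display rate) is ordinary frame interpolation. A generic linear-interpolation sketch, not the application's or Vestergaard's actual scheme:

    # Generic linear interpolation between two adjacent "frames" of remote
    # response information (e.g. joint positions of a virtual object).
    # Illustrative only; the claimed method may use a different scheme.
    def lerp_frames(frame_a: list[float], frame_b: list[float], t: float) -> list[float]:
        # t in [0, 1]: 0 returns frame_a, 1 returns frame_b.
        return [a + (b - a) * t for a, b in zip(frame_a, frame_b)]

    # 30 Hz animation data shown at 60 Hz: one interpolated frame (t = 0.5)
    # between each pair of adjacent received frames.
    print(lerp_frames([0.0, 1.0], [1.0, 3.0], 0.5))  # [0.5, 2.0]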

Prosecution Timeline

Apr 10, 2023
Application Filed
Apr 13, 2025
Non-Final Rejection — §103, §112
Jul 17, 2025
Response Filed
Aug 25, 2025
Final Rejection — §103, §112
Oct 27, 2025
Response after Non-Final Action
Nov 26, 2025
Request for Continued Examination
Dec 02, 2025
Response after Non-Final Action
Feb 04, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications with similar technology granted by this same examiner

Patent 12578837
USER INTERFACES FOR MANAGING SHARING OF CONTENT IN THREE-DIMENSIONAL ENVIRONMENTS
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12547310
INFORMATION PROCESSING DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541287
INTEGRATED ENERGY DATA SCIENCE PLATFORM
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12524136
EVENT TRANSCRIPT PRESENTATION
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12524124
RECORDING FOLLOWING BEHAVIORS BETWEEN VIRTUAL OBJECTS AND USER AVATARS IN AR EXPERIENCES
Granted Jan 13, 2026 (2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 68% (+31.9%)
Median Time to Grant: 3y 8m
PTA Risk: High
Based on 274 resolved cases by this examiner. Grant probability is derived from the career allow rate.
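
One plausible reading of the High PTA risk flag, given the timeline above: under 35 U.S.C. 154(b)(1)(B), pendency beyond three years normally accrues patent term adjustment, but time consumed by an RCE is excluded. A simplified sketch, not a full PTA computation (which would also net out applicant and office delays):

    # Simplified B-delay check using dates from the prosecution timeline.
    # Not a full PTA computation; applicant/office delay offsets are ignored.
    from datetime import date

    filed = date(2023, 4, 10)
    rce_filed = date(2025, 11, 26)                 # RCE stops B-delay accrual
    three_year_mark = date(filed.year + 3, filed.month, filed.day)

    # RCE was filed before the 3-year mark, so little or no B-delay accrues
    # however long prosecution runs from here -> effective term loss risk.
    print(rce_filed < three_year_mark)             # True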
