Prosecution Insights
Last updated: April 19, 2026
Application No. 18/439,984

DISPLAY METHOD, DATA PROCESSING METHOD, APPARATUS, ELECTRONIC DEVICE AND COMPUTER MEDIUM

Status: Non-Final OA (§103), Round 3
Filed: Feb 13, 2024
Examiner: SALCE, JASON P
Art Unit: 2421
Tech Center: 2400 — Computer Networks
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 6m
With Interview: 83%

Examiner Intelligence

Career Allow Rate: 68% (400 granted / 592 resolved), +9.6% vs Tech Center average
Interview Lift: +15.5% among resolved cases with interview
Avg Prosecution: 3y 6m (32 applications currently pending)
Career History: 624 total applications across all art units

Statute-Specific Performance

§101: 8.4% (-31.6% vs TC avg)
§103: 52.3% (+12.3% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.5% (-29.5% vs TC avg)
Tech Center averages are estimates; based on career data from 592 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/27/2026 has been entered.

Response to Arguments

Applicant's arguments with respect to claims 1-6, 11-13 and 15-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-4, 11-13 and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (U.S. Patent Application Publication 2024/00400097) in view of Liang (U.S. Patent Application Publication 2022/0014576) in further view of Duan et al. (U.S. Patent Application Publication 2022/0182739).

Referring to claim 1, Patel discloses displaying a first live streaming scene in a set playing field space (see Figure 3B and Paragraph 0031 for displaying a classroom). Patel also discloses that in response to a preset operation, presenting a first display screen at a first location in the playing field space (the entire screen) and displaying a second live streaming scene in the first display screen (see Figures 3C-3G, Paragraph 0031 and Paragraph 0047 for displaying, in response to a user joining the lecture, the students and lecturer in a first location in the playing field space/classroom and the whiteboard containing multiple objects in a second live streaming scene in the first display screen).

Patel fails to teach that the first live streaming scene is associated with a first live streaming room, and the second live streaming scene is associated with a second live streaming room. Liang discloses that a first live streaming scene is associated with a first live streaming room, and a second live streaming scene is associated with a second live streaming room (see Paragraphs 0041-0042 and 0054).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual reality system, as taught by Patel, using the multi-room live streaming system, as taught by Liang, for the purpose of providing the user with services of transmitting a live video stream and performing interaction through the media server (see Paragraph 0044 of Liang) and allowing a user to select different rooms that the user is interested in viewing.

While Patel teaches multiple display screens including a third display screen at a third location in the playing field space (see above), Patel fails to teach that the third display screen is used for displaying bullet-screen messages sent by a user in the first live streaming scene. Duan discloses a third display screen used for displaying bullet-screen messages sent by a user in the first live streaming scene (see Figure 2, Figure 5 and Paragraphs 0057-0059).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual reality system, as taught by Patel and Liang, using the bullet-screen messaging functionality, as taught by Duan, for the purpose of optimizing and improving human-computer interaction efficiency based on the bullet screen (see Paragraph 0022 of Duan).

Referring to claim 2, Patel also discloses that the playing field space is a 360-degree spherical space (see Paragraph 0039).

Referring to claim 3, Patel also discloses that the first live streaming scene is a 360-degree 3D video stream picture (see Paragraph 0039), and the second live streaming scene is a 2D video stream picture (see Figures 3B-3G for the objects in the whiteboard being in 2D).
Referring to claim 4, Patel also discloses displaying a second display screen at a second location in the playing field space, wherein the second display screen is used for displaying live streamer information corresponding to the first live streaming scene (see Figure 3C for the lecturer being in a second display screen location and wherein the second display screen further indicates the name of the lecturer instructing the class through the live stream classroom scenario).

Referring to claim 11, Patel also discloses acquiring first pose information of a head-mounted display device, determining a target scene picture in the current first live streaming scene based on the first pose information, and displaying, in the playing field space, the target scene picture in the first live streaming scene (see Paragraphs 0040-0041 for determining the head movement and other user movement to determine the scene to display to the user based on the inputted movement information).

Referring to claim 12, Patel also discloses that a relationship between the first location and a location of a virtual object corresponding to the user is fixed (see Figure 3B for the location of the screen has a center and the whiteboard, lecturer and students are fixed).

Referring to claim 13, Patel also discloses that a relationship between the first location and a location of an initial center of the first live streaming scene is fixed (see Figure 3B for the location of the screen has a center and the whiteboard, lecturer and students are fixed).

Referring to claim 15, Patel also discloses presenting the first display screen at the first location in the playing field space according to a second preset display mode, wherein the second preset display mode is a splitting effect (see Paragraphs 0011, 0013 and 0055 for re-fixing the objects on the screen according to a new preset by the system).

Referring to claim 16, see the rejection of claim 1 and further note Paragraph 0047 for the users wearing an HMD display.
Referring to claims 17-20, see the rejection of claims 1-4, respectively.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (U.S. Patent Application Publication 2024/00400097) in view of Liang (U.S. Patent Application Publication 2022/0014576) in further view of Faaborg et al. (U.S. Patent Application Publication 2017/0060230).

Referring to claim 5, Patel and Liang disclose all of the limitations of claim 4, as well as, in response to a hovering or dragging operation of a user on the first display screen, controlling the second screen to move to the first location (see Figures 3D-3F), but fail to teach controlling the first display screen to move to the second location. Faaborg discloses that in response to a hover operation on the first display screen, the first display screen is controlled to move to the second location (see Figures 12A-12F and Paragraphs 0033, 0041 and 0053-0058 for zooming in on the objects on the screen, therefore moving the first screen to the second location when the objects' sizes are increased).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual reality system, as taught by Patel and Liang, using the zooming functionality, as taught by Faaborg, for the purpose of personalizing and controlling the virtual 3D environment (see the bottom of Paragraph 0002 of Faaborg).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Patel et al. (U.S. Patent Application Publication 2024/00400097) in view of Liang (U.S. Patent Application Publication 2022/0014576) in further view of Ahn et al. (U.S. Patent No. 10,250,845).

Referring to claim 6, Patel and Liang disclose all of the limitations of claim 4, but fail to teach that, in response to detecting that the second display screen is turned off, the first display screen is controlled to move to the second location.
Ahn discloses that, in response to detecting that the second display screen is turned off, the first display screen is controlled to move to the second location (see Figure 11 and Column 12, Line 21 through Column 13, Line 10).

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the virtual reality system, as taught by Patel and Liang, using the object removal functionality, as taught by Ahn, for the purpose of performing remote collaboration with high task efficiency and without inconvenience based on a screen change (see Column 4, Lines 12-17 of Ahn).

Allowable Subject Matter

Claims 7-10 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON P SALCE whose telephone number is (571) 272-7301. The examiner can normally be reached 5:30am-10:00pm M-F (Flex Schedule). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nathan Flynn, can be reached at 571-272-1915. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Jason Salce/
Jason P Salce
Senior Examiner, Art Unit 2421
February 25, 2026

Prosecution Timeline

Feb 13, 2024 — Application Filed
Apr 30, 2025 — Non-Final Rejection (§103)
Aug 05, 2025 — Response Filed
Oct 23, 2025 — Final Rejection (§103)
Dec 29, 2025 — Response after Non-Final Action
Jan 27, 2026 — Request for Continued Examination
Jan 28, 2026 — Response after Non-Final Action
Feb 25, 2026 — Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12593079 — VIRTUAL LIVE-STREAMING CONTROL METHOD AND APPARATUS (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585993 — MACHINE LEARNING APPARATUS, MACHINE LEARNING SYSTEM, MACHINE LEARNING METHOD, AND PROGRAM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12574607 — SYSTEMS AND METHODS FOR PROVIDING BINGE-WATCHING PAUSE POSITION RECOMMENDATIONS (granted Mar 10, 2026; 2y 5m to grant)
Patent 12549817 — FRAME AND CHILD FRAME FOR VIDEO AND WEBPAGE RENDERING (granted Feb 10, 2026; 2y 5m to grant)
Patent 12549813 — MEDIA CONTENT ITEM RECOMMENDATIONS BASED ON PREDICTED USER INTERACTION EMBEDDINGS (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 83% (+15.5%)
Median Time to Grant: 3y 6m
PTA Risk: High
Based on 592 resolved cases by this examiner. Grant probability derived from career allow rate.
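The headline figures can be reproduced from the examiner's career counts. As a minimal sketch, assuming the tool's grant probability is simply the career allow rate rounded to a whole percent, and the with-interview figure adds the stated +15.5-point lift (both are assumptions about the methodology, not documented formulas):

```python
# Reproduce the displayed probabilities from the career counts.
# Assumption: grant probability = career allow rate (rounded), and the
# interview figure adds the +15.5-point lift directly.
granted, resolved = 400, 592           # examiner's career data above
allow_rate = granted / resolved * 100  # ~67.6%
interview_lift = 15.5                  # percentage points

grant_probability = round(allow_rate)                   # 68
with_interview = round(allow_rate + interview_lift)     # 83

print(grant_probability, with_interview)  # prints "68 83"
```

This matches the dashboard's 68% and 83% figures, which suggests the "With Interview" number is a simple additive adjustment rather than a separately modeled probability.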
