Prosecution Insights
Last updated: April 19, 2026
Application No. 18/720,269

Display Method For Three-Dimensional House Model, Terminal, And Storage Medium

Final Rejection — §102, §103
Filed: Jun 14, 2024
Examiner: HARRISON, CHANTE E
Art Unit: 2615
Tech Center: 2600 — Communications
Assignee: BEIJING YOUZHUJU NETWORK TECHNOLOGY CO., LTD.
OA Round: 2 (Final)

Grant Probability: 69% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 4m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 69% — above average (+6.6% vs TC avg); 497 granted / 725 resolved
Interview Lift: +28.8% (strong), measured on resolved cases with interview
Typical Timeline: 3y 4m average prosecution; 30 applications currently pending
Career History: 755 total applications across all art units
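The headline percentages above follow from simple arithmetic on the career counts. A minimal sketch, assuming the grant probability is just the career allow rate and the with-interview figure adds the interview lift (capped at 100%) — these formulas are an inference from the displayed numbers, not a documented methodology:

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# Figures shown above: 497 granted out of 725 resolved cases.
base = allow_rate(497, 725)               # ~68.6%, displayed rounded as 69%
with_interview = min(base + 28.8, 100.0)  # interview lift of +28.8%

print(round(base), round(with_interview))  # 69 97
```

The rounded results match the dashboard's 69% baseline and 97% with-interview projection, which suggests the two cards are derived from the same underlying counts.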

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§102: 31.8% (-8.2% vs TC avg)
§103: 40.3% (+0.3% vs TC avg)
§112: 15.2% (-24.8% vs TC avg)
Comparisons are against a Tech Center average estimate. Based on career data from 725 resolved cases.
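The per-statute deltas are plain differences against the Tech Center average estimate. Working backward from the displayed pairs, every statute appears to be compared against the same ~40% baseline — a hypothetical consistency check, using only the numbers shown above:

```python
# Displayed (rate, delta-vs-TC-avg) pairs from the table above.
stats = {"101": (8.9, -31.1), "102": (31.8, -8.2),
         "103": (40.3, +0.3), "112": (15.2, -24.8)}

# delta = rate - tc_avg, so the implied baseline is tc_avg = rate - delta.
for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC avg = {tc_avg}%")  # 40.0% in every case
```

All four rows recover the same 40.0% baseline, consistent with a single Tech Center average estimate rather than per-statute averages.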

Office Action

§102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This action is responsive to communications: Amendment & Request for Reconsideration, filed on 03/03/2026. This action is made FINAL.

2. Claims 1-8 and 11-22 are pending in the case. Claims 1, 5, and 11 are independent claims. Claims 1-8, 11, and 13-18 have been amended. Claims 9-10 are cancelled.

Response to Arguments

Applicant's arguments filed March 3, 2026 have been fully considered but they are not persuasive.

Applicant argues (claims 1, 5, and 11) that Yang fails to disclose "establishing a plurality of virtual objects in the three-dimensional house model, wherein a first virtual object of the plurality of virtual objects corresponds to a first terminal of a plurality of terminals, and wherein a second virtual object of the plurality of virtual objects corresponds to a second terminal of the plurality of terminals," as claimed. In response, Applicant admits that Yang discloses receiving a request from one user, determining that user's target view range, and then identifying target users who are currently within that view range. Yang then generates a target image for that one requesting user that includes avatars of the target users and returns that one target image to the requester. Thus, Yang's generation of a target image including avatars of the target users teaches that Yang establishes first and second virtual objects, e.g., avatars, that correspond respectively to first and second terminals. Moreover, Yang discloses (Specific implementation methods): "VR with" is a brand new experience interactive scene, in the VR scene, the user can advance reservation time with the broker, and interacting real-time connection, further can be added with family, friends and finish "with". Additionally, Yang discloses (Fig. 1): "103: according to the position information corresponding to the user, obtaining the target visual angle range in the presence of the target user according to the target user and the target angle range, and generating a target image, the target image is sent to the information corresponding to the request of the terminal; wherein the target image is seen by the target visual range and includes a picture representing the virtual image of the target user". Accordingly, Yang obtains a target range of a house/room view within the view of a target user, and provides a virtual-reality, real-time connection of multiple users experiencing an interactive scene display of a target image that includes a representation, e.g., an avatar, of the target user as well as other users. Therefore, Yang discloses "establishing a plurality of virtual objects in the three-dimensional house model, wherein a first virtual object of the plurality of virtual objects corresponds to a first terminal of a plurality of terminals, and wherein a second virtual object of the plurality of virtual objects corresponds to a second terminal of the plurality of terminals".

Applicant argues (claims 1, 5, and 11) that Yang does not disclose "configuring a first observation viewpoint corresponding to the first virtual object and configuring a second observation viewpoint corresponding to the second virtual object, wherein the first observation viewpoint is capable of moving with the first virtual object, and wherein the second observation viewpoint is capable of moving with the second virtual object," as claimed. In response, Yang discloses (Fig. 1): "server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information. as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range." Additionally, Yang discloses (Fig. 1): "It should be noted that, because the virtual image in the target angle range indicates the corresponding house when the user viewing position, so the position of each virtual image updated in real time according to the operation of the corresponding user." Therefore, Yang discloses "configuring a first observation viewpoint corresponding to the first virtual object and configuring a second observation viewpoint corresponding to the second virtual object, wherein the first observation viewpoint is capable of moving with the first virtual object, and wherein the second observation viewpoint is capable of moving with the second virtual object".

Applicant argues (claims 1, 5, and 11) that Yang does not disclose "loading a first virtual scene through the first observation viewpoint according to position information of the first virtual object in the three-dimensional house model, wherein the first virtual scene comprises the first virtual object and the second virtual object," as claimed. In response, Applicant admits that Yang discloses receiving a request from one user, determining that user's target view range, and then identifying target users who are currently within that view range. Yang then generates a target image for that one requesting user that includes avatars of the target users and returns that one target image to the requester. Further, Yang discloses (Fig. 1): "103: according to the position information corresponding to the user, obtaining the target visual angle range in the presence of the target user according to the target user and the target angle range, and generating a target image, the target image is sent to the information corresponding to the request of the terminal; wherein the target image is seen by the target visual range and includes a picture representing the virtual image of the target user". Yang also discloses (Fig. 1): "server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information. as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range." Therefore, Yang discloses "loading a first virtual scene through the first observation viewpoint according to position information of the first virtual object in the three-dimensional house model, wherein the first virtual scene comprises the first virtual object and the second virtual object".

Applicant argues (claims 1, 5, and 11) that Yang does not disclose "loading a second virtual scene through the second observation viewpoint according to position information of the second virtual object in the three-dimensional house model, wherein the second virtual scene comprises the first virtual object and the second virtual object," as claimed. In response, Applicant admits that Yang discloses receiving a request from one user, determining that user's target view range, and then identifying target users who are currently within that view range. Additionally, Yang discloses (Fig. 1): "server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information. as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range." Therefore, Yang discloses "loading a second virtual scene through the second observation viewpoint according to position information of the second virtual object in the three-dimensional house model, wherein the second virtual scene comprises the first virtual object and the second virtual object".

Applicant argues that claims 2-3, 6-8, 12-13, 16-18, and 19-22 are patentable over Yang at least by virtue of their dependency from a respective base claim 1, 5, or 11. In response, claims 2-3, 6-8, 12-13, 16-18, and 19-22 are not patentable at least based on their respective dependency from rejected base claims 1, 5, or 11.

To the extent that the response to the applicant's arguments may have mentioned new portions of the prior art references which were not used in the prior office action, this does not constitute a new ground of rejection. It is clear that the prior art reference is of record and has been considered entirely by applicant. See In re Boyer, 363 F.2d 455, 458 n.2, 150 USPQ 441, 444 n.2 (CCPA 1966) and In re Bush, 296 F.2d 491, 496, 131 USPQ 263, 267 (CCPA 1961). The mere fact that additional portions of the same reference may have been mentioned or relied upon does not constitute a new ground of rejection. In re Meinhardt, 392 F.2d 273, 280, 157 USPQ 270, 275 (CCPA 1968).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 5-8, 11-13, and 16-22 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bin Yang, CN 109829977 A.

Independent claim 1: Yang discloses a display method for a three-dimensional house model, applied to a server, the display method comprising: establishing a plurality of virtual objects in the three-dimensional house model, wherein a first virtual object of the plurality of virtual objects corresponds to a first terminal of a plurality of terminals, and wherein a second virtual object of the plurality of virtual objects corresponds to a second terminal of the plurality of terminals (i.e., in a virtual 3D room, multiple users, each represented by an avatar, interact with one another – abstract; Fig. 1; "VR with" is a brand new experience interactive scene, in the VR scene, the user can advance reservation time with the broker, and interacting real-time connection, further can be added with family, friends and finish "with". – Specific implementation methods); configuring a first observation viewpoint corresponding to the first virtual object and configuring a second observation viewpoint corresponding to the second virtual object, wherein the first observation viewpoint is capable of moving with the first virtual object, and wherein the second observation viewpoint is capable of moving with the second virtual object (i.e., as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range. wherein the process of determining position information comprises: virtual image position according to the function of the user to adjust the viewing angle range – Fig. 1; server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information. as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range; It should be noted that, because the virtual image in the target angle range indicates the corresponding house when the user viewing position, so the position of each virtual image updated in real time according to the operation of the corresponding user – Fig. 1); loading a first virtual scene through the first observation viewpoint according to position information of the first virtual object in the three-dimensional house model, wherein the first virtual scene comprises the first virtual object and the second virtual object (i.e., server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information – Fig. 1; "VR with" is a brand new experience interactive scene, in the VR scene, the user can advance reservation time with the broker, and interacting real-time connection, further can be added with family, friends and finish "with". – Specific implementation methods); loading a second virtual scene through the second observation viewpoint according to position information of the second virtual object in the three-dimensional house model, wherein the second virtual scene comprises the first virtual object and the second virtual object (i.e., server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information – Fig. 1; "VR with" is a brand new experience interactive scene, in the VR scene, the user can advance reservation time with the broker, and interacting real-time connection, further can be added with family, friends and finish "with". – Specific implementation methods); and sending the first virtual scene to the first terminal and the second virtual scene to the second terminal to enable the first terminal to display the first virtual scene and to enable the second terminal to display the second virtual scene (i.e., server according to the viewing room information of each user received, … generating a virtual image at the position corresponding to each position information; present the other user represented by avatar in the image. by the method, the user not only can check each part of the house according to the desire of the user in house-showing process of, but also can see the other users viewing the house, knowing the area of interest to other users so that the user in the virtual three-dimensional space in the house-showing process capable for interacting with other users, improves the process of interaction so that the house-showing process more vivid; the generated picture is transmitted to the terminal – Fig. 1).

Claim 2: Yang discloses the display method for a three-dimensional house model according to claim 1, wherein after the sending the first virtual scene to the first terminal, the display method further comprises: receiving viewpoint adjustment information from the first terminal (i.e., the updating and displaying the virtual image in the target image comprises moving, adding or deleting the virtual image in the target image – Fig. 1); controlling the first virtual object to move in the three-dimensional house model according to the viewpoint adjustment information (i.e., according to watching of the user viewing function and view range determining location information corresponding to each user, according to the updated location information corresponding to each user to display the virtual image in the target image – Fig. 1); and updating the first and second virtual scenes according to a position of the first virtual object in the three-dimensional house model (i.e., in the process of displaying the target image, the target house real time according to watching of the user viewing function and view range determining location information corresponding to each user, according to the updated location information corresponding to each user to display the virtual image in the target image – Fig. 1).
Claim 3: Yang discloses the display method for a three-dimensional house model according to claim 1, wherein after the sending the second virtual scene to the second terminal, the display method further comprises: receiving a first input from the second terminal (i.e., receives the first triggering operation of certain virtual figure, display corresponding to the virtual image of the target user interaction dialog box – Fig. 1); and configuring the second observation viewpoint to a third virtual object, of the plurality of virtual objects, corresponding to a third terminal of the plurality of terminals, in response to the first input, wherein the third terminal is any one of the plurality of terminals other than the second terminal (i.e., the commercial embodiment introduces a in the same three-dimensional space, multiple users can independently walk and space interaction and interaction scheme between humans. for example, three people at the same time in a virtual three-dimensional space – Fig. 1).

Independent claim 5: the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Claim 6: Yang discloses the display method for a three-dimensional house model according to claim 5, wherein after the displaying the virtual scene, the display method further comprises: sending viewpoint adjustment information to the server, to enable the server to control the first virtual object corresponding to the first terminal to move in the three-dimensional house model, according to the viewpoint adjustment information (i.e., server according to the viewing room information of each user received, locating to the current user of the target house, determining the position information of each user is located currently, generating a virtual image at the position corresponding to each position information; the process of determining position information comprises: virtual image position according to the function of the user to adjust the viewing angle range and determining the user is currently located point of the mapping function, the point corresponding to the user. and determining the virtual image according to the visual angle range of orientation – Fig. 1); and receiving an updated first virtual scene from the server (i.e., as the user adjusts the visual angle range of the house during the real-time change appears on the virtual image of the user house-showing visual angle range. – Fig. 1).

Claim 7: Yang discloses the display method for a three-dimensional house model according to claim 5, wherein after the displaying the virtual scene, the display method further comprises: sending a first input to the server to enable the server to bind the first observation viewpoint of the first terminal to a different virtual object, corresponding to a terminal, among the plurality of terminals (i.e., the server according to the received target function and the target visual angle generated between the target function in the visual angle range of the target picture and sends to the terminal. for the user to check. method in the virtual three-dimensional space viewed chamber provided in this embodiment, in order to simulate the real room scene, at the same time also in the same set of user house by means of virtual images presented in the virtual three-dimensional space, so that the user can see the current the house of the viewing user, and interaction with the user. – Fig. 1; by the method, the user not only can check each part of the house according to the desire of the user in house-showing process of, but also can see the other users viewing the house, knowing the area of interest to other users so that the user in the virtual three-dimensional space in the house-showing process capable for interacting with other users – Fig. 1).

Claim 8: Yang discloses the display method for a three-dimensional house model according to claim 5, wherein after the displaying the first virtual scene, the display method further comprises: sending data to the server, to enable the server to forward the data to at least one another terminal among the plurality of terminals (i.e., the server according to the received target function and the target visual angle generated between the target function in the visual angle range of the target picture and sends to the terminal. for the user to check. method in the virtual three-dimensional space viewed chamber provided in this embodiment, in order to simulate the real room scene, at the same time also in the same set of user house by means of virtual images presented in the virtual three-dimensional space, so that the user can see the current the house of the viewing user, and interaction with the user. – Fig. 1), and to enable the at least one another terminal to output the data (i.e., the commercial embodiment introduces a in the same three-dimensional space, multiple users can independently walk and space interaction and interaction scheme between humans. for example, three people at the same time in a virtual three-dimensional space – Fig. 1), wherein the data comprises one or a combination of audio data and video data (i.e., through the dialog box, the user can corresponding to the avatar of the user text communication, voice communication or video chat, and so on – Fig. 1).

Independent claim 11: the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Claim 12: the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Claims 13 and 16: the corresponding rationale as applied in the rejection of claim 3 applies herein.

Claims 17 and 18: the corresponding rationale as applied in the rejection of claim 8 applies herein.

Claim 19: the claim is similar in scope to claim 1. Therefore, similar rationale as applied in the rejection of claim 1 applies herein.

Claim 20: the claim is similar in scope to claim 2. Therefore, similar rationale as applied in the rejection of claim 2 applies herein.

Claim 21: the claim is similar in scope to claim 3. Therefore, similar rationale as applied in the rejection of claim 3 applies herein.

Claim 22: the claim is similar in scope to claim 5. Therefore, similar rationale as applied in the rejection of claim 5 applies herein.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Bin Yang, CN 109829977 A.

Claim 4: Yang discloses the display method for a three-dimensional house model according to claim 1, further comprising: receiving data from a fourth terminal of the plurality of terminals (i.e., in the process of displaying the target image, the target house real time according to watching of the user viewing function and view range determining location information corresponding to each user – Fig. 1); and sending the data to at least one fifth terminal of the plurality of terminals to enable the fifth terminal to output the data, wherein the fifth terminal is a terminal among the plurality of terminals other than the fourth terminal (i.e., the commercial embodiment introduces a in the same three-dimensional space, multiple users can independently walk and space interaction and interaction scheme between humans. for example, three people at the same time in a virtual three-dimensional space – Fig. 1); wherein the data comprises one or a combination of audio data and video data (i.e., through the dialog box, the user can corresponding to the avatar of the user text communication, voice communication or video chat, and so on – Fig. 1). Yang suggests a fourth and a fifth terminal, as Yang discloses multiple users interacting in a same 3D space. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include fourth and fifth terminals with the method of Yang because 3D space interaction with more than three terminals yields predictable results.

Claims 14 and 15: the corresponding rationale as applied in the rejection of claim 4 applies herein.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHANTE HARRISON, whose telephone number is (571) 272-7659. The examiner can normally be reached Monday - Friday, 8:00 am to 5:00 pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHANTE E HARRISON/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Jun 14, 2024
Application Filed
Dec 01, 2025
Non-Final Rejection — §102, §103
Mar 03, 2026
Response Filed
Mar 24, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597213
GESTURE BASED TACTILE INTERACTION IN EXTENDED REALITY USING FORM FACTOR OF A PHYSICAL OBJECT
Granted Apr 07, 2026 — 2y 5m to grant
Patent 12592043
Systems, Methods, and Graphical User Interfaces for Displaying and Manipulating Virtual Objects in Augmented Reality Environments
Granted Mar 31, 2026 — 2y 5m to grant
Patent 12592045
AUGMENTED REALITY SYSTEM AND METHOD
Granted Mar 31, 2026 — 2y 5m to grant
Patent 12586322
OPTICAL DEVICE FOR AUGMENTED REALITY HAVING GHOST IMAGE PREVENTION FUNCTION
Granted Mar 24, 2026 — 2y 5m to grant
Patent 12561891
GRAPHICS PROCESSORS
Granted Feb 24, 2026 — 2y 5m to grant
Study what changed to get past this examiner, based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 69%
With Interview: 97% (+28.8%)
Median Time to Grant: 3y 4m
PTA Risk: Moderate
Based on 725 resolved cases by this examiner. Grant probability derived from career allow rate.
