Prosecution Insights
Last updated: April 19, 2026
Application No. 18/434,581

METHOD FOR DYNAMIC NAVIGATION MAPPING IN VIRTUAL REALITY ENVIRONMENTS BASED ON HYBRID DATA INTEGRATION

Final Rejection (§103)
Filed: Feb 06, 2024
Examiner: CHEN, BIAO
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Mitel Networks Corporation
OA Round: 2 (Final)

Grant Probability: 84% (Favorable)
OA Rounds: 3-4
To Grant: 2y 5m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average; 27 granted / 32 resolved; +22.4% vs TC avg)
Interview Lift: +26.3% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 2y 5m typical timeline; 25 applications currently pending
Total Applications: 57 across all art units (career history)
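The headline figures above (84% from 27 granted of 32 resolved, and the +26.3% interview lift) are simple ratios over the examiner's resolved cases. A minimal sketch of that arithmetic, assuming a hypothetical list-of-dicts case record (the field names are illustrative, not the analytics vendor's actual schema):

```python
def examiner_stats(cases):
    """Career allow rate and interview lift from resolved cases.

    Assumes each case dict has hypothetical fields "resolved", "granted",
    and "had_interview", and that both interview groups are non-empty.
    """
    resolved = [c for c in cases if c["resolved"]]
    granted = [c for c in resolved if c["granted"]]
    allow_rate = len(granted) / len(resolved)

    with_iv = [c for c in resolved if c["had_interview"]]
    without_iv = [c for c in resolved if not c["had_interview"]]
    rate_with = sum(c["granted"] for c in with_iv) / len(with_iv)
    rate_without = sum(c["granted"] for c in without_iv) / len(without_iv)
    return {
        "allow_rate": round(100 * allow_rate, 1),
        "interview_lift": round(100 * (rate_with - rate_without), 1),
    }

# Demo with the examiner's aggregate numbers: 32 resolved, 27 granted
# (the interview split here is synthetic).
cases = [{"resolved": True, "granted": i < 27, "had_interview": i % 2 == 0}
         for i in range(32)]
stats = examiner_stats(cases)
```

On the examiner's actual history (27 of 32 resolved cases granted) this yields the roughly 84% career allow rate shown above; the lift is the allowance-rate gap between interviewed and non-interviewed resolved cases.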

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 69.1% (+29.1% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 32 resolved cases.
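The per-statute deltas above are just the examiner's career rejection rates minus the Tech Center averages. A minimal sketch reproducing them; the TC averages here are back-solved from the displayed deltas (all four imply roughly 40.0%) and are assumptions, not published figures:

```python
# Examiner's career rejection rates per statute, from the table above.
examiner_rates = {"101": 4.7, "103": 69.1, "102": 9.8, "112": 15.7}

# Assumed Tech Center averages, back-solved from the displayed deltas.
tc_averages = {"101": 40.0, "103": 40.0, "102": 40.0, "112": 40.0}

def deltas_vs_tc(rates, tc):
    """Signed difference, in percentage points, vs the TC average."""
    return {s: round(rates[s] - tc[s], 1) for s in rates}
```

Applied to the table's rates, this recovers the -35.3 / +29.1 / -30.2 / -24.3 point deltas shown above.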

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Office Action is in response to Applicant’s amendment/response filed on 12/23/2025, which has been entered and made of record. Applicant’s amendments to the Specification and Claims, except claim 10, have overcome each and every objection and 112(b) rejection previously set forth in the Non-Final Office Action mailed 10/02/2025.

Claim Objections

Claims 1, 4, and 10 are objected to because of the following informalities:
In claim 1, line 19, “the identified one” should read “the one”.
In claim 4, line 3, “from the plurality of users” should read “from a plurality of users”.
In claim 10, line 2, “to analyze to VR space” should read “to analyze the VR space”.
Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 5, 13, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hanke et al. (US 2022/0305388 A1, hereinafter “Hanke”) in view of Kim et al. (US 11,024,083 B2, hereinafter “Kim”).

Regarding claim 1, Hanke discloses A computerized method for mapping a virtual reality (“VR”) space created by a VR server, wherein the method comprises: (para. [0016], “the systems and methods according to aspects of the present disclosure can be implemented using a single computing device or across multiple computing devices (e.g., connected in a computer network)”; FIG. 1: “Game Server 120”, “Element Update System”, and “Game Database 130”; para. [0003], “the geography of a virtual world is mapped to at least a portion of the geography of the real world. Players navigate the virtual world by traveling to corresponding portion locations in the real world”). Note that “Game Server 120”, “Element Update System”, and “Game Database 130” can be combined to form a single comprehensive game server.

utilizing one or more users device, in communication with the VR server, positioning one or more a user into the VR space; (FIG. 1: “Client Device 110”, “Game Server 120”, and “Element Update System 140” are in communication with each other over “Network 170”; para. [0003], “Players may view the virtual world via an augmented reality (AR) experience in which virtual elements are overlaid on images of the real world (e.g., on one or more images captured by cameras of devices carried by the players). Multiplayer parallel-reality games can encourage players to get out into the real world and interact (e.g., to achieve cooperative game objectives)”). Note that one or more players with the client device(s) view the virtual world (space) from the corresponding viewpoints where they are positioned or located during gaming.

the one or more users utilizing the user device to move from a first VR locale to a second VR locale within the VR space; (FIG. 2: the user utilizes the client device to move or position himself from one location to another with “Positioning Module 220”; para. [0032], “As the player moves around with the client device 110 in the real world, the positioning module 220 tracks the position of the player and provides the player position information to the gaming module 210. The gaming module 210 updates the player position in the virtual world associated with the game based on the actual position of the player in the real world”). Note that when the player with his client device moves from one location to another, his movement is tracked and updated in the virtual world (VR space) from one location (a first VR locale) to another location (a second VR locale) within the VR space.

the one or more user devices or the VR server communicating the user’s movement to a VR mapping engine; (FIG. 1: “Client Device 110”, “Game Server 120”, and “Element Update System 140” are in communication with each other over “Network 170”; para. [0022], “The game server 120 may also be configured to receive other requests for game data from a client device 110 (for instance via remote procedure calls (RPCs)), such as during a parallel reality experience, and to respond to those requests via the network 170 … receive game data (e.g. player positions, player actions, player input, etc.) from a client device 110 via the network 170”; FIG. 4: “Element Update System 140” includes a “Mapping Module 410” and a “Location Module 420”). Note that (1) the game server receives player positions corresponding to the player’s movement; and (2) “Element Update System 140” can be regarded as a VR mapping engine receiving the player’s movement from the game server over the network.

the VR mapping engine being in communication with a memory and the memory being configured to store (FIG. 1: “Client Device 110”, “Game Server 120”, “Game Database 130”, and “Element Update System 140” are in communication with each other over “Network 170”; para. [0023], “The game server 120 can include or can be in communication with a game database 130. The game database 130 stores game data used in the parallel reality game to be served or provided to the client device(s) 110 over the network 170”). Note that “Game Database 130”, as a memory, stores all the information in the VR space.

aggregated information related to the one or more users and movements of the one or more users including (a) the identity of the one or more users, (para. [0023], “(2) data associated with players of the parallel reality game (e.g. player profiles including but not limited to player information, player experience level, player currency, current player positions in the virtual world/real world, player energy level, player preferences, team information, faction information, etc.)”). Note that the player information can be regarded as the information related to the one or more users and can be aggregated.

(b) the date and time the one or more users entered and exited the VR space, (c) a route taken by the one or more users from the first VR locale to the second VR locale, … (e) the VR locale at which the one or more users started and the VR locale at which the one or more users stopped; (para. [0023], “(7) data associated with player actions/input (e.g. current player positions, past player positions, player moves, player input, player queries, player communications, etc.)”). Note that (1) the stored current and past positions of the player indicate timestamps (i.e., date and time) for all of the player’s positions, including the timestamps at which the player entered and exited the VR space; (2) the stored player moves can be regarded as the route taken by the VR user (player); (3) all stored player positions can be regarded as locales, including the start and stop VR locales of the user who started and stopped playing the game; and (4) the player’s moves can be regarded as movements of the one or more users and can be aggregated. The user information above and the movements of the player formulate aggregated information.

(d) VR constructs along the route taken by the one or more users, and (para. [0023], “(4) data associated with virtual elements in the virtual world (e.g. positions of virtual elements, types of virtual elements, game objectives associated with virtual elements; corresponding actual world position information for virtual elements; behavior of virtual elements, relevance of virtual elements etc.)”). Note that the virtual elements can be regarded as the VR constructs.

the VR mapping engine generating one or more directions for the one or more optimal routes between the first VR locale and second VR locale based on the identified one or more optimal routes, and including the VR constructs that the one or more users encountered along the route; and (FIG. 1: “Client Device 110”, “Game Server 120”, “Game Database 130”, and “Element Update System 140” are in communication with each other over “Network 170”; FIGs. 5A and 5B: “User 520A” as one user moves from the first VR locale at the start position corresponding to Virtual Element 530A in FIG. 5A to the second VR locale at the end position corresponding to Virtual Element 530A in FIG. 5B. The movement of “User 520A”, indicated by the dotted line in “Virtual World 500”, is the route between the first VR locale and second VR locale generated by the VR mapping module or engine. Virtual Element 530A and Virtual Elements 530B are the VR constructs “User 520A” encountered along the route; para. [0004], “an element update system receives a connection request from a client device and receives a route that the client device traversed in the real world. The element update system determines virtual locations at which to place virtual elements based on the route and updates a global state of the AR experience to include virtual elements at the virtual locations”; para. [0037], “The data collection module 330 can also analyze and data collected by players (e.g., as part of a crowd-sourcing effort) and provide the data for access by various platforms. To provide a specific example, players may be prompted to submit photographs of landmarks and other features of interest in their environment and the data collection module 330 may incorporate virtual elements corresponding to the real-world landmarks or features into the parallel reality game based on player submissions (e.g., subject to verifying that the landmark exists and is located where the submitting player indicated)”). Note that (1) “Route 540C” in FIG.
5B is a route of the user in “Real World 510”; (2) the element update system determines the virtual locations based on “Route 540C” in the real world; (3) the determined virtual locations corresponding to “Route 540C” in the real world formulate a route between the first VR locale and second VR locale, indicated by the dotted line in “Virtual World 500”; (4) the data collection module 330 of the VR server uses the player’s submitted photographs of landmarks and other features of interest in their environment to verify that the landmarks, serving as VR directions, exist and are located where the player indicated; and (5) virtual elements corresponding to the real-world landmarks or features incorporated into the parallel reality game can be regarded as the directions for one or more optimal routes.

a wireframe processor in communication with the VR mapping engine, wherein the wireframe processor receives instructions from the VR mapping engine to generate the one or more routes in VR and to make a map of the route including the one or more directions and the VR constructs accessible to a second user that enters the VR space. (FIG. 4: “Element Update System 140” includes “Mapping Module 410” and “Location Module 420”; para. [0041], “the mapping module 410 accesses the game database 130 to retrieve data associated with the virtual world corresponding to locations in the real world of the route. Based on the retrieved data, the mapping module 410 determines virtual locations at which to place virtual elements at in the global state of the parallel reality experience stored at the game database 130”; para. [0043], “The location module 420 determines if client devices 110 are located at locations in the real world that correspond to virtual locations near one or more virtual elements … the location module 420 receives locations of client devices 110 connected to a parallel reality experience … For each received location, the location module 420 determines a virtual location corresponding to the location of the client device 110. The location module 420 accesses the game database 130 to retrieve virtual locations of virtual elements of the parallel reality experience”; FIGs. 5A and 5B: a user (“User 520B”) in the VR space can view the route of the other user (“User 520A”) and the corresponding virtual elements (constructs); para. [0037], “The data collection module 330 can also analyze and data collected by players (e.g., as part of a crowd-sourcing effort) and provide the data for access by various platforms. To provide a specific example, players may be prompted to submit photographs of landmarks and other features of interest in their environment and the data collection module 330 may incorporate virtual elements corresponding to the real-world landmarks or features into the parallel reality game based on player submissions (e.g., subject to verifying that the landmark exists and is located where the submitting player indicated)”; para. [0050], “the mapping module 410 has similarly added virtual elements 530B of the second type (e.g., daisies) to virtual locations corresponding to the route 540B in the virtual world 500”).

Note that (1) the mapping module 410 determines the virtual locations based on the locations of the user’s route in the physical world; (2) the location module 420 is regarded as a wireframe processor that determines a virtual location corresponding to the location of the client device (the player), and all aggregated virtual locations of the player (the user) formulate a virtual route; (3) the location module 420, as the wireframe processor, obtains the locations of the client device and the player as instructions, from the element update system regarded as the mapping engine, to place the VR elements; (4) in communication over the network, a user (“User 520B”) that enters the VR space with the corresponding client device can access the route of the other user (“User 520A”) in the VR space, and can be regarded as a second user; and (5) virtual elements corresponding to the real-world landmarks or features incorporated into the parallel reality game can be regarded as the directions for the one or more optimal routes, and virtual elements of the second type (e.g., daisies), as the VR constructs, can be added by the mapping module to virtual locations (virtual locales) corresponding to the route. Therefore, it would have been obvious to one having ordinary skill in the art that a map of the route can be formulated from the directions and the VR constructs.

However, Hanke fails to disclose, but in the same art of computer graphics, Kim discloses the VR mapping engine analyzing the aggregated information to identify one or more optimal routes within the VR space based on the movements of the one or more users; (Kim, col. 11, lines 34-44, “the processor 220 may generate an optimal path, select image data from the start to the end from the database according to the generated path, obtain background images one by one from the start point, and transmit the image to the user terminal device 100. In addition, the processor 220 may set the virtual space by mapping the image obtained from the database onto the virtual space, and generate a virtual camera in the virtual space through a VR engine, such that an image of the virtual space projected by the virtual camera is provided to the user terminal device 100”; col. 17, lines 15-19, “In response to changed motion information being received from the user terminal device 100, the server 200 may analyze the received motion information (540) and adjust the direction of the virtual camera based on the analyzed motion information (550), and may extract left and right images of a view point region corresponding to the motion information through the virtual camera engine 530 and transmit the images to the user terminal device 100”). Note that (1) the server 200 can be regarded as the VR mapping engine and analyzes the motion or movement information from the user device (the aggregated information) in the virtual or VR space; and (2) the server 200, with its processor, can generate an optimal path or route based on the user’s movements from the start to the end and select image data from the start (the first VR locale) to the end (the second VR locale) from the database according to the generated path.

Hanke and Kim are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply generating an optimal path from the start to the end based on the user’s movements in the VR space, as taught by Kim, to Hanke. The motivation would have been “the processor 220 may generate an optimal path, select image data from the start to the end from the database according to the generated path … the processor 220 may set the virtual space by mapping the image obtained from the database onto the virtual space” (Kim, col. 11, lines 34-40). Doing so would allow generating an optimal route within the VR space based on the movements of the user. Therefore, it would have been obvious to combine Hanke and Kim.

Regarding claim 5, Hanke in view of Kim discloses The computerized method of claim 1 that further includes the step of mapping each route onto a VR map using the VR wireframe processor. (Hanke, FIGs. 5B-5D: the two dotted lines, as the two users’ routes in “Virtual World 500”, are mapped onto a VR map including the placed “Virtual Elements 530A-D” and other objects; para. [0048], “FIGs. 5A-5D depict a representation of a virtual world 500 having a geography that parallels the real world 510, according to one embodiment”; para. [0043], “The location module 420 determines if client devices 110 are located at locations in the real world that correspond to virtual locations near one or more virtual elements … The location module 420 accesses the game database 130 to retrieve virtual locations of virtual elements of the parallel reality experience, such as virtual elements added by the mapping module 410 based on routes traversed by one or more client devices 110”). Note that the location module 420, as the wireframe processor, is used to map the player’s VR locations, and the aggregation of player VR locations maps each route onto the VR map.

Regarding claim 13, Hanke in view of Kim discloses The computerized method of claim 1 that includes points of interest along each route. (Hanke, para. [0041], “For each route, the mapping module 410 accesses the game database 130 to retrieve data associated with the virtual world corresponding to locations in the real world of the route. Based on the retrieved data, the mapping module 410 determines virtual locations at which to place virtual elements at in the global state of the parallel reality experience stored at the game database 130”). Note that the placed virtual elements for each route are points of interest.
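At bottom, the "optimal route" limitation the rejection attributes to Kim is a shortest-path selection over aggregated movement data. As an editor's illustration only (nothing in Hanke or Kim discloses this code, and the frequency-based weighting is an assumption), a minimal sketch that treats more frequently travelled transitions between VR locales as cheaper and runs Dijkstra's algorithm between two locales:

```python
import heapq

def optimal_route(observed_moves, start, goal):
    """Pick an 'optimal' route between two VR locales from user movements.

    observed_moves: iterable of (locale_a, locale_b) transitions aggregated
    from one or more users. Heavily travelled edges get lower weight, so
    Dijkstra's algorithm prefers well-travelled paths.
    """
    counts = {}
    for a, b in observed_moves:
        counts[(a, b)] = counts.get((a, b), 0) + 1
    graph = {}
    for (a, b), n in counts.items():
        graph.setdefault(a, []).append((b, 1.0 / n))  # well-travelled = cheap

    frontier = [(0.0, start, [start])]  # (cost so far, locale, path)
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + weight, nxt, path + [nxt]))
    return None  # goal unreachable from start
```

With movements aggregated from many users, a heavily travelled two-hop path can out-rank a rarely used direct hop, which is one way "optimal" can be grounded in usage data rather than pure distance.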
Claim 17, reciting “A computer system configured to map a VR space created by a VR server, wherein the system comprises:”, corresponds to the method of claim 1. Therefore, claim 17 is rejected over the same prior art and citations as claim 1. In addition, Hanke in view of Kim discloses A computer system configured to map a VR space created by a VR server, wherein the system comprises: (Hanke, para. [0016], “the systems and methods according to aspects of the present disclosure can be implemented using a single computing device or across multiple computing devices (e.g., connected in a computer network)”; FIG. 1: “Game Server 120”; para. [0003], “the geography of a virtual world is mapped to at least a portion of the geography of the real world. Players navigate the virtual world by traveling to corresponding portion locations in the real world”).

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Simpson et al. (WO 2024145065 A1, hereinafter “Simpson”).

Regarding claim 2, Hanke in view of Kim discloses The computerized method of claim 1 that further includes. However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Simpson discloses the step of analysing the aggregated information to determine popular user locales in the VR space. (Simpson, page 22, lines 26-32, “At 504, the trained machine learning model is applied to the metaverse destination to compute a rank. The machine learning model takes in destination features (and in some cases user features) as input and then produces a destination rank, for the user, as output. The input features that the machine learning model takes to compute a rank may include features of the user, type of destination, tags defined describing the destination, number of users at the destination, relationships between user and users who typically visit the destination, etc.”). Note that (1) the trained machine learning model analyzes the aggregated user data (destination/locale features and user features collected for all locales and users) corresponding to the metaverse destinations (locales in the VR space); and (2) the computed rank is a popularity indication for the corresponding metaverse destination or locale.

Hanke in view of Kim, and Simpson, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply computing a rank of the metaverse destination/locale by analyzing aggregated user data, as taught by Simpson, to Hanke in view of Kim. The motivation would have been “A rank is computed for each metaverse destination based on a determined set of parameters. The set of parameters are determined based on at least a popularity factor or a relevance factor” (Simpson, Abstract). Doing so would allow computing popularity-indicating ranks for metaverse destinations to generate a metaverse map. Therefore, it would have been obvious to combine Hanke, Kim, and Simpson.

Regarding claim 3, the combination of Hanke, Kim, and Simpson discloses The computerized method of claim 2, wherein each popular locale has a rank based upon (a) the number of users who have visited it, and (b) the dates and times during which it has been visited, and the rank is made available to users. (Simpson, Abstract, “A rank is computed for each metaverse destination based on a determined set of parameters. The set of parameters are determined based on at least a popularity factor or a relevance factor … For each particular map object, a visual appearance for the particular map object and a position, in the 3D metaverse map, is determined based on the computed rank of the particular map object. Further, the 3D metaverse map is displayed on a user interface of an artificial reality device.”; page 22, lines 29-32, “The input features that the machine learning model takes to compute a rank may include features of the user, type of destination, tags defined describing the destination, number of users at the destination, relationships between user and users who typically visit the destination, etc.”; page 24, lines 9-14, “The relevance factor indicates a significance of the metaverse destination for the user of the artificial reality device. Significance of the metaverse destination for the user may be determined based on criteria such as, a match between subject of the destination and a subject of interest to the user, history of the user going to the destination, friends or family that have gone to the destination, users' rating of the destination”). Note that (1) each popular locale can have a rank value based on applying a machine learning model to a set of parameters determined from a popularity factor, a relevance factor, or both; (2) the parameters include the number of users at the destination, i.e., the number of users who have visited each popular destination (locale), and the history of the user going to or visiting the locale, indicating timestamps (date and time) when the locale has been visited; and (3) the visual appearance of a position (locale) is determined by the computed rank, indicating that the rank is available for users to view. The motivation to combine Hanke, Kim, and Simpson given for claim 2 is incorporated here.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Ruddle (Generating trails automatically, to aid navigation when you revisit an environment, Presence: Teleoperators and Virtual Environments, 17(6), pp. 562-574, 2008, hereinafter “Ruddle”).
Regarding claim 4, Hanke in view of Kim discloses The computerized method of claim 1 that further comprises a plurality of users and (Hanke, “displaying virtual elements along routes of users of an augmented reality experience”). However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Ruddle discloses the step of identifying the route is based on a most used route to a least popular route based upon (a) the number of users from the plurality of users who have travelled the route, and (b) the dates and times during which the route has been travelled. (Ruddle, page 7, “if the paths they frequented most often were presented as a primary trail that also then provides a framework for the person’s navigation. Other paths the person had traversed could be presented as a secondary trail, complementing memory by ensuring they didn’t forget anywhere they had been”; page 5, “portray a user’s actual movements (e.g., see Figure 2a), which allows the number of times each link has been traversed to be determined and also potentially allows particular links to be recognized from the shape of the user’s movements”). Note that: (1) presenting the primary trail and the secondary trail can be regarded as identifying the route that they (users) frequented most often and the other paths they did not frequent most, indicating that identifying the route is based on the number or frequency of users as a plurality of users who have travelled the route; and (2) to portray, identify, or present the actual movements (the route), the timestamps (dates / times) of locations of the movements for the number of times are also determined, indicating that identifying the route is also based on the timestamps (dates/ times). Hanke in view of Kim, and Ruddle, are in the same field of endeavor, namely computer graphics. 
Before the effective filing date of the claimed invention, it would have been obvious to apply identifying the route based on the numbers of users who frequented most or not and the number of times, as taught by Ruddle into Hanke in view of Kim. The motivation would have been “we describe a new method for automatically generating trails from a person’s movements” (Ruddle, Abstract). The suggestion for doing so would allow to identify the route is based on a most used route to a least popular route. Therefore, it would have been obvious to combine Hanke, Kim, and Ruddle. Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Archive_Org (Level (video games), archive.org, https://web.archive.org/web/20240119173632/https://en.wikipedia.org/wiki/Level_(video_games, hereinafter “Archive_Org”). Regarding claim 6, Hanke in view of Kim discloses The computerized method of claim 1 that further comprises However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Archive_Org discloses the step of the VR space being expanded to add an expanded portion or contracted to exclude a contracted portion by imported schematic data to the VR server and a VR map being adjusted accordingly to include the expanded portion of the VR space or to exclude the contracted portion of the VR space. (Archive_Org, page 1, paras. 1-2, “In video games, a level (also referred to as a map, stage, course, or round in some older games) is any space available to the player during the course of completion of an objective … In games with linear progression, levels are areas of a larger world”, page 4, paras. 1-2, “Level design for each individual level in a modern game typically starts with concept art, sketches, renderings, and physical models. [20][21] Once completed, these concepts transform into extensive documentation, environment modeling, and the placing of game specific entities (actors), usually with the aid of a level editor. 
A level editor may be distributed as a complete stand-alone package”; page 5, para. 5, “There are often tricks used to give the computer hardware sufficient time to load the assets for the next area”). Note that: (1) a level of video games determines the space available to the player. In games such as Green Hill Zone, levels are areas or space of a larger world. This means that a lower level has a smaller area or space (equivalent to be contracted to excluding a contracted portion) but a higher level has a larger area or space (being expanded to adding an expanded portion); (2) Level design’s output for each designed level is a set of schematic data regarding the elements for the game and a map that depicts the locations of the elements; (3) it is obvious to one having ordinary skills in the art that the designed levels as schematic data can be packed into the game software and imported to the comprehensive game sever as a combination of game server, game database, and element update system disclosed by Hanke above to generate a VR space with the corresponding VR elements according to the imported schematic data; and (4) when importing a different designed level to the game server according to the various user’s selection or user’s gaming progress, the VR space’s corresponding VR map for the user to view the VR space reflects the effective schematic data corresponding to the designed level, which means that the VR map is adjusted to include the expanded portion of the VR area or space at a higher level or to exclude the contracted portion of the VR area or space at a low level. Hanke in view of Kim, and Archive_Org, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply designing levels in video game for smaller and larger VR areas or space expanding portions and contacting portions, as taught by Archive_Org into Hanke in view of Kim. 
The motivation would have been “Level design or environment design [7] is a discipline of game development involving the making of video game levels—locales, stages or missions” (Archive_Org, page 1, para. 3). The suggestion for doing so would allow game levels to be designed to generate smaller or larger VR areas or spaces according to schematic data, with a VR map adjusted accordingly. Therefore, it would have been obvious to combine Hanke, Kim, and Archive_Org.

Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Schmalstieg et al. (Managing Complex Augmented Reality Models, IEEE Computer Graphics and Applications, Volume: 27, Issue: 4, 2007, Page(s): 48-57, hereinafter “Schmalstieg”).

Regarding claim 7, Hanke in view of Kim discloses The computerized method of claim 1 that further comprises. However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Schmalstieg discloses, the step of the mapping engine generating signposts, arrows, lines, or other indicia to position in the VR space by the VR server in order to guide a VR user to a different VR locale. (Schmalstieg, page 54, Figure 7: “Indoor Signpost: the overlay shows a world in miniature and an arrow points to the suggested exit highlighted in yellow”; page 51, col. right, para. 2, “The user can enable an additional arrow that points directly from her current position to the next waypoint”). Note that: the element update system in communication with the VR server described by Hanke above can generate the virtual element (arrow) and guide the VR user to a next waypoint (location or locale).

Hanke in view of Kim, and Schmalstieg, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply generating an arrow to guide the VR user to a next location, as taught by Schmalstieg, into Hanke in view of Kim. 
The motivation would have been “The user can enable an additional arrow that points directly from her current position to the next waypoint” (Schmalstieg, page 51, col. right, para. 2). The suggestion for doing so would allow signposts (arrows) to be generated by a VR server to guide the VR user to a next location. Therefore, it would have been obvious to combine Hanke, Kim, and Schmalstieg.

Regarding claim 8, the combination of Hanke, Kim, and Schmalstieg discloses The computerized method of claim 7, wherein the signposts, arrows, lines, or other indicia can guide the user along a route to a predetermined VR locale and suggest an alternate route to the predetermined VR locale or offer a different VR locale destination. (Schmalstieg, page 51, col. right, para. 2, “The user can enable an additional arrow that points directly from her current position to the next waypoint”; page 51, col. left / para. 5 – col. right / para. 1, “the user selects a specific target address or a desired target location of a certain type, such as a supermarket or a pharmacy. The system then computes the shortest path in a known network of possible routes. it’s interactive and reacts to the user’s movements by continuously recomputing the shortest path to the target, if the user goes astray or decides to take another route”). Note that: (1) the VR server can continuously recompute the shortest path as alternate paths to the target location when the VR user goes astray from the predetermined shortest VR path or route; and (2) the enabled arrow can guide the VR user from the VR user’s current position, after going astray, back to the predetermined locale. The motivation to combine Hanke, Kim, and Schmalstieg given for claim 7 is incorporated here.

Regarding claim 9, the combination of Hanke, Kim, and Schmalstieg discloses The computerized method of claim 7, … before being positioned in the VR space by the VR server. (Hanke, para. 
[0054], “The mapping module 410 determines 630 virtual locations at which to place virtual elements based on the route”). Note that: the element update system, including the mapping module and location module, in communication with the comprehensive game server can place the corresponding VR element. wherein the signposts, lines or other indicia are overlaid onto a wireframe model … (Schmalstieg, page 54, Figure 7: “Indoor Signpost: the overlay shows a world in miniature and an arrow points to the suggested exit highlighted in yellow”; page 54, Figure 8: “Context of wall = wireframe”). Note that: (1) the overlay of the wall (wireframe) and the arrow are shown in Figure 7 of Schmalstieg; and (2) the overlay of the arrow and the wireframe can be regarded as an element to be placed or positioned in the VR space. The motivation to combine Hanke, Kim, and Schmalstieg given for claim 7 is incorporated here.

Claims 10-12 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Park et al. (US 2023/0334775 A1, hereinafter “Park”).

Regarding claim 10, Hanke in view of Kim discloses The computerized method of claim 1 that further includes. However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Park discloses, a virtual drone created by the VR server, wherein the virtual drone is configured to analyse to VR space, and that further comprises the step of the virtual drone analysing the VR space and sending communications from the VR server to the VR mapping engine to map the VR space. (Park, para. [0085], “another virtual camera may be mounted on a virtual drone to move about within the VR world”; para. [0087], “The drone virtual camera can also be commanded to operate in a "follow me" manner to track the movements of the user's avatar, another avatar, or a virtual object in the VR environment”). 
Note that: (1) the virtual drone is created to move about in the VR space by the VR system (i.e., the VR server here); (2) the virtual drone uses a virtual drone camera to capture virtual images and track VR objects in the VR space, which is equivalent to capturing, analyzing, and mapping the VR space with virtual objects; and (3) since tracking or mapping the locations is performed by the element update system (the VR mapping engine) of Hanke above, while the server and the element update system are in communication with each other over a network, the virtual drone created by the VR server can communicate the captured and analyzed VR space data from the VR server to the element update system (the VR mapping engine) to map the VR space.

Hanke in view of Kim, and Park, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply creating a virtual drone to capture and analyze the VR space and communicate the captured and analyzed VR space data, as taught by Park, into Hanke in view of Kim. The motivation would have been “The drone virtual camera can also be commanded to operate in a "follow me" manner to track the movements of the user's avatar, another avatar, or a virtual object in the VR environment” (Park, para. [0087]). The suggestion for doing so would allow a virtual drone to capture and analyze the VR space and communicate the captured and analyzed VR space data to map the VR space. Therefore, it would have been obvious to combine Hanke, Kim, and Park.

Regarding claim 11, the combination of Hanke, Kim, and Park discloses The computerized method of claim 1, wherein a VR drone is used to verify the accuracy of VR directions created by the VR user. (Hanke, para. 
[0037], “The data collection module 330 can also analyze and data collected by players (e.g., as part of a crowd-sourcing effort) and provide the data for access by various platforms. To provide a specific example, players may be prompted to submit photographs of landmarks and other features of interest in their environment and the data collection module 330 may incorporate virtual elements corresponding to the real-world landmarks or features into the parallel reality game based on player submissions (e.g., subject to verifying that the landmark exists and is located where the submitting player indicated)”). Note that: the data collection module 330 of the VR server uses the player’s submitted photographs of landmarks and other features of interest in their environment to verify the accuracy of landmarks existing as VR directions and located where the player indicated. … the VR drone … (Park, para. [0085], “another virtual camera may be mounted on a virtual drone to move about within the VR world”). Note that: a VR drone created by the game server can substitute for the player in submitting photographs of landmarks and other features of interest in their environment. The motivation to combine Hanke, Kim, and Park given for claim 10 is incorporated here.

Regarding claim 12, the combination of Hanke, Kim, and Park discloses The computerized method of claim 1, wherein the one or more directions (Hanke, para. [0037], “The data collection module 330 can also analyze and data collected by players (e.g., as part of a crowd-sourcing effort) and provide the data for access by various platforms. 
To provide a specific example, players may be prompted to submit photographs of landmarks and other features of interest in their environment and the data collection module 330 may incorporate virtual elements corresponding to the real-world landmarks or features into the parallel reality game based on player submissions (e.g., subject to verifying that the landmark exists and is located where the submitting player indicated)”). Note that: the data collection module 330 of the VR server uses the player’s submitted photographs of landmarks and other features of interest in their environment to determine the status of landmarks existing as VR directions and located where the player indicated. are continually updated. (Park, para. [0085], “another virtual camera may be mounted on a virtual drone to move about within the VR world”). Note that: (1) a moving VR drone created by the game server in the VR space can substitute for the player by continually submitting photographs of landmarks existing as VR directions to update and verify the VR directions; and (2) the directions are read as VR directions. The motivation to combine Hanke, Kim, and Park given for claim 10 is incorporated here.

Regarding claim 15, the combination of Hanke, Kim, and Park discloses The computerized method of claim 1 that further includes mapping spaces outside of virtual buildings and inside of virtual buildings, by hovering a VR device on the VR space. (Park, para. [0087], “virtual camera that can be operated as a drone virtual camera can have controls for moving it around the VR environment in a manner corresponding to the operation of physical drone camera. For example, the drone virtual camera can respond to remote control commands to move under the concurrent input from the user, or it can be commanded to move in a preprogrammed pattern. 
The drone virtual camera can also be commanded to operate in a "follow me" manner to track the movements of the user's avatar, another avatar, or a virtual object in the VR environment”; para. [0089], “FIG. 10 is an exemplary view 1000 into a VR environment showing a virtual device 1002, operating in conjunction with cameras such as a virtual drone camera 1006, to capture viewpoints of experiences in the VR world.”). Note that: (1) since spaces outside of virtual buildings and inside of virtual buildings form the VR space, no additional information is provided for the VR space by reciting “spaces outside of virtual buildings and inside of virtual buildings” in this claim; (2) the virtual drone created by the comprehensive game server in the VR space can be regarded as a virtual device hovering and moving in the VR space; and (3) the virtual drone can use its virtual camera to capture the visual presentation of the VR space from various viewpoints and track the users’ movements and other virtual objects, which is equivalent to mapping the VR space. The motivation to combine Hanke, Kim, and Park given for claim 10 is incorporated here.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Sambhanthan et al. (Enhancing Tourism Destination Accessibility in Developing Countries through Virtual Worlds, arXiv.org, ARXIV ID: 1306.1630, Publication Date: 2013-06-07, hereinafter “Sambhanthan”).

Regarding claim 14, Hanke in view of Kim discloses The computerized method of claim 1 that includes. However, Hanke in view of Kim fails to disclose, but in the same art of virtual space technology, Sambhanthan discloses, a content summary of each destination. (Sambhanthan, pages 10-11, table [reproduced as image]). 
Note that: (1) Second Life is a VR space; and (2) for each of the 13 places (destinations) with the tag of tourism, the content at each row of the table is a content summary (place, owned, locations, etc.) for the corresponding destination.

Hanke in view of Kim, and Sambhanthan, are in the same field of endeavor, namely virtual world technology. Before the effective filing date of the claimed invention, it would have been obvious to apply showing the summary of the available tourism places in Second Life, as taught by Sambhanthan, into Hanke in view of Kim. The motivation would have been “The table shows the summary of the available tourism places in second life.” (Sambhanthan, page 10, line 18). The suggestion for doing so would allow a content summary to be provided at each place (destination) in the VR space. Therefore, it would have been obvious to combine Hanke, Kim, and Sambhanthan.

Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over Hanke in view of Kim and Shah et al. (A Destination Recommendation System for Virtual Worlds, Proceedings of the Twenty-Third International Florida Artificial Intelligence Research Conference (FLAIRS), 2020, University of Florida, Orlando, Florida, hereinafter “Shah”).

Regarding claim 16, Hanke in view of Kim discloses The computerized method of claim 1 that further includes. However, Hanke in view of Kim fails to disclose, but in the same art of computer graphics, Shah discloses, the step of the GUI capability of automatically transferring a VR user from one locale to another. (Shah, page 476, col. left, para. 2, “When the user clicks on the SLurl link, he/she is teleported directly to that destination”). Note that: the user clicks the link, a user interactive element like a button in the user interface, to perform a teleporting function that moves or transfers the VR user directly from the current location to the destination (another location). 
Hanke in view of Kim, and Shah, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply performing teleporting to transfer a VR user to the destination, as taught by Shah, into Hanke in view of Kim. The motivation would have been “the ability of users to teleport instantly to destinations” (Shah, page 475, col. left, para. 2). The suggestion for doing so would allow a VR user to be automatically transferred from one location to another. Therefore, it would have been obvious to combine Hanke, Kim, and Shah.

Response to Arguments

Applicant’s arguments with respect to the claim rejections under 35 U.S.C. 102 and 103 have been fully considered but are not persuasive.

Applicant alleges, “However, Hanke does not disclose ‘the VR mapping engine analyzing the aggregated information to identify one or more optimal routes within the VR space based on the movements of the one or more users,’” (page 11, lines 25-27). Examiner agrees that Hanke fails to disclose the corresponding newly amended limitations. However, the arguments are respectfully moot because the corresponding newly amended limitations, “the VR mapping engine analyzing the aggregated information to identify one or more optimal routes within the VR space based on the movements of the one or more users”, have been addressed in the detailed claim rejection under 35 U.S.C. 103 with Hanke in view of Kim above. The arguments are not persuasive.

Applicant alleges, “However, Hanke does not disclose … ‘the VR mapping engine generating one or more directions for the one or more optimal routes between the first VR locale and second VR locale based on the identified one or more optimal routes, and including the VR constructs that the one or more users encountered along the route’” (page 11, lines 25-30). 
However, the arguments are respectfully moot because the corresponding newly amended limitations, “the VR mapping engine generating one or more directions for the one or more optimal routes between the first VR locale and second VR locale based on the identified one or more optimal routes, and including the VR constructs that the one or more users encountered along the route”, have been addressed in the detailed claim rejection under 35 U.S.C. 103 above. The arguments are not persuasive.

Applicant alleges, “However, Hanke does not disclose … ‘a wireframe processor in communication with the VR mapping engine, wherein the wireframe processor receives instructions from the VR mapping engine to generate the one or more optimal routes in VR and to make a map of the route including the one or more directions and the VR constructs accessible to a second user that enters the VR space’” (page 11 / line 30 – page 12 / line 4). However, the arguments are respectfully moot because the corresponding newly amended limitations, “a wireframe processor in communication with the VR mapping engine, wherein the wireframe processor receives instructions from the VR mapping engine to generate the one or more optimal routes in VR and to make a map of the route including the one or more directions and the VR constructs accessible to a second user that enters the VR space”, have been addressed in the detailed claim rejection under 35 U.S.C. 103 above. The arguments are not persuasive.

Applicant alleges, “Applicant submits that the comments made above with respect to claim I apply, mutatis mutandis, to amended independent claim 17. Applicant therefore respectfully submits that amended independent claim 17 is allowable over the cited references.” (page 13, lines 1-3). However, Examiner respectfully disagrees with the allegations as a whole because independent claim 17 corresponds to claim 1. Therefore, claim 17 is rejected for the same rationale as claim 1. 
The arguments are not persuasive.

Applicant alleges, “All claims that depend from claim I or 17 are thereby also allowable over Hanke for the above-noted reasons. There is therefore no need to separately address the patentability of each of these claims and/or the Patent Office's interpretation in relation to any of these claims or any of the references of record in relation thereto.” (page 13, lines 4-8). However, Examiner respectfully disagrees with the allegations as a whole because all claims that depend from claim 1 or 17 are rejected for the respective rationales above. The arguments are not persuasive.

Applicant alleges, “With respect to the rejections of the dependent claims under 35 U.S.C. 103, Applicant respectfully disagrees. Claims 2-16 depend from claim 1 and are thereby also allowable over Hanke for the above-noted reasons. There is therefore no need to separately address the patentability of each of these claims and/or the Patent Office's interpretation in relation to any of these claims or any of the references of record in relation thereto. Applicant therefore respectfully submits that claims 2-16 are allowable over the cited references” (page 14, lines 3-8). However, Examiner respectfully disagrees with the allegations as a whole because claims 2-16 are rejected for the respective rationales above. The arguments are not persuasive.

Conclusion

Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. 
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN whose telephone number is (703)756-1199. The examiner can normally be reached M-F 8am-5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Biao Chen/
Patent Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Feb 06, 2024
Application Filed
Sep 30, 2025
Non-Final Rejection — §103
Dec 23, 2025
Response Filed
Feb 14, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602873
AUTOMATIC RETOPOLOGIZATION OF TEXTURED 3D MESHES
2y 5m to grant Granted Apr 14, 2026
Patent 12597149
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR NETWORK COMMUNICATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12562138
METHOD AND SYSTEM FOR COMPENSATING ANTI-DIZZINESS PREDICTED IN ADVANCE
2y 5m to grant Granted Feb 24, 2026
Patent 12561897
COMPRESSED REPRESENTATIONS FOR APPEARANCE OF FIBER-BASED DIGITAL ASSETS
2y 5m to grant Granted Feb 24, 2026
Patent 12548129
APPARATUSES, METHODS AND COMPUTER PROGRAMMES FOR USE IN MODELLING IMAGES CAPTURED BY ANAMORPHIC LENSES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+26.3%)
2y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 32 resolved cases by this examiner. Grant probability derived from career allow rate.
