Prosecution Insights
Last updated: April 19, 2026
Application No. 18/489,883

METHOD AND SYSTEM FOR RENDERING MODIFIED SCENE IN AN IMMERSIVE ENVIRONMENT

Final Rejection — §102
Filed: Oct 19, 2023
Examiner: BEARD, CHARLES LLOYD
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Zeality Inc.
OA Round: 2 (Final)
Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 67% (235 granted / 350 resolved; above average, +5.1% vs TC avg)
Interview Lift: +36.1% (allowance among resolved cases with an interview vs. without)
Typical Timeline: 2y 11m average prosecution; 37 applications currently pending
Career History: 387 total applications across all art units
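The headline rates reduce to simple arithmetic. Below is a minimal sketch: the 235/350 career totals come straight from this page, while the with-interview and without-interview subgroup rates are assumed values used only to show how the +36.1% lift and the 99% with-interview figure fit together (the underlying subgroup counts are not shown here).

```python
# Minimal sketch of the arithmetic behind the examiner statistics above.
# Only the 235 granted / 350 resolved totals appear on this page; the
# interview subgroup rates are assumptions used to illustrate the lift.

granted, resolved = 235, 350
career_allow_rate = granted / resolved              # ~0.671 -> displayed as 67%

rate_with_interview = 0.99                          # "99% With Interview" (page figure)
interview_lift = 0.361                              # "+36.1% Interview Lift" (page figure)
rate_without_interview = rate_with_interview - interview_lift  # implied ~62.9% (assumption)

print(f"Career allow rate:          {career_allow_rate:.1%}")
print(f"Implied rate w/o interview: {rate_without_interview:.1%}")
```

This treats the lift as the simple difference between the with-interview and without-interview allowance rates; if the tool defines lift differently (for example, relative to the blended 67% career rate), the implied without-interview rate changes accordingly.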

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 70.2% (+30.2% vs TC avg)
§102: 6.2% (-33.8% vs TC avg)
§112: 15.4% (-24.6% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 350 resolved cases
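The per-statute deltas are internally consistent with a single baseline. A short sketch, assuming each "vs TC avg" value is simply the examiner's rate minus the Tech Center average (the page does not state the formula):

```python
# Back out the implied Tech Center average from each displayed delta,
# assuming delta = examiner_rate - tc_average (an assumption; the page
# does not state how the comparison is computed).

stats = {
    "§101": (5.5, -34.5),
    "§103": (70.2, 30.2),
    "§102": (6.2, -33.8),
    "§112": (15.4, -24.6),
}

for statute, (rate, delta) in stats.items():
    print(f"{statute}: examiner {rate:.1f}%, implied TC avg {rate - delta:.1f}%")
# Every row implies the same 40.0% baseline under this assumption.
```

Under that reading all four statutes share a 40.0% baseline, consistent with the note that the black line marks a single Tech Center average estimate.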

Office Action

§102
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Response to Amendment Received 12/09/2025 Claim(s) 1-18 is/are pending. The 35 U.S.C § 102(a)(1) rejection to claim(s) 1-18 have been fully considered in view of the amendments received on 12/09/2025 and are fully addressed in the prior art rejection below. Response to Arguments Received 12/09/2025 Applicant's arguments filed 12/09/2025 have been fully considered but they are not persuasive; as expressed below. Regarding independent claim(s) 1, 7, and 13: Applicant argues (Remarks, Page 7, ¶ 3 to Page 8, ¶ 1), that “(Feature 1: ) … Sefeik does not disclose any contextual data retrieval in the sense contemplated by the claims. In Sefeik, information exchanged between developer systems and the editor hub consists of explicit commands issued by developers to edit or manipulate virtual content, para [0093-0094], para [0103-0106]. The editor hub merely receives and executes manual edit commands from connected developer platforms; it does not autonomously or dynamically retrieve contextual data derived from user state, conversation, intent, or environmental semantics. Moreover: Sefeik's ‘users’ are game developers collaborating in a virtual development space, para [0038-0045] and para [0048-0054], not participants in an immersive end-user experience. The claimed ‘contextual data’ (scene context, conversation, user type, preferences, and intent) refers to live contextual cues inferred during rendering for adaptive content delivery, whereas Sefeik's system only processes discrete edit instructions and avatar position updates. Sefeik's ‘field-of-view adjustments’ para [0088] are pre-defined responses to user-controlled camera movements, not autonomous contextual retrievals. Hence, Sefeik lacks any disclosure of dynamically retrieving contextual data in the claimed sense of semantic or behavioral context acquisition during real-time rendering.” The Examiner disagrees. Applicant’s arguments fail to view the broad nature of the claim limitation(s) regarding “(to: ) dynamically retrieve contextual data”. Applicant’s arguments directed toward the terms “dynamically” or “autonomously” fails to be as limiting, as inferred by Applicant. Wherein, dynamic is defined as a constant change (Merriam-Webster; “marked by usually continuous and productive activity or change” [https://www.merriam-webster.com/dictionary/dynamic]), which is addressed by the teachings of received commands and data changes (i.e. updates) as taught by Sefeik et al. (US PGPUB No. 20200139248 A1) (Sefeik; “The editor hub 410 can execute the commands received from the command requests module 420 via the execute commands module 408. After the editor hub 410 executes the commands via the execute commands module 408, the file changes module 412 can change the immutable game data and store a new version of the immutable game data in the version control file storage 414. After the editor hub 410 executes the commands via the execute commands module 408, the data changes module 406 can modify the immutable game data and send the modified immutable game data to the build system 404. The build system can take the immutable game data changes and processes the immutable game data changes for varying platforms to produce a cache of the updates. The cache of results can be saved for each target platforms 402. 
Then, the User 1 Platform A 416 and the User 2 Platform can see the changes. As such, the game developers can edit the immutable game data that ultimately gets built and distributed to the run-time platforms” [¶ 0094]). The editor hub (as taught by Sefeik) is configured to dynamically retrieve contextual data as further addressed (Sefeik; “At (605), the editor hub can receive a first command from the first developer system to modify a first virtual object. At block (606), the editor hub can receive a second command from the second developer system to modify a second virtual object. At (608), the editor hub can compile the changes to the first and second virtual object in the immutable game data. In some embodiments, the editor hub compiles the first command before compiling the second command. In some embodiments, the editor hub compiles both the first and second command at the same time” [¶ 0100-0101 and ¶ 0104]). Wherein, the receiving of commands (i.e. 1st and 2nd commands) are related to multiple modifications to/of virtual objects (Sefeik; [id.]; additionally, “… command handling by multiple game developers of the same virtual object … handle simultaneous commands by multiple game developers of the same virtual object” [¶ 0113-0114, ¶ 0116, and ¶ 0118]), and streaming state information between developers (Sefeik; [¶ 0070]). Still further, contextual data corresponds to one or more virtual objects, as addressed above. Thus, Sefeik et al. teaches dynamically retrieve contextual data. Additionally, developers as taught by Sefeik et al. corresponds to an immersive environment (Sefeik; “The game developer 101A can be using a virtual reality controller 112A to control avatar 206A. The editor hub 106 of FIG. 1 can employ the change in the avatar 206A. The editor hub 106 can identify the game developer computing system 104A and modify the field of view 202A for the game developer computing system 104A, and can identify the game developer computing system 104B and modify the field of view 202B for the game developer computing system 104B. Then, the field of view 202A can be displayed on the virtual reality head set 102A and the field of view 202B can be displayed on the virtual reality head set 102B, where the field of view 202A and 202B can be different. The field of view 202A and 202B can be from behind the associated avatar 206A and 206B respectively. The field of view 202A and 202B can be from the viewpoint of the avatar 206A and 206B respectively. As shown in the field of view 202A, the field of view for the game developer 101A is behind the developer's avatar 206A and shows the avatar 208A for game developer 101 B. The field of view 202B for the game developer 101B is behind the developer's avatar 206B and shows the avatar 208B for game developer 101 B at a different angle based on the different field of view” [¶ 0088]). Applicant’s argument of an “end-user” as corresponding to at least one user within the immersive environment is silent within the claim 7. Applicant’s argument infers that a developer is not able to operate within an immersive environment, however Sefeik et al. teaches developers are able to connect to the virtual space and operate within the immersive environment as avatars (Sefeik; “FIGS. 3A and 3B illustrate embodiments 300 and 350 of a virtual development space with multiple game developers and a director. The editor hub can create the virtual development space 108. 
The game developer computing systems can connect to the virtual development space change, manipulate, and/or create immutable game data of the game application. Game developers can control respective avatars 110A, 110B, 110C in the virtual space 108” [¶ 0089]; additionally, “… to: create virtual development space, wherein the virtual development space enables a plurality of game developer systems to edit immutable game data associated with virtual objects of the game application, wherein the virtual development space includes a three-dimensional simulated space instanced on a computer server that is accessible by a plurality of game developers systems and that are located remotely from the computer server to edit immutable game data, wherein the immutable game data includes at least a first virtual object; connect a first game developer system to the virtual development space; connect a second game developer system to the virtual development space; receive a first command from the first game developer system to edit the first virtual object in the virtual development space; initiate a change to the first virtual object in the immutable game data corresponding to the first command to generate an updated first virtual object; and transmit the updated first virtual object to the second game developer system” [¶ 0013]). And, Applicant fails to view developer operated avatars within a virtual development space as illustrated within Figs. 3A-B (Sefeik; [¶ 0063 and ¶ 0090-0091]), wherein Fig. 3A illustrates virtual space 108 with developers/avatars 110A and 110C within said virtual space. In any event, the term “end-user” is relative based on the application; such that, within a game play mode of an application, an end-user is able to interact with the game in a manner that does not allow the changing of certain data or game aspects (i.e. a player); and, within a developer mode of an application, an end-user is able to interact with the game in a manner that allows for changing of certain data or game aspects (i.e. a developer) (Sefeik; video game development platform [¶ 0063]). In other words, both are end-users, having different use cases within their version of an application (e.g. Autodesk Maya and 3D Max correspond to developer platforms wherein the end-user is a developer, these platforms create games wherein the end-user is a player) (Sefeik; [¶ 0051, ¶ 0057, and ¶ 0061]; moreover, video game development platform [¶ 0063]). Subsequently, contextual data as inferred by Applicant fails to be as limiting within claim 7 as in the manner argued (“(scene context, conversation, user type, preferences, and intent) refers to live contextual cues”). Furthermore, terms such as data, content, context, or contextual data correspond to information, graphics, commands, inputs, and/or data values given their broad meaning. Still further, the limitation of autonomous with regards to contextual data retrieval is silent within the claim language of claim 7. Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 8, ¶ 2-4), that “(Feature 2: ) … Sefeik contains no teaching or suggestion of retrieving or processing conversation, user intent, or user type for influencing content rendering. None of the paragraphs [0088]-[0107] disclose or infer any form of semantic context extraction or behavioral inference. Instead: Sefeik describes explicit developer commands such as ‘select,’ ‘modify,’ or ‘approve’ a virtual object Para [0103-0107]. 
These are instructional inputs, not contextual data. There is no disclosure of voice, conversation, or interaction analysis between users that could serve as contextual parameters. The only user-related characteristic discussed is device capability (processing power, display capability, etc., Para [0105-0108), which determines representation quality-not user intent or scene context. No mention is made of ‘preferences’ or ‘intent related to rendering’; the director's approval process para [0091] governs source control, not adaptive scene personalization. Accordingly, Sefeik fails to disclose the claimed composition of contextual data.” The Examiner disagrees. Applicant’s arguments fail to view the broad nature of the subject matter of “contextual data”. Wherein, the term contextual is not as limiting as semantic or behavioral per its definition (Merriam-Webster; “in, relating to, determined by, or conforming to a context” [https://www.merriam-webster.com/dictionary/contextual]). Applicant infers that commands do not correspond to contextual data, however the commands (as taught by Sefeik) are relate to context within the application (Sefeik; “… change, manipulate, and/or create immutable game data of the game application. Game developers can control respective avatars 110A, 110B, 110C in the virtual space 108” [¶ 0089]; wherein, “The execution of the game instance may enable interaction by the users with the online game and/or each other. The editor hub 106 may be configured to perform operations in the game instance in response to commands received over network from the game developer computing systems 104. Within the instance of the online game, users may interact with elements in the online game and/or with each other through gameplays provided by the online game” [¶ 0073]; moreover, perform operations in the game instance [id.]; additionally, user input commands (“A given user may input commands with specific parameters to undertake specific deeds, actions, functions, spheres of actions and/or any other types of interactions within the virtual space. For example, the given user may input commands to construct, upgrade and/or demolish virtual buildings; harvest and/or gather virtual resources; heal virtual user-controlled elements, non-player entities and/or elements controlled by other users; train, march, transport, reinforce, reassign, recruit, and/or arrange troops; attack, manage, create, demolish and/or defend cities, realms, kingdoms, and/or any other virtual space locations controlled by or associated with the users; craft or transport virtual items; interact with, compete against or along with non-player entities and/or virtual space elements controlled by other users in combats; research technologies and/or skills; mine and/or prospect for virtual resources; complete missions, quests, and/or campaigns; exercise magic power and/or cast spells; and/or perform any other specific deeds, actions, functions, or sphere of actions within the virtual space”) [¶ 0079-0080]). Still further, Applicant’s arguments directed toward commands fails to view that said commands further correspond to game data (i.e. game related data) (Sefeik; [¶ 0093-0094]; moreover, “At (605), the editor hub can receive a first command from the first developer system to modify a first virtual object. 
At block (606), the editor hub can receive a second command from the second developer system to modify a second virtual object” [¶ 0100-0101]); and, that said game data is transmittable via the editor hub (Sefeik; [¶ 0069-0071]). Additionally, the claim language of claim 7 is silent regarding “voice, conversation, or interaction analysis between users”. Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 8, ¶ 5 to Page 9, ¶ 2), that “(Feature 3: ) … Sefeik does not perform scene analysis; rather, it allows a developer to manually select virtual objects for editing. The ‘identification’ of modifiable objects in Sefeik arises solely through user selection, e.g., when a developer ‘selects virtual content 302A’ via an avatar and controller para [0090]. The system merely registers that selection; it performs no computational analysis of a rendered scene to autonomously detect modifiable regions or objects. In contrast, the claimed invention requires the processing unit itself to analyze a rendered scene, without explicit user instruction, to identify modifiable contents dynamically based on context. This analytical operation is absent from Sefeik, which functions as a collaborative editing tool, not a real-time scene analyzer.” The Examiner disagrees. Applicant’s arguments fail to view the teachings of the prior art. Wherein, Sefeik et al. teaches immediate feedback from game developers related to any changes, and that game developers can connect to a virtual space to create and collaborate in real-time (Sefeik; [¶ 0044]; moreover, feedback of changes in real-time [¶ 0045]). Still further, Applicant fails to view “… the system can enable live updates on all modifications in run-time views across multiple and different platforms” (Sefeik; [¶ 0046]). The “computational analysis of a rendered scene” as inferred by Applicant fails to consider the teachings of “The editor hub 106 can identify the game developer computing system 104A and modify the field of view 202A for the game developer computing system 104A, and can identify the game developer computing system 104B and modify the field of view 202B for the game developer computing system 104B”, which reveals the editor hub’s capability to operate independently of direct user input (but as a means of response), based on modifiable contents (Sefeik; [¶ 0088]; moreover, updates in contents within the system are in response to changes and not to a direct command to do so (the direct commands are directed toward modification of the content) [¶ 0110-0112], including modifications based on system capabilities [¶ 0106-0108]); and, the editor hub being a computer performing operations corresponds to computational analysis of game state information (Sefeik; [¶ 0069-0070]) associated with producing changes (Sefeik; [¶ 0078]). Applicant argues (Remarks, Page 9, ¶ 3-7), “(Feature 4: ) … Sefeik's modifications are manual edits applied to immutable game data via explicit user commands. The editor hub executes a ‘received command by the first developer system to modify the virtual object’ para [0104]. The trigger is not contextual data but the developer's direct instruction. Furthermore: The ‘context’ determining modification is the developer's command content, not dynamically inferred user behavior or scene semantics. Modifications occur on stored game assets, not on a real-time rendered scene. 
The claimed invention's modification is autonomous and context-driven, performed during live rendering to achieve adaptive augmentation; Sefeik's edits are asynchronous development operations. Thus, Sefeik fails to teach or suggest modification ‘based on contextual data’ as recited.” The Examiner disagrees. Applicant’s arguments fail to fully view the prior art teachings. Wherein, commands by the developer(s) are directed toward the modification/editing of the virtual content (Sefeik; “As game developers connect to the same virtual development space and make modifications to the same virtual content” [¶ 0044]; “The game developer computing systems can connect to the virtual development space change, manipulate, and/or create immutable game data of the game application” [¶ 0089]; “The editor hub 410 can communicate with user platforms, such as the User 1 Platform A 416 and a User 2 Platform B 418 by receiving commands via the command request module 420. For example, the User 1 Platform A 416 and the User 2 Platform B 418 can send a command to move an avatar or modify virtual content in a virtual space” [¶ 0093]). Although, the editor hub executes commands received for modifying virtual content, the editor hub also provides updates to one or more developers/platforms which is a response to modifications of the virtual content (Sefeik; “The editor hub 410 can execute the commands received from the command requests module 420 via the execute commands module 408. After the editor hub 410 executes the commands via the execute commands module 408, the file changes module 412 can change the immutable game data and store a new version of the immutable game data in the version control file storage 414. After the editor hub 410 executes the commands via the execute commands module 408, the data changes module 406 can modify the immutable game data and send the modified immutable game data to the build system 404. The build system can take the immutable game data changes and processes the immutable game data changes for varying platforms to produce a cache of the updates. The cache of results can be saved for each target platforms 402. Then, the User 1 Platform A 416 and the User 2 Platform can see the changes. As such, the game developers can edit the immutable game data that ultimately gets built and distributed to the run-time platforms” [¶ 0094, ¶ 0100-0101, ¶ 0104-0106]; in other words, actions such as storing a new version, version control, sending the modified data, saving data to cache, and/or transmitting data of the virtual object are not taught as user directed commands [id.]). Additionally, the arguments regarding dynamically inferred user behavior or scene semantics, real-time rendered scene, autonomous and context-driven modification based on contextual data, have all been addressed above. Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 9, ¶ 8 to Page 10, ¶ 2), that “(Feature 5: ) … The cited portions describe broadcasting developer edits within a virtual development space so that other developers can view the changed content. This process reflects synchronization of edited game data across developer workstations para [0043-0044] and para [0093-0094], not augmentation of a real-time immersive scene experienced by end-users. Sefeik's ‘modified content’ exists only as updated immutable game data stored in version control para [0094]. 
The ‘playable instance’ incorporating those edits is generated only after build compilation para [0101], not in real time. Therefore, no real-time augmentation of rendered scenes is disclosed.” The Examiner disagrees. Applicant’s arguments fail to view the broadness of the claim language within the argument. Wherein, (1) the “end-user” for an immersive scene experience is/are limitations that are silent within claim 7. Wherein, (2) the inferred argument that there is no augmentation of a real-time immersive scene fails to view the teachings of one or more developers interacting within a virtual development space (Sefeik; “FIGS. 3A and 3B illustrate embodiments 300 and 350 of a virtual development space with multiple game developers and a director. The editor hub can create the virtual development space 108. The game developer computing systems can connect to the virtual development space change, manipulate, and/or create immutable game data of the game application. Game developers can control respective avatars 110A, 110B, 110C in the virtual space 108” [¶ 0089]). [image reproduced from Sefeik] Even further support of this teaching is illustrated within Fig. 2 of the developers utilizing immersive reality equipment (i.e. VR headset and controller) (Sefeik; [¶ 0088]; moreover, VR headset [¶ 0049 and ¶ 0057-0058]) in order to control the avatar as taught within Fig. 3A (Sefeik; [¶ 0089]; wherein, Fig. 3A illustrates virtual space 108 with developers/avatars 110A and 110B within said virtual space). [image reproduced from Sefeik] Moreover, the inferred argument by Applicant that a developer is not able to be immersive fails to consider the developer operating avatars within a virtual development space as illustrated within Figs. 3A-B (Sefeik; [¶ 0063 and ¶ 0090-0091]). Lastly, Applicant fails to view the teachings of real-time developmental changes (Sefeik; “… systems and methods can include a feedback loop such that when one game developer makes a change, the change can be broadcasted to other developers to see the change within the virtual development space. Advantageously, all connected game developer systems that have run-times of the virtual space can see that change in real time” [¶ 0043 and ¶ 0094]). Additionally, ¶ 0101 of Sefeik et al. is silent regarding a compile time vs. runtime as inferred by Applicant’s argument. Although ¶ 0100 of Sefeik et al. teaches compiling, this is in relation to implementing changes for a playable instance of game play (Sefeik; [¶ 0097]) which corresponds to dynamic rendering (Sefeik; “The editor hub can manage some or all change requests by connected game developers and execute the change requests to immutable game data into the runtime version of the data. For example, a game developer interacting with virtual content in a connected runtime version of the virtual environment can signal to the editor hub to make a change to the virtual content. The editor hub can execute the requested changes from all connected runtime interactions of the virtual space that includes the virtual content for a plurality of connected game developers. 
Each requested change can be propagated through a build pipeline, and cache of the build results can be stored with a unique identifier” [¶ 0039]; moreover, “… a game developer interacting with virtual content in a connected runtime version of the virtual environment can signal to the editor hub to make a change to the virtual content … changes from all connected runtime interactions of the virtual space that includes the virtual content for a plurality of connected game developers” [id.]). Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 10, ¶ 3-4), that “(Feature 6: ) … Sefeik lacks any disclosure of dynamic rendering in the sense of live scene replacement. The editor hub only compiles and distributes updated immutable game data for later use para [0094, 0101]. The developers may view the changed objects within the editing interface, but there is no teaching of: Continuous rendering pipeline replacement of the live immersive view; or Run-time scene substitution for an end-user experience. The claimed feature explicitly requires dynamic rendering of the modified scene by replacing the real-time scene, a real-time process that adapts an ongoing immersive environment. In contrast, Sefeik's rendering pertains to development previews of edited content, not to immersive runtime rendering experienced by users. Accordingly, Sefeik does not disclose or suggest dynamically rendering a modified scene in the claimed manner. Additionally, without prejudice to the submissions made in respect of independent claim 7, the Applicant respectfully traverses the objections raised against dependent claims 8-12. Each of the dependent claims introduces additional and specific limitations that are neither disclosed nor rendered obvious by the cited document Sefeik et al. (US 20200139248 A1). The dependent claims, therefore, define distinct and non-anticipated subject matter.” The Examiner disagrees. Applicant’s arguments are more limiting than as presented within the claim language within Claim 7. Wherein the term dynamic (or dynamically) is not equivalent to “continuous”; moreover, the limitation of “dynamically render” is not to equivalent “continuous rendering” as inferred by applicant. Although, Applicant argues the term continuous (or continuously) rendering, however Applicant fails to view the scope and range of the meaning of the term (Merriam-Webster; “marked by uninterrupted extension in space, time, or sequence” [https://www.merriam-webster.com/dictionary/continuous]). Wherein, continuous rendering ranges from endless/infinity uninterrupted rendering to uninterrupted from one rendering to the next rendering. The concept of infinite rendering being beyond the scope of the disclosure of the invention, therefore continuous rendering is limited to an application runtime. Thus, although being distinct from an end-user game play (Sefeik; [¶ 0078-0080]), a developer has their own aspect of the application that allows runtime within a virtual developer (Sefeik; [¶ 0066-0067 and ¶ 0085]; in other words, a simulated virtual space inherently involves a runtime [id.]; further see Fig. 3A [¶ 0089-0090]). Additionally, the subject matter of the immersive environment and real-time scene have been addressed further above. Still further, the argument of “Run-time scene substitution for an end-user experience” has been addressed further above. Therefore, Applicant’s arguments above are not persuasive. 
Regarding dependent claims 8-12: Applicant’s arguments (Remarks, Page 10, ¶ 5), filed 12/09/2025, with respect to the rejection(s) of claim(s) 8-12 under 35 U.S.C § 103 have been fully considered, due the dependency upon claim 7 respectively. Wherein, the arguments are not persuasive, regarding reasons as addressed above. Applicant argues (Remarks, Page 10, ¶ 6 to Page 11, ¶ 5), that “Sefeik fails to disclose modifiable objects or regions within a real-time scene rendered to an end-user. The cited ‘virtual content’ (e.g., the dragon or monster of para 0090-0091) are development-time assets being edited by developers, not dynamically modifiable contents within a live immersive scene. The so-called ‘virtual development space 108’ is a workbench for editing immutable game data, not a runtime environment that is being rendered to an end-user during experience. The ‘modifiable regions’ of claim 8 are spatial portions of a rendered immersive environment that may be automatically augmented or replaced based on contextual data. No equivalent notion exists in Sefeik; regions there are never delineated, analyzed, or selected for modification. Accordingly, Sefeik's system deals exclusively with developer-authored static assets, not modifiable scene components identified and adapted during real-time rendering as claimed.” The Examiner disagrees. Applicant’s arguments fail to fully view the teaches within the prior art. Wherein, the virtual development space 108 corresponds to a simulated space (Sefeik; [¶ 0085]). Although, computer simulations inherently involve a runtime environment (Sefeik; “… one or more processors to: create virtual development space, wherein the virtual development space enables a plurality of game developer systems to edit immutable game data associated with virtual objects of the game application, wherein the virtual development space includes a three-dimensional simulated space instanced on a computer server that is accessible by a plurality of game developers systems and that are located remotely from the computer server to edit immutable game data” [¶ 0013]), Sefeik et al. teaches real-time interaction within the simulated space (Sefeik; “The simulated space may have a topography, express real-time interaction by the user, and/or include one or more objects positioned within the topography that are capable of locomotion within the topography” [¶ 0085]) and real-time updating of the virtual development space (Sefeik; “… the game developers can edit the immutable game data that ultimately gets built and distributed to the run-time platforms” [¶ 0094]). Lastly, the subject matter of the “modifiable regions” is an alternative limitation within the language of claim 8 (see line 2 “at least one of”). Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 11, ¶ 6 to Page 12, ¶ 2), that “Sefeik discloses manual selection and modification driven by explicit developer input, not context-driven autonomous selection as recited. In Sefeik, avatar 110A ‘selects virtual content 302A via indicator 306A’ para [0090]. The editor hub simply registers this command and executes it para [0104]. There is no contextual indication or automated reasoning leading to object selection. The claimed invention requires that the ‘contextual data’, such as user conversation, preferences, or intent, indicate which object should be modified and how. This semantic coupling between context and object selection is entirely absent from Sefeik. 
Further, Sefeik's modifications concern asset attributes (geometry, texture) in stored game data, not contextually guided transformations within a running immersive scene. Therefore, Sefeik does not anticipate the limitation of ‘selecting wherein the contextual data indicates’ the modifiable object, nor ‘modifying based on the contextual data.’”. The Examiner disagrees. Applicant’s arguments are more limiting than as presented within the claim language within Claim 9. Wherein, claim language is silent regarding the more limiting subject matter of “user conversation, preferences, or intent” as inferred by Applicant. However, Applicant fails to view the teachings of communications (Sefeik; “Controls of virtual elements in the online game may be exercised through commands input by a given user through the game developer computing systems 104. The given user may interact with other users through communications exchanged within the virtual space. Such communications may include one or more of textual chat, instant messages, private messages, voice communications, and/or other communications. Communications may be received and entered by the users via their respective game developer computing systems 104. Communications may be routed to and from the appropriate users through server(s) (e.g., through the editor hub 106)” [¶ 0077]). The claim language of claim 9, is further silent regarding “semantic coupling” and even further regarding “semantic coupling between context and object selection”. Wherein, the term “indicates” is broader than the subject matter of “semantic coupling”, thus “contextual data indicates the at least one modifiable object” corresponds to updates (i.e. appearance changes) indicating at least one modified virtual object (Sefeik; “… the editor hub can transmit to a second developer system an indication of the selection by the first developer system … the second developer can know that the first developer system is currently working on the virtual object” [¶ 0103] and “… the editor hub can change the virtual object based on the received command by the first developer system” [¶ 0104]; wherein, the received command corresponds to “… avatar 110A makes modifications to the virtual content 302A to generate virtual content 310A (e.g. less eyes and different teeth) and avatar 110B makes modifications to virtual content 302B to generate virtual content 310B (e.g. a larger dragon)” [¶ 0091]). Additionally, as addressed in more detail above, commands are directed to modification of a virtual object and updating virtual objects performed by the editor hub are in response to a modification to virtual objects (and not based on a direct command to do so). Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 12, ¶ 2-6), that “… Sefeik's duplicated objects are temporary concurrency copies created for command-conflict resolution among developers, not contextual insertions of new content into a rendered scene. The process of para [0110-0117] merely handles simultaneous edit commands by generating ‘a first copy’ and ‘a second copy’ of an object to allow concurrent edits. These are not new objects relating to contextual data; they are redundant copies of existing development assets. There is no notion of identifying a ‘modifiable region’ in the virtual space based on contextual data. The spatial scope of modifications is determined solely by developer selection. 
Nor does Sefeik insert newly generated objects into a live immersive scene; rather, changes are reflected in immutable data stored in version control para [0094]. Hence, the claimed process of context-driven region selection and insertion of new context- derived content is absent from Sefeik.” The Examiner disagrees. Applicant arguments fail to view the teachings of the applied prior art. Wherein, ¶ 0110-0117 are not relied upon within the rejection of claim 10. And wherein, Applicant fails to consider changes made to a virtual object corresponds to creating a new object (Sefeik; “After the editor hub 410 executes the commands via the execute commands module 408, the file changes module 412 can change the immutable game data and store a new version of the immutable game data in the version control file storage 414. After the editor hub 410 executes the commands via the execute commands module 408, the data changes module 406 can modify the immutable game data and send the modified immutable game data to the build system 404. The build system can take the immutable game data changes and processes the immutable game data changes for varying platforms to produce a cache of the updates” [¶ 0094]; moreover, change the immutable game data and store a new version of the immutable game data in the version control file storage [id.]). Although, Applicant infers the changed virtual objects as temporary or redundant “copies”, this fails to view the version control file as new version of the immutable game data. Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 12, ¶ 7 to Page 13, ¶ 2), that “… The roles of director and developers in Sefeik differ fundamentally from the claimed presenter and attendees. The director in Sefeik merely exercises edit approval authority over game data para [0091]. The developers collaborate to modify content assets; they are all co-authors, not passive viewers. In contrast, the claimed invention defines an immersive presentation setting, where a presenter delivers or controls content visible to one or more attendees, and the system adapts the rendered scene for these attendees based on contextual understanding. Sefeik does not disclose or contemplate a one-to-many presentation context, nor dynamic audience-specific rendering of immersive content. Therefore, the cited ‘director’ cannot reasonably be equated to a ‘presenter,’ and Sefeik fails to anticipate this social and functional configuration.” The Examiner disagrees. Applicant fails to fully view the teachings of the applied prior art. Wherein, Sefeik et al. teaches a collaborative environment which allows for both roles of presenter and attendee (Sefeik; “The avatars 110 can be seen by each of the game developers 101. 
The avatars 110 can help the game developers 101 identify what the other game developers 101 are focusing on in the virtual space 108 … The game developer 101A can see the other avatars 110B, 110C, 110D of the other game developers 101B, 101C, 101D” [¶ 0052 and ¶ 0054]; moreover, “The avatars can bring remote developers into a close virtual proximity creating a collaborative environment, as if the developers were in a meeting room working together” [¶ 0041]) depending on the situation (Sefeik; [¶ 0044-0045, ¶ 0052, and ¶ 0089-0091]; moreover, “… the first game developer system to edit the first virtual object in the virtual development space; initiate a change to the first virtual object in the immutable game data corresponding to the first command to generate an updated first virtual object; and transmit the updated first virtual object to the second game developer system” (i.e. one developer present a change to a VR object and another developer witness said change via a transmitted update) [¶ 0013]; such that, one developer is able to present changes to a VR object while the other is made aware of said changes in a passive role (wherein they cannot make changes to the same VR object) [¶ 0103-0104]). Additionally, Applicant fails to view a developer (110B) in the role of presenter because the developer (110B) is presenting virtual object (302B/308B) to another developer (110C) in the role of attendee because the other developer (110C) is not presenting a virtual object and has the presented virtual object (302B/308B) within view, as illustrated within Fig. 3A. Let alone, a developer (110B) in the role of presenter uses a laser pointer to indicate their presentation of virtual object (114) to another developer (110C) in the role of attendee (Sefeik; “The indication 116 can include a pointer, such as a laser pointer, from the associated avatar to the virtual content” [¶ 0054]), as illustrated within Fig. 1. Therefore, Applicant’s arguments above are not persuasive. Applicant argues (Remarks, Page 13, ¶ 3-6), that “The ‘approval’ in Sefeik and the ‘recommendation’ in the claimed invention are distinct in nature, purpose, and timing. In Sefeik, the ‘director’ approves modifications to immutable game data as part of a version- control workflow. Approval determines whether edits are committed to the repository-not whether a live scene should be rendered. The claimed invention, by contrast, generates recommendations for dynamic scene rendering based on contextual inferences (for instance, adjusting visuals during a live immersive session) and may optionally prompt the presenter to apply them in real time. Sefeik's approval process is purely administrative and occurs prior to compilation; it is not a user-experience-level recommendation within an ongoing immersive rendering pipeline. Thus, the feature of recommendation-based dynamic rendering control is not disclosed or suggested in Sefeik.” The Examiner disagrees. Applicant’s arguments infer a narrower view than the claimed limitation(s). Wherein, an approval corresponds to a proposal as to a best course of action (Merriam-Webster; “to endorse (someone) as fit, worthy, or competent” or “to suggest (an act or course of action) as advisable” [https://www.merriam-webster.com/dictionary/recommending]) given the broadness of the term “recommendation”. However, the teaching of indication by Sefeik et al. 
corresponds to a recommendation as to where the developers are suggested to direct their focus/attention (Sefeik; [¶ 0054]; moreover, “The avatars can stand near virtual objects and make selections of the virtual objects. The editor hub can generate an indication of a selection of a virtual object and broadcast the selection to other developers, showing the intent of the game developer to make a modification on a virtual content in the immutable game data” [¶ 0041]). Additionally, the claim language of claim 12 is silent regarding the provided recommendation being from a “user-experience-level” as inferred by Applicant. Lastly, the option of/for selection/modification triggers updating of the virtual object by the editor hub (Sefeik; [¶ 0053-0054 and ¶ 0090-0091]; moreover, “The selection is displayed via the box 308A indicating to the other game developers that the virtual content 302A has been selected” [¶ 0090], such that “… the game developer computing system 104B, via the avatar 110B and the virtual reality controller 112B, can select 118 a dragon 114 in the virtual space 108. The command for the selection 118 can be sent to the editor hub 106, which can generate an indication 116 of the selection 118 to be sent to the other game developer computing systems. The indication 116 of the selection 118 can be sent to the other game developer computing systems 104A, 104C, 104D” [¶ 0054]). Therefore, Applicant’s arguments above are not persuasive. In summary: Applicant’s arguments try to incorporate more narrow limitations and/or meaning than the BRI of the terms argued (MPEP; [2111]). Although the Specification discloses distinguishable subject matter over the applied prior art, the claim language fails to reflect this. The Examiner suggests Applicant incorporate more detail. For example, Applicant argues “dynamically retrieve contextual data”, however the claim language fails to incorporate what or how the retrieval is dynamic. Applicant appears to use the term continuous interchangeably with dynamic, however this would be improper. Additionally, the limitation of “contextual data” (claim 7) is broader than “real-time conversation between at least two users amongst the plurality of users” (Spec. [¶ 00034]). Conversely, if the two limitations were equivalent or apparent, as inferred by Applicant, then amending “contextual data” to “real-time conversation between at least two users amongst the plurality of users” would create no material difference to the claim’s subject matter. And yet, the Specification defines contextual data as several things for which it “is not limited to” (Spec. [¶ 00034]). Claim 7 relies on alternative language (“at least one of”) when defining contextual data (i.e. “… wherein the contextual data comprises at least one of context of the real-time scene, real-time conversation between at least two users amongst the plurality of users, type of the at least one user, preferences and inputs of the at least one user, and intent related to rendering of the real-time scene to the at least one user;”). Thus, the applied prior art is only required to address one limitation from the group, which in this case is “context of the real-time scene” (Merriam-Webster; “the situation in which something happens” [https://www.merriam-webster.com/dictionary/context]) (see responses above regarding contextual data). 
Still further, Applicant dismisses the developer environment as a real-time scene, even though it is a collaboration environment in real-time with feedback and conversation between users. Applicant also argues an end-user even though the claim simply recites a “user” or “users” without an additional limitation(s) to indicate the type of user. Applicant argues a rendering pipeline or ordered rendering sequence, however claim 7 and/or claim 12 have no terminology that would indicate any such sequence of events are performed in an order, directly after one another, or without any intermediate actions. Lastly, terms such as dynamic, context, real-time, modification (thereof), and/or replacement (thereof) are broad within a computer generated environment (e.g. dynamically retrieving corresponds to retrieving data just twice, multiple times, or at least two different types of data within a few seconds or a few minutes; in either of the scenarios the data is dynamically retrieved. And, real-time for a computer environment can be within seconds or minutes depending on the subjective expectation of the operation or operations involved). Therefore, it is suggested that Applicant incorporate additional and/or more limiting subject matter, in order to better distinguish the claimed invention from the applied prior art. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). Claim Rejections - 35 USC § 102 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention. Claim(s) 1-18 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sefeik et al., US PGPUB No. 20200139248 A1, hereinafter Sefeik. Regarding claim 7, Sefeik discloses a processing unit for rendering a modified scene to a user in an immersive environment (Sefeik; a processing unit (i.e. computer system) for rendering a modified scene to a user in an immersive environment [¶ 0013, ¶ 0122, and ¶ 0133], as illustrated within Fig. 9; moreover, editor hub [¶ 0050 and ¶ 0063-0065] in relation with creating a collaborative environment [¶ 0038, ¶ 0041, and ¶ 0048], as illustrated within Fig. 1), the processing unit comprises: one or more processors (Sefeik; the processing unit [as addressed above] comprises one or more processors [¶ 0013, ¶ 0122, and ¶ 0133]); and a memory communicatively coupled to the one or more processors (Sefeik; a memory communicatively coupled to the one or more processors [¶ 0122, ¶ 0127-0128, and ¶ 0131], as illustrated within Fig. 
9), wherein the memory stores processor-executable instructions, which, on execution, cause the one or more processors (Sefeik; the memory stores processor-executable instructions, which, on execution, cause the one or more processors to perform operations [¶ 0050 ¶ 0065, ¶ 0133, and ¶ 0136]) to: dynamically retrieve contextual data associated with at least one user amongst a plurality of users in an immersive environment (Sefeik; the processor [as addressed above] (configured) to dynamically retrieve contextual data associated with at least one user amongst a plurality of users in an immersive environment [¶ 0088-0091]; moreover, user commands for modifying content [¶ 0092-0094, ¶ 0100, and ¶ 0103]), during rendering of a real-time scene to the at least one user within the immersive environment (Sefeik; during rendering of a real-time scene to the at least one user within the immersive environment [¶ 0043-0045 and ¶ 0067]; moreover, data changes are processed during operation [¶ 0092-0094], as illustrated within Fig. 4), wherein the contextual data comprises at least one of context of the real-time scene (Sefeik; the contextual data comprises (at least one of) context of the real-time scene [¶ 0089-0091]; moreover, real-time experience [¶ 0043-0045 and ¶ 0067]), real-time conversation between at least two users amongst the plurality of users, type of the at least one user, preferences and inputs of the at least one user, and intent related to rendering of the real-time scene to the at least one user; analyze the real-time scene to identify one or more modifiable contents within the real-time scene (Sefeik; the processor [as addressed above] (configured) to analyze the real-time scene to identify one or more modifiable contents within the real-time scene [¶ 0044-0045 and ¶ 0089-00091], as illustrated within Figs. 3A-B); modify the one or more modifiable contents based on the contextual data, to obtain one or more modified contents (Sefeik; the processor [as addressed above] (configured) to modify the one or more modifiable contents based on the contextual data to obtain one or more modified contents [¶ 0089-0091], as illustrated within Figs. 3A-B); augment the real-time scene with the one or more modified contents to output a modified scene of the immersive environment (Sefeik; the processor [as addressed above] (configured) to augment the real-time scene with the one or more modified contents [¶ 0089-0091] to output a modified scene of the immersive environment [¶ 0092-0094]); and dynamically render the modified scene to the at least one user in the immersive environment, by replacing the real-time scene with the modified scene (Sefeik; the processor [as addressed above] (configured) to dynamically render the modified scene to the at least one user in the immersive environment by replacing the real-time scene with the modified scene [¶ 0090-0091 and ¶ 0093-0094]). Regarding claim 8, Sefeik further discloses the processing unit of claim 7, wherein the one or more modifiable contents comprises at least one of: one or more modifiable objects amongst plurality of objects present in the real-time scene (Sefeik; the one or more modifiable contents [as addressed within the parent claim(s)] comprises (at least one of) one or more modifiable objects amongst plurality of objects present in the real-time scene [¶ 0090-0091 and ¶ 0094]); and one or more modifiable regions within the real-time scene. 
Regarding claim 9, Sefeik further discloses the processing unit of claim 8, wherein the processing unit is configured to modify the one or more modifiable contents (Sefeik; the processing unit (i.e. computer system) is configured to modify the one or more modifiable contents [as addressed within the parent claim(s)]) by: selecting at least one modifiable object amongst the one or more modifiable objects (Sefeik; modifiable contents [as addressed above] by selecting at least one modifiable object amongst the one or more modifiable objects [¶ 0090-0091 and ¶ 0103-0107]), wherein the contextual data indicates the at least one modifiable object (Sefeik; the contextual data indicates the at least one modifiable object [¶ 0066 and ¶ 0103-0107]); and modifying the at least one modifiable object based on the contextual data to obtain the one or more modified contents (Sefeik; modifiable contents [as addressed above] by modifying the at least one modifiable object based on the contextual data to obtain the one or more modified contents [¶ 0103-0107]; moreover, user made modifications in order to have modified content [¶ 0090-0091 and ¶ 0093-0094]). Regarding claim 10,Sefeik further discloses the processing unit of claim 8, wherein the processing unit is configured to modify the one or more modifiable contents (Sefeik; the processing unit (i.e. computer system) is configured to modify the one or more modifiable contents [as addressed within the parent claim(s)]) by: selecting at least one modifiable region amongst the one or more modifiable regions, based on the contextual data (Sefeik; modifiable content [as addressed above] by selecting at least one modifiable region amongst the one or more modifiable regions based on the contextual data [¶ 0090-0091 and ¶ 0103-0107]); creating one or more new objects relating to the contextual data (Sefeik; modifiable content [as addressed above] by creating one or more new objects relating to the contextual data [¶ 0042 and ¶ 0090-0091], as illustrated within Figs. 3A-B; moreover, new version of data [¶ 0094]); and inserting the one or more new objects within the at least one modifiable region to obtain the one or more modified contents (Sefeik; modifiable content [as addressed above] by inserting the one or more new objects [¶ 0042 and ¶ 0093-0094] within the at least one modifiable region to obtain the one or more modified contents [¶ 0090-0091]). Regarding claim 11, Sefeik further discloses the processing unit of claim 7, wherein the plurality of users comprises a presenter and one or more attendees (Sefeik; the plurality of users comprises a presenter (i.e. user 1) and one or more attendees (i.e. user 2) [¶ 0048-0049, ¶ 0052, and ¶ 0093], as illustrated within Fig. 1 and Fig. 4). Regarding claim 12, The processing unit of claim 11, wherein, when at least one user is one of the presenter and an attendee amongst the one or more attendees (Sefeik; at least one user is one of the presenter (i.e. user 1) and an attendee (i.e. user 2) amongst the one or more attendees [¶ 0048-0049, ¶ 0052, and ¶ 0093], as illustrated within Fig. 1 and Fig. 
4), dynamically rendering the modified scene (Sefeik; [¶ 0041, ¶ 0054, and ¶ 0093-0094]) comprises: providing a recommendation to the presenter, indicating the modified scene, wherein, optionally, an option is provided along with the recommendation (Sefeik; providing a recommendation to the presenter, indicating the modified scene, wherein, optionally, an option is provided along with the recommendation [¶ 0054 and ¶ 0090-0091]), said option prompts dynamic rendering of the modified scene to the at least one user (Sefeik; said option prompts dynamic rendering of the modified scene to the at least one user [¶ 0053-0054 and ¶ 0090-0091]). Regarding claims 1-6, the rejection of claims 1-6 is addressed within the rejection of claims 7-12, due to the similarities claims 1-6 and claims 7-12 share; therefore refer to the rejection of claims 7-12 regarding the rejection of claims 1-6. Regarding claims 13-18, the rejection of claims 13-18 is addressed within the rejection of claims 7-12, due to the similarities claims 13-18 and claims 7-12 share; therefore refer to the rejection of claims 7-12 regarding the rejection of claims 13-18. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Baszucki et al. (US PGPUB No. 20220277505 A1); Lu et al. (US PGPUB No. 20250053926 A1); and Pejsa et al. (US PGPUB No. 20210350604 A1). Refer to PTO-892, Notice of References Cited, for a listing of analogous art. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Charles Lloyd Beard whose telephone number is (571)272-5735. The examiner can normally be reached Monday - Friday, 8:00 AM - 5:00 PM, alternate Fridays EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. CHARLES LLOYD BEARD Primary Examiner Art Unit 2611 /CHARLES L BEARD/Primary Examiner, Art Unit 2611
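For orientation, the claim 7 limitations quoted in the rejection recite a five-step flow: retrieve contextual data during rendering, analyze the real-time scene for modifiable contents, modify those contents based on the context, augment the scene, and render the modified scene in place of the original. The sketch below restates that flow in Python purely for readability; every name, type, and placeholder body is an assumption made here, not the applicant's implementation and not anything disclosed in Sefeik.

```python
# Illustrative restatement of the claim 7 flow quoted in the rejection above.
# All names, types, and placeholder logic are assumptions for readability.
from dataclasses import dataclass, field


@dataclass
class ContextualData:
    # Claim 7 lists these alternatives with "at least one of", so any
    # single populated field would satisfy the limitation.
    scene_context: str | None = None
    conversation: str | None = None
    user_type: str | None = None
    preferences: dict = field(default_factory=dict)
    rendering_intent: str | None = None


def retrieve_contextual_data(user: str, scene: dict) -> ContextualData:
    # Placeholder: a real system would infer this during rendering.
    return ContextualData(scene_context=scene.get("name"), user_type=user)


def analyze_scene(scene: dict) -> list[str]:
    # Placeholder scene analysis: treat pre-flagged objects as modifiable.
    return [obj for obj in scene["objects"] if obj in scene["modifiable"]]


def render_modified_scene(scene: dict, user: str) -> dict:
    context = retrieve_contextual_data(user, scene)          # step 1: retrieve context
    modifiable = analyze_scene(scene)                        # step 2: identify modifiable contents
    modified = {obj: f"{scene['objects'][obj]} (adapted for {context.user_type})"
                for obj in modifiable}                       # step 3: modify based on context
    augmented = {**scene["objects"], **modified}             # step 4: augment the scene
    return {**scene, "objects": augmented}                   # step 5: rendered in place of the original


if __name__ == "__main__":
    scene = {"name": "demo-room",
             "objects": {"dragon": "green dragon", "floor": "stone floor"},
             "modifiable": {"dragon"}}
    print(render_modified_scene(scene, user="attendee"))
```

Read against this sketch, the dispute in the final rejection is whether Sefeik's developer-issued edit commands and field-of-view updates can satisfy steps 1-3, or whether those steps require context acquired without an explicit user instruction, as the applicant argues.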

Prosecution Timeline

Oct 19, 2023
Application Filed
Sep 06, 2025
Non-Final Rejection — §102
Dec 09, 2025
Response Filed
Mar 12, 2026
Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579729
VOLUMETRIC VIDEO SUPPORTING LIGHT EFFECTS
2y 5m to grant • Granted Mar 17, 2026
Patent 12548225
AUDIO OR VISUAL INPUT INTERACTING WITH VIDEO CREATION
2y 5m to grant • Granted Feb 10, 2026
Patent 12519924
MULTI-PERSPECTIVE AUGMENTED REALITY EXPERIENCE
2y 5m to grant • Granted Jan 06, 2026
Patent 12511801
GENERATING VIDEO STREAMS TO DEPICT BOT PERFORMANCE DURING AN AUTOMATION RUN
2y 5m to grant • Granted Dec 30, 2025
Patent 12513279
STEREOSCOPIC VIDEO DISPLAY DEVICE, STEREOSCOPIC VIDEO DISPLAY METHOD, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant • Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 99% (+36.1%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 350 resolved cases by this examiner. Grant probability derived from career allow rate.
