Prosecution Insights
Last updated: April 19, 2026
Application No. 18/315,719

Embedding Digital Signatures with Content Created by Users Sharing a Virtual Environment

Final Rejection (§103)
Filed: May 11, 2023
Examiner: KALHORI, DAN F
Art Unit: 2618
Tech Center: 2600 — Communications
Assignee: Google LLC
OA Round: 2 (Final)
Grant Probability: 100% (Favorable)
OA Rounds: 3-4
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 100% (3 granted / 3 resolved), +38.0% vs TC avg (grants above average)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Avg Prosecution: 2y 7m (typical timeline); 19 applications currently pending
Total Applications: 22 across all art units (career history)

Statute-Specific Performance

§101: 10.9% (-29.1% vs TC avg)
§103: 71.9% (+31.9% vs TC avg)
§102: 4.7% (-35.3% vs TC avg)
§112: 12.5% (-27.5% vs TC avg)
Figures are compared against Tech Center average estimates and are based on career data from 3 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

This action is in response to the amendment filed on October 1, 2025. Claims 1, 9, 11, 14, 16, and 18-20 have been amended. Claims 1-20 remain rejected in the application.

Response to Arguments

With respect to claim 1, in item (4), Applicant argues that the "configuration data" of Goncalves (¶0059) cannot reasonably correspond to "contextual information associated with the second user." This argument has been fully considered but is not persuasive. Goncalves teaches that the received configuration data can include user-selected predetermined styles and other configuration data (sizes, color ranges, resolutions) used to generate/display the digital item (¶0059); these user-selected preferences constitute "contextual information associated with" the user that is used in generation.

Applicant's remaining arguments regarding the prior §103 rejection are persuasive in view of the amendments to claim 1, and the Examiner has introduced new grounds of rejection based on new references to address the amended limitations. Regarding the arguments directed to independent claims 16 and 20: these claims have been amended in a manner analogous to claim 1, and, for the reasons discussed above, the prior §103 rejections of claims 16 and 20 are not maintained and new grounds of rejection are set forth below. Applicant's argument that the dependent claims are in condition for allowance for the reasons given for the corresponding independent claims is not persuasive because the independent claims are not allowed; the dependent claims therefore remain rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 13, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves et al. (US 20230077278 A1), hereinafter "Goncalves", Geraghty (US11087479B1), and Misung (KR20090044438A).

Regarding claim 1, Goncalves teaches a computer-implemented method, comprising: providing, by a computing system, a virtual three-dimensional (3D) environment (Goncalves ¶0003 "An artificial reality (XR) device, such as an augmented reality (AR) device, mixed reality (MR), or virtual reality (VR) device, can be used to display additional content over a depiction of a real-world environment. For instance, users on an XR device can view objects in an environment or perform social interactions on a social media platform via the XR device." The computing system is the XR device, which provides access to an environment where users can view objects or perform interactions); receiving, by the computing system, a first input associated with the first user (Fig. 7, ¶0003 "For example, a user can instantiate these virtual objects by activating an app and telling the app to create the virtual object, and using the virtual object as an interface back to the app."
A user tells (input) the app to create the virtual object) to cause a generative machine-learned model to generate a first virtual object within the virtual 3D environment (Goncalves describes at ¶0069, "One or more generative machine learning models can be used to generate the data file/representation of the digital item according to the digital item definitions," that one or more generative machine learning models can be used to generate the data file/representation of the virtual item. This teaches receiving an input used to cause a generative model to generate a virtual object within a digital environment.); receiving, by the computing system, a second input associated with the second user, to cause the generative machine-learned model to generate a second virtual object within the virtual 3D environment (the second user repeating the method discussed previously for the first user's input); the second virtual object being generated based on the second input and contextual information associated with the second user (¶0059 "In some implementations, the received configuration data can include selection of one or more predetermined styles. For example, a style library can include predetermined styles that correspond to predetermined configuration data, such as sizes, color ranges, resolutions, and other suitable configuration data." The configuration-data selection (input) from a style library, which can include preferences such as size and colors, corresponds to contextual information used in generation by that user.).

However, Goncalves does not explicitly disclose wherein a first user and a second user simultaneously view the virtual 3D environment and interact with the virtual 3D environment. Geraghty describes (Fig. 1 and pg. 16 col. 4 lines 15-34) a "multi-user environment" where an artificial reality application "may present artificial reality content to each of users 110A, 110B… based on a current viewing perspective…for the respective user" and (col.
4 lines 1-14) uses HMD/controller/image-capture data to capture motion/tracking and render artificial reality content having virtual objects. This teaches a virtual environment simultaneously presented to multiple users for viewing and interaction. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the XR/digital item generation system of Goncalves with the multi-user artificial reality environment of Geraghty in order to provide a multi-user virtual environment that multiple users can simultaneously view and interact with, thereby providing ease of interaction and collaboration.

However, Goncalves in view of Geraghty does not explicitly disclose the first virtual object being embedded with a first unique digital signature assigned to the first user; the second virtual object being embedded with a second unique digital signature assigned to the second user; and causing, by the computing system, a third unique digital signature to be associated with the virtual 3D environment including the first virtual object and the second virtual object, the third unique digital signature being assigned to the first user and the second user. Misung describes (1.3.1 3D Secret Watermark Insertion Procedure and 2.2 How To Insert Multiple Watermarks) that each producer selects a secret key Sk_i and inserts the producer's own secret watermark, W_i, using the producer's secret key, and further describes (2.1 Secret Key Sharing Protocol, including 2.1.1 and 2.1.2) generating a shared secret key SS_k from multiple users' secret keys using a key sharing protocol and (2.2 How to Insert Multiple Watermarks) inserting the joint watermark W into the entire work, I_T, using SS_k. This teaches embedding a user-assigned unique signature in each user's contribution/object and embedding a third joint signature associated with the combined VR data/environment and assigned to multiple users.
It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the system of Goncalves in view of Geraghty with the watermarking/signature techniques of Misung in order to embed individual and joint ownership markers, providing secure and efficient attribution/ownership verification to users. Claim 16 has similar limitations as claim 1 and is therefore rejected under the same rationale.

Regarding claim 2, Goncalves in view of Geraghty and Misung teaches the computer-implemented method of claim 1, wherein one or more of the first unique digital signature, the second unique digital signature, and the third unique digital signature comprises a non-fungible token. As described for claim 1, the combination discloses (Goncalves; Fig. 7, ¶0084, see claim 1) generating NFTs with metadata, such as ownership or location, to associate users or objects with a digital environment. Claim 17 has similar limitations as claim 2 and is therefore rejected under the same rationale.

Regarding claim 13, Goncalves in view of Geraghty and Misung teaches the computer-implemented method of claim 1, wherein the virtual 3D environment includes a virtual reality environment or an augmented reality environment. As discussed previously, Goncalves describes the use of an artificial reality (XR) device, such as an augmented reality (AR) device, mixed reality (MR) device, or virtual reality (VR) device (see claim 1).

Claims 3-8 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves et al. (US 20230077278 A1), hereinafter "Goncalves", Geraghty (US11087479B1), Misung (KR20090044438A), and Quigley et al. (WO 2022204404 A1), hereinafter "Quigley".
Regarding claim 3, Goncalves in view of Geraghty and Misung teaches the computer-implemented method of claim 1, but does not explicitly disclose further comprising, in response to generating the second virtual object, providing the second user access to another virtual 3D environment using the second unique digital signature or using a separate property embedded with the second virtual object. However, Quigley teaches (¶1080 "Viewer devices 9406 may be used to access and output (e.g., display on a screen, play through speakers, etc.) encrypted digital assets using a DRM NFT." and "The digital assets may include, for example, audio data, video data, image data, digital trading cards, digital artwork, digital photos, digital videos, video games, video game characters, video game levels, video game items, video game skins, VR items or locations, songs, albums, podcasts, audio recordings, files, virtual representations of items, and/or any other digital assets.") that NFTs may be used to control access to digital content including video game levels and VR locations (another virtual 3D environment). Creation of the NFT would automatically (in response) grant the owner access rights. As previously discussed, Goncalves teaches more than one user able to generate objects, as well as NFTs for the objects, which implies the second user (see claim 1). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to apply the NFT access control taught by Quigley to the user-object and NFT generation method of Goncalves, as NFTs commonly serve as digital records and keys that provide owners with access as set by rules stored on the blockchain; this merely amounts to applying a known technique to a known method ready for improvement to yield the predictable results of improved security/licensing/ownership and the known advantages of blockchain technology.
Regarding claim 4, the claim recites the computer-implemented method of claim 3, further comprising: identifying, by the computing system, the another virtual 3D environment as a virtual 3D environment to provide access to the second user to, based on at least one of the second input associated with the second user or the contextual information associated with the second user. Goncalves, in view of Geraghty, Misung, and Quigley, further teaches (Goncalves; ¶0087 "A product recommendation system (hereinafter "recommendation system") can generate recommendations for one or more products and provide those recommendations in real time. Herein, the term "product" can include, for example, an item to be purchased, a movie or video selection, a restaurant selection, an action to be taken in a social media environment, a virtual object in an AR/VR environment, etc. In various implementations, the recommendations can be made using a current context of user activities (e.g., shopping, browsing online, engaging in an AR/VR environment, interacting on social media, etc.)") a recommendation system that uses various contextual information about user activities to make a recommendation. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the recommendation system of Goncalves in view of Geraghty and Misung to the access rights of Quigley, with the motivation of providing the predictable result of a further personalized experience.

Regarding claim 5, Goncalves in view of Geraghty, Misung, and Quigley describes the computer-implemented method of claim 4, wherein the separate property includes a digital utility token.
(Goncalves; ¶0084, see claim 1, and "The NFT(s) for the digital item(s) can support transactions for the digital item(s) using a blockchain ledger (e.g., after exportation to the digital environment).") Goncalves teaches generating NFTs in addition to the digital signatures, which are commonly used as proof of ownership/transactions/etc. (utility token). Claim 8 has similar limitations as claim 5 and is therefore rejected under the same rationale.

Regarding claim 6, Goncalves in view of Geraghty and Misung teaches the computer-implemented method of claim 1, but does not explicitly disclose further comprising: in response to the generation of the second virtual object, providing the second user access to a real-world experience using the second unique digital signature or using a separate property embedded with the second virtual object. However, Quigley teaches (¶0428 "The present disclosure relates to a tokenization platform that enables the creation of tokenized virtual representations of items (also referred to as "VIRLs"), such as goods, services, and/or experiences. As used herein the term "item" may refer to a digital asset (e.g., gift card, digital music file, digital video file, software, digital photograph, etc.), physical good, digital service (e.g., video streaming subscription), physical service (e.g., chauffeur service, maid service, dry cleaning service), and/or purchased experience (e.g., hotel package, concert ticket, airline ticket, etc.), or any combination thereof.") a token being created (generation) for both virtual and physical goods, including providing access to concerts (real-world experience). It would have been obvious to a person having ordinary skill in the art to modify the teachings of Goncalves in view of Geraghty and Misung with the teachings of Quigley so that each performs its intended function.
Regarding claim 7, Goncalves, in view of Geraghty, Misung, and Quigley, further teaches the computer-implemented method of claim 6, further comprising: identifying, by the computing system, the real-world experience as a real-world experience to provide access to the second user to, based on at least one of the second input associated with the second user or the contextual information associated with the second user. Goncalves describes a recommendation system (identifying the experience) that can provide real-world recommendations such as restaurants, shopping, or engaging in an AR environment (see claim 4 for detailed analysis).

Claims 9-12 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Goncalves et al. (US 20230077278 A1), hereinafter "Goncalves", Geraghty (US11087479B1), Misung (KR20090044438A), and Kapur et al. (US 20230070586 A1), hereinafter "Kapur".

Regarding claim 9, Goncalves, in view of Geraghty and Misung, discloses the computer-implemented method of claim 1: Goncalves teaches generating a virtual object based on user input to a computer system and associating the object with an NFT (see claim 1). Regarding further comprising receiving, by the computing system, a third input associated with the first user, Goncalves teaches (¶0070 "Editor 614 can adapt the generated digital items according to the style protocols for the selected digital environments.
For example, one or more models (e.g., machine learning models) can be configured to adjust characteristics of the data file that defines the visual representation of the digital items.” and “Adapting a version of a digital item to meet a style protocol can include any other suitable user input for altering the version of the digital item.”) that the user can provide additional inputs (third input) to modify the object, but does not explicitly disclose to cause the generative machine-learned model to modify the second virtual object to generate a modified second virtual object within the virtual 3D environment, the modified second virtual object being generated based on the third input and contextual information associated with the first user. However, Kapur teaches (¶0457 “For example, a newly generated NFT could include visual artwork generated by applying techniques such as Style Transfer to a user's photos, and/or an NFT owned by a music fan could evolve to contain an iteratively growing poem whose lines are generated by a language model trained and/or fine-tuned on lyrics by the musicians the user listens to most often.”) generation using a language model and (Fig. 
37, 38, ¶0429 "As described, evolution of an NFT can occur when at least one content element is replaced by another content element, where both content elements may be associated with an NFT." and "Evolution can include peeling where a triggering event can lead to a change where a user action can be performed for the purposes of initiating change."), which describes modification of an object and/or replacement of the content (modified second object) as well as a trigger event (¶0430 "A trigger event can include a public event (e.g., one that is recorded on a blockchain, a user input or action, a random coin toss, and/or a combination of such) and/or private events.") where a user can trigger the evolution of the content with an input (the first user's), and ("The process can generate 3803 content for an NFT based on detecting one or more trigger events and the determined user profile.") generates modified new content (modified second virtual object) based on the input and user context (first user). Kapur also teaches the modified second virtual object being embedded with a fourth unique digital signature assigned to the first user (Kapur; ¶0200 "NFTs minted in accordance with several embodiments of the invention may incorporate a series of instances of digital content elements in order to represent the evolution of the digital content over time. Each one of these digital elements can have multiple numbered copies, just like a lithograph, and each such version can have a serial number associated with it, and/or digital signatures authenticating its validity. The digital signature can associate the corresponding image to an identity, such as the identity of the artist."). This teaches the digital signature being assigned to the creator/user: the modified content (second virtual object) is minted with a digital signature that corresponds to the identity of the creator.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Goncalves in view of Geraghty and Misung with the teachings of Kapur to allow the ability to modify existing digital objects, including those created by other users, based on input and contextual information, and to associate a digital signature of the new creator with the updated object, in order to enable collaboration while tracking authorship. Claim 18 has similar limitations as claim 9 and is therefore rejected under the same rationale.

Regarding claim 10, Goncalves in view of Geraghty and Misung, and in further view of Kapur, teaches the computer-implemented method of claim 9, further comprising: obtaining, by the computing system, the contextual information associated with the first user based on at least one of a user profile associated with the first user, preferences associated with the first user, or information about the first user obtained from an external source. As discussed for claim 9, Kapur teaches (Kapur; Fig. 38, ¶0430 "The process can generate 3803 content for an NFT based on detecting one or more trigger events and the determined user profile.") generating content based on the determined user profile (user profile associated with the first user) and further teaches (¶0176 "The user profile can include data obtained from various different sources that can provide data regarding users, including digital wallets associated with the user and/or data regarding transactions related [to the] user.") that the user profile can include data obtained from various sources, including transaction data as well as associated digital wallets (contextual information).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Goncalves in view of Geraghty and Misung, and in further view of Kapur, to generate and/or modify digital objects, including those created by other users, based on input and contextual information, to provide a more customized, personalized, or relevant object for the user. Using contextual information from a user profile is an application of known techniques to yield predictable results.

Regarding claim 11, Goncalves in view of Geraghty and Misung discloses the computer-implemented method of claim 1, further comprising: receiving, by the computing system, a third input. As discussed for claim 1, Goncalves discloses the computing system receiving inputs, but not that the input is associated with an entity not simultaneously viewing the virtual 3D environment with the first user and the second user. The Examiner interprets that this input may come from any entity (including from the users viewing the 3D environment) so long as it is associated with an entity that is not simultaneously viewing the environment. However, Kapur teaches (¶0388 "Similarly, a text associated with a first token can be modified in the style associated with a second token. For example, token A may correspond to a recipe for pancakes, and token B may correspond to the book "One Hundred Years of Solitude" by author Gabriel Garcia Marquez; from this, the recipe for pancakes of token A may be expressed in the writing style exhibited in "One Hundred Years of Solitude", and embodied in token C.") associating the input with an external entity; in the example, a recipe for pancakes (input) is modified to use the style (associated) of author Gabriel Garcia Marquez (entity not viewing).
Kapur also teaches to cause the generative machine-learned model to modify the second virtual object to generate a modified second virtual object within the virtual 3D environment: as discussed for claim 9, Kapur describes generating content using a language model (¶0457) and modifying and/or replacing content (second object, ¶0429) via a trigger event (input, ¶0430). As to the modified second virtual object being generated based on the third input and contextual information associated with the entity, the content (object) is generated from the trigger event (input) and the determined user profile (context) (¶0430). Kapur does not explicitly disclose the modified second virtual object being embedded with a fourth unique digital signature assigned to the entity; however, at ¶0492, Kapur incorporates by reference in its entirety Jakobsson et al. (US 2022/0407702 A1, hereinafter "Jakobsson") for token creation and management. Jakobsson teaches (¶0107 "NFTs minted in accordance with several embodiments of the invention may incorporate a series of instances of digital content elements in order to represent the evolution of the digital content over time. Each one of these digital elements can have multiple numbered copies, just like a lithograph, and each such version can have a serial number associated with it, and/or digital signatures authenticating its validity. The digital signature can associate the corresponding image to an identity, such as the identity of the artist.") that the attached digital signature can be associated with the identity of the artist (entity). This teaches the digital signature being assigned to the entity (artist identity). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Goncalves in view of Geraghty and Misung with the teachings of Kapur, with the motivation of providing further ease of user customization and attribution.
Claim 19 has similar limitations as claim 11 and is therefore rejected under the same rationale.

Regarding claim 12, Goncalves in view of Geraghty, Misung, and Kapur teaches the computer-implemented method of claim 11, further comprising: obtaining, by the computing system, the contextual information associated with the entity based on information about the entity obtained from an external source. Kapur further teaches (Kapur; ¶0388 "This can be done using Natural Language Processing (NLP) technologies that extract a sentence structure model from a text, such as that of token B, and apply them to a target text, such as the pancake recipe of token A, or a textbook on the topic of woodworking techniques from the renaissance period.") that the NLP technologies obtain a sentence structure model (contextual information) from a text (external source). It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Goncalves in view of Geraghty and Misung with the teachings of Kapur in order to provide further personalization/customization.

Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Goncalves et al. (US 20230077278 A1), hereinafter "Goncalves", Geraghty (US11087479B1), Misung (KR20090044438A), and Singh et al. (US 20200302693 A1), hereinafter "Singh".
Regarding claim 14, as discussed previously, Goncalves in view of Geraghty and Misung teaches the computer-implemented method of claim 1, further comprising receiving, by the computing system, a third input associated with the first user: Goncalves teaches receiving an input associated with a user to generate an object (see claim 1) and that the user is able to provide additional input (third input) after the object is created (Goncalves; ¶0070, see claim 9). As to the modified virtual 3D environment being embedded with a fourth unique digital signature assigned to the first user, Goncalves teaches embedding digital objects with a digital signature (see claim 1), which can include large digital objects such as a virtual environment. The combination does not explicitly teach to cause the generative machine-learned model to modify the virtual 3D environment by changing an environmental condition of the virtual 3D environment to generate a modified virtual 3D environment. However, Singh teaches (Fig. 3, ¶0033 "In some embodiments, user device 210 may access a virtual environment generated and/or hosted by the virtual environment generation server 230. The user may interact with the virtual environment and provide user inputs (e.g., touch inputs, mouse inputs, voice inputs, gyroscope inputs through movement of the user device 210, etc.) to scroll/browse through the virtual environment, modify the virtual environment, obtain additional information regarding products in the virtual environment, etc.") the user being able to interact with and modify the virtual environment through input (change in environmental condition) and cause the virtual environment generator to create and/or change the current environment.
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Goncalves in view of Geraghty and Misung with the teachings of Singh to provide further user customization, allowing for the automation of personalization and/or interaction of the user environment and tracking/attribution.

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Goncalves et al. (US 20230077278 A1), hereinafter "Goncalves", Geraghty (US11087479B1), Misung (KR20090044438A), Singh et al. (US 20200302693 A1), hereinafter "Singh", and Niniane Wang (US 7269539 B2), hereinafter "Wang".

Regarding claim 15, Goncalves in view of Geraghty, Misung, and Singh teaches the computer-implemented method of claim 14 (see claim 14), but not explicitly wherein the environmental condition includes one or more of a lighting condition of the virtual 3D environment, a weather condition of the virtual 3D environment, a noise condition of the virtual 3D environment, a time of day in the virtual 3D environment, or a geographic location of the virtual 3D environment. However, Wang teaches (Fig. 7, col. 9 lines 39-43 "The user may alternatively select User-defined weather 613, which may launch a custom weather menu 701, illustrated in FIG. 7. Custom weather menu allows the user to change general weather conditions for any or all weather stations for which weather METARs may be received.") that the user may use a weather menu (input) to change the weather condition. It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Goncalves in view of Geraghty, Misung, and Singh with the weather conditions of Wang to provide further user customization, allowing for the automation of personalization and/or interaction of the user environment.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over (Goncalves et al.
US 20230077278 A1), hereinafter “Goncalves”, Geraghty (US11087479B1), Misung (KR20090044438A), (Kapur et al. US 20230070586 A1), hereinafter “Kapur”, (Singh et al. US 20200302693 A1), hereinafter “Singh”, and Quigley et al. WO 2022204404 A1, (hereinafter “Quigley”). Regarding claim 20, as previously discussed in claim 1, Goncalves in view of Geraghty and Misung teaches the limitations of claim 1 and therefore teaches the corresponding limitations of claim 20. Claim 20 further recites to change an environmental condition of the virtual 3D environment to generate a modified virtual 3D environment (Goncalves, in view of Geraghty and Misung, fails to teach, but Singh teaches (Fig. 3 ¶0033) a user being able to interact with and modify the virtual environment. This teaches changing an environmental condition of the virtual 3D environment to generate a modified virtual 3D environment. It would have been obvious to one of ordinary skill in the art, before the effective filing date, to modify the XR environment of Goncalves, in view of Geraghty and Misung, with the environmental condition modification of Singh to provide further user customization to allow for the automation of personalization and/or interaction of the user environment and tracking/attribution.), the modified virtual 3D environment being embedded with a second unique digital signature assigned to the first user (Goncalves in view of Geraghty, Misung, and Singh describes (Misung; 1.3.1 and 2.2) inserting Wi using Ski. 
This teaches embedding user-assigned watermarks/signatures in VR/3D data using each user's secret key.), receiving, by the computing system, a second input associated with the second user, to cause the generative machine-learned model to modify the first virtual object to generate a modified first virtual object (Goncalves in view of Geraghty, Misung, and Singh fails to teach this limitation, but Kapur teaches generating content using a language model (¶0457) and modifying and/or replacing content (second object; ¶0429) in response to a trigger event (input; ¶0430). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Goncalves in view of Geraghty, Misung, and Singh with the content generation of Kapur, in order to further ease user customization and attribution.), the modified first virtual object being embedded with a third unique digital signature assigned to the second user (Goncalves, in view of the combination and in further view of Misung, teaches (Misung; 1.3.1 and 2.2) embedding a user-assigned watermark/signature into a user's contribution/work using that user's secret key.), causing, by the computing system, a fourth unique digital signature to be assigned to the modified virtual 3D environment including the modified first virtual object, the fourth unique digital signature being assigned to the first user and the second user (Goncalves, in view of the combination and in further view of Misung, teaches (Misung; 2.1) generating a shared secret key from multiple users' keys and (2.2) inserting the joint watermark W into the entire work I_T using the shared key SSk.), and, in response to the generation of the first virtual object or the modified virtual 3D environment, providing access to one or more of a real-world experience or another virtual environment experience to the first user, using the first unique digital signature or the second unique digital signature (Goncalves in view of Geraghty, Misung, Singh, and Kapur fails to explicitly teach this limitation, but Quigley teaches (¶1080) that NFTs may be used to control access to digital content, including video game levels and VR locations (another virtual environment).).

It would have been obvious to one of ordinary skill in the art to use the (user-assigned) signature/identifier as an access-control credential as taught by Quigley, as there is a reasonable expectation of success and because doing so merely combines prior art elements according to known methods to yield predictable results.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DAN F KALHORI, whose telephone number is (571) 272-5475. The examiner can normally be reached Mon-Fri, 8:30-5:30 ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Devona Faulk, can be reached on (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAN F KALHORI/
Examiner, Art Unit 2618

/DEVONA E FAULK/
Supervisory Patent Examiner, Art Unit 2618
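The multi-user scheme the rejection attributes to Misung (each user's watermark W_i embedded with that user's secret key Sk_i, plus a joint watermark W embedded into the combined work I_T with a shared key SSk derived from the participants' keys) can be illustrated with a minimal sketch. Everything below is hypothetical: the HMAC-append construction, the key-derivation step, and all names are illustrative stand-ins, not Misung's actual watermarking algorithm.

```python
import hashlib
import hmac

TAG_LEN = 32  # SHA-256 digest length

def derive_shared_key(user_keys: list[bytes]) -> bytes:
    """Derive a shared secret SSk from the participants' keys.
    Hypothetical construction: hash of the keys in sorted order,
    so the result is independent of participant ordering."""
    h = hashlib.sha256()
    for k in sorted(user_keys):
        h.update(k)
    return h.digest()

def embed_signature(content: bytes, secret_key: bytes) -> bytes:
    """Append a keyed tag (HMAC-SHA256) to the content; stands in
    for embedding a watermark W with a secret key Sk."""
    tag = hmac.new(secret_key, content, hashlib.sha256).digest()
    return content + tag

def verify_signature(signed: bytes, secret_key: bytes) -> bool:
    """Check that the trailing tag matches the content under the key."""
    content, tag = signed[:-TAG_LEN], signed[-TAG_LEN:]
    expected = hmac.new(secret_key, content, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

# Per-user signatures on individual contributions:
alice_key, bob_key = b"alice-secret", b"bob-secret"
env = embed_signature(b"environment-bytes", alice_key)   # first user's environment
obj = embed_signature(b"virtual-object-bytes", bob_key)  # second user's object

# Joint signature over the combined work, using the shared key:
shared = derive_shared_key([alice_key, bob_key])
combined = embed_signature(env + obj, shared)

assert verify_signature(obj, bob_key)
assert not verify_signature(obj, alice_key)  # wrong key fails
assert verify_signature(combined, shared)
```

The order-independent key derivation mirrors the idea that the joint signature is attributable to both users jointly rather than to either one individually.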

Prosecution Timeline

May 11, 2023
Application Filed
Jun 26, 2025
Non-Final Rejection — §103
Sep 17, 2025
Interview Requested
Sep 23, 2025
Examiner Interview Summary
Oct 01, 2025
Response Filed
Dec 15, 2025
Final Rejection — §103
Jan 20, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12567392
METHOD FOR A TELEVISION TO ASSIST A VIEWER IN IMPROVING WATCHING EXPERIENCE IN A ROOM, AND A TELEVISION IMPLEMENTING THE SAME
2y 5m to grant Granted Mar 03, 2026
Patent 12469152
SYSTEM AND METHOD FOR THREE-DIMENSIONAL MULTI-OBJECT TRACKING
2y 5m to grant Granted Nov 11, 2025
Patent 12456181
METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR VERIFYING VIRTUAL AVATAR
2y 5m to grant Granted Oct 28, 2025
Based on the examiner's 3 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
100%
Grant Probability
99%
With Interview (+0.0%)
2y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 3 resolved cases by this examiner. Grant probability derived from career allow rate.
