Prosecution Insights
Last updated: April 19, 2026
Application No. 18/391,498

TECHNIQUES FOR SAMPLING AND REMIXING IN IMMERSIVE ENVIRONMENTS

Status: Non-Final OA (§103)
Filed: Dec 20, 2023
Examiner: LIU, ZHENGXI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Autodesk, Inc.
OA Round: 3 (Non-Final)

Predictions
Grant Probability: 64% (Moderate)
Expected OA Rounds: 3-4
Expected Time to Grant: 3y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (grants 64% of resolved cases; 225 granted / 354 resolved; +1.6% vs TC avg)
Interview Lift: +40.1% (strong; allow rate of resolved cases with vs. without an interview)
Typical Timeline: 3y 4m avg prosecution; 31 applications currently pending
Career History: 385 total applications across all art units
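The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic behind the two dashboard cards (the Tech Center baseline of 62.0% is an assumption back-derived from the stated "+1.6% vs TC avg" delta, not a figure from the source):

```python
# Derive the headline examiner statistics from raw case counts.
granted = 225          # applications allowed
resolved = 354         # allowed + abandoned (the 31 pending cases are excluded)
tc_avg_allow = 0.620   # assumed Tech Center 2600 baseline, implied by the +1.6% delta

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")            # ~63.6%, displayed as 64%
print(f"Delta vs TC avg:  {allow_rate - tc_avg_allow:+.1%}")
```

The "interview lift" card compares the same allow-rate ratio computed over two subsets of the 354 resolved cases (with vs. without an examiner interview); the dashboard reports the difference as +40.1 percentage points.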

Statute-Specific Performance

§101: 13.2% (-26.8% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§102: 5.1% (-34.9% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)
Tech Center average estimate shown for comparison • Based on career data from 354 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/16/2026 has been entered.

Claim Status

The response and arguments filed on 1/16/2026 have been received and considered. No claim has been added or cancelled. Claims 1, 11, and 20 have been amended. Claims 1-20 are pending. Claims 1-20 are rejected.

Response to Arguments

Applicant’s arguments and response have been entered and considered. The arguments are moot in view of the Examiner’s new ground of rejection based on a new additional reference.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 6, 8, 11-12, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Stanley et al. (US 20180114369 A1) in view of Berretti et al. (“3D Mesh decomposition using Reeb graphs”).
Regarding Claim 1, Stanley teaches A computer-implemented method for applying one or more samples in a three-dimensional (3D) immersive environment ([BRI on the record] With respect to the claimed “immersive environment,” the Examiner is reading the limitation to mean: an environment that contains a computer-generated 3D environment that includes one or more selectable 3D objects, e.g., a VR scene or AR scene. This interpretation is consistent with the specification: [0047] As used herein, an “immersive environment” (IE) comprises a computer-generated 3D environment that includes one or more selectable 3D objects. The 3D display can display a 3D immersive scene (such as a VR scene or AR scene) comprising a particular view of the immersive environment, depending on the position/location of the user viewpoint within the immersive environment. An immersive environment comprises one or more IE scenes, each IE scene comprising a particular sub-portion of the immersive environment that is currently displayed and viewed in the 3D display. Examples of a 3D immersive environment include a virtual environment generated by a VR interface, an augmented environment generated by an AR interface, augmented spaces with projections or displays (such as the immersive Van Gogh experience), and the like. Spec. ¶ 47.

[Mapping Analyses] “According to aspects of the technology described herein, the manner in which an object behaves in lighting conditions, the shininess of the object, and the texture of the object may all be selected, sampled, and transferred to other target surfaces, such as a canvas or the surfaces of other objects, in the virtual 3D drawing space.” Stanley ¶ 4. Stanley also teaches that the “virtual 3D drawing space” contains a selectable 3D object, stating “The user interface 100 includes a canvas 102 on which a virtual object 104 has been created. In this instance, the virtual object 104 is a crown. The virtual object 104 may be a 2D or a 3D object.” Stanley ¶ 15.
“The computing device can take the form of . . . a holographic display, a virtual reality headset, an augmented reality headset . . ..” Stanley ¶ 28.), the method comprising:

displaying a first 3D immersive environment that includes a first 3D object (Stanley fig. 3; “For example, in the user interface 300 of FIG. 3, the crown 204 is the source object, and the cube 302 is the target object.” Stanley ¶ 21. The first 3D object is mapped to the disclosed cube 302 in fig. 3.);

receiving a second 3D object (Stanley fig. 3; “For example, in the user interface 300 of FIG. 3, the crown 204 is the source object, and the cube 302 is the target object.” Stanley ¶ 21. The second 3D object is mapped to the disclosed crown 204 in fig. 3, which is different from the cube 302, mapped to the first 3D object.);

selecting a first sample from a sub-part (“As discussed with respect to FIG. 2, a portion of the surface of the crown 204 has been sampled. The color gradient in the color bar 216 indicates that both the color and other material properties of the crown 204 have been sampled.” Stanley ¶ 21.); and

applying the first sample to a first property of the first 3D object to generate a new 3D object (“According to aspects of the technology described herein, the manner in which an object behaves in lighting conditions, the shininess of the object, and the texture of the object may all be selected, sampled, and transferred to other target surfaces, such as a canvas or the surfaces of other objects, in the virtual 3D drawing space.” Stanley ¶ 12; see fig. 5 510.
The claimed “first property” includes the disclosed “texture” or “shininess.” The “other objects” (target objects), after the property modification, become the generated “new 3D object.”), wherein the first sample was captured from a different 3D object (“According to aspects of the technology described herein, the manner in which an object behaves in lighting conditions, the shininess of the object, and the texture of the object may all be selected, sampled, and transferred to other target surfaces, such as a canvas or the surfaces of other objects, in the virtual 3D drawing space.” Stanley ¶ 12. The claimed “different 3D object” is mapped to the disclosed “an object” that has been sampled. The object could be a 3D object, as Stanley states, “A method of transferring material properties of a three-dimensional (3D) source object to a 3D target object in a virtual 3D drawing space generated by a computer . . ..” Stanley Claim 1. The source objects (fig. 5 504, 506) are different 3D objects from the target objects (fig. 5 508, 510) with respect to sampling.).

Stanley does not explicitly disclose deconstructing the received second 3D object into a plurality of sub-parts, and selecting the first sample from the sub-part in the plurality of sub-parts. Berretti teaches:

deconstructing the received second 3D object into a plurality of sub-parts (“Decomposition of complex 3D objects into simpler sub-parts is a challenging research subject with relevant outcomes for several application contexts.” Berretti Abstract. “Several of these applications can benefit from the possibility to cut a 3D object model into simpler parts, a process that in the literature is referred to as 3D object decomposition.” Berretti 1. Introduction.); and

selecting the first sample from the sub-part in the plurality of sub-parts (After the combination of Stanley and Berretti, the sampling taught by Stanley is taken from any sub-part generated according to Berretti.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Berretti’s model decomposition with Stanley. One of ordinary skill in the art would be motivated to provide guidance/information related to areas for sampling. “Differently, in applications dealing with object modeling and retrieval, object decomposition may target the identification of parts with a clear semantics or parts that are perceptually meaningful, so as to enable reuse of object components in the development of new models or support searching by parts in content based retrieval of 3D objects.” Berretti 1. Introduction. The semantic information could be provided to a user. Further, Berretti teaches a wide range of benefits and applications of model decomposition. Berretti 1. Introduction.

Claim 11 recites similar limitations as Claim 1. The rejection analysis for Claim 1 also applies to Claim 11. In addition, Claim 11 recites, “One or more non-transitory computer-readable media including instructions that, when executed by one or more processors, cause the one or more processors to . . .” (Stanley fig. 6, ¶¶ 28, 38).

Claim 20 recites similar limitations as Claim 1. The rejection analysis for Claim 1 also applies to Claim 20. In addition, Claim 20 recites, “A computer system comprising: a memory that includes instructions; and at least one processor that is coupled to the memory and, upon executing the instructions, . . .” (Stanley fig. 6, ¶¶ 28, 38).
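The asserted Stanley-Berretti combination amounts to a decompose-then-sample pipeline: Berretti supplies the sub-parts, Stanley supplies the property sampling and transfer. A minimal sketch under the Examiner's mapping (all names and data are hypothetical illustrations; Berretti's actual Reeb-graph decomposition and Stanley's material model are far richer):

```python
from dataclasses import dataclass, field

@dataclass
class Object3D:
    name: str
    # Stanley-style material properties: e.g., color, shininess, texture
    properties: dict = field(default_factory=dict)
    sub_parts: list = field(default_factory=list)

def decompose(obj: Object3D) -> list:
    """Berretti-style step: split an object into sub-parts (stubbed here;
    Berretti derives the parts from a Reeb graph of the mesh)."""
    return obj.sub_parts

def sample(part: Object3D, prop: str):
    """Stanley-style step: capture a material property from a (sub-)object."""
    return part.properties[prop]

def apply_sample(target: Object3D, prop: str, value) -> Object3D:
    """Apply the captured sample to the target, yielding the 'new 3D object'."""
    target.properties[prop] = value
    return target

# Claim 1 as mapped: crown (second/different 3D object) is deconstructed,
# a sample is taken from a sub-part and applied to the cube (first 3D object).
crown = Object3D("crown", sub_parts=[Object3D("band", {"texture": "gold"}),
                                     Object3D("jewel", {"texture": "ruby"})])
cube = Object3D("cube", {"texture": "matte"})

band = decompose(crown)[0]                    # deconstruct into sub-parts
tex = sample(band, "texture")                 # select a sample from a sub-part
new_obj = apply_sample(cube, "texture", tex)  # generate the new 3D object
print(new_obj.properties["texture"])          # gold
```

The sketch makes the Examiner's combination logic concrete: the sampling operation is unchanged from Stanley; only its input is narrowed from a whole object to a Berretti-generated sub-part.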
Regarding Claim 2, Stanley further teaches The computer-implemented method of claim 1, wherein the first sample comprises a texture, an animation, a motion path, or a set of physical parameters associated with the different 3D object (“According to aspects of the technology described herein, the manner in which an object behaves in lighting conditions, the shininess of the object, and the texture of the object may all be selected, sampled, and transferred to other target surfaces, such as a canvas or the surfaces of other objects, in the virtual 3D drawing space.” Stanley ¶ 12.).

Claim 12 recites similar limitations as Claim 2. The rejection analysis for Claim 2 also applies to Claim 12.

Regarding Claim 6, Stanley further teaches The computer-implemented method of claim 1, further comprising: before applying the first sample to the first 3D object, displaying the different 3D object in the first 3D immersive environment (Stanley fig. 3); and receiving a selection of the first 3D object and the different 3D object (Stanley discloses “selection of . . . the different 3D object,” stating “In order to assist a user in selecting a source object or portion thereof to sample, this preview may be provided before a user actually selects the source object or portion thereof. Subsequent to selecting the source object or a portion thereof, the window may provide a representation of the material properties that have been sampled.” Stanley ¶ 33. Stanley discloses “selection of . . . the first 3D object,” because in fig. 3, cube 302 is the target object that has been selected and inserted into the scene, and cube 302 is also selected to be a target for the material properties sampled from the source object. “At step 508, a second input indicating a command to apply the material properties of the source object to a target object is received. In response, at step 510, the material properties of the source object are applied to the target object.” Stanley ¶ 32.).
Claim 16 recites similar limitations as Claim 6. The rejection analysis for Claim 6 also applies to Claim 16.

Regarding Claim 8, Stanley further teaches The computer-implemented method of claim 6, further comprising: upon receiving the selection of the first 3D object and the different 3D object (Stanley fig. 3), displaying a first selectable option corresponding to the first sample (“For example, in the user interface 300 of FIG. 3, the crown 204 is the source object, and the cube 302 is the target object. As discussed with respect to FIG. 2, a portion of the surface of the crown 204 has been sampled. The color gradient in the color bar 216 indicates that both the color and other material properties of the crown 204 have been sampled.” Stanley ¶ 21. “For example, if a user selects a paintbrush, the user may then paint the sampled material onto the target object. In this way, the target object receives both the color and the material properties of the source object.” Stanley ¶ 16. The claimed “first selectable option” could be mapped to the option of using the “paintbrush” to apply the sampled material.); receiving a selection of the first selectable option; and in response to receiving the selection of the first selectable option, initiating one or more operations to apply the first sample to the first 3D object (“For example, if a user selects a paintbrush, the user may then paint the sampled material onto the target object. In this way, the target object receives both the color and the material properties of the source object.” Stanley ¶ 16. “The sampled material properties from the crown 204 may be applied to the cube 302, as illustrated by the applied paint 304. In this way, the portion of the cube 302 that is covered by the applied paint 304 may be the same or similar in appearance to the sampled portion of the crown 204.
For example, the painted portion of the cube may be gold in color, have a particular shine and a metallic appearance, and may interact with light in the same manner in which the sampled portion of the crown 204 interacts with light. Thus, the crown 204 and the painted portion of the cube 302 may have the same material properties.” Stanley ¶ 21.).

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claims 1 and 11, in further view of Herrity et al. (US 20230252733 A1).

Regarding Claim 3, Stanley in view of Berretti teaches The computer-implemented method of claim 1. Stanley in view of Berretti does not explicitly disclose wherein applying the first sample to the first 3D object comprises replacing metadata associated with the first property of the first 3D object with metadata associated with the first sample. Herrity teaches wherein applying the first sample to the first 3D object comprises replacing metadata associated with the first property of the first 3D object with metadata associated with the first sample (Herrity teaches changing the first property of the first 3D object by replacing metadata about color, stating “Other examples of update events include, but are not limited to, attribute changes to the metadata stored on or linked to the three-dimensional digital object itself such as the color of the three-dimensional digital object, the size of the three-dimensional digital object, etc.” Herrity ¶ 23. Stanley teaches the first sample could be color attributes of a source object, stating “For example, if a user selects a paintbrush, the user may then paint the sampled material onto the target object. In this way, the target object receives both the color and the material properties of the source object.” Stanley ¶ 16. Herrity thus teaches that color attributes of an object could be recorded in metadata stored on or linked to the object.
Therefore, the first sample could be captured by metadata stored on or linked to the source object that has been sampled. Stanley in view of Berretti teaches replacing a target 3D object’s relevant color attributes with the source object’s relevant color attributes. Stanley ¶ 16. Therefore, after Stanley in view of Berretti is combined with Herrity, the combination teaches replacing metadata associated with the first property of the first 3D object with metadata, from the source object, associated with the first sample.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Herrity’s metadata representing a computer-generated object’s attributes with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to quickly and conveniently transfer attributes from one object to another. In addition, Herrity teaches the potential security aspect of the attribute transfer, stating “Embodiments herein provide means of synchronizing, authenticating, and displaying blockchain-based certified ownership and other object metadata upon virtual objects in augmented reality or other forms of 3D virtual representation. Three-dimensional digital objects that can be represented in AR or other three-dimensional virtual representations are created and available for transfer between owners using blockchain (e.g., in response to a sale). Each three-dimensional digital object, when viewed, includes a tap-to-view certificate that presents the blockchain-backed certification metadata of that discreet object (e.g., the artist and release information, the current owner, and metadata about the edition itself (which number it is, how many there are, etc.)).” Herrity ¶ 28.

Claim 13 recites similar limitations as Claim 3. The rejection analysis for Claim 3 also applies to Claim 13.

Claims 4 and 14 are rejected under 35 U.S.C.
103 as being unpatentable over Stanley in view of Berretti as applied to Claims 1 and 11, in further view of Huo et al. (“Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World”).

Regarding Claim 4, Stanley in view of Berretti teaches The computer-implemented method of claim 1, wherein the different 3D object was resident within a second 3D immersive environment when the first sample was captured from the different 3D object ([BRI on the record] Does the claim allow an interpretation that the “second 3D immersive environment” is the same as the “first 3D immersive environment”? The specification suggests that it may be the case. The specification recites, “As such, a first immersive environment 130 having a different set of associated 3D objects from a second immersive environment 130 can be considered a separate and distinct immersive environment 130, even when the set of associated 3D objects only differ by a single 3D object.” Here, the specification appears to state that if the first and second immersive environments have different sets of associated 3D objects, the environments are distinct and separate, which suggests the first and second immersive environments may not have to be distinct and separate.

[Mapping Analyses] “For example, in the user interface 300 of FIG. 3, the crown 204 is the source object, and the cube 302 is the target object. As discussed with respect to FIG. 2, a portion of the surface of the crown 204 has been sampled. The color gradient in the color bar 216 indicates that both the color and other material properties of the crown 204 have been sampled.” Stanley ¶ 21. “The same color gradient is displayed in the color bar 216.
This color gradient may provide an indication to the user that not only the color, but other material properties of a particular object within the drawing space have been sampled and may be applied to another object in the drawing space.” Stanley ¶ 20.). If the claim requires that the “second 3D immersive environment” must be distinct and separate from the “first 3D immersive environment,” Stanley in view of Berretti does not teach the different 3D object was resident within a second 3D immersive environment based on this narrower interpretation. However, Huo teaches the different 3D object was resident within a second 3D immersive environment (The claimed “different 3D object” is mapped to the table in Huo fig. 6(a), which resides in a 3D mixed/augmented reality environment (Abstract).).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Huo’s sampling in a different environment with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to draw inspiration from a wider range of sources. Stanley teaches sampling data from sources as shown in its fig. 3; and, after the combination with Huo, information, including texture, could be sampled from other environments, including physical environments. See Huo Abstract.

Claim 14 recites similar limitations as Claim 4. The rejection analysis for Claim 4 also applies to Claim 14.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claims 1 and 11, in further view of Kudo (US 20230101386 A1).
Regarding Claim 5, Stanley in view of Berretti teaches The computer-implemented method of claim 1, further comprising: before applying the first sample to the first 3D object, displaying, in the first 3D immersive environment, a sample collection user interface that includes a first sample icon that visually represents the first sample (“For example, in the user interface 300 of FIG. 3, the crown 204 is the source object, and the cube 302 is the target object. As discussed with respect to FIG. 2, a portion of the surface of the crown 204 has been sampled. The color gradient in the color bar 216 indicates that both the color and other material properties of the crown 204 have been sampled.” Stanley ¶ 21. The claimed “first sample icon” is mapped to the disclosed “color bar 216.”); and receiving a selection of the first sample represented by the first sample icon and a selection of the first 3D object (“The same color gradient is displayed in the color bar 216. This color gradient may provide an indication to the user that not only the color, but other material properties of a particular object within the drawing space have been sampled and may be applied to another object in the drawing space.” Stanley ¶ 20. “At step 508, a second input indicating a command to apply the material properties of the source object to a target object is received. In response, at step 510, the material properties of the source object are applied to the target object.” Stanley ¶ 32.). However, Stanley in view of Berretti does not explicitly disclose a selection of the first sample icon. Kudo teaches a selection of the first sample icon (“When a color sample icon is selected, the color of the part changes.” Kudo ¶ 91.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kudo’s selection of an icon with Stanley in view of Berretti.
One of ordinary skill in the art would be motivated to provide convenience to a user when interacting with computer graphics.

Claim 15 recites similar limitations as Claim 5. The rejection analysis for Claim 5 also applies to Claim 15.

Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claim 6, in further view of Eldridge et al. (US 20090118845 A1).

Regarding Claim 7, Stanley in view of Berretti teaches The computer-implemented method of claim 6. Stanley in view of Berretti does not explicitly disclose wherein the first 3D object and the different 3D object are selected when the different 3D object is dragged onto the first 3D object within the first 3D immersive environment. Eldridge teaches wherein the first 3D object and the different 3D object are selected when the different 3D object is dragged onto the first 3D object within the first 3D immersive environment (“. . . functionality coupled to the graphical user interface that transfers characteristics of the first object to the second object in response to a user command whereby by the depiction of one of the objects is graphically dragged and dropped onto the depiction of the other object.” Eldridge ¶ 135.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Eldridge’s transfer of characteristics between objects with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to quickly transfer attributes between objects. Eldridge ¶ 135.

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claim 1, in further view of Shin (US 20200285355 A1).
Regarding Claim 9, Stanley in view of Berretti suggests The computer-implemented method of claim 1, further comprising applying a color-palette sample to the first 3D object and to a third 3D object displayed in the first 3D immersive environment to modify a color property of the first 3D object and the third 3D object (Stanley states, “The virtual object 104 may be a 2D or a 3D object. The user interface 100 also includes a toolbar 110. The toolbar 110 provides various tools for working in the virtual drawing space. For example, a user may select an artistic tool from a variety of options, including markers, pencils, ink pens, paintbrushes, and 3D input options. A user may also select a color from the color palette 112. The color palette 112 may include a number of predefined color options. The selected tool and the selected color may then be associated with user inputs in the virtual drawing space.” Stanley ¶ 15. Stanley teaches applying attributes to multiple objects, mapped to the “first 3D object” and “third 3D object,” in a virtual drawing space, stating “According to aspects of the technology described herein, the manner in which an object behaves in lighting conditions, the shininess of the object, and the texture of the object may all be selected, sampled, and transferred to other target surfaces, such as a canvas or the surfaces of other objects, in the virtual 3D drawing space.”). Stanley in view of Berretti does not explicitly disclose the process of applying a color-palette sample to an object.
Shin teaches the process of applying a color-palette sample to an object (“The method comprises configuring a palette including at least one palette color, displaying a color list of the palette, and customizing the color of the object by applying a palette color selected by a user from the color list to the object, wherein the at least one palette color includes a first partial color and a second partial color, which is distinguished from the first partial color.” Shin Abstract.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shin’s coloring method with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to easily apply color from a color palette to an object in an image. Stanley already teaches a color palette and, more likely than not, Stanley’s color palette is used in this fashion as well. Shin makes the teaching explicit.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti and Shin as applied to Claim 9, in further view of Bhatti et al. (US 20120263379 A1) and Huo et al. (“Window-Shaping: 3D Design Ideation by Creating on, Borrowing from, and Looking at the Physical World”).

Regarding Claim 10, Stanley in view of Berretti and Shin teaches The computer-implemented method of claim 9. Stanley in view of Berretti and Shin does not explicitly disclose wherein the color-palette sample comprises a plurality of colors sampled from a plurality of 3D objects included within a second 3D immersive environment. Bhatti teaches wherein the color-palette sample comprises a plurality of colors sampled from a plurality of 3D objects (“In one example, the color palette 120 may be determined by the colors associated with multiple features of an object or objects.” Bhatti ¶ 17.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bhatti’s color sampling method with Stanley in view of Berretti and Shin. One of ordinary skill in the art would be motivated to allow a user to select a wider range of colors from a wider range of objects. It provides more versatility in color choices. However, Stanley in view of Berretti, Shin, and Bhatti does not explicitly teach 3D objects included within a second 3D immersive environment, if the “second 3D immersive environment” must be distinct and separate from the “first 3D immersive environment.” Huo teaches 3D objects included within a second 3D immersive environment (The claimed “different 3D object” is mapped to the table in Huo fig. 6(a), which resides in a 3D mixed reality environment (Abstract).). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Huo’s sampling in a different environment with Stanley in view of Berretti, Shin, and Bhatti. One of ordinary skill in the art would be motivated to draw inspiration from a wider range of sources. Stanley teaches sampling data from sources as shown in its fig. 3; and, after the combination with Huo, information, including texture, could be sampled from other environments. See Huo Abstract.

Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claim 11, in further view of Bhatti et al. (US 20120263379 A1) and Shin (US 20200285355 A1).

Regarding Claim 17, Stanley in view of Berretti teaches The one or more non-transitory computer-readable media of claim 16, further comprising: upon receiving the selection of the first 3D object and the different 3D object (Stanley fig. 3).
However, Stanley in view of Berretti does not explicitly disclose displaying a first selectable option corresponding to the first sample and a second selectable option corresponding to a second sample that was captured from the different 3D object; and initiating the first sample and the second sample to be applied to the first 3D object in response to selections of the first selectable option and the second selectable option. Bhatti teaches displaying a first selectable option corresponding to the first sample and a second selectable option corresponding to a second sample that was captured from the different 3D object (“In one example, the color palette 120 may be determined by the colors associated with multiple features of an object or objects.” Bhatti ¶ 17. Each color in the palette is a selectable option.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bhatti’s color sampling method with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to allow a user to select a wider range of colors from a wider range of objects. It provides more versatility in color choices. However, Stanley in view of Berretti and Bhatti does not explicitly teach initiating the first sample and the second sample to be applied to the first 3D object in response to selections of the first selectable option and the second selectable option. 
Shin teaches initiating the first sample and the second sample to be applied to the first 3D object in response to selections of the first selectable option and the second selectable option (“The method comprises configuring a palette including at least one palette color, displaying a color list of the palette, and customizing the color of the object by applying a palette color selected by a user from the color list to the object, wherein the at least one palette color includes a first partial color and a second partial color, which is distinguished from the first partial color.” Shin Abstract. In addition, Stanley’s fig. 3 304 shows color attribute(s) could be applied to a sub-region of a 3D object. Stanley states, “In this way, the portion of the cube 302 that is covered by the applied paint 304 may be the same or similar in appearance to the sampled portion of the crown 204.” Stanley ¶ 21. After Stanley in view of Berretti and Bhatti is combined with Shin, multiple areas like 304 may have been colored according to Shin’s method. Therefore, Stanley in view of Berretti, Bhatti, and Shin teaches that the first sample and the second sample are applied to the first 3D object on different regions of the target object or overwrite each other on the target object.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Shin’s coloring method with Stanley in view of Berretti and Bhatti. One of ordinary skill in the art would be motivated to easily apply color from a color palette to an object in an image. Stanley already teaches a color palette and, more likely than not, Stanley’s color palette is used in this fashion as well. Shin makes the teaching more explicit.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Stanley in view of Berretti as applied to Claim 11, in further view of Wang et al.
("CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications") and Amidon et al. (US 20080306817 A1).

Regarding Claim 18, Stanley in view of Berretti teaches The one or more non-transitory computer-readable media of claim 11, further comprising: displaying the different 3D object within the first 3D immersive environment (Stanley fig. 3).

Stanley in view of Berretti does not explicitly disclose receiving a selection of a revisit function to be applied to the different 3D object ([BRI on the record] With respect to "revisit function," the Examiner is reading it to require: a function allowing the user to view the sampling immersive environment from which the selected 3D object was originally sampled. This interpretation is in light of the specification, which states: "[0170] In some embodiments, the SR engine 140 also provides a 'revisit' function during the remix stage. When selected for a particular sampled 3D object displayed within the remix immersive environment 134, the revisit function allows the user to view the sampling immersive environment 132 from which the selected 3D object was originally sampled. In some embodiments, the revisit function can be mapped to a particular button on the IE controllers 176 to allow the user to easily access the revisit function at any time during the remix stage." Spec. ¶ 170.); and in response, displaying at least a portion of a second 3D immersive environment within the first 3D immersive environment.

Wang teaches receiving a selection of a revisit function to be applied to the different 3D object ("To this end, we propose CAPturAR, an AR authoring workflow, which allows users to record their daily activities, revisit the recorded scenarios, create and improve their personal context models, then build and deploy their own customized CAPs onto AR-HMD platforms." Wang p. 329 left col.
The context is sampling the different 3D object as taught by Stanley.); and in response, displaying at least a portion of a second 3D immersive environment ("We design the interface of CAPturAR to allow fast navigation through the timeline and precise selection of the activity clips from the cluttered recordings. Then, based on users' understanding of their past behaviors, they interpret the selected demonstration clips as contexts and generate detection models with the motion data. Users can also designate necessary contextual information (e.g., time, location, objects) to disambiguate the activities. Further, users test human action detection performance and refine the context models through iterations." Wang p. 329 right col.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wang's context-aware applications with Stanley in view of Berretti. One of ordinary skill in the art would be motivated to (a) create a sequential task tutorial (Wang p. 333 right col.) and/or (b) locate the context of a creation event to be assessed and improved.

However, Stanley in view of Berretti and Wang does not explicitly disclose displaying at least a portion of the second 3D immersive environment within the first 3D immersive environment.

Amidon teaches providing additional information within the first 3D immersive environment ("For example, the ad may be displayed in conjunction with the virtual world, such as an ad presented alongside the virtual world (such as a picture-in-picture). As another example, the ad may be incorporated into the virtual world, such as presented in the stadium on a simulated jumbo screen or on a blimp visible above the stadium." Amidon ¶ 46.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Amidon's approach of displaying contextual/supplemental information within a virtual world with Stanley in view of Berretti and Wang. One of ordinary skill in the art would be motivated to provide additional information without being visually disruptive to the user.

Allowable Subject Matter

Claim 19 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and addressing the relevant objection and 112(b) rejection. Claim 19 is distinguished from Stanley in view of Wang and Amidon because Claim 19 recites: upon receiving the selection of the revisit function, retrieving context information for the different 3D object that is captured in a second sample of the different 3D object, the context information specifying the second 3D immersive environment from which the different 3D object was captured.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZHENGXI LIU, whose telephone number is (571) 270-7509. The examiner can normally be reached M-F, 9 AM - 5 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ZHENGXI LIU/
Primary Examiner, Art Unit 2611

1. Huo was included in Applicant's IDS.

Prosecution Timeline

Dec 20, 2023
Application Filed
Jul 11, 2025
Non-Final Rejection — §103
Oct 10, 2025
Response Filed
Nov 15, 2025
Final Rejection — §103
Jan 16, 2026
Response after Non-Final Action
Feb 18, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Mar 02, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602865
METHODS FOR DEPTH CONFLICT MITIGATION IN A THREE-DIMENSIONAL ENVIRONMENT
2y 5m to grant Granted Apr 14, 2026
Patent 12599463
COLOR MANAGEMENT PROCESS FOR CUSTOMIZED DENTAL RESTORATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597402
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM FOR APPLICATION WINDOW HAVING FIRST DISPLAY MODE AND SECOND DISPLAY MODE
2y 5m to grant Granted Apr 07, 2026
Patent 12567193
PARTICLE RENDERING METHOD AND APPARATUS
2y 5m to grant Granted Mar 03, 2026
Patent 12561929
METHOD AND ELECTRONIC DEVICE FOR PROVIDING INFORMATION RELATED TO PLACING OBJECT IN SPACE
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+40.1%)
3y 4m
Median Time to Grant
High
PTA Risk
Based on 354 resolved cases by this examiner. Grant probability derived from career allow rate.
