Prosecution Insights
Last updated: April 19, 2026
Application No. 18/954,183

EXTENDED REALITY AUTHORING SYSTEM AND METHOD

Status: Non-Final OA (§103, §112)

Filed: Nov 20, 2024
Examiner: BADER, ROBERT N.
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Beamm Technologies Inc.
OA Round: 3 (Non-Final)

Grant Probability: 44% (Moderate)
Predicted OA Rounds: 3-4
Predicted Time to Grant: 3y 1m
Grant Probability with Interview: 70%

Examiner Intelligence

Career Allow Rate: 44% of resolved cases (173 granted / 393 resolved; -18.0% vs TC avg)
Interview Lift: +26.4% for resolved cases with an interview vs without
Typical Timeline: 3y 1m avg prosecution; 32 applications currently pending
Career History: 425 total applications across all art units

Statute-Specific Performance

§101
9.9%
-30.1% vs TC avg
§103
48.7%
+8.7% vs TC avg
§102
13.9%
-26.1% vs TC avg
§112
19.5%
-20.5% vs TC avg
Black line = Tech Center average estimate • Based on career data from 393 resolved cases

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 1/20/26 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

    (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

    The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-13 and 21-23 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Amended independent claim 1 recites “sending the timeseries of asset parameters to a remote computing system without sending the digital asset to the remote computing system”. Applicant’s remarks indicate that support for the claim 1 amendment is found in paragraphs 161 and 56; however, these paragraphs do not discuss sending asset parameters to the remote computing system “without sending the digital asset to the remote computing system”. That is, while paragraphs 160-163 describe variations of content information, including the asset parameters, being sent to the remote system, they do not contemplate or disclose the negative limitation of “without sending the digital asset to the remote computing system”, because these paragraphs describe what is sent, not what is not sent. Furthermore, Applicant’s disclosure does not appear to distinguish the scope of a “digital asset” from the digital asset’s parameters; i.e., although there are additional components to a digital asset, as in paragraphs 41-48, the parameters of the digital asset are part of the dataset defining a digital asset, of which there are many variants having different parameters, attributes, formats, etc.
Therefore the negative limitation “without sending the digital asset to the remote computing system” was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. Dependent claims are rejected under the same rationale, and in particular it is noted that claim 22 substantially repeats the limitation by reciting that the custom data object does not store the digital asset.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

    (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

    The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-13 and 21-23 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Amended independent claim 1 recites “sending the timeseries of asset parameters to a remote computing system without sending the digital asset to the remote computing system”. As discussed above with respect to written description, the cited paragraphs 161 and 56 do not discuss sending asset parameters “without sending the digital asset to the remote computing system”, and Applicant’s disclosure does not appear to distinguish the scope of a “digital asset” from the digital asset’s parameters. This leaves the scope of “without sending the digital asset to the remote computing system” indefinite: the parameters of the digital asset are part of the dataset defining the digital asset, and the claim both requires that the parameters of the digital asset are sent and that the digital asset comprising said parameters is not sent, such that only one requirement is possible to satisfy. Dependent claims do not clarify this issue and are therefore rejected under the same rationale. It is additionally noted that claim 22 repeats substantially the same limitation.

For purposes of applying prior art, the sending limitation of claim 1 will be interpreted using the scope of dependent claim 23, i.e.
“sending a custom data object storing the timeseries of asset parameters to a remote computing system, wherein the custom data object does not include the digital asset 3D model(s)”, corresponding to the examples of paragraph 163.

Claims 8, 9, and 12 recite the limitation “the set of asset parameters”. There is insufficient antecedent basis for this limitation in the claims, because claim 1 was amended to recite a “timeseries” of asset parameters. For purposes of applying prior art, these claims will be interpreted as reciting “timeseries” instead of “set”.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

    A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 8-10, 12, 13, 22, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over “DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments” by Zeyu Wang, et al. (hereinafter Wang) in view of U.S. Patent Application Publication 2021/0134076 A1 (hereinafter Ramani), further in view of “DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences” by Blair MacIntyre, et al. (hereinafter MacIntyre), further in view of “Augmented and Virtual Reality Object Repository for Rapid Prototyping” by Ivan Jovanovikj, et al. (hereinafter Jovanovikj), and further in view of “How to load a model and textures from a remote server using ARKit?” by stackoverflow.com (hereinafter Stack Overflow).

Regarding claim 1, the limitations “A method for extended reality content generation, comprising: at a mobile device located within a real world scene, generating low-fidelity content, comprising: sampling a set of measurements of the real world scene” are taught by Wang. (Wang, e.g. abstract, sections 1-9, describes DistanciAR, a system for authoring augmented reality content for a particular real world scene. Wang, e.g. sections 1, 3, 5, teaches that the system includes three interfaces, including a scanning interface, e.g. section 3.3, an authoring interface, e.g. sections 3.2, 5.2, 5.3, and a viewing interface, e.g. section 3.3. Further, Wang, e.g. section 3.1, teaches that the scanning interface is used to scan the real world scene with the mobile device's camera and LiDAR scanner and generate a textured mesh model of the environment, i.e. the claimed sampling a set of measurements of the real world scene at a mobile device.)

The limitations “rendering a mobile version of a digital asset relative to a view of the real world scene, based on the set of measurements; receiving a set of asset parameters from a user for the digital asset; modifying the rendered digital asset based on the asset parameters in real time” are taught by Wang. (Wang, e.g. section 3.2, teaches that after the real world scene model is generated, the authoring interface is used to display viewpoints of the real world scene model and receive input from the author specifying the location of an anchor, i.e. the focus square of section 3.2, paragraph 2, shown in figure 3, selecting an object from a list of virtual objects to add to the anchor/focus square, and manipulating the position, orientation, and size of the displayed virtual object, i.e. as claimed, rendering a mobile version of a digital asset relative to a view of the real world scene based on the set of measurements, receiving asset parameters from the user for the digital asset, and modifying the rendered digital asset based on the asset parameters in real time.
It is additionally noted that although the claim does not specify that the “view of the real world scene” relative to which the digital asset is rendered is an actually captured image, as opposed to a rendering of a scene model as in Wang, section 3.2, Wang, section 5.3, describes the Peek mode, which allows the author to view the scene model and AR content using an actually captured image of the scene. Finally, while Wang’s stated purpose of the system, e.g. section 1, paragraph 3, is at least in part to simulate the experience of designing in the real world scene without actually being there, Wang does not disclose implementing any mechanism which would prevent the authoring interface from being used within the real world scene; i.e., as claimed, a user could use Wang’s mobile device’s authoring interface to perform the rendering of the digital asset and receiving of modified asset parameters while located within the real world scene.)

The limitation (addressed out of order) “sending the … asset parameters to a remote computing system” is implicitly taught by Wang. (Wang, e.g. section 3.2, paragraph 3, indicates that the result of authoring is a .scn file containing the real world scene model and the augmented reality content generated by the author/authoring interface, and the viewing interface, e.g. section 3.3, allows viewers to consume the AR content using their mobile devices when present at the real world scene location. While not explicitly stated by Wang, one of ordinary skill in the art would understand Wang to be teaching that the saved scene/content .scn file can be transmitted to other mobile devices; i.e., “viewers”, plural, at the remote location can consume the AR content created by the author, suggesting multiple viewers using multiple mobile devices distinct from the author’s mobile device, i.e. sending the scene/content file comprising the claimed asset parameters to a second/remote computing system which generates augmented reality content based on the real world scene model determined based on the claimed set of measurements, the digital asset, and the asset parameters contained in the scene/content file.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to implement Wang’s DistanciAR system to include a feature for sharing authored scene/content .scn files with other devices, to support Wang’s intended use scenario where other users’ mobile devices can consume the AR content when located in the real world scene, as in section 3.1. In the modified system, instead of a single tablet testing device performing all of the functions, i.e. scanning, authoring, and viewing, for a given scene/content .scn file, the scene/content file generated using the authoring interface of a first mobile device could be shared with another mobile device for consumption, i.e. as claimed, sending the asset parameters to a remote computing system.
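For orientation, the following Swift sketch illustrates the kind of parameters-only payload contemplated by the interpreted sending limitation (a data object carrying asset parameters but not the asset itself). It is illustrative only; all type and field names are hypothetical and are not drawn from the application or the cited references.

    import Foundation

    // Hypothetical sketch of a parameters-only payload: placement data for a
    // referenced digital asset, with no mesh or texture data included.
    struct AssetParameters: Codable {
        let assetID: String            // reference to the digital asset, not the asset itself
        var position: SIMD3<Float>     // placement within the scanned scene model
        var orientation: SIMD4<Float>  // rotation as a quaternion (x, y, z, w)
        var scale: Float               // uniform scale applied by the author
    }

    // Serializing only the parameters keeps the shared file small; the
    // receiving device resolves assetID against its own copy of the asset.
    let payload = AssetParameters(
        assetID: "chair-01",
        position: .init(0.4, 0.0, -1.2),
        orientation: .init(0, 0, 0, 1),
        scale: 1.0
    )
    let data = try! JSONEncoder().encode(payload)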
The limitations “determining a timeseries of asset parameters, wherein each asset parameter is associated with a timestamp determined when sampling the set of measurements; sending the … timeseries of asset parameters to a remote computing system” are not explicitly taught by Wang. (As discussed above, in Wang’s modified system, instead of a single tablet testing device performing all of the functions for a given scene/content .scn file, the scene/content file generated using the authoring interface of a first mobile device could be shared with another mobile device for consumption, i.e. as claimed, sending the asset parameters to a remote computing system. While Wang’s asset parameters are transmitted to the remote system in the modified system, Wang does not teach that the .scn file includes a timeseries of asset parameters for the content/digital assets. It is additionally noted that Wang, e.g. section 8, final paragraph, suggests that developing an HMD version of the DistanciAR system is an interesting direction but is left for future work.)

However, this limitation is taught by Ramani. (Ramani, e.g. abstract, paragraphs 26-69, describes a system for generating augmented reality demonstrations by recording the author’s interactions with objects in the real world scene, where the objects are represented with corresponding virtual object models and each demonstration is saved as a self-contained file describing a timeseries of object coordinates and orientations, e.g. paragraphs 26, 43, 51-54, and where the demonstrations are used for later playback by a novice/learning user, e.g. paragraphs 51, 57. That is, each demonstration is recorded by the author starting the recording, demonstrating the task by manipulating the virtual objects in the scene, and ending the recording process, causing the system to generate a timeseries of asset parameters corresponding to the script of paragraph 54.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system, including the feature for sharing authored scene/content .scn files with other devices, to substitute an HMD type device for Wang’s mobile type device, as taught by Ramani and suggested by Wang, and to include Ramani’s augmented reality demonstration recording feature in Wang’s authoring interface in order to support augmented reality demonstration authoring. In the modified system, as noted above, Wang’s mobile tablet type device would be substituted with an HMD type device as used by Ramani and suggested by Wang as an interesting direction for modification. Further, Wang’s modified authoring interface would include Ramani’s augmented reality demonstration recording interface, e.g. paragraphs 49-56, where each recorded procedural task would be a self-contained unit included in Wang’s scene/content .scn file used to share the augmented reality scene, i.e. the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration.

The limitations (addressed out of order) “at the remote computing system different from the mobile device and located outside the real world scene: generating high-fidelity content based on the set of measurements, a high fidelity version of the digital asset, and the timeseries of asset parameters” are not explicitly taught by Wang in view of Ramani. (Wang, e.g. section 3.1, paragraph 4, teaches using a remote computing system, i.e. a MacBook Pro, for processing scan data into a scene model. While Wang does not explicitly indicate the location of the MacBook, one of ordinary skill in the art would have recognized that it could be located outside the real world scene; i.e., the MacBook runs a service which receives input data from the mobile device, performs processing, and returns a result, which could be performed remotely over a network. Further, as noted above, Wang’s stated purpose of the system, e.g. section 1, paragraph 3, is at least in part to simulate the experience of designing in the real world scene without actually being there. However, while Wang teaches using a computing system, which could be outside the real world scene, to perform processing supporting the mobile device running the DistanciAR application, Wang does not teach using a remote computing system for further modifying/authoring an existing scene/content .scn file, which in the modified system comprises Ramani’s recorded procedural task demonstrations comprising the claimed timeseries of asset parameters, to use higher-fidelity version(s) of the selected virtual object(s), corresponding to the claimed generating high-fidelity content.)

However, this limitation is taught by MacIntyre. (MacIntyre, e.g. pages 197-206, describes DART, the Designer’s Augmented Reality Toolkit, a system intended to support augmented reality authoring with rapid prototyping performed both in the real world scene of the AR project and remotely from the real world scene, e.g. sections Working in the Real World, Towards a Design Process for AR Experiences, Examples, and Conclusions, indicating that the DART system can be used both at a physical site and away from the physical site. MacIntyre, e.g. section Examples, teaches that DART allowed different students to work on the same AR projects without needing to timeshare AR equipment; i.e., as shown in figure 8(a), a laptop located remotely from the real world scene of the AR project is able to review a student’s authored AR project file, and would also be able to further edit said AR project file. That is, MacIntyre teaches that it is advantageous for an AR content authoring system to share AR project files with other, remote, computing devices, e.g. to allow other authors to contribute to/edit an AR project, as well as to avoid requiring authors to always be located at the real world location to perform authoring. Further, MacIntyre, e.g. section Design and Programming, paragraph 6, and section A Motivating Example, teaches that content creation takes a substantial amount of time, e.g. months in the example of the initial Three Angry Men implementation, and that further updates and fixes to the design of the Three Angry Men AR project required recreating all of the content for the project, in contrast to the improved design technique for the subsequent Four Angry Men implementation, which began with using animatic versions of the content for developing and testing the experience rather than performing the time-consuming content creation first.
MacIntyre further describes this improved design technique in section Actors for AR: Physical/Virtual Interplay and Animatics, paragraphs 3-4, where an animatic actor is a low-fidelity hand-sketched mock-up which serves as a placeholder for later replacement with corresponding video actor content having a higher-fidelity representation, as shown in figure 7. That is, MacIntyre teaches that the DART system supports efficient/rapid design prototype generation by preparing and using a set of low-fidelity versions of a digital asset for an AR project, where at a later time in the development process, after the high-fidelity versions of the content/digital assets are generated, the DART system can be used to edit/modify the AR project to use the high-fidelity version of the content/digital asset instead of the low-fidelity version used during previous authoring/editing operations, corresponding to the claimed generating high-fidelity content based on a high-fidelity version of the digital asset. It is additionally noted that MacIntyre’s DART system includes sketched and video actors, and timing information associated with content/digital assets, e.g. sections Director Development Overview and Actors for AR: Physical/Virtual Interplay and Animatics; i.e., MacIntyre anticipates the digital asset parameters being a timeseries of parameters associated with a low-fidelity rapid prototyping asset version, i.e. sketched actors, and a high-fidelity asset version, i.e. video actors.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system as combined above (including the .scn file sharing feature, the HMD substitution, and Ramani’s demonstration recording feature) to further share authored scene/content .scn files with remote computing devices located outside the real world scene, as taught by MacIntyre, wherein a remote computing device located outside the real world scene provides an authoring interface for editing/modifying a shared authored scene/content .scn file, including performing MacIntyre’s replacement of low-fidelity versions of digital assets used for prototyping with high-fidelity versions of the digital assets, in order to support efficient/rapid prototyping as well as to allow other authors to contribute to/edit an AR project, as discussed above. As noted above, Wang teaches using a computing system, which could be outside the real world scene, to perform processing supporting the mobile device running the DistanciAR application, such that in the modified system including MacIntyre’s remote authoring interface for editing/modifying a shared authored scene/content file, Wang’s MacBook could additionally be used to provide said authoring interface. Further, in the modified system where Wang’s MacBook, located outside the real world scene, provides MacIntyre’s authoring interface, the set of virtual objects from which Wang allows the author to select for inclusion in the scene, e.g. section 3.2, paragraphs 2-3, would correspond to MacIntyre’s prepared low-fidelity versions used for efficient/rapid prototyping of the AR project, where, at a later time, an author using MacIntyre’s authoring interface provided by Wang’s MacBook to edit/modify a shared authored scene/content file would replace the low-fidelity version(s) of the selected virtual object(s) with high-fidelity version(s), thereby generating the claimed high-fidelity content based on the set of measurements, i.e. the scanned 3D model generated by Wang’s mobile device in section 3.1, the asset parameters, i.e. the virtual object parameters specified by the author using the mobile device authoring interface to initially generate the scene/content .scn file, and the high-fidelity version(s) of the selected virtual object(s) replacing the low-fidelity version(s). Further, as noted above, MacIntyre anticipates the digital asset parameters being a timeseries of parameters associated with a low-fidelity rapid prototyping asset version and a high-fidelity asset version, meaning that in the modified system MacIntyre’s authoring interface would also be capable of replacing low-fidelity rapid prototyping asset versions used for recording Ramani’s procedural task demonstrations with high-fidelity asset versions using the same recorded timeseries of asset parameters.

The limitation “generating high-fidelity content based on the set of measurements, a high fidelity version of the digital asset, and the set of asset parameters, wherein the high-fidelity version of the digital asset has a higher resolution than the mobile version of the digital asset” is implicitly taught by Wang in view of MacIntyre. (As noted above, in the modified system where Wang’s MacBook, located outside the real world scene, provides MacIntyre’s authoring interface, an author editing/modifying a shared authored scene/content file would replace the low-fidelity version(s) of the selected virtual object(s) with high-fidelity version(s), thereby generating the claimed high-fidelity content based on the set of measurements, i.e. the scanned 3D model generated by Wang’s mobile device in section 3.1, the timeseries of asset parameters, i.e. the timeseries of virtual object parameters specified by the author using the mobile device authoring interface to record one of Ramani’s procedural task demonstrations stored in the scene/content .scn file, and the high-fidelity version(s) of the selected virtual object(s) replacing the low-fidelity version(s).
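To make the asserted combination concrete, the following is a minimal Swift/SceneKit sketch of replaying a recorded timeseries of asset parameters on whichever model node is currently loaded, so that a low-fidelity placeholder could later be swapped for a high-fidelity model without re-recording. The names and structure are hypothetical and are not taken from Ramani or MacIntyre.

    import SceneKit

    // Hypothetical sketch: replay a recorded timeseries of asset parameters on
    // whichever model node is currently loaded. The same samples drive either
    // a low-fidelity placeholder or its high-fidelity replacement.
    struct TimedSample {
        let timestamp: TimeInterval    // recorded when sampling the measurements
        let position: SCNVector3
        let orientation: SCNVector4    // axis-angle rotation
    }

    func replay(_ samples: [TimedSample], on node: SCNNode) {
        var actions: [SCNAction] = []
        var previous: TimeInterval = 0
        for sample in samples {
            let dt = sample.timestamp - previous
            // Move and rotate together over the interval between samples.
            actions.append(.group([
                .move(to: sample.position, duration: dt),
                .rotate(toAxisAngle: sample.orientation, duration: dt)
            ]))
            previous = sample.timestamp
        }
        node.runAction(.sequence(actions))
    }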
While it is implicit that, when MacIntyre’s teaching of preparing low-fidelity versions of content for efficient/rapid prototyping of an AR project and later replacing them with high-fidelity versions is applied to Wang’s system, wherein the content is virtual 3D objects rather than animated/video-captured performances by an actor, the replacement high-fidelity versions of the content/digital assets would be higher-resolution 3D models, in the interest of compact prosecution Jovanovikj is cited for describing an augmented and virtual reality object repository inspired by MacIntyre’s low-fidelity prototyping/high-fidelity replacement design technique but applied to 3D virtual objects rather than animation/video of an actor.) This limitation is taught by Jovanovikj. (Jovanovikj, e.g. abstract, sections 1-5, describes an Augmented and Virtual Reality Object Repository; e.g., Jovanovikj, section 4, indicates that DART’s approach to supporting prototyping and development of AR applications is part of Jovanovikj’s solution for assisting developers performing prototyping or development. Jovanovikj, e.g. sections 1-3, describes the system design, which includes a mobile device performing operations including scanning and editing, a server comprising the object repository, wherein an object can comprise multiple versions having different levels of detail and used for different purposes, and a web client which allows a remote computing device to import and edit objects to provide higher-fidelity versions for inclusion in the repository. That is, as in Jovanovikj’s example of section 3, describing a prototyping scenario where initial object versions are added to the repository and used to construct a prototype AR scene, e.g. as in figure 4, the mobile device performs authoring using low-fidelity versions of the objects stored in the repository, and as development continues, high-quality models are created and the web client/remote computing device is used to replace the mock-ups used for prototyping, i.e. as in figures 6-7, where the low-fidelity version of the monitor generated using a captured 2D image is replaced with a high-quality 3D monitor model, corresponding to the claimed remote computing system different from the mobile device and located outside the real world scene which replaces the low-fidelity mobile version of a 3D modeled object with a high-fidelity version of the 3D modeled object having a higher resolution than the low-fidelity mobile version.)
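A minimal sketch of the multi-fidelity repository pattern described above, combined with remote scene loading of the kind the Stack Overflow reference addresses, follows. The endpoint and naming scheme are assumptions; only SCNScene(url:options:) is an actual SceneKit initializer, and nothing here is drawn from Jovanovikj's implementation.

    import SceneKit

    // Hypothetical sketch: the same asset ID resolves to different-fidelity
    // model files on a server the developer controls.
    enum Fidelity: String {
        case low = "lod0"     // mock-up used for on-device prototyping
        case high = "lod2"    // high-resolution replacement model
    }

    func loadModel(assetID: String, fidelity: Fidelity) throws -> SCNScene {
        // e.g. https://repo.example.com/assets/chair-01/lod2.scn (hypothetical URL)
        let url = URL(string: "https://repo.example.com/assets/\(assetID)/\(fidelity.rawValue).scn")!
        // SCNScene(url:options:) loads a scene file from a URL, so a shared
        // scene/content file need only carry the reference, not the model.
        return try SCNScene(url: url, options: nil)
    }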
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system as combined above (including the .scn file sharing feature, the HMD substitution, Ramani’s demonstration recording feature, and MacIntyre’s remote authoring interface with low-/high-fidelity replacement) to further include Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution, in order to support applying MacIntyre’s replacement technique to Wang’s system wherein the content is virtual 3D objects rather than animated/video-captured performances by an actor. That is, as taught by MacIntyre and Jovanovikj, initial design/prototyping performed by an author using Wang’s authoring interface to record one of Ramani’s procedural task demonstrations would rely on low-fidelity versions of one or more of Wang’s set of virtual objects, and later in the development process the remote computing system would use the provided authoring interface to replace the low-fidelity version(s) with high-fidelity versions, where, as taught and shown by Jovanovikj, high-fidelity versions of 3D virtual objects have higher resolution than the low-fidelity versions.

The limitation “sending a custom data object storing the timeseries of asset parameters to a remote computing system, wherein the custom data object does not include the digital asset 3D model(s)” is partially taught by Wang in view of Ramani and MacIntyre. (As noted above, in the modified system, Wang’s MacBook, located outside the real world scene, provides MacIntyre’s authoring interface for editing/modifying a shared authored scene/content .scn file, wherein each procedural task demonstration recorded using Ramani’s interface would be a self-contained unit included in Wang’s scene/content .scn file used to share the augmented reality scene; i.e., the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration is stored in a custom data object which is sent to the remote computing system. Further, in the modified system using Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution, the different-fidelity digital asset models would be stored in a network-accessible database. Wang does not explicitly state whether the scene/content .scn file comprises the 3D virtual object/digital asset models per se, such that although one of ordinary skill in the art would recognize that Jovanovikj’s different-fidelity object models could be accessed remotely as needed rather than stored as part of the scene/content .scn file, neither reference explicitly addresses a .scn file which loads 3D virtual object files by referencing a network-accessible database.)

However, this limitation is taught by Stack Overflow. (Stack Overflow, e.g. pages 1-3, presents a question and solution regarding an ARKit app which presents AR content by loading corresponding .scn files from a server database using a URL, rather than loading a .scn model stored locally. Stack Overflow, e.g. page 2, indicates that one advantage of this technique is allowing for dynamic loading of scene objects.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system as combined above, now further including Jovanovikj’s object repository, to use Stack Overflow’s dynamic scene object loading technique, in order to allow the devices editing/consuming the shared authored scene/content .scn files to dynamically load a version of each virtual object/digital asset from Jovanovikj’s repository rather than relying on static scene objects defined in the scene/content .scn file, as suggested by Stack Overflow. As indicated by Stack Overflow, e.g. page 2, this technique works as long as the developer has control over the models/server. Further, one of ordinary skill in the art would recognize the advantages of dynamically loading model versions in comparison to static model versions; e.g., any updates to the version(s) of the model stored on the server would be automatically acquired by the AR application at runtime, avoiding any requirement to separately update static model versions stored with the scene/content .scn file.

Regarding claims 8-10, the limitations “wherein the timeseries of asset parameters comprise asset audio-visual attributes”, “wherein the timeseries of asset parameters identify a subset of the asset audio-visual attributes to be used to generate the content”, and “wherein the asset audio-visual attributes comprise an animation” are taught by Wang in view of Ramani. (As discussed in the claim 1 rejection above, Wang’s modified authoring interface would include Ramani’s augmented reality demonstration recording interface, e.g. paragraphs 49-56, where each recorded procedural task would be a self-contained unit included in Wang’s scene/content .scn file, i.e. the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration. Ramani, e.g. paragraph 54, indicates that the timeseries of asset parameters includes both visual attributes, i.e. the position and orientation for displaying virtual objects at each timestep, and audio attributes, e.g. paragraph 55, indicating that the voice recordings of the user are captured and stored with synchronized time stamps for the demonstration.
That is, as in claim 8, the timeseries of asset parameters comprises audio-visual attributes, with the visual attributes synchronized to the voice recordings; as in claim 9, the timeseries of asset parameters identifies a subset of the attributes, i.e. the timestamps are each associated with a subset of audio and visual attributes; and as in claim 10, the attributes comprise an animation, i.e. the animation is the replaying of the recorded demonstration.)

Regarding claim 12, the limitation “wherein the timeseries of asset parameters are stored when a record button is selected at the mobile device” is taught by Wang in view of Ramani. (Ramani, e.g. paragraph 54, indicates that recording of a demonstration is initiated by actuating a button or trigger, i.e. as claimed, a record button is selected at the mobile device.)

Regarding claim 13, the limitation “at the remote computing system, determining a set of modified asset parameters” is taught by Wang in view of MacIntyre. (As discussed in the claim 1 rejection above, in the modified system where Wang’s MacBook provides MacIntyre’s authoring interface, an author editing/modifying a shared authored scene/content file would replace the low-fidelity version(s) of the selected virtual object(s) with high-fidelity version(s), thereby generating the claimed high-fidelity content. MacIntyre’s authoring interface provided by Wang’s MacBook would also allow an author to modify the asset parameters of the selected virtual object(s); i.e., Wang, section 3.2, indicates that the authoring interface allows, in addition to selection of virtual object(s) for placement, already placed virtual object(s) to be moved and/or reoriented, and MacIntyre, e.g. sections Event-based Programming Using Cue and Actions and Handling of Tracking Data, indicates that the authoring interface supports modifying positions as well as orientations, e.g. figure 6, right, comprising fields for rotation. As such, an author using MacIntyre’s authoring interface provided by Wang’s MacBook to edit/modify a shared authored scene/content file, replacing the low-fidelity version(s) with high-fidelity version(s) and thereby generating the claimed high-fidelity content, could also choose to edit/modify the asset parameters of the virtual object(s) in the shared authored scene/content file, corresponding to the claimed determining a set of modified asset parameters.)

The limitations “at the mobile device located within the real world scene: identifying an anchor feature in the scene, based on secondary measurements of the scene; and rendering the digital asset based on the set of modified asset parameters based on a pose of the anchor feature relative to the mobile device” are taught by Wang in view of MacIntyre. (As discussed in the claim 1 rejection above, in the modified system where Wang’s MacBook provides MacIntyre’s authoring interface for editing/modifying a shared authored scene/content file, including replacing low-fidelity asset versions with high-fidelity versions and performing the above-noted asset parameter editing/modifications, the generated high-fidelity content having the edited/modified asset parameters would be shared with other users’ mobile computing devices in order to allow consumption of that content when located within the real world scene, as described in Wang, section 3.3. Further, Wang indicates that the author specifies the location of an anchor, i.e. the focus square of section 3.2, paragraph 2, shown in figure 3, before selecting an object from a list of virtual objects to add to the anchor/focus square, and Wang, e.g. section 3.3, indicates that to consume the AR content/scene file, the viewer device roughly scans the environment to relocalize the device to match the real world scene model saved in the high-fidelity content/scene file having the edited/modified asset parameters; i.e., as claimed, the mobile device identifies anchor feature(s) in the scene based on secondary measurements of the scene, and renders the digital content/asset based on the pose of the anchor relative to the mobile device and the modified asset parameters. It is additionally noted that, like Wang’s anchor/focus squares and corresponding virtual objects, Ramani, e.g. paragraphs 51-54, 57, teaches that the virtual objects are overlaid on corresponding physical objects, i.e. anchors used to determine the relative pose of the digital asset using recorded/modified parameters. Finally, it is noted that the same mobile device used by an author for initial scanning and authoring of a scene/content .scn file, which is shared and modified by the remote computing device authoring interface to generate the high-fidelity content/scene file having the edited/modified asset parameters, could also be used for consumption of that file; i.e., the same mobile device located in the real world scene performing the operations discussed for claim 1 could consume the modified high-fidelity content/scene file as in claim 13, by identifying the anchor features as noted above.)
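For context, a minimal ARKit/SceneKit sketch of the relocalize-and-render step described above follows. The delegate callback is the standard ARSCNViewDelegate API; the asset loader and the parameter values are placeholders, not details from Wang or Ramani.

    import ARKit
    import SceneKit

    // Hypothetical sketch: once the session relocalizes, SceneKit hands the
    // app a node that tracks each anchor, and the asset is posed relative to it.
    class ViewerDelegate: NSObject, ARSCNViewDelegate {
        func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
            let asset = makeAssetNode()              // placeholder asset loader
            asset.position = SCNVector3(0, 0.05, 0)  // modified asset parameters:
            asset.eulerAngles.y = .pi / 2            // offset and rotation from the anchor
            node.addChildNode(asset)                 // node tracks the anchor's pose
        }

        private func makeAssetNode() -> SCNNode {
            SCNNode(geometry: SCNSphere(radius: 0.1))  // stand-in for the digital asset
        }
    }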
Regarding claim 22, the limitation “wherein the timeseries of asset parameters is stored in a custom data object, wherein the custom data object does not include the digital asset 3D model(s), and wherein sending the timeseries of asset parameters to the remote computing system comprises sending the custom data object to the remote computing system, wherein the custom data object comprises a custom data structure” is taught by Wang in view of Ramani and MacIntyre. (As discussed in the claim 1 rejection above, in the modified system, Wang’s MacBook, located outside the real world scene, provides MacIntyre’s authoring interface for editing/modifying a shared authored scene/content .scn file, wherein each procedural task demonstration recorded using Ramani’s interface would be a self-contained unit included in Wang’s scene/content .scn file used to share the augmented reality scene; i.e., the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration is stored in a custom data object which is sent to the remote computing system. Further, in the modified system using Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution and using Stack Overflow’s dynamic scene object loading technique, the different-fidelity digital asset models would be stored in a network-accessible database, allowing the devices editing/consuming the shared authored scene/content .scn files to dynamically load a version of each virtual object/digital asset from Jovanovikj’s repository rather than relying on static scene objects defined in the scene/content .scn file; i.e., as claimed, the custom data object sent to the remote computing system does not include the digital asset 3D model(s). Finally, the scene/content .scn file, corresponding to the claimed custom data object, comprises the claimed custom data structure, i.e. a data structure including the asset parameters and other scene data as discussed in the claim 1 rejection.)

Regarding claim 23, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed for claim 1; i.e., as discussed in the 35 U.S.C. 112(b) rejection, the sending step including the negative limitation is interpreted to correspond to the scope of claim 23, which is addressed by the modification in view of Stack Overflow.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Ramani, MacIntyre, Jovanovikj, and Stack Overflow as applied to claim 1 above, and further in view of “Spatiotemporally Consistent HDR Indoor Lighting Estimation” by Zhengqin Li, et al. (hereinafter Li).

Regarding claim 2, the limitation “further comprising generating a set of high dynamic range (HDR data) from the set of measurements, wherein the mobile version of the digital asset is rendered using the set of HDR data” is not explicitly taught by Wang. (Wang does not explicitly address generating HDR data from the measurements captured during the scanning phase, or, by extension, rendering the digital content/asset in the authoring stage using HDR data.)

However, this limitation is taught by Li. (Li, e.g. abstract, sections 1-6, figures 1, 2, discloses a system for generating an HDR environment map for rendering virtual objects in AR applications, where the HDR map is generated based on LDR video sequences or images. Li, e.g. sections 3, 4, describes details of the system, which uses a mobile device capturing LDR video and depth maps using ARKit, e.g. section 4.3, to predict a temporally consistent HDR environment map for the scene, e.g. sections 4.1, 4.2, which is used for lighting virtual objects in an AR rendering system as is known in the art, e.g. section 2, paragraph 1, figures 6, 7, 9-13, showing examples of virtual objects such as spheres and bunnies rendered with HDR environment map lighting.)

Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system as combined above to include Li’s HDR environment map generation system, in order to generate HDR environment maps of the real world scene models to support rendering virtual AR objects with lighting matching the real world scene, as is conventional in the art. In the modified system, analogous to Li, section 4.3, where a subset of frames from the video sequence is selected, along with their depth maps and camera poses, for generating the HDR environment map, Wang’s modified DistanciAR system, e.g. section 3.1, paragraphs 2-4, which already selects a subset of frames along with the camera pose, would also determine the depth map, as suggested by Wang, e.g. section 8, paragraph 7, and section 9, paragraph 8, and provide the selected keyframes to Li’s HDR environment map generation system to continuously refine/improve the HDR environment map, where the HDR environment map would then be used by the authoring interface on the author’s mobile device to render the digital content/assets using lighting matching the real world scene.
Claims 3 and 7 are rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Ramani, MacIntyre, Jovanovikj, Stack Overflow, and Li as applied to claim 2 above, and further in view of “MR360: Mixed Reality Rendering for 360° Panoramic Videos” by Taehyun Rhee, et al. (hereinafter Rhee).

Regarding claim 3, the limitation “wherein the set of HDR data comprise pre-convolved HDR environment maps with different levels of blur, each associated with a different surface roughness” is not explicitly taught by Wang in view of Li. (As discussed in the claim 2 rejection above, in the modified system the HDR environment map generated by Li’s system would be used by the authoring interface on the mobile device to render the digital content/assets using lighting matching the real world scene. While Li’s HDR environment map is used for lighting virtual objects in an AR rendering system as is known in the art, e.g. section 2, paragraph 1, figures 6, 7, 9-13, Li does not discuss rendering in detail, and further does not explicitly address generating pre-convolved HDR environment maps with different levels of blur associated with different surface roughnesses.)

However, this limitation is taught by Rhee. (Rhee, e.g. abstract, sections 1-8, describes a system for rendering mixed reality for panoramic videos. Rhee, section 3.1, figure 4, describes performing image-based lighting (IBL) for virtual objects in mixed reality, which includes convolving the environment map to generate a diffuse radiance map and multiple specular radiance maps having different roughness values, as noted in section 3.1, paragraph 4, and the figure 4 caption. Further, Rhee, section 3.1, paragraphs 2-3, and section 6.2 describe how the environment maps are sampled for diffuse and specular reflections, where the specular component is based on a weighted combination of the specular radiance maps, and used to apply lighting to each rendered pixel of the virtual object(s).)
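To illustrate the pre-convolved map selection as characterized above, a small Swift sketch follows. The types and the linear blend weight are assumptions standing in for Rhee's weighted combination, not details from the reference; the map list is assumed non-empty.

    import Foundation

    // Hypothetical sketch: each map was pre-convolved (blurred) for one
    // roughness level; a shaded point blends the two maps bracketing its
    // surface roughness.
    struct RadianceMap {
        let roughness: Float   // roughness level this map was pre-convolved for
        // pixel data omitted in this sketch
    }

    func specularLookup(maps: [RadianceMap], roughness: Float) -> (RadianceMap, RadianceMap, Float) {
        let sorted = maps.sorted { $0.roughness < $1.roughness }
        guard let hi = sorted.firstIndex(where: { $0.roughness >= roughness }) else {
            return (sorted[sorted.count - 1], sorted[sorted.count - 1], 0)  // clamp above roughest level
        }
        guard hi > 0 else {
            return (sorted[0], sorted[0], 0)  // clamp below smoothest level
        }
        let lower = sorted[hi - 1], upper = sorted[hi]
        let t = (roughness - lower.roughness) / (upper.roughness - lower.roughness)
        return (lower, upper, t)  // blend weight t toward the rougher map
    }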
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system as combined above, including Li’s HDR environment map generation system, to use Rhee’s environment-map-based lighting technique for rendering the digital content/assets, because Li does not discuss rendering in detail while Rhee does describe details of rendering digital content/assets using environment-map-based lighting in an augmented reality system, such that one of ordinary skill in the art would look to Rhee when implementing the modified system. As noted above, Rhee teaches that the environment maps are convolved prior to rendering, i.e. pre-convolved, to generate a set of pre-convolved environment maps having different levels of blur associated with different surface roughnesses. Furthermore, with respect to the limitations of claim 7, Rhee’s environment-map-based lighting technique, e.g. section 3.1, paragraphs 2-3, and section 6.2, includes determining the visual parameters of the digital content/asset at each pixel, i.e. the exemplary diffuse color, surface normal, roughness, etc., which are then used to sample the diffuse and specular environment maps to determine the diffuse and specular components for rendering each pixel, i.e. the claimed selecting an HDR datum (while Rhee suggests a plurality of samples are acquired, each sample is an individual HDR datum) based on the visual parameters at a pixel/component of the digital content/asset, which is used to render the pixel/component of the digital content/asset.

Regarding claim 7, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed for claim 3.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang in view of Ramani, MacIntyre, Jovanovikj, Stack Overflow, and Li as applied to claim 2 above, and further in view of “LitAR: Visually Coherent Lighting for Mobile Augmented Reality” by Yiqin Zhao, et al. (hereinafter Zhao).

Regarding claim 4, the limitation “wherein the set of HDR data is generated at the remote computing system, wherein the set of HDR data is sent in real-time to the mobile device, wherein the mobile version of the digital asset is rendered using the set of HDR data” is not explicitly taught by Wang in view of Li. (As discussed in the claim 2 rejection above, Wang’s modified DistanciAR system would select a subset of frames, along with the camera pose and determined depth maps, and provide the selected keyframes to Li’s HDR environment map generation system to continuously refine/improve the HDR environment map, where the HDR environment map would then be used by the authoring interface to render the digital content/assets using lighting matching the real world scene. Li, e.g. section 4.1, paragraph 5, indicates that the implemented HDR environment map generation system was designed with reduced complexity in order to be operable by the mobile device’s processor, whereas prior art systems such as the exemplary Lighthouse use a more powerful GPU to process a more complex network which would not necessarily require the reduced-complexity design choices; i.e., Li teaches, as one of ordinary skill in the art would know, that mobile device processors, having less computational power than common laptop or desktop computers, may require reduced-complexity computing models in order to achieve the desired performance results. Further, Li, e.g. section 4.2, paragraph 6, and section 4.3, paragraph 6, indicates that the system is capable of processing 13 frames per second, which corresponds to running in real time. While Wang, e.g. section 3.1, paragraph 4, teaches that a remote computing device having a more powerful processor than the mobile device, i.e. the MacBook corresponding to the claimed remote computing device located outside the real world scene as discussed in the claim 1 rejection above, can be used to perform operations supporting the mobile device, and one of ordinary skill in the art would have recognized in view of Li’s discussion of the prior art Lighthouse system that increased system complexity could result in increased quality of results, Li does not explicitly suggest that the HDR environment map generation be performed by a remote computing system providing the resulting HDR environment map data to the mobile device in real time.)

However, this limitation is taught by Zhao. (Zhao, e.g. abstract, sections 1-8, describes the LitAR system, which is directed to generating environment maps for use in AR applications. Zhao, e.g. sections 4, 5, discusses the design and implementation of LitAR, including a remote server performing environment map generation using keyframes received from a mobile device scanning the environment, e.g. section 5.1, figure 8. Further, Zhao, e.g. section 6.1.1, paragraph 3, indicates that the system operates in real time, providing updated environment maps for AR virtual object lighting at roughly 22 frames per second. Finally, Zhao, e.g. section 6.1.2, discusses the same tradeoff referred to by Li as discussed above, i.e. higher-quality environment map generation requires more processing capability.
That is, Zhao teaches that an environment map generation system analogous to Li’s may be implemented using a remote computer to perform the environment map generation at high quality in real-time for transmission to the mobile device.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system, including the feature for sharing authored scene/content .scn files with other devices; substituting an HMD type device for Wang’s mobile type device; including Ramani’s augmented reality demonstration recording feature; sharing authored scene/content .scn files with remote computing devices located outside the real world scene providing an authoring interface for editing/modifying a shared authored scene/content .scn file as taught by MacIntyre; including performing MacIntyre’s replacement of low-fidelity versions of digital assets in an authored scene/content .scn file used for prototyping with high-fidelity versions of the digital assets; including Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution; using Stack Overflow’s dynamic scene object loading technique; and including Li’s HDR environment map generation system, to implement Li’s HDR environment map generation system using the remote computing device having more processing capability to perform the processing of the captured images to generate the HDR environment map and transmit the HDR environment map data to the mobile device in real-time as taught by Zhao, in order to implement the HDR environment map system with a higher complexity design which produces higher quality results, i.e. the design tradeoff noted by Li and Zhao as discussed above. In the modified system, Wang’s remote computing device would receive the selected keyframes from the mobile device, where a higher complexity version of Li’s HDR environment map model is used to generate higher quality HDR environment maps relative to Li’s unmodified system, and provide the HDR environment map data in real-time to the mobile device. As noted above, Li teaches that the system continuously refines/updates the HDR environment map as additional frames are received, such that in the modified system each respective update would be transmitted to the mobile device in real-time, as it is computed by the remote computing device.
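To make the division of labor concrete, a rough client-side sketch of the keyframes-up, environment-maps-back exchange described above follows. The endpoint, payload encoding, and all names are illustrative assumptions rather than details from Zhao or Wang.

```swift
import Foundation

/// Hypothetical client: ships selected keyframes to a remote machine
/// and receives refined HDR environment maps back as they are computed.
final class EnvironmentMapClient {
    private let session = URLSession(configuration: .default)
    private let endpoint: URL   // assumed address of the remote computing system

    init(endpoint: URL) { self.endpoint = endpoint }

    /// Upload one keyframe (encoded RGB + depth + camera pose) and hand
    /// the returned environment-map bytes to the renderer. In a real
    /// system this would run continuously as keyframes are selected.
    func send(keyframe: Data, onUpdate: @escaping (Data) -> Void) {
        var request = URLRequest(url: endpoint)
        request.httpMethod = "POST"
        request.setValue("application/octet-stream",
                         forHTTPHeaderField: "Content-Type")
        let task = session.uploadTask(with: request, from: keyframe) { data, _, error in
            guard error == nil, let mapBytes = data else { return }
            // Each response is the latest refined HDR environment map,
            // applied to scene lighting as soon as it arrives.
            onUpdate(mapBytes)
        }
        task.resume()
    }
}
```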
Claims 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over “DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments” by Zeyu Wang, et al. (hereinafter Wang) in view of U.S. Patent Application Publication 2021/0134076 A1 (hereinafter Ramani) in view of “DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences” by Blair MacIntyre, et al. (hereinafter MacIntyre) in view of “Augmented and Virtual Reality Object Repository for Rapid Prototyping” by Ivan Jovanovikj, et al. (hereinafter Jovanovikj) in view of “How to load a model and textures from a remote server using ARKit?” by stackoverflow.com (hereinafter Stack Overflow) in view of “Spatiotemporally Consistent HDR Indoor Lighting Estimation” by Zhengqin Li, et al. (hereinafter Li) as applied to claim 2 above, and further in view of “Real-time large-scale dense RGB-D SLAM with volumetric fusion” by Thomas Whelan, et al. (hereinafter Whelan) in view of U.S. Patent Application Publication 2023/02090177 (hereinafter Inazawa).

Regarding claim 5, the limitations “wherein generating the set of HDR data comprises: determining an initial set of camera parameters; sampling low dynamic range (LDR) data of the real-world scene using a camera with settings locked to the initial set of camera parameters; and predicting the set of HDR data based on the LDR data using a machine learning model” are partially taught by Wang in view of Li (Wang, e.g. section 3.1, paragraph 3, indicates that ARKit does not allow the camera exposure to be fixed/locked during scanning, requiring texture maps to be created from blended partial texture maps. That is, Wang indicates it is a drawback that the implemented system does not allow the camera exposure to be locked while scanning, which suggests that the system could be improved by being modified to fix the camera exposure parameters during scanning. Further, Li’s system accepts either a single LDR image or multiple frames from an unconstrained LDR video as input, i.e. Li does not indicate any requirement for varying exposure during capture of the LDR images, meaning that Li’s HDR environment map generation system would work for LDR video frames having a fixed exposure. Finally, Li, e.g. section 4, describes the system as comprising CNN and RNN components for predicting the HDR environment map, i.e., as claimed, the HDR data is predicted using a machine learning model. That is, while Wang’s modified system is capable of predicting the HDR environment maps with a machine learning model using frames of an LDR video of the real scene having fixed camera exposure parameters, Wang indicates that a drawback of the unmodified system is an inability to fix the camera exposure parameters during scanning, and therefore, in the interest of compact prosecution, Whelan and Inazawa are cited for teaching this feature.) However, this limitation is taught by Whelan in view of Inazawa (Whelan, e.g. abstract, sections 1-6, describes a system for performing 3D scanning of a real world scene using RGB-D sensors, i.e. analogous to Wang and Li, capturing images and depth data to reconstruct a real world scene model. Further, Whelan, section 3.2, paragraph 1, indicates that it is known that automatic exposure and white balance features may be selectively enabled, and that sometimes it is desirable to enable the features during scanning and sometimes it is not, i.e. Whelan indicates it would be advantageous for a user to be able to selectively enable the automatic exposure feature. Additionally, Inazawa, e.g. abstract, paragraphs 14-90, describes embodiments of a digital camera capable of capturing video, which includes an Auto-Exposure (AE) feature having a lock button for fixing/locking the camera exposure settings, e.g. paragraph 16. Inazawa, e.g. paragraph 89, also indicates that the disclosed digital camera features are applicable to mobile devices such as tablets and smartphones, i.e. Inazawa teaches that an AE lock button feature can be added to a mobile device performing video capture using an auto-exposure system to fix/lock the camera exposure settings.)
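For reference, the AE lock being mapped here corresponds to a real control on iOS: AVFoundation exposes `AVCaptureDevice.exposureMode`, and setting it to `.locked` freezes whatever exposure values auto-exposure last chose. A sketch of the claim 5 flow under that assumption follows, where `HDRPredictor` is a hypothetical stand-in for a learned LDR-to-HDR model such as Li’s CNN/RNN.

```swift
import AVFoundation
import CoreVideo

/// Placeholder for a learned LDR-to-HDR model; hypothetical, for
/// illustration only.
protocol HDRPredictor {
    func predictEnvironmentMap(from ldrFrames: [CVPixelBuffer]) -> Data
}

/// Lock auto-exposure so that subsequent LDR frames are all captured
/// with the same initial camera parameters (claim 5's locked settings).
func lockExposure(on device: AVCaptureDevice) throws {
    try device.lockForConfiguration()
    if device.isExposureModeSupported(.locked) {
        device.exposureMode = .locked   // freezes the current AE values
    }
    device.unlockForConfiguration()
}

/// Sketch of the overall claim 5 flow under those assumptions.
func generateHDRData(device: AVCaptureDevice,
                     frames: [CVPixelBuffer],
                     predictor: HDRPredictor) throws -> Data {
    try lockExposure(on: device)   // determine and lock the initial parameters
    // `frames` are assumed to be LDR samples captured after the lock.
    return predictor.predictEnvironmentMap(from: frames)
}
```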
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system, including the feature for sharing authored scene/content .scn files with other devices; substituting an HMD type device for Wang’s mobile type device; including Ramani’s augmented reality demonstration recording feature; sharing authored scene/content .scn files with remote computing devices located outside the real world scene providing an authoring interface for editing/modifying a shared authored scene/content .scn file as taught by MacIntyre; including performing MacIntyre’s replacement of low-fidelity versions of digital assets in an authored scene/content .scn file used for prototyping with high-fidelity versions of the digital assets; including Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution; using Stack Overflow’s dynamic scene object loading technique; and including Li’s HDR environment map generation system, to add Inazawa’s AE lock button feature to Wang’s mobile device used for scanning the real world scene in order to allow the user to manually lock the auto-exposure settings prior to scanning the scene as suggested by Whelan and Wang, i.e., as noted, Whelan indicates it is sometimes desirable to fix the exposure settings during scanning, and Wang indicates that it is a drawback that the exposure settings cannot be fixed during capture in the unmodified system. In the modified system, as taught by Inazawa, an AE lock button would be available to the user in order to fix/lock the exposure settings as desired, such that, as claimed, prior to scanning, the unlocked auto-exposure system would determine camera exposure settings until locked by the user, establishing the claimed initial set of camera parameters, so that when capturing the LDR video the camera exposure settings would be locked to said initial set of camera parameters. Further, with respect to claim 6, neither Wang nor Whelan suggests that the exposure settings should be locked during the authoring stage, and the user would still be able to unlock the AE setting manually, such that during the authoring stage where the mobile version of the digital asset is rendered, as discussed in the claim 1 rejection above, the camera settings could be unlocked.

Regarding claim 6, the limitations are similar to those treated in the above rejection(s) and are met by the references as discussed in claim 5 above.

Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over “DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments” by Zeyu Wang, et al. (hereinafter Wang) in view of U.S. Patent Application Publication 2021/0134076 A1 (hereinafter Ramani) in view of “DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences” by Blair MacIntyre, et al. (hereinafter MacIntyre) in view of “Augmented and Virtual Reality Object Repository for Rapid Prototyping” by Ivan Jovanovikj, et al. (hereinafter Jovanovikj) in view of “How to load a model and textures from a remote server using ARKit?” by stackoverflow.com (hereinafter Stack Overflow) as applied to claim 9 above, and further in view of “Paper Trail: An Immersive Authoring System for Augmented Reality Instructional Experiences” by Shwetha Rajaram, et al. (hereinafter Rajaram).
Regarding claim 11, the limitation “wherein a low-fidelity version of the asset audio-visual media is displayed at the mobile device while generating the low-fidelity content, wherein a high fidelity version of the asset audio-visual media is used to generate the high-fidelity content” is partially taught by Wang in view of Ramani and MacIntyre (As discussed in the claim 1 rejection above, MacIntyre anticipates the digital asset parameters being a timeseries of parameters associated with a low-fidelity rapid prototyping asset version, i.e. sketched actors, and a high-fidelity asset version, i.e. video actors, meaning that in the modified system MacIntyre’s authoring interface would also be capable of replacing low-fidelity rapid prototyping versions used for recording Ramani’s procedural task demonstrations with high-fidelity asset versions using the same recorded timeseries of asset parameters. Further, Wang’s exemplary digital content/assets are 3D models, and while MacIntyre teaches using animated/video actors as AR content/digital assets, the modification of the claim 1 rejection above does not include using MacIntyre’s animated/video actor content/digital assets, per se, in Wang’s modified system. Finally, while Ramani, e.g. paragraph 56, teaches that the expert user recording the demonstrations may also edit the recordings using the same interface, i.e. the low-fidelity version of the assets could be used during editing performed by the expert user, whereas the high-fidelity versions could be used during playback/consumption, Ramani does not describe the details of this interface. Although Ramani’s timeseries of asset parameters, including pose, rotation, and synchronized audio, reads on the timeseries of audio-visual parameters as discussed in the claim 8-10 rejections above, in the interest of compact prosecution, Rajaram is cited for teaching that the digital asset, per se, may be an audio-visual medium such as a video.) However, this limitation is taught by Rajaram (Rajaram, e.g. abstract, sections 1-8, describes the Paper Trail system, which allows a user to author augmented reality content relative to paper present in the real world scene. Rajaram, e.g. sections 3, 4, describes various digital content/asset types available to the author, e.g. as in figure 4, the digital content/assets may include digital media such as videos, audio clips, animations, and video bookmarks linking to particular timestamps within the video. Rajaram’s videos are audio-visual media, e.g. section 4.4, paragraph 5 indicates an exemplary video is a lecture, which would include both video and the audio of the lecturer. Finally, as in section 4.3, paragraph 4, a visual digital asset can also be linked to an audio clip, i.e. Rajaram teaches that any visual digital content/asset can be linked to an audio clip, creating an audio-visual digital content/asset.)
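The range of content/asset types being attributed to Rajaram can be pictured as a small sum type with an option to link audio to any visual asset. This sketch is editorial, with illustrative names only, and is not code from the reference.

```swift
import Foundation

/// Sketch of the digital content/asset types discussed above. A visual
/// asset may carry a linked audio clip, which is what turns it into an
/// audio-visual digital content/asset.
enum DigitalAsset {
    case model3D(URL)                       // Wang's exemplary 3D models
    case video(URL)                         // audio-visual media
    case audioClip(URL)
    case animation(URL)
    case videoBookmark(video: URL, timestamp: TimeInterval)
    indirect case withLinkedAudio(asset: DigitalAsset, audio: URL)
}

// Example: a lecture video bookmark, and a 3D model linked to narration.
let lecture = DigitalAsset.videoBookmark(
    video: URL(fileURLWithPath: "lecture.mp4"), timestamp: 340)
let narratedModel = DigitalAsset.withLinkedAudio(
    asset: .model3D(URL(fileURLWithPath: "pump.scn")),
    audio: URL(fileURLWithPath: "narration.m4a"))
```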
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system, including the feature for sharing authored scene/content .scn files with other devices; substituting an HMD type device for Wang’s mobile type device; including Ramani’s augmented reality demonstration recording feature; sharing authored scene/content .scn files with remote computing devices located outside the real world scene providing an authoring interface for editing/modifying a shared authored scene/content .scn file as taught by MacIntyre; including performing MacIntyre’s replacement of low-fidelity versions of digital assets in an authored scene/content .scn file used for prototyping with high-fidelity versions of the digital assets; including Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution; and using Stack Overflow’s dynamic scene object loading technique, to include Rajaram’s Paper Trail tools for generating/linking to additional types of digital content/assets in order to allow Wang’s authoring interface to support additional types of digital content/assets beyond the exemplary 3D models in Wang’s unmodified system. In Wang’s modified system, in addition to being able to select 3D models using the authoring interface, as discussed in the claim 1 rejection above, the author would be able to add Rajaram’s digital content/assets as in section 4, figure 4, as discussed above, i.e. digital media such as videos, audio clips, animations, and video bookmarks, corresponding to the claimed audio-visual media. Further, as taught by MacIntyre, low-fidelity audio-visual media/content assets can be prepared for the efficient/rapid design prototyping stage and later replaced with high-fidelity audio-visual media/content assets, corresponding to the claimed low-fidelity version displayed at the mobile device while generating the low-fidelity content, and the high-fidelity version used to generate the high-fidelity content. That is, in Wang’s modified system including Rajaram’s Paper Trail tools for generating/linking to additional types of digital content/assets, as taught by MacIntyre, a low-fidelity version of an audio-visual asset could be used during the authoring/prototyping stage, including as one of the manipulated digital assets in one of Ramani’s recorded procedural task demonstrations, and later replaced with a high-fidelity version to generate the high-fidelity scene/content file.
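Mechanically, the low-to-high fidelity replacement reasoned through above amounts to swapping the asset reference while reusing the recorded timeseries unchanged, along the lines of the following sketch (types and names are illustrative assumptions, not drawn from MacIntyre or Ramani):

```swift
import Foundation
import simd

/// One recorded sample of an asset's parameters (time, pose).
struct AssetSample {
    let time: TimeInterval
    let position: SIMD3<Float>
    let rotation: simd_quatf
}

/// A recorded track keeps the timeseries; the asset it animates is a
/// reference that can be swapped without touching the recording.
struct RecordedTrack {
    var assetURL: URL              // low-fidelity placeholder at first
    let samples: [AssetSample]     // unchanged by the swap
}

/// Replace the low-fidelity prototyping asset with its high-fidelity
/// production version, reusing the same recorded timeseries.
func promote(_ track: RecordedTrack, highFidelity: URL) -> RecordedTrack {
    RecordedTrack(assetURL: highFidelity, samples: track.samples)
}
```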
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over “DistanciAR: Authoring Site-Specific Augmented Reality Experiences for Remote Environments” by Zeyu Wang, et al. (hereinafter Wang) in view of U.S. Patent Application Publication 2021/0134076 A1 (hereinafter Ramani) in view of “DART: A Toolkit for Rapid Design Exploration of Augmented Reality Experiences” by Blair MacIntyre, et al. (hereinafter MacIntyre) in view of “Augmented and Virtual Reality Object Repository for Rapid Prototyping” by Ivan Jovanovikj, et al. (hereinafter Jovanovikj) in view of “How to load a model and textures from a remote server using ARKit?” by stackoverflow.com (hereinafter Stack Overflow) as applied to claim 1 above, and further in view of “Time Travellers: An Asynchronous Cross Reality Collaborative System” by Hyunwoo Cho, et al. (hereinafter Cho).

Regarding claim 21, the limitations “transmitting [the] custom data object to the remote computing system, wherein the custom data object comprises: a scene geometry representation derived from the set of measurements; a scene lighting representation derived from sensor settings and Red-Green-Blue (RGB) measurements” are taught by Wang (As discussed in the claim 1 and 22 rejections above, in the modified system, Wang’s MacBook, located outside the real world scene, provides MacIntyre’s authoring interface for editing/modifying a shared authored scene/content .scn file, wherein each procedural task demonstration recorded using Ramani’s interface would be a self-contained unit included in Wang’s scene/content .scn file used to share the augmented reality scene, i.e. the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration is stored in a custom data object which is sent to the remote computing system. Further, Wang, e.g. sections 3.1, 3.2, indicates that the entire scene is saved as a .scn file once authoring is complete, where the scene comprises the textured 3D scene model generated using the scanning interface as described in section 3.1, i.e. the .scn file includes the scene geometry 3D model derived from the scanning/video captured using the scanning interface, e.g. figure 2(d), top, shows an image of the untextured 3D scene geometry model, as well as the color texture map derived from the colors of the scanning/video projected onto the geometry model. That is, the 3D scene model corresponds to the claimed scene geometry derived from the set of measurements, and the texture map corresponds to the scene lighting representation derived from the sensor settings (camera pose) and RGB measurements (pixel colors), such that the .scn file corresponding to the custom data object comprises the claimed scene geometry and scene lighting representation. It is noted that although Wang’s color texture map does not necessarily provide lighting information for virtually lighting virtual objects, the claim merely requires that a “scene lighting representation” is included in the custom data object, and as shown in figure 2(d), bottom, the lighting present in the scene is represented in the texture map.) The limitations “a set of takes, each take comprising: … a timeseries of asset parameters for the digital asset” are taught by Wang in view of Ramani (As discussed in the claim 1 rejection above, Wang’s modified authoring interface would include Ramani’s augmented reality demonstration recording interface, e.g. paragraphs 49-56, where each recorded procedural task would be a self-contained unit included in Wang’s scene/content .scn file used to share the augmented reality scene, i.e. the claimed timeseries of asset parameters for the virtual object(s)/digital asset(s) being manipulated in the demonstration. Ramani teaches that each demonstration is saved as a self-contained file describing a timeseries of object coordinates and orientations, e.g. paragraphs 26, 43, 51-54, where the demonstrations are used for later playback by a novice/learning user, e.g. paragraphs 51, 57.
That is, each recorded demonstration corresponds to the claimed “take” in “a set of takes”, wherein the author starts the recording, demonstrates the task by manipulating the virtual objects in the scene, and ends the recording process, causing the system to generate a take comprising a timeseries of asset parameters corresponding to the script of paragraph 54.) The limitations “a set of takes, each take comprising: a depth video; sensor pose data; RGB video; and a timeseries of asset parameters for the digital asset” are partially taught by Wang in view of Ramani (As discussed above, Ramani’s recorded demonstrations each correspond to the claimed “take” in “a set of takes”, comprising a timeseries of asset parameters and sensor pose data. Further, Ramani, e.g. paragraph 55, indicates that additional data may be included in the procedural task/take, suggesting the inclusion of voice recordings of the author performing the demonstration. Finally, Ramani, e.g. paragraphs 27-33, teaches that the system records both color and depth video, e.g. paragraph 29, and tracks the pose of the head mounted AR device, e.g. paragraph 33, i.e. each procedural task/take is recorded with synchronized RGB and depth video, and sensor pose data. While Ramani does not explicitly teach storing the recorded video and sensor pose data as part of the procedural task/take, one of ordinary skill in the art would have recognized that this recorded data could be included, analogous to the included audio as in paragraph 55, e.g. Cho, abstract, sections 1-6, describes an analogous augmented reality demonstration recording system, where Cho teaches, e.g. section 3.2, recording author avatar pose data, trajectory parameters of demonstration objects/models, gaze point trajectories, and the video capturing the author’s perspective, i.e., as taught by Cho, one of ordinary skill in the art would have understood that augmented reality demonstration recordings could include other video and tracking data captured synchronously with the demonstration to support different demonstration playback modalities.)

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Wang’s DistanciAR system, including the feature for sharing authored scene/content .scn files with other devices; substituting an HMD type device for Wang’s mobile type device; including Ramani’s augmented reality demonstration recording feature; sharing authored scene/content .scn files with remote computing devices located outside the real world scene providing an authoring interface for editing/modifying a shared authored scene/content .scn file as taught by MacIntyre; including performing MacIntyre’s replacement of low-fidelity versions of digital assets in an authored scene/content .scn file used for prototyping with high-fidelity versions of the digital assets; including Jovanovikj’s augmented and virtual reality object repository for storing/providing the virtual object/digital asset models at multiple levels of detail/fidelity/resolution; and using Stack Overflow’s dynamic scene object loading technique, to additionally store the recorded camera pose data and video color and depth data in Ramani’s augmented reality demonstration recordings as taught by Cho in order to support different demonstration playback modalities.
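Under the mapping above, the claimed custom data object reduces to a container along the following lines. The field types are editorial guesses for illustration, not definitions from the application or the references.

```swift
import Foundation
import simd

/// One sample in the timeseries of asset parameters.
struct AssetParameterSample {
    let time: TimeInterval
    let transform: simd_float4x4
}

/// One recorded demonstration ("take") as mapped in the claim 21
/// rejection: synchronized video, sensor pose, and asset parameters.
struct Take {
    let depthVideo: URL
    let rgbVideo: URL
    let sensorPoses: [simd_float4x4]            // HMD/camera pose per frame
    let assetParameters: [AssetParameterSample] // timeseries for the asset
}

/// The claimed custom data object sent to the remote computing system:
/// scene geometry from the scan, a lighting representation derived from
/// sensor settings and RGB measurements, and the set of takes.
struct SceneDataObject {
    let geometry: URL                 // e.g. the textured 3D scene model
    let lightingRepresentation: URL   // e.g. the color texture map
    let takes: [Take]
}
```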
In the modified system, each recorded procedural task would include, in addition to the timeseries of parameters for the virtual object(s)/digital asset(s) which are manipulated in the recorded demonstration, the recorded pose data of the HMD/camera and the recorded video color and depth data, i.e., as claimed, each take in the set of takes includes a depth video, sensor pose data, RGB video, and a timeseries of asset parameters for the digital asset(s).

Response to Arguments

Applicant's arguments filed 1/20/26 have been fully considered but they are not persuasive.

In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986). Applicant’s remarks do not actually address the combinations as mapped in the rejections; instead, they address what each reference individually discloses and offer Applicant’s opinion of how the references could be combined, without acknowledging the specifically explained modifications of the rejections or otherwise identifying distinctions between the claim limitations and the corresponding combination as mapped and explained in the rejection. As Applicant’s remarks do not actually show error in the proposed modifications of the rejections, and instead allege error in hypothetical combinations of the references, they cannot be considered persuasive. Specific examples are discussed in the following paragraphs.

Pages 8-13 of the remarks discuss Wang, MacIntyre, Ramani, Cho, and Rajaram separately, concluding that because no single reference anticipates the timeseries limitations, the references fail to teach said limitations. Applicant’s remarks do not actually discuss the combination of references with respect to the timeseries of asset parameters as mapped in the previous rejection of claim 21. With respect to Ramani, Applicant acknowledges the disclosure of a timeseries of asset parameters. Applicant does not actually address the combined teachings of the disclosures, i.e. Ramani teaching the timeseries of asset parameters, and Stack Overflow’s dynamic scene object loading technique allowing 3D models to be loaded dynamically rather than requiring that the 3D models be stored with their display parameters in a scene. Instead, Applicant erroneously asserts that Ramani teaches away from the claimed limitation, as addressed further below, without actually addressing the combination of references as mapped in the above rejections, which do teach the claimed timeseries limitations.

On page 14 of the remarks, Applicant argues that Li and Zhao do not teach the limitations of claim 4, without addressing the combination of references as mapped in the claims 2 and 4 rejections, i.e. Wang teaches rendering the mobile/low-fidelity versions of the asset as discussed in the claims 1 and 2 rejections, which is modified to use environment maps including Li’s HDR environment maps, with Zhao teaching that an analogous system may be implemented on a remote computer. Rather than acknowledge or address the combination of references, Applicant argues that neither reference anticipates the entire limitation. Applicant further argues that Rhee does not anticipate the limitations of claim 4, but Rhee is not cited in the claim 4 rejection.
As Applicant’s remarks do not address the combination of references, and instead only address what each reference anticipates, Applicant’s argument cannot be considered persuasive.

Applicant asserts, e.g. page 15, that the references do not teach a “custom data object” comprising “a custom data structure”. Applicant’s argument is semantic, i.e. it appears to be based on the lack of literal use of the term “custom”. However, Applicant’s remarks do not identify any additional requirement for “custom” beyond those recited in the claims, i.e. any anticipated or prior art data structure comprising the claimed data components as recited in claims 21 and 22, respectively, is necessarily the claimed custom data object, and by comprising multiple types of data, is also necessarily the claimed custom data structure. Therefore, these assertions cannot be considered persuasive.

In response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning. But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971). Applicant makes multiple assertions of impermissible hindsight, but in each instance fails to acknowledge that the rejection cites explicit motivations for the combinations, and further fails to provide any reason why said motivations would not have been apparent to one of ordinary skill in the art and could only have been conceived with knowledge gleaned from Applicant’s disclosure.

On page 15, Applicant asserts that combining the data types suggested by Cho into a single data object would be impermissible hindsight, despite this being the literal teaching of Cho, i.e. Cho did in fact combine these data types into a combined data object, and for the purpose of supporting different demonstration playback modalities, contradicting Applicant’s assertion that the combination relies on impermissible hindsight. Therefore, this argument cannot be considered persuasive.

Applicant argues on pages 15-16 that combining Wang, Ramani, and Cho relies on impermissible hindsight because it combines static and dynamic information, suggesting there is “no reason or motivation” to modify Wang’s system with Ramani’s virtual object manipulation and recording. Applicant’s argument fails to acknowledge that the rejection cites the motivation of supporting augmented reality demonstration authoring, where Ramani and Cho both disclose systems showing the benefits of augmented reality demonstrations. Rather than show that the rejection relies on a motivation disclosed by Applicant, Applicant’s remarks continue by arguing the modification would require “significant modification” and “experimentation past the teachings of the other references”, where such factors, in addition to simply being Applicant’s opinion unsupported by any factual evidence or detailed rationale, are not indicators of impermissible hindsight, or of non-obviousness in general, i.e. many obvious modifications to prior art references are complex to implement, and many non-obvious claimed inventions are trivially implemented modifications to prior art references.
Therefore, this argument cannot be considered persuasive because the cited motivation is found in the references.

Applicant further argues that Wang, MacIntyre, and Rajaram are directed to rapid prototyping and design exploration whereas Li and Zhao aim for computationally expensive methods. Applicant’s remarks fail to address the actual modification, which accounts for both the separate rapid prototyping phase and the later use of high-fidelity replacement assets, and for the use of either mobile or remote computing resources for generating the HDR lighting data; instead, they conclude, without any analysis of the proposed modification, that the references do not motivate their combination as set forth in the rejection. Applicant further asserts that the references do not motivate “creating this specific real-time loop”, without actually acknowledging or addressing the cited motivations, which cannot be considered persuasive. Applicant additionally alleges that the proposed combination would render Li’s and Zhao’s methods “inoperable for their intended purpose” because of the computing power of the mobile device, again without actually acknowledging or addressing the specific combinations as explained in the rejections, which do address these computing details. Further, the requirement for a modified system to remain operable for its intended purpose only applies to the primary reference being modified, i.e. Wang’s modified system remains a system for performing augmented reality authoring, contradicting Applicant’s assertion that this is an indicator of impermissible hindsight reconstruction.

Applicant’s remarks additionally allege several instances of a reference teaching away from the proposed modification or the claim limitations. Applicant is reminded that the standard for a prior art reference teaching away from the claimed invention requires that the reference explicitly criticize, discredit, or otherwise discourage the solution claimed. Applicant’s remarks, page 12, suggest that Ramani teaches away from the claimed invention, but Applicant’s citation is simply what Ramani anticipates in terms of storage, and does not amount to criticizing, discrediting, or otherwise discouraging the claimed solution. Applicant’s remarks, page 13, suggest Rajaram teaches away from the claimed invention because Rajaram states an intent to leave paper at the core of the interaction, which does not amount to criticizing, discrediting, or otherwise discouraging the claimed solution. Applicant’s remarks, page 18, suggest that Jovanovikj teaches away from combination with MacIntyre, but the rejection does not combine “Jovanovikj with DART”, i.e. the specific modification is merely that one of ordinary skill in the art would recognize, as anticipated by Jovanovikj, that MacIntyre’s replacement of low-fidelity rapid prototyping asset versions with high-fidelity production asset versions could include low-fidelity and high-fidelity 3D polygonal models, and Applicant’s citation of Jovanovikj does not discuss, much less teach away from, this specific modification. Further, Jovanovikj does not actually teach away from combination with DART, i.e. observing that previous approaches may be too restrictive does not amount to criticizing, discrediting, or otherwise discouraging the claimed solution.

Finally, Applicant’s remarks, page 17, allege that Stack Overflow teaches away from the limitations as claimed. Applicant describes the response on page 4 of Stack Overflow as “a final conclusion”.
The rejection specifically cited pages 1-3 of Stack Overflow, wherein the original user posted the question, and the top rated answer, also by the original user, described the solution they ended up using for the problem, which describes the dynamic scene object loading technique. First, Applicant’s assertion of teaching away is clearly contradicted by the fact that pages 1-3 do disclose the dynamic scene object loading technique, and Applicant’s remarks fail to suggest otherwise. Second, as is apparent from reading the reference, Applicant is citing a response from a different user, which not only fails to answer the posed question, but received 0 upvotes, indicating it is not actually a solution recommended by Stack Overflow. Finally, even if the portion of Stack Overflow cited by Applicant were actually a serious attempt to answer the posed question, suggesting the original user “save the trouble” does not amount to criticizing, discrediting, or otherwise discouraging the claimed solution, especially in light of the fact that the suggestion to “save the trouble” was posted 3 years after the original user had already posted their implemented solution, preventing any trouble from being saved. Therefore, this argument cannot be considered persuasive.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT BADER, whose telephone number is (571) 270-3335. The examiner can normally be reached 11-7, M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tammy Goddard, can be reached at 571-272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/ROBERT BADER/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Nov 20, 2024: Application Filed
Feb 26, 2025: Non-Final Rejection (§103, §112)
Jun 12, 2025: Examiner Interview Summary
Jun 12, 2025: Applicant Interview (Telephonic)
Jul 07, 2025: Response Filed
Oct 15, 2025: Final Rejection (§103, §112)
Jan 20, 2026: Request for Continued Examination
Jan 27, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586334: SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12586335: SYSTEMS AND METHODS FOR RECONSTRUCTING A THREE-DIMENSIONAL OBJECT FROM AN IMAGE (granted Mar 24, 2026; 2y 5m to grant)
Patent 12541916: METHOD FOR ASSESSING THE PHYSICALLY BASED SIMULATION QUALITY OF A GLAZED OBJECT (granted Feb 03, 2026; 2y 5m to grant)
Patent 12536728: SHADOW MAP BASED LATE STAGE REPROJECTION (granted Jan 27, 2026; 2y 5m to grant)
Patent 12505615: GENERATING THREE-DIMENSIONAL MODELS USING MACHINE LEARNING MODELS (granted Dec 23, 2025; 2y 5m to grant)
Based on the examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 70% (+26.4%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 393 resolved cases by this examiner. Grant probability derived from career allow rate.
