Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 05/31/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1, 2, 4-5, 10-12, 14-15, and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Little et al. (US 12450662 B1) (Hereinafter referred to as Little).
Regarding claim 1, Little discloses a method comprising:
receiving, using control circuitry, first image data of an environment captured during a first time period, the first image data characterizing the environment in three dimensions during the first time period;
accessing, using control circuitry, stored second image data of the environment captured during a second time period earlier than the first time period, the second image data characterizing the environment in three dimensions during the second time period; and (see Col 8 Line 2-7, “Certain computing devices may additionally or alternatively include one or more hardware components (e.g., application specific integrated circuits, field programmable gate arrays, systems on a chip, and the like) to implement some or all of the functionalities they are described as performing.”;
Col 1 Line 65 - Col 2 Line 10, “a system, comprising one or more processors and memory coupled to the one or more processors. The memory stores instructions executable by the one or more processors to perform operations. The operations include receiving first three-dimensional image data at a first time, the first three-dimensional image data representing an environment and a first plurality of objects disposed within the environment at the first time. The operations further include receiving second three-dimensional image data at a second time, the second time being later than the first time, and the second three-dimensional image data representing the environment at the second time.”)
causing, using control circuitry, a display image to be rendered at an extended reality device during the first time period, based on the first image data; (see Col 8 Line 18-20, “the depiction of the user 302 and VR/AR/MR rendering device 304 is provided to show the perspective of the view of the environment 300.”;
Col 1 Line 53-59, “The techniques further include causing an electronic device to display a three-dimensional image of the environment based at least in part on at least one of the first three-dimensional image data and the second three-dimensional image data, the three-dimensional image including a three-dimensional rendering of the object.”;
Col 13 Line 41-46, “The VR/AR/MR rendering device 1004 may be capturing an image of an actual environment in real time, such as obtaining point cloud or other data from which a three-dimensional image representative of the actual environment 1000 may be displayed.”)
wherein the display image comprises an object from the second image data. (see Col 2 Line 12-14, “The operations further include identifying an object of the first plurality of objects based at least in part on the difference.”;
ABSTRACT, “An electronic device displays a three-dimensional image of the environment based at least in part on at least one of the first and second three-dimensional image data, including a three-dimensional rendering of the object.”).
Regarding claim 2, Little discloses wherein the object is disposed at a position in the environment in the second image data, wherein the rendering comprises rendering the object at the position in the environment. (see Col 1 Line 51-59, “The techniques further include identifying an object of the first plurality of objects based at least in part on the difference. The techniques further include causing an electronic device to display a three-dimensional image of the environment based at least in part on at least one of the first three-dimensional image data and the second three-dimensional image data, the three-dimensional image including a three-dimensional rendering of the object.”).
Regarding claim 4, Little discloses the method of claim 1, wherein accessing the stored second image data comprises: determining that the environment characterized by the first image data is the same as the environment characterized by the second image data. (see FIG. 21 and Col 3 Line 56-58, “Fig. 21 illustrates a process in which differences to an environment are automatically detected.”;
[FIG. 21 of Little reproduced: media_image1.png, 843 × 592, greyscale]
Col 20 Line 55-58, “At 2106, the one or more processors identify an item of the first plurality of items based on a difference between the first three-dimensional image data and the second three-dimensional image data.”).
Regarding claim 5, Little discloses the method of claim 1, further comprising:
extracting a first plurality of features from the first image data; and
comparing the first plurality of features with a second plurality of features extracted from the stored second image data to identify at least one selected feature from:
one or more matched features of the first image data and the stored second image data; or
one or more unmatched features of the stored second image data; or
one or more unmatched features of the first image data;
wherein the display image comprises the at least one selected feature. (see claim 1, “determining, based on the first three-dimensional point cloud data and using geometric processing, first feature data corresponding to the object, wherein the first feature data comprises one or more of: an object surface geometry, a combination of object surface geometries, or an object color value;… determining, based on the second three-dimensional point cloud data and using geometric processing, second feature data corresponding to the object; retrieving the first feature data from the inventory database; determining that a difference between the first feature data and the second feature data meets or exceeds a difference threshold;… a three-dimensional rendering of the object generated based on the first feature data and the second feature data, an indicia representing the difference presented over the three-dimensional rendering of the object,").
Regarding claim 10, Little discloses the method of claim 1, wherein the accessing is performed in response to detecting an interaction with a virtual object in the environment via the extended reality device. (see Col 10 Line 39-46, “That is, the VR/AR/MR processor 102 may process the user utterance 502 and providing an indication or representation of the indication 504 and the indication 508 to the VR/AR/MR rendering device 302, for display to the user 302. By displaying information about the objects and interacting with the user via a virtual environment, an accurate and complete inventory of objects may be generated.”).
Regarding claim 11, claim 11 is similar in scope to claim 1 and is rejected under the same rationale.
Regarding claim 12, claim 12 is similar in scope to claim 2 and is rejected under the same rationale.
Regarding claim 14, claim 14 is similar in scope to claim 4 and is rejected under the same rationale.
Regarding claim 15, claim 15 is similar in scope to claim 5 and is rejected under the same rationale.
Regarding claim 20, claim 20 is similar in scope to claim 10 and is rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3, 6, 13, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Little et al. (US 12450662 B1) (Hereinafter referred to as Little) in view of Hasselman et al. (“ARephotography: Revisiting Historical Photographs using Augmented Reality." Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems. 2023.) (Hereinafter referred to as Hasselman).
Regarding claim 3, Little does not explicitly disclose further comprising modifying the object based on the first image data.
Hasselman more explicitly teaches modifying the object based on the first image data. (see FIG. 1, “Our approach takes a historical photograph (Left) of a building and produces a textured 3D model that is visually overlaid over the current view of the building using AR”; ABSTRACT, “where you can view buildings and street views from historical photography seamlessly embedded in the present environment.").
As both Little and Hasselman are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include modifying the object based on the first image data, in the context of extended reality as disclosed by Little, according to the teachings of Hasselman, in order to enable a more immersive AR experience with more realistic content (see 5 CONCLUSION AND FUTURE WORK of Hasselman).
Regarding claim 6, Little discloses comparing the first plurality of features with the second plurality of features to identify the one or more matched features of the first image data and the stored second image data; (see claim 1 of Little, “determining, based on the first three-dimensional image data, first three-dimensional point cloud data for an object of the first plurality of objects; determining, based on the first three-dimensional point cloud data and using geometric processing, first feature data corresponding to the object,...determining, based on the second three-dimensional point cloud data and using geometric processing, second feature data corresponding to the object; retrieving the first feature data from the inventory database; determining that a difference between the first feature data and the second feature data meets or exceeds a difference threshold;”).
However, Little does not explicitly disclose and spatially aligning the stored second image data with the first image data using the one or more matched features.
Hasselman more explicitly teaches spatially aligning the stored second image data with the first image data using the one or more matched features. (see 4 USER STUDY of Hasselman, “The AR view with our 3D reconstruction provides a better understanding of the spatial relationship and alignment between the historic photograph and the real building”).
As both Little and Hasselman are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include spatially aligning the stored second image data with the first image data using the one or more matched features, in the context of extended reality as disclosed by Little, according to the teachings of Hasselman, in order to enable a more immersive AR experience with more realistic content and a better spatial experience (see 5 CONCLUSION AND FUTURE WORK of Hasselman).
Regarding claim 13, claim 13 is similar in scope to claim 3 and is rejected under the same rationale.
Regarding claim 16, claim 16 is similar in scope to claim 6 and is rejected under the same rationale.
Claim(s) 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Little et al. (US 12450662 B1) (Hereinafter referred to as Little) in view of Jerripothula et al. (US 12530748 B2) (Hereinafter referred to as Jerripothula).
Regarding claim 7, Little discloses and at least one selected from: wherein the object is segmented from the stored second image data; or wherein the object is identified based on the one or more unmatched features. (see Col 1 Line 51-59 of Little, “The techniques further include identifying an object of the first plurality of objects based at least in part on the difference. The techniques further include causing an electronic device to display a three-dimensional image of the environment based at least in part on at least one of the first three-dimensional image data and the second three-dimensional image data, the three-dimensional image including a three-dimensional rendering of the object.”).
However, Little does not explicitly disclose identifying the object from the stored second image data based on a saliency evaluation.
Jerripothula more explicitly teaches identifying the object from the stored second image data based on a saliency evaluation. (see ABSTRACT of Jerripothula, “A region of interest (ROI) in the at least one of the plurality of images may be determined based on the saliency parameters and the co-saliency parameters.”).
As both Little and Jerripothula are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include identifying the object from the stored second image data based on a saliency evaluation, in the context of extended reality as disclosed by Little, according to the teachings of Jerripothula, in order to improve automated region of interest detection in large-scale image processing scenarios (see Col 1 Line 62-65 of Jerripothula).
Regarding claim 17, claim 17 is similar in scope to claim 7 and is rejected under the same rationale.
Claim(s) 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Little et al. (US 12450662 B1) (Hereinafter referred to as Little) in view of Chen et al. (US 20240144620 A1) (Hereinafter referred to as Chen).
Regarding claim 8, Little discloses wherein the display image comprises the modified object. (see Col 14 Line 58-63 of Little, “The object (designated as 1102c is shown as the object may be displayed to a user of a VR/AR/MR rendering device, such as a user of the VR/AR/MR rendering device 104. An image of the object 1102c is displayed to the user as an image of the undamaged object 1102a in a virtual overlay on top of the damaged object 1102b.”).
However, Little does not explicitly disclose determining one or more environmental context values from the first image data; and modifying the object based on the one or more environmental context values.
Chen more explicitly teaches determining one or more environmental context values from the first image data; (see para. [0087] of Chen, “The input images may be the output representative data (accessed video essence and metadata) of the moment in VR. Additional visual data may provide texture, color, and context information to reduce inaccuracy and inconsistency in depth estimation and inpainting. Improved depth values may help in rendering new views.”)
modifying the object based on the one or more environmental context values; (see para. [0091] of Chen, “In some embodiments, having additional texture, color, context and depth information from capturing the VR moment may improve depth estimation and may improve synthesizing new views to complete the point cloud.”).
As both Little and Chen are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include determining one or more environmental context values from the first image data and modifying the object based on the one or more environmental context values, in the context of extended reality as disclosed by Little, according to the teachings of Chen, in order to enable creation of an enhanced image (e.g., 2D or 3D images, photos, videos) of a view of a VR environment (see para. [0005] of Chen).
Regarding claim 18, claim 18 is similar in scope to claim 8 and is rejected under the same rationale.
Claim(s) 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Little et al. (US 12450662 B1) (Hereinafter referred to as Little) in view of Cansizoglu et al. (US 12307605 B2) (Hereinafter referred to as Cansizoglu).
Regarding claim 9, Little does not explicitly disclose generating third image data comprising the first image data and the object; and storing the third image data.
Cansizoglu more explicitly teaches generating third image data comprising the first image data and the object; and storing the third image data. (see Col 31 Line 35-37 of Cansizoglu, “third image data based on the first metadata and the second image processing operation performed on the second image data.”).
As both Little and Cansizoglu are from the same field of endeavor, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include generating third image data comprising the first image data and the object, and storing the third image data, in the context of extended reality as disclosed by Little, according to the teachings of Cansizoglu, in order to enable real and virtual environments to be combined in varying degrees to facilitate interactions from a user in a real-time manner (see Col 2 Line 36-38 of Cansizoglu).
Regarding claim 19, claim 19 is similar in scope to claim 9 and is rejected under the same rationale.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hyorim Park whose telephone number is (571)272-3859. The examiner can normally be reached Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jason Chan can be reached at (571) 272-3022. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Hyorim Park/Examiner, Art Unit 2619
/JASON CHAN/Supervisory Patent Examiner, Art Unit 2619