Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Drawings
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(4) because reference character 110 in Fig. 1 has been used to designate both an MR headset [0004] and a spatial computing device [0014]. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they do not include the following reference sign(s) mentioned in the description: reference character 522 in Fig. 5 mentioned in [0046]. Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: reference characters 235 and 240 in Fig. 2. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
The drawings are objected to because Figs. 3 and 4 are confusing in light of the disclosure:
Fig. 3 is described in the disclosure to depict a mobile device “generally aimed towards an area corresponding to the position of the QR code” [0034] which implies the mobile device should be in front of window 320. The figure does not depict this.
Fig. 4 is described to depict a flowchart with operation 410 described in the disclosure as “receiving a first depth map of a mixed reality (MR) headset field of view” [0035]. Fig. 4 shows operation 410 as “receive first depth map of spatial computing device field of view.” An MR headset and a spatial computing device are not synonymous.
Fig. 4 is also described to depict operation 440 described in the disclosure as “An spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device is determined” [0036]. Fig. 4 shows operation 440 as “determine location of spatial computing device generated content displayed by the MR headset.” An MR headset and a spatial computing device are not synonymous.
Fig. 4 is also described to depict operation 450 described in the disclosure as “The spatial computing device generated content and spatial computing device field of view location is sent at operation 450 to the mobile device” [0036]. Fig. 4 shows operation 450 as “send the MR headset generated content and spatial computing device field of view location to the mobile device.” An MR headset and a spatial computing device are not synonymous.
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency. Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Specification
Applicant is reminded of the proper language and format for an abstract of the disclosure.
The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details.
The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.
The abstract of the disclosure is objected to because "int eh" should read "in the" in the final sentence. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).
The disclosure is objected to because of the following informalities:
In paragraph [0003], “int eh” should read “in the”.
In paragraph [0003], there is no period at the end of the final sentence of the paragraph.
In paragraph [0012], “by an spatial computing device” should read “by a spatial computing device”.
In paragraph [0013], “into an spatial computing device” should read “into a spatial computing device”.
In paragraph [0014], “Mobile device 125 include any type” should read “Mobile device 125 includes any type”.
In paragraph [0015], “WiFi” should read “Wi-Fi” as it reads as “Wi-Fi” in paragraph [0045].
In paragraph [0017], “users physical environment” should read “user’s physical environment”.
In paragraph [0024], “110vin” should read “110 in”.
In paragraph [0025], “cloud based computing” should read “cloud-based computing” as it reads “cloud-based” in paragraphs [0019] and [0043].
In paragraph [0026], “two point” should read “two-point”.
In paragraph [0030], “convolutional neural networks” should read “Convolutional Neural Networks”.
In paragraph [0030], “Generative adversarial networks” should read “Generative Adversarial Networks”.
In paragraph [0034], “illustrating an spatial” should read “illustrating a spatial”.
In paragraph [0034], “includes an spatial” should read “includes a spatial”.
In paragraph [0036], “An spatial” should read “A spatial”.
In paragraph [0041], “HR” should read “MR”.
In paragraph [0046], “machine readable” should read “machine-readable” as it reads “machine-readable” in paragraph [0058].
In paragraph [0048], “int eh” should read “in the”.
In paragraph [0060], “stored on computer readable media” should read “stored on computer-readable media” as it reads “computer-readable” in paragraphs [0044], [0046], and [0062].
In paragraph [0060], “computer readable storage device” should read “computer-readable storage device” as it reads “computer-readable” in paragraphs [0044], [0046], and [0062].
Appropriate correction is required.
Claim Objections
Claims 1, 9, 11, and 19 are objected to because of the following informalities:
In Claim 1, “of spatial computing device” requires an article, such as “a” or “the,” preceding the word “spatial”.
In Claim 9, “wherein first” should read “wherein the first”.
In Claim 11, “of spatial computing device” requires an article, such as “a” or “the,” preceding the word “spatial”.
In Claim 19, “of spatial computing device” requires an article, such as “a” or “the,” preceding the word “spatial”.
In Claim 19, “comprising.” should read “comprising:”.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 8 and 18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 8 recites “selecting an area around the spatial computing device generated content.” While this limitation is recited in the specification in [0039] and [0055], the specification does not describe the limitation in sufficient detail for one of ordinary skill in the art to reasonably conclude that the inventor had possession of the claimed invention. Claim 18 also recites “selecting an area around the spatial computing device generated content” and is rejected on the same basis.
The following is a quotation of 35 U.S.C. 112(b):
CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 1 recites the limitation “determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device.” This limitation is unclear because content does not have a field of view. For the sake of further prosecution, Examiner will interpret the limitation as determining the location of the generated content displayed by the spatial computing device. Claim 1 also recites both an MR headset and a spatial computing device; because an MR headset is a spatial computing device, it is unclear whether these are the same device. For the sake of further prosecution, Examiner will interpret the spatial computing device and the MR headset as one device. Claims 2-10 are rejected based on their dependency on Claim 1.
Claim 2 recites “a mobile device.” Claim 1, from which it depends, also recites a mobile device, and it is unclear whether this is a new mobile device or a reference to the same one. For the sake of further prosecution, Examiner will interpret both recitations as referring to the same device. Claims 3 and 4 are rejected based on their dependency on Claim 2.
Claim 3 recites "the MR generated content" in line 2. There is insufficient antecedent basis for this limitation in the claim. For the sake of further prosecution, Examiner will interpret “the MR generated content” as the content generated by the spatial computing device. Claim 4 is rejected based on its dependency on Claim 3.
Claim 8 recites “an area around the spatial computing device generated content” in line 2. “Around” is a visually subjective term and is considered relative terminology. For the sake of further prosecution, Examiner will interpret the limitation as anywhere in the field of view containing spatial computing device generated content.
Claim 11 recites the limitation “determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device.” This limitation is unclear because content does not have a field of view. For the sake of further prosecution, Examiner will interpret the limitation as determining the location of the generated content displayed by the spatial computing device. Claim 11 also recites both an MR headset and a spatial computing device; because an MR headset is a spatial computing device, it is unclear whether these are the same device. For the sake of further prosecution, Examiner will interpret the spatial computing device and the MR headset as one device. Claims 12-18 are rejected based on their dependency on Claim 11.
Claim 12 recites “a mobile device.” Claim 11, from which it depends, also recites a mobile device, and it is unclear whether this is a new mobile device or a reference to the same one. For the sake of further prosecution, Examiner will interpret both recitations as referring to the same device. Claims 13 and 14 are rejected based on their dependency on Claim 12.
Claim 13 recites "the MR generated content" in line 2. There is insufficient antecedent basis for this limitation in the claim. For the sake of further prosecution, Examiner will interpret “the MR generated content” as the content generated by the spatial computing device. Claim 14 is rejected based on its dependency on Claim 13.
Claim 18 recites “an area around the spatial computing device generated content” in line 2. “Around” is a visually subjective term and is considered relative terminology. For the sake of further prosecution, Examiner will interpret the limitation as anywhere in the field of view containing spatial computing device generated content.
Claim 19 recites the limitation “determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device.” This limitation is unclear because content does not have a field of view. For the sake of further prosecution, Examiner will interpret the limitation as determining the location of the generated content displayed by the spatial computing device. Claim 19 also recites both an MR headset and a spatial computing device; because an MR headset is a spatial computing device, it is unclear whether these are the same device. For the sake of further prosecution, Examiner will interpret the spatial computing device and the MR headset as one device. Claim 20 is rejected based on its dependency on Claim 19.
Claim 20 recites “a mobile device.” Claim 19, from which it depends, also recites a mobile device, and it is unclear whether this is a new mobile device or a reference to the same one. For the sake of further prosecution, Examiner will interpret both recitations as referring to the same device.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 11, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng et al. (US 2025/0299351 A1), hereinafter referenced as Zeng, in view of Fradet et al. (US 11651576 B2), hereinafter referenced as Fradet.
Regarding Claim 1, Zeng discloses a computer implemented method (“Processes and methods according to the above-described examples can be implemented using computer-executable instructions” [0184]) comprising:
receiving a first depth map of a mixed reality (MR) headset field of view ("obtaining first depth data <read on depth map> from a first depth data source, wherein the first depth data is associated with a first field of view (FOV)," [Abs]; "comprises a camera, a mobile device (e.g., a mobile telephone or so-called “smart phone” or other mobile device), a wireless communication device, a wearable device, an extended reality device (e.g., a virtual reality (VR) device, an augmented reality (AR) device, or a mixed reality (MR) device)" [0008]; Fig. 6);
receiving a second depth map of a mobile device camera field of view ("obtaining second depth data <read on depth map> from a second depth data source, wherein the second depth data is associated with a second FOV," [Abs]; "comprises a camera, a mobile device (e.g., a mobile telephone or so-called “smart phone” or other mobile device)," [0008]; Fig. 6);
aligning the first and second depth maps ("generating a fused depth seed based on the FOV adjusted depth data and at least one of the first depth data or an additional FOV adjusted depth data, and determining a depth map based on the fused depth seed. <read on aligning depth maps> " [Abs]; Fig. 6);
[media_image1.png, 686 × 484, greyscale]
A person having ordinary skill in the art before the effective filing date of the claimed invention would recognize depth data associated with a field of view as a depth map. A person having ordinary skill in the art before the effective filing date of the claimed invention would also recognize generating a fused depth seed based on the FOV adjusted depth data and an additional FOV adjusted depth data as aligning a first and second depth map.
Zeng does not explicitly teach
determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device;
and sending the spatial computing device generated content and spatial computing device field of view location to the mobile device.
However, Fradet teaches
determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device (this limitation is being interpreted as determining the location of the generated content displayed by the spatial computing device; “… the method may also include: rendering the content to be shared on the first user device using the viewing position and/or orientation of the first user device and the 3D position of the content to be shared. In some embodiments, said enabling may also include receiving a user input for selecting the content to be shared in the rendered mixed reality scene." [Col 5, ln 12]; "the first user UA has to input the selection of the content to be shared, which is received by his user device DA in step 3. While various input means for receiving the user input can be implemented in the user device, the use of a touchscreen is particularly suitable for mobile devices such as a mobile phone or a tablet computer." [Col 6, ln 34]; the 3D position of the content to be shared reads as the location of the generated content);
and sending the spatial computing device generated content and spatial computing device field of view location to the mobile device ("... user device DA sends in step 7 directly or via the server the required information, which can be one or more of the following data: an identifier that identifies the object to be shared, the location, the orientation, the scale, the color, or the texture of the virtual content" [Col. 6, ln 65]; "a mixed reality scene rendered on at least two user devices having different viewing positions and/or orientations onto the mixed reality scene, may include: means for receiving from a first user device information related to a virtual content in the mixed reality scene which is shared by the first user device with the second user device, wherein the received information comprises the 3D position of the virtual content to be shared; and means for rendering the shared virtual content with regard to the viewing position and/or orientation of the second user device onto the mixed reality scene." [Col. 12, ln 37]; where required information that is being sent includes the object to be shared reads as the spatial computing device generated content and the location reads as spatial computing device field of view location. This information is received by second user device reads as the mobile device).
Regarding Claim 11, it recites limitations similar in scope to those of Claim 1, but as a machine-readable storage device. As shown in the rejection, the combination of Zeng and Fradet discloses the limitations of Claim 1. Additionally, Zeng discloses
a machine-readable storage device having instructions for execution by a processor of a machine to cause the processor to perform operations to perform a method (“the code may be stored on a computer-readable or machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors.” [0141]) the operations comprising….
Regarding Claim 19, it recites limitations similar in scope to those of Claim 1, but as a device. As shown in the rejection, the combination of Zeng and Fradet discloses the limitations of Claim 1. Additionally, Zeng discloses
a device (“an apparatus” [0005]) comprising:
a processor;
and a memory device coupled to the processor (“that includes at least one memory and at least one processor (e.g., implemented in circuitry) coupled to the at least one memory.” [0005]) and having a program stored thereon for execution by the processor to perform operations (“when executed by one or more processors, perform the recited operations” [0140]) comprising….
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or devices as taught by Zeng by determining a spatial computing device field of view location of spatial computing device generated content displayed by the spatial computing device, and sending the spatial computing device generated content and spatial computing device field of view location to the mobile device, as taught by Fradet. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification in order to share or synchronize content with an external device for tasks such as content sharing or collaborative interactions.
Claims 2, 3, 12, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, and in further view of Caswell et al. (US 11386627 B2), hereinafter referenced as Caswell.
Regarding Claims 2, 12, and 20, the combination of Zeng and Fradet discloses the method and devices of Claims 1, 11, and 19 respectively. They do not expressly disclose the limitations of Claims 2, 12, and 20; however, Caswell discloses a cross-reality system enabling multiple devices to efficiently render shared location-based content, with a world reconstruction component containing a perception module that receives and fuses depth maps and other sensor data from devices such as a wearable XR device or a handheld mobile device.
[media_image2.png, 484 × 758, greyscale]
[media_image3.png, 428 × 744, greyscale]
Caswell also teaches wherein
adding the spatial computing device generated content as an overlay to a mobile device camera feed such that the spatial computing device generated content appears in the camera feed at a synchronized location (“A localization process, which may be used to identify location-based virtual content, may be used for of the functions of the XR system, such as to provide realistic shared experiences for multiple users. To provide realistic XR experiences to multiple users, an XR system must know the users' physical surroundings in order to correctly correlate locations of virtual objects in relation to real objects. An XR system may build an environment map of a scene, which may be created from image and/or depth information collected with sensors that are part of XR devices worn by users of the XR system.” [Col 9, ln 51]; “AR contents may also be presented on the display 508, overlaid on the see-through reality 510” [Col 14, ln 58]; Fig. 57; where AR contents reads as spatial computing device generated content, overlaid on the display reads as an overlay to a mobile device camera feed such that the spatial computing device generated content appears in the camera feed, and a localization process to provide realistic shared experiences for multiple users reads as at a synchronized location).
[media_image4.png, 522 × 682, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by adding the spatial computing device generated content as an overlay to a mobile device camera feed such that the spatial computing device generated content appears in the camera feed at a synchronized location as taught by Caswell. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to provide users a collaborative experience where they can view the same virtual content synchronized with their mobile device.
Regarding Claims 3 and 13, the combination of Zeng, Fradet, and Caswell discloses the method and device of Claims 2 and 12, respectively. Zeng and Fradet do not expressly disclose the limitations of Claims 3 and 13. However, Caswell further discloses wherein:
and further comprising executing interactions with the MR generated content via the mobile device (“That orientation may change from session to session as a user interacts with the XR system, whether different sessions are associated with different users, each with their own wearable device with sensors that scan the environment, or the same user who uses the same device at different times.” [Col 9, ln 47]; “Users of XR devices may interact with that virtual content as they pass through the park playing the game.” [Col. 81, ln 21]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet, in further view of Caswell, by further comprising executing interactions with the MR generated content via the mobile device as taught by Caswell. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification to provide users an immersive experience where they can interact with virtual content within a game, simulation, or interactive learning situation.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, in further view of Caswell, and in further view of Holland (US 2023/0013539 A1).
Regarding Claims 4 and 14, the combination of Zeng, Fradet, and Caswell discloses the method and device of Claims 3 and 13 respectively. They do not expressly disclose the limitations of Claims 4 and 14; however, Holland discloses wherein
the MR generated content comprises a QR code ("XR generated content comprising a QR code" (Abstract); "Where mixed reality is an example of XR" [0002]; Fig. 5A).
[media_image5.png, 302 × 430, greyscale]
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet, in further view of Caswell, by including MR generated content comprising a QR code as taught by Holland. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because MR content comprising a QR code would allow devices to directly interact with the QR code without needing a physical copy of it, or holding their device up to another screen. QR codes allow instant access to shared resources, such as a hyperlink that automatically navigates to a webpage.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, and in further view of Bloch et al. (US 11417001 B1), hereinafter referenced as Bloch.
Regarding Claims 5 and 15, the combination of Zeng and Fradet discloses the method and device of Claims 1 and 11 respectively. They do not expressly disclose the limitations of Claims 5 and 15; however, Bloch discloses wherein
the first depth map is generated by a LIDAR sensor of the spatial computing device and the second depth map is generated by a LIDAR sensor of the mobile device ("receiving a first depth map of the real scene, wherein data for the first depth map is acquired by a ranging system using a laser; and receiving a second depth map of the real scene, wherein data for the second depth map is acquired by the ranging system" [Col. 18, ln 35]; "the depth sensor 116 is a laser-ranging system, such as a LiDAR (Light Detection and Ranging) system" [Col 6, ln 45]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet with the technique of using a LIDAR sensor to generate depth maps as taught by Bloch. One of ordinary skill in the art would have been motivated to make this modification because LIDAR directly measures precise distances using laser pulses, allowing it to generate detailed and accurate depth maps.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, and in further view of Son et al. (US 2015/0109415 A1), hereinafter referenced as Son.
Regarding Claims 6 and 16, the combination of Zeng and Fradet discloses the method and device of Claims 1 and 11, respectively. The combination does not expressly disclose the limitations of Claims 6 and 16; however, Son discloses wherein
aligning the first and second depth maps comprises synchronizing the first and second depth maps using an iterative closest point algorithm ("… aligning plurality of depth maps to reconstruct a 3D model in an embedded system." [0006] "It matches point cloud data in the first depth map with point cloud data in a previous first depth map by using an iterative closest point (ICP) algorithm" [0012]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by aligning depth maps using an ICP algorithm as taught by Son. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because ICP algorithms iteratively find the best rigid transformation between two point clouds, derived from depth maps, by minimizing distance errors. An ICP algorithm is an obvious choice for aligning depth maps because it can handle partial overlap, is precise, and its simple implementation has led to widespread use within the art.
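Son discloses no implementation; purely to illustrate the ICP technique referenced above, a minimal NumPy sketch follows (function names are my own, and a production system would use a k-d tree rather than the brute-force nearest-neighbour search shown here):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iters=50, tol=1e-10):
    """Align point cloud `src` to `dst` by alternating nearest-neighbour
    matching with the closed-form rigid transform until the error stabilizes."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbour in dst for every point of cur
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(axis=1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(axis=1)).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    # net transform from the original src to its aligned position
    R, t = best_rigid_transform(src, cur)
    return R, t, cur
```

With a good initial pose (small rotation and translation between the maps), the nearest-neighbour correspondences are mostly correct from the first iteration, which is why ICP converges quickly in the depth-map alignment setting.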
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, and in further view of Wei et al. (US 2016/0012633 A1), hereinafter referenced as Wei. Regarding Claims 7 and 17, the combination of Zeng and Fradet discloses the method and device of Claims 1 and 11, respectively. The combination does not expressly disclose the limitations of Claims 7 and 17; however, Wei discloses wherein
aligning the first and second depth maps comprises synchronizing the first and second depth maps by selecting multiple portions of the first depth map and searching for corresponding portions of the second depth map (Figure 4; "At (403) a plurality of correspondences between each of a plurality of pairs of depth maps can be identified. More particularly, in some embodiments, the plurality of depth maps obtained at (402) can be organized into a plurality of pairs of depth maps. For example, each pair of depth maps that exhibit some overlap in their corresponding portions of the scene can be considered together as a pair. Each pair of depth maps can consist of a source depth map and a target depth map. At (403) a plurality of correspondences between each of such pairs can be identified. Each correspondence can consist of a pair of points <read on portion> (e.g. one point from the source depth map and one point from the target depth map) that are close in distance and likely to correspond to the same object in the scene." [0070]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by aligning depth maps by selecting multiple portions of the first depth map and searching for corresponding portions in the second depth map as taught by Wei. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because selecting multiple portions in the first depth map and then searching for corresponding portions in the second depth map would improve accuracy: the larger set of correspondences reduces the influence of outlying points, and enforcing localized consistency within the map yields a better geometric representation in the alignment.
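The portion-correspondence technique taught by Wei can be illustrated with a deliberately simplified sum-of-squared-differences block-matching sketch (illustrative only; the cited reference discloses no code, and all names here are hypothetical):

```python
import numpy as np

def match_portions(first, second, patch=8, step=16, search=6):
    """For patches (portions) sampled from `first`, find the best-matching
    (lowest-SSD) patch in `second` within a local search window.
    Returns the per-patch (dy, dx) offsets, which a later alignment step
    could fuse into a single transform between the two depth maps."""
    offsets = []
    h, w = first.shape
    for y in range(search, h - patch - search, step):
        for x in range(search, w - patch - search, step):
            ref = first[y:y + patch, x:x + patch]
            best, best_off = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    cand = second[y + dy:y + dy + patch, x + dx:x + dx + patch]
                    ssd = ((ref - cand) ** 2).sum()
                    if ssd < best:
                        best, best_off = ssd, (dy, dx)
            offsets.append(best_off)
    return offsets
```

Using several patches rather than one is what gives the outlier robustness noted in the motivation statement: a single mismatched patch is outvoted by the rest.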
Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, in further view of Newman (US 2024/0087094 A1), hereinafter referenced as Newman. Regarding Claims 8 and 18, the combination of Zeng and Fradet discloses the method and device of Claims 1 and 11, respectively. Fradet does not expressly disclose the limitations of Claims 8 and 18; however, the combination of Newman and Zeng does.
Newman discloses a method and system combining multiple depth maps from different fields of view. Newman further teaches wherein
aligning the first and second depth maps comprises synchronizing the first and second depth maps (“combining depth information collected from at least two source depth maps” [0010]);
selecting an area (“select at least a first selected location” [0012]); and
searching for a corresponding area of the second depth map (“locate corresponding locations and depth values in at least two of the source depth maps and by using zero or at least one of the corresponding depth values” [0012]).
Newman does not disclose
area around the spatial computing device generated content
However, Zeng discloses
an area around the spatial computing device generated content (This limitation is being interpreted as anywhere in the field of view containing spatial computing device generated content; “the XR system 200 can generate a map (e.g., a three-dimensional (3D) map) of an environment in the physical world, track a pose (e.g., location and position) of the XR system 200 relative to the environment (e.g., relative to the 3D map of the environment), position and/or anchor virtual content in a specific location(s) on the map of the environment, and render the virtual content on the display 209 such that the virtual content appears to be at a location in the environment corresponding to the specific location on the map of the scene where the virtual content is positioned and/or anchored.” [0067]; where the map generated by the XR system of the scene including the position and/or anchor of the virtual content reads on anywhere in a field of view containing spatial computing device generated content, and virtual content reads on spatial computing device generated content)
A person having ordinary skill in the art before the effective filing date could recognize “an area around the spatial computing device generated content” as the anchor point of the content within the field of view.
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by synchronizing the first and second depth maps by selecting a location and searching for a corresponding area of the second depth map as taught by Newman. One of ordinary skill in the art would have been motivated to make this modification because it would improve the position and orientation accuracy of the spatial computing device generated content by focusing on the area of the generated content.
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Zeng in view of Fradet, and in further view of Islam et al. (US 10510155 B1), hereinafter referenced as Islam.
Regarding Claim 9, the combination of Zeng and Fradet discloses the method of Claim 1. The combination does not expressly disclose the limitations of Claim 9; however, Islam discloses wherein
first and second depth maps have resolutions of 640x480 pixels ("... of the first depth map 382 may correspond with a surface of the object 360. While FIG. 3B depicts the first depth map 382 as having a resolution of 12×15 pixels, the first depth map 382 may have a different resolution in other examples, such as a resolution of 1280×1024 pixels, 320×240 pixels, 640×480 pixels, or a higher or lower resolution (e.g., 64×48 pixels or 204×204 pixels)." [Col. 13, ln 39]; "While the second depth map 392 in this example has a resolution of 4×5 pixels, it may have a different resolution in other examples, such as 1280×1024 pixels, 320×240 pixels, 640×480 pixels, or a higher or lower spatial resolution..." [Col. 14, ln 65]).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by setting the depth map resolution to 640x480 as taught by Islam. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make this modification because a 640x480 resolution matches a traditional 4:3 aspect ratio, known as VGA (Video Graphics Array) resolution. This mid-resolution depth capture balances computational efficiency against data rate, which is important when capturing multiple frames for seamless feed synchronization between devices.
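The data-rate trade-off noted above can be made concrete with back-of-the-envelope arithmetic (illustrative assumptions, not drawn from the cited references: uncompressed 16-bit depth samples at 30 frames per second):

```python
def depth_stream_rate_mbps(width, height, bytes_per_pixel=2, fps=30):
    """Raw bandwidth of an uncompressed depth stream, in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

# VGA depth (640x480) needs roughly 147.5 Mbit/s uncompressed, while a
# 1280x1024 stream under the same assumptions needs roughly 629.1 Mbit/s,
# more than four times the bandwidth to synchronize between devices.
vga = depth_stream_rate_mbps(640, 480)
hi_res = depth_stream_rate_mbps(1280, 1024)
```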
Regarding Claim 10, the combination of Zeng and Fradet discloses the method of Claim 1. The combination does not expressly disclose the limitations of Claim 10; however, Islam discloses wherein
the first and second depth maps have resolutions that are variable to adjust processing resources required to perform aligning of the first and second maps ("In an embodiment, up-sampling may be performed as part of updating the first depth map, so as to enhance a quantity of empty pixels of the first depth map that are updated. In some cases, the up-sampling may be performed in a situation in which, e.g., the first depth map has a higher resolution than the second depth map. In such a situation, a pixel from the second depth map may be used to update multiple empty pixels of the first depth map. For instance, the pixel from the second depth map may be used to update a corresponding empty pixel in the first depth map as well as a set of adjacent empty pixels. If up-sampling is not performed, the number of empty pixels in the first depth map that are updated may be small relative to a total number of empty pixels or a total number of pixels of the first depth map in a scenario in which the resolution of the first depth map is much higher than the resolution of the second depth map. Thus, updating the empty pixels may have only a limited impact on the first depth map as a whole if the up-sampling is not performed. Accordingly, the up-sampling may be performed when updating empty pixels of the first depth map so as to have a greater impact on how much depth information is in the first depth map." [Col. 4, ln 41]; “Embodiment 18 of the present disclosure relates to a method of updating one or more depth maps” [Col 31, ln 61]; “Embodiment 19 includes the method of embodiment 18.
In Embodiment 19, the first depth map has a first resolution higher than a second resolution of the second depth map, and the method further comprises: identifying, for at least one pixel that belonged or belongs to the one or more empty pixels, a respective set of one or more adjacent empty pixels of the first depth map which are adjacent to the at least one pixel and which have no assigned depth values; and assigning to the respective set of one or more adjacent empty pixels a depth value that was assigned or is to be assigned to the at least one pixel.” [Col 32, ln 50]; where up-sampling reads on adjusting the resolutions to perform aligning, and updating one or more depth maps reads on the first and second depth maps).
It would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to apply and/or modify the method or device as taught by Zeng in view of Fradet by having resolutions that are variable to adjust processing resources required to perform aligning the depth maps as taught by Islam. One of ordinary skill in the art before the effective filing date of the claimed invention would have been motivated to make these modifications to adapt to situations where the depth maps have different resolutions, or if a user would like to enhance or limit either the speed or accuracy of the claimed invention.
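Islam's up-sampling rationale (a lower-resolution second map supplying values for empty pixels of a higher-resolution first map) can be sketched as follows. This is an illustrative NumPy toy, not the reference's implementation; NaN stands in for empty pixels, and nearest-neighbour up-sampling is one simple choice among several:

```python
import numpy as np

def fill_empty_with_upsampled(first, second):
    """Fill empty (NaN) pixels of the higher-resolution `first` depth map
    with nearest-neighbour up-sampled values from the lower-resolution
    `second` depth map, so one second-map pixel can update several
    first-map pixels."""
    h, w = first.shape
    sh, sw = second.shape
    # nearest-neighbour up-sample of `second` to the resolution of `first`
    ys = np.arange(h) * sh // h
    xs = np.arange(w) * sw // w
    upsampled = second[np.ix_(ys, xs)]
    out = first.copy()
    mask = np.isnan(out)
    out[mask] = upsampled[mask]
    return out
```

Without the up-sampling step, only the sparse grid of first-map pixels that land exactly on second-map samples could be updated, which is the limited-impact scenario the quoted passage describes.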
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
Lee et al. (US 2024/0144496 A1) discloses image alignment using multiple cameras on a head mounted device.
Kroeger (US 11555903 B1) discloses calibrating sensors on an autonomous vehicle where image data from multiple sensors are compared to a depth map.
Varekamp et al. (US 2022/0148207 A1) discloses a method of processing depth maps by receiving corresponding depth maps.
Shintani et al. (US 2018/0288385 A1) discloses generating depth maps of a field of view from a plurality of devices in the same area and stitching images together.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ISABELLA OCHSNER whose telephone number is (571)272-9322. The examiner can normally be reached 7:30 - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached at (571) 272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ISABELLA OCHSNER/ Examiner, Art Unit 2618
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618