Prosecution Insights
Last updated: April 19, 2026
Application No. 18/026,472

ALIGNING REPRESENTATIONS OF 3D SPACE

Non-Final OA (§102, §103, §112)
Filed: Apr 07, 2023
Examiner: SONNERS, SCOTT E
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Telefonaktiebolaget LM Ericsson (publ)
OA Round: 3 (Non-Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 2m
With Interview: 81%

Examiner Intelligence

Career Allow Rate: 69% (above average; 258 granted / 375 resolved; +6.8% vs TC avg)
Interview Lift: +12.0% (moderate), measured across resolved cases with interview
Avg Prosecution: 3y 2m (typical timeline), 25 applications currently pending
Total Applications: 400 across all art units (career history)
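The ratios behind these figures are simple to recompute. A minimal sketch follows; the variable names, and the treatment of the interview lift as additive percentage points, are assumptions made for illustration, not the analytics provider's actual model:

```python
# Recompute the examiner statistics shown above from the raw counts.
# Names and the additive treatment of "interview lift" are illustrative
# assumptions, not a real scoring API.

granted = 258
resolved = 375

allow_rate = granted / resolved           # career allowance rate
tc_average = allow_rate - 0.068           # implied by "+6.8% vs TC avg"

base_probability = 0.69                   # headline grant probability
with_interview = base_probability + 0.12  # "+12.0% interview lift" as points

print(f"career allow rate:  {allow_rate:.1%}")      # 68.8%, displayed as 69%
print(f"implied TC average: {tc_average:.1%}")      # 62.0%
print(f"with interview:     {with_interview:.1%}")  # 81.0%, matching the 81% shown
```

Note that 258/375 rounds to the displayed 69%, and the 81% "with interview" figure is consistent with simply adding the +12-point lift to the 69% baseline.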

Statute-Specific Performance

§101: 7.9% (-32.1% vs TC avg)
§103: 39.2% (-0.8% vs TC avg)
§102: 29.4% (-10.6% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
TC averages shown are Tech Center estimates • Based on career data from 375 resolved cases
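The per-statute deltas let you back out the implied Tech Center baselines. As a quick check (plain arithmetic on the figures listed above, nothing more):

```python
# Back out the implied Tech Center average for each statute from the
# rates and "vs TC avg" deltas listed above (values in percent).
examiner_rate = {"101": 7.9, "103": 39.2, "102": 29.4, "112": 14.1}
delta_vs_tc = {"101": -32.1, "103": -0.8, "102": -10.6, "112": -25.9}

implied_tc_avg = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
                  for s in examiner_rate}
print(implied_tc_avg)  # each statute's implied baseline comes out to 40.0
```

Interestingly, every implied baseline lands on 40.0, which suggests the deltas may be computed against a single blended Tech Center figure rather than true per-statute averages.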

Office Action

Rejections under §102, §103, and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/4/2024 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 4-5, 14, 17-27 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "the representation" in line 14. There is insufficient antecedent basis for this limitation in the claim. The claim previously recites “a first representation” as well as “a second representation”, such that recitation of only “the representation” makes it unclear as to which representation is being referred to, as it could refer to either the first or second representation. In the interest of compact prosecution, the Examiner will interpret the claim as if it recites “the first representation” instead, which would render the claim definite.

Independent claims 14 and 27 recite the same indefinite claim language above in a similarly indefinite manner and are rejected for the same reasons as claim 1 above. In the interest of compact prosecution these claims will be interpreted in the same manner as claim 1.

Claims 4 and 5 both recite dependence on claims that are cancelled and are thus indefinite. In the interest of compact prosecution, the claims will be interpreted as if they are dependent on claim 1.

Claim 5 recites the limitation “the first set of geometric representation” in lines 4-5. There is insufficient antecedent basis for this limitation in the claim. The claim does not previously recite any “first set of geometric representation”, nor even any “geometric representation”, rather only previously reciting a “first representation”. Thus it is unclear what is being referred to in the claim. In the interest of compact prosecution, the Examiner will interpret the claim as if it recites “the first representation”, which would render the claim definite.

Claims 17-22 all recite dependence on claims that are cancelled and are thus indefinite. In the interest of compact prosecution, the claims will be interpreted as if they are dependent on claim 14.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1, 4-5, 14, 17-23 and 25-27 is/are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Price et al. (“Price”).

Regarding claim 1, as rendered definite as explained above, Price teaches a method for aligning representations of a three-dimensional (3D) space, the method being performed in an alignment device and comprising (note that the method for aligning is addressed by the limitations below, and an “alignment device” is given its broadest reasonable interpretation as functionally claimed, whereby any device that is involved in alignment or requires alignment in any manner related to the claim language is an alignment device; such a device could be a processor used to perform alignment, a device that utilizes such a processor, or, for example, a device used to obtain alignment information, such that a camera could be an alignment device; and finally an alignment device may comprise multiple devices working together to achieve some alignment in accordance with the method, such as a system of devices or components; see Price, paragraphs 0035-0042 and figures 1 and 17 teaching an alignment device in which the method is performed where “It will be appreciated that any of the methods described herein may be performed by a specially configured computer system, such as the computer system shown later in FIG.
17" where given that such a device performs alignment by performing the technique below, such a computer system is such an alignment device): obtaining a first set of geometric restrictions for a first representation of a 3D physical space, the first set of geometric restrictions corresponding to capturing restrictions of a first capturing device (note that “geometric restrictions” that correspond to “capturing restrictions of a first capturing device” is extremely broad and, as generally and functionally claimed, is interpreted to refer to any restrictions dealing with geometry in any manner of a 3D physical space and could be a size of a space, a shape of a space, capturing conditions of a space, sensor limitations such as field of view (FOV), noise characteristics introduced by the capture device or environment, effective range and resolution, lens parameters, distance from a capturing device, pose of a capturing device, and for example positions and poses of objects relative to the capturing device given that the geometry of the space restricts the 3D representation of the space and the geometry used to represent such space and finally for example a capturing device is any device used to obtain some capture of the environment such as any type of camera, LIDAR, or other type of scanning device and the like; thus see Price, paragraphs 0044-0046 teaching geometric restrictions for a first representation of a 3D physical space being obtained which corresponds to the captured data from the capturing device in the form of the “spatial mapping” that “refers to a digital representation or construct of an environment.
In some scenarios, a spatial mapping may include, but is not limited to, any number of depth maps, 3D dot/point clouds, and/or a 3D mesh comprised of polygons (e.g., triangles)” and “Spatial mappings can also include information about object mapping as well as information obtained from scanning an individual object, group of objects, or even an entire environment” and “some portions of the spatial mapping may have different quality levels than other portions. For instance, consider a scenario where a spatial mapping describes a living room. For one area of the living room, the spatial mapping may include highly detailed data describing every feature and part of that area, while for a different area, the spatial mapping may include only rudimentary data that just minimally describes the area (e.g., a skeletal representation). In this regard, the spatial mapping may have quality discrepancies in how it describes (three-dimensionally) a mixed-reality environment, which may include any real-world objects and/or virtual objects/holograms” such that here a spatial mapping may be considered more restrictive in how the 3D space is reconstructed as by giving different quality levels where a higher quality level restricts the reconstruction and mapping to more detailed geometry and such detail corresponds to capturing restrictions of a first capturing device as explained in paragraphs 0055-0059 and figures 7a-7b teaching “spatial mapping 705 is not uniform across its entirety, meaning that some portions of spatial mapping 705 are more detailed (and thus higher quality) than other portions of spatial mapping 705. To further clarify, the spatial mapping portion describing area 705 a has less detail (and thus is of a lower quality) than the other portions of spatial mapping 705. One reason as to why a scan may include less detail is because of timing constraints. 
Longer scans can produce more detailed spatial mappings while shorter scans typically produce less detailed spatial mappings. Additionally, levels of detail may be influenced by longer scanned depth distances (i.e. the distance between the scanning camera(s) and the object being scanned) and even to reduced sensor qualities” such that here capturing restrictions such as scanned depth distances or reduced sensor qualities correspond to the geometric restrictions as, based on capturing restrictions, the geometric restrictions for the 3D representation of the space will increase if more geometry restricts the reconstruction, for example; see further Price, paragraphs 0076-0081 teaching various examples of geometric restrictions for representations of a 3D space which correspond to capturing restrictions of a capturing device where “situations will arise where one spatial mapping describes a particular area in a significantly different manner than another spatial mapping” based on restrictions such as “differences in perspective 1205 with regard to where mixed-reality systems are physically located in an environment when they are used to perform their respective 3D scanning operation, thereby resulting in different levels of detail for the objects” and “Differences in hardware 1215 may also cause a conflict” or “different 3D scanning operations may be used (e.g., active or passive stereo, time of flight, etc.), differences may result in the appearance of an object's geometric surfaces. Differences in a 3D sensor's field of view (FOV) 1225 may also cause conflicts. For example, one mixed-reality system may use 3D sensing cameras with a wider or narrower lens. Similarly, the sensor's pixels size (i.e. angular resolution) 1230 may also cause conflicts. In some cases, one 3D sensing technology may have more or less pixel noise (i.e.
measurement noise) 1235 than another spatial mapping, thereby resulting in conflicts” and for example as denoted in figure 12, “ellipsis 1265 demonstrates that other conflicts may occur as well (e.g., differences in color, texture, color texturing, other types of texturing information, situations where a same 3D sensor configuration is used but where the associated depth compute and filtering processes are different, and even use of synthetic data (i.e. data obtained from a map or from the architect's three-dimensional working model))” such that any of such things that impact the 3D representation of the “spatial mapping” may be considered the geometric restrictions corresponding to the capturing restrictions), the first representation forming part of a map (note that the first representation forming part of a map is interpreted to mean that the representation in some manner maps the space and could take any form of mapping data that provides a data structure that could be used for spatial reference, location determination, or navigation or viewing, such as a 3D point cloud, an image of the 3D space, an RGBD image, a mesh, etc.; see Price, paragraphs 0044-0046 teaching as above that the representation is forming part of a map as the representation is a “spatial mapping”) for positioning of capturing devices (note that the manner in which the representation is for positioning of capturing devices is not specified nor is positioning of capturing devices produced as recited, and furthermore a representation for positioning of capturing devices could be any of the aforementioned representations and the limitation is met if the representation has the characteristics, data, or structure making it suitable for any purpose of determining the position and/or orientation of capture devices and note for example that a view of a representation of a captured environment could be for such positioning of capturing devices as such a view necessarily gives positioning info
of capturing devices; see Price, paragraphs 0043-0054 teaching “HMD 405 is being used to map environment 400 three-dimensionally via the scan 410” and “result of this 3D mapping process is a spatial mapping, such as spatial mapping 210A or 210B from FIGS. 2A and 2B, respectively, comprising a virtual representation of the environment” so that after “generating the spatial mapping, objects associated with that environment can be identified” which then allows that “a hologram can be “placed” at a certain area or region within the mixed-reality environment” from the positioning of any of the capturing devices corresponding to the HMDs for example and where “the spatial mapping may (at least initially) include information about all of the holograms that are placed/located within a particular region” such that this allows the system to understand the positioning of capturing devices such as the HMD in the space in order to present the map from the position of the capture device in connection with the spatial mapping of the entire 3D space ); obtaining a second set of geometric restrictions, the second set of geometric restrictions corresponding to capturing restrictions of a second capturing device for providing a second representation of a 3D physical space for positioning the second capturing device using the map (here the geometric restrictions and capturing restrictions are interpreted in the same manner as the previous clause but of course must be a second set of restrictions and must be from a second capturing device which may be the same type or different type of capturing device and may have different restrictions even if the same type; see Price paragraphs 0035-0042 and figure 1 and 17 teaching that the system obtains the same type of restrictions and data for a second capturing device where “In addition to accessing the first spatial mapping data, second spatial mapping data is also accessed (act 110). 
This second spatial mapping data describes at least a particular range or area of the environment three-dimensionally. Because both the first and second spatial mapping data describe the environment, it means that both the first and second spatial mapping data each include an “overlapping” portion that concurrently describes at least the same particular range of the environment” and as in paragraphs 0033-0034 “it is noted that spatial mappings can be transferred from one mixed-reality system to another” and a “spatial mapping” can be “transferred or received from another mixed-reality system with 3D sensing/mapping capabilities” such that here it is clear a second capturing device is utilized and could be either another camera of the HMD where the first camera is, or capturing devices of another HMD of another user); determining that a difference between the first set of geometric restrictions and the second set of geometric restrictions exceeds a threshold amount (note that as the “geometric restrictions” do not necessarily take the form of a scalar value, then determining a difference between such restrictions that exceeds a threshold amount can refer to any decision by which a difference compared to another difference causes some difference in function such that at such a point this would exceed a threshold amount; additionally note that this determination of a difference between the restrictions exceeding a threshold amount is not positively tied to the functions of further steps as the determination is not utilized in the next determining step nor in the final triggering limitations, regardless, the limitations are required by the claim; see Price, paragraphs 0038-0042 teaching “a comparison between certain quality levels can be performed (act 115).
Specifically, a first quality level of the overlapping portion of the first spatial mapping data is compared against a second quality level of the overlapping portion of the second spatial mapping data” and “a determination can be made as to whether the overlapping portion of the second spatial mapping data is to augment the overlapping portion of the first spatial mapping data in the stored spatial mapping of the environment (act 120). The process of augmenting is more fully described later, but in general, it means to supplement (i.e. complement/add to) and/or replace one portion of data with another portion of data” and as detailed in paragraphs 0082-0104 and figures 13 and 14, there are “processes that may be utilized in order to determine and score the quality levels of a spatial mapping and/or the quality levels of a contribution to a spatial mapping” and for example “To determine quality, some embodiments utilize the scoring algorithm 1300. Additionally, or alternatively, the scoring algorithm 1300 can be used to grade or otherwise evaluate the quality of a contribution that was made to a spatial mapping” and “ranking or determining the quality level of a particular spatial mapping, the scoring algorithm 1300 may consider any one or combination of the following attributes: depth modality 1305, object proximity 1310, target reflectivity 1315, ambient light 1320, motion blur 1325, environment motion 1330, and timestamp 1335. The ellipsis 1340 demonstrates that the scoring algorithm 1300 may consider other parameters as well. These factors may be used to influence the determination of a spatial mapping's quality” and then “Determining whether to perform this augmentation process is based, at least in part, on the quality levels of those spatial mappings, including on one or more pre-selected quality thresholds 1400, as shown in FIG. 14” where a “quality threshold can be established for any criteria.
Some non-limiting examples of quality thresholds include, but are not limited to, an authenticity threshold 1405, a transience threshold 1410, a flagged threshold 1415, a conflict resolution threshold 1420, and a machine learning threshold 1425. The ellipsis 1430 demonstrates that other thresholds may be established as well” such that “quality thresholds can be established to ensure that spatial mapping data satisfies certain quality assurances prior to that data being merged with other spatial mapping data” which means that if certain quality thresholds indicate that spatial mapping data cannot be merged a difference threshold is exceeded and as in paragraphs 0105-0115 for example “in act 1530 a, the second spatial mapping data is purposefully delayed from being incorporated into the spatial mapping until a second quality level for the second spatial mapping data reaches the quality threshold” where such reaching of the quality threshold would include exceeding it as well or exceeding the condition needed to perform the corresponding next function); determining which one of the first set of geometric restrictions and the second set of geometric restrictions is most restrictive and which one is least restrictive (note that the claims do not define the manner in which geometric restrictions may be considered “most restrictive” as compared to “least restrictive” and as such any such determination which utilizes some standard for ascertaining a ranking between the relative data satisfies the determining of the claim; here as noted above the geometric restrictions correspond to the way the geometry of the representation of 3D physical space is restricted based on capturing restrictions as explained above where it can be considered that geometry which is represented at a higher level of detail and quality is more restrictive than geometry represented at a lower level of detail or quality and for example higher quality data in some cases might require data that has the 
same level of restrictions in terms of quality making merging requirement more restrictive, for example; see Price, paragraphs 0060-0069 teaching “the first quality level 810 is being compared against the second quality level 825 to determine which of the two has the higher or lower quality level” and such determination determines whether the geometric restrictions of one are greater than another where for example a more highly detailed and quality representation would be more restrictive as it must meet stricter criteria than lower quality representations to qualify as high quality and for example as noted above may be more restrictive in that the details therein must be reconstructed according to the more detailed and restrictive representation of the geometry); and triggering a restrictive determination of the first representation that is based on the least restrictive set of geometric restrictions, the determination being based on the most restrictive set of geometric restrictions (here a restrictive determination of the representation that is triggered could be another version of the representation or a decision made about the representation which is in some manner restrictive and for example could restrict the manner in which the two data sets relate to one another or are combined or otherwise merged, such that there is an initiating or starting or otherwise triggering of some kind of limiting assessment or processing (restrictive determination) on the representation associated with fewer limits (least restrictive) using constraints/parameters/limitations from the set with more limits (most restrictive) as the basis for the assessment/processing); thus see Price, paragraphs 0038-0042 teaching “a comparison between certain quality levels can be performed (act 115).
Specifically, a first quality level of the overlapping portion of the first spatial mapping data is compared against a second quality level of the overlapping portion of the second spatial mapping data” and “a determination can be made as to whether the overlapping portion of the second spatial mapping data is to augment the overlapping portion of the first spatial mapping data in the stored spatial mapping of the environment (act 120). The process of augmenting is more fully described later, but in general, it means to supplement (i.e. complement/add to) and/or replace one portion of data with another portion of data” and “the disclosed embodiments are able to progressively modify the quality level of a spatial mapping by augmenting the spatial mapping's data to eventually achieve a desired quality level” such that here there is a triggering of actions where this may result in augmentation or replacing or supplementing where this is more fully detailed in paragraphs 0055-0059 and figures 7A-7B teaching “how multiple different spatial mappings may be merged or fused together” where this merger is a restrictive determination which has been triggered that is based on the least restrictive set of geometric restrictions as for example a less detailed or lower quality portion of a representation can be combined with other representations in certain places based on comparisons of quality between regions of the representations such that higher quality more restrictive data is the basis for determining replacement on the least restrictive representation of that data ); wherein triggering the restrictive determination comprises: obtaining a 3D data structure corresponding to the least restrictive set of geometric restrictions (see Price, paragraphs 0054-0059 teaching “an example environment 700A, which is an example representation of environment 600 from FIG. 6. 
Here, the cross hatchings symbolically represent a spatial mapping 705 that describes the environment 700A in a three dimensional (3D) manner. Additionally, the configuration of the cross hatchings (i.e. the spacing, size, and orientation) symbolically represent the quality of the spatial mapping 705. In this and the examples to follow, cross hatchings that are tighter-knit correspond to a higher quality representation of an environment (i.e. they include a more detailed three-dimensional description of the environment) as compared to cross hatchings that are loosely-knit, which correspond to lower quality descriptions. As shown by the area 705 a, spatial mapping 705 is not uniform across its entirety, meaning that some portions of spatial mapping 705 are more detailed (and thus higher quality) than other portions of spatial mapping 705” and “the criteria for generating spatial mapping 705 may have been to get as broad of 3D coverage as possible while the criteria for generating spatial mapping 710 may have been to get detailed and highly specific 3D coverage for only a particular area in the environment. 
As such, different criteria may be used when generating the different spatial mappings, where the criteria may influence the level of detail or quality for the resulting spatial mapping” where here this corresponds to the least restrictive set of geometric restrictions relating to the first capture device which is restricted less in capturing geometry of the scene compared to the second device capturing a smaller and/or more detailed or different FOV of the scene and an obtaining of a 3D data structure by the first device corresponds to the capture of the 3D data relating to the scene and objects being mapped in relation to the devices such that the obtaining of the data from the 3D camera corresponds to obtaining of such a 3D data structure); applying a filter corresponding to the most restrictive set of geometric restrictions to the 3D data structure to crop the 3D data structure (see Price, paragraphs 0058-0059 teaching to “selectively merging portions of one (or more) spatial mappings with portions of one (or more) other spatial mappings” where selectively merging the accurate data from the capture device with the least restrictive set of geometric restrictions with other data is applying of a filter corresponding to the most restrictive set of geometric restrictions as it selectively filters the already accurate data from the least restrictive set of geometric restrictions by removing the inaccurate data caused by the geometric restrictions, and replaces it with accurate data from the second device with the most restrictive set of geometric restrictions. 
Where, based on the second device having such different restrictions in comparison to the first device, this filtering results in a cropping of the 3D data structure where the cropped 3D data structure corresponds to the area captured by the first device in view of the area captured by the second device as in figure 7C for example where the first data 705 is cropped in relation to the most restrictive geometric restrictions corresponding to area 710, where since such area 710 will be replacing the data of the first capture device, this means that the remaining area 710 which corresponds to the 3D data structure from the least restrictive device is a cropped 3D data structure; see paragraphs 0069-0074 teaching “embodiments are able to selectively identify and isolate specific portions of spatial mapping data from a current spatial mapping and replace those specific portions with data from a lower quality spatial mapping or simply delete those portions of data and not replace it with other data. In this regard, the embodiments are able to dynamically adjust (in any manner, either up or down) the quality of a spatial mapping in order to achieve a desired quality level” and “replace process 1110 refers to a technique where specific spatial mapping data from one spatial mapping is deleted and then entirely replaced by corresponding spatial mapping data from another spatial mapping” such that this isolation and selective identification again corresponds to a filtering operation in which data can be cropped and then replaced or modified using data from a differently restricted device that will give different capture data of the same scene); and performing segmentation based on the filtered 3D data structure (see Price, paragraphs 0052-0053 teaching “After generating the spatial mapping, objects associated with that environment can be identified” and “FIG. 5 shows an environment 500 that is an example representation of environment 400 from FIG. 4.
Objects included within the resulting spatial mapping have been segmented. For instance, FIG. 5 shows a door object 505, a painting object 510, a table object 515” and “embodiments are able to identify a type/classification for each of the objects, and the resulting spatial mapping may include object labeling, where the object labeling/segmentation information includes classifiers to identify objects within the environment (e.g., a “door” classifier, a “table” classifier, etc.)” such that here after the spatial mapping is generated and the scene has been scanned then the filtered 3D data structure can have segmenting performed on it to identify objects and note that such segmenting would be performed on both the filtered 3D data structure as well as the data from the second capture device).

Regarding claim 4, as rendered definite as explained above, Price teaches all that is required as applied to claim 1 and further teaches wherein the step of triggering restrictive determination comprises the sub-step of: performing incremental segmentation based on the filtered 3D data structure (note that “segmentation” is any manner of determining or identifying or creating segments or regions or areas or divisions of some sort from some whole and incremental refers to any process or function relating to increments which are some changes of increase or decrease in something where an incremental segmentation then would be any segmentation that can be considered to be in increments or incremental in any manner; thus see Price, paragraphs 0052-0059 and figures 5-7C teaching an incremental segmentation performed based on the filtered 3D data structure where an initial segmentation can be considered to occur with respect to the data points corresponding to the different capturing devices as in figures 7A-7C as explained above such that the points of the first device and second device can be segmented from each other which then leads to a combined representation where such
combined data from the first and second representation are now subjected to “object labeling/segmentation” which further segments the points from each representation into actual object segments such that this further and incrementally segments the data).

Regarding claim 5, as rendered definite as explained above, Price teaches all that is required as applied to claim 1 above and further teaches wherein the step of triggering restrictive determination comprises, when the first set of geometric restrictions is determined to be the least restrictive and the second set of geometric restrictions is determined to be the most restrictive (note that the parent claim does not specify whether the first or second set of restrictions is most or least restrictive, simply a decision is made in that regard and for example when a difference is determined to compare to a threshold, given that the difference may be an absolute value difference between the two; see Price, paragraphs 0063-0069 and figure 10 for example teaching “a first spatial mapping 1005, a second spatial mapping 1010, a third spatial mapping 1015, a fourth spatial mapping 1020, and a fifth spatial mapping 1025, all of these different spatial mappings have been merged together to thereby form an overall spatial mapping for the environment 1000. As shown by the differing tightness of the cross hatchings in each of the different spatial mappings, each spatial mapping is of a different quality level” and in this example “spatial mappings 1010, 1015, 1020, and 1025 are all “augmenting” spatial mapping 1005. As will be described in more detail in connection with FIG. 11, the augmenting process may include “supplementing” data with additional data (i.e. adding to the existing data) or, alternatively, “replacing” data with other data. In FIG.
10, the data from spatial mappings 1010 and 1015 have replaced the corresponding data in spatial mapping 1005 (as shown by the fact that spatial mapping 1005 is not included in the background of these two spatial mappings). In contrast, the data from spatial mappings 1020 and 1025 has been selected to supplement (i.e. add to) the data included in spatial mapping 1005 (as shown by the fact that spatial mapping 1005 is present in the background of these two spatial mappings)” and “a single combined coherent spatial mapping may describe an environment in varying degrees of detail resulting in varying degrees of quality” and it may be determined that either a first or second or any set is least or most restrictive as explained where “it may be beneficial to combine spatial mappings of different quality levels is that in some cases, more or less quality is desired when a mixed-reality system is operating in a particular area of an environment” and for example a first set of geometric restrictions may be considered to be least restrictive when “it may be advantageous to use a spatial mapping that is somewhat lower in quality. 
By using a lower quality spatial mapping, it means that the resulting resolution of the thematic type of hologram will be projectable on all of the guests' mixed-reality systems, even in light of the fact that those systems have different rendering capabilities” such that here the merging would be based on a first least restrictive set and would be modified with the portions of other geometrically restricted sets which contain suitably matching data for the augmentation in the overlapping portions), selecting from a plurality of sub-representations of the first representation (note that a plurality of sub-representations of the first set of geometric restrictions can be any portion of the representation based on the first set of geometric restrictions and a second set of geometric representations could be geometric representations related to the second set of geometric restrictions; see Price, paragraphs 0063-0069 as explained above wherein different spatial mappings can augment a base spatial mapping with a certain desired quality level in the overlapping matching portions such that this selects the plurality of sub-representations where for example figure 10 shows geometric restrictions of different quality levels being combined with a base more restrictive layer and thus if the goal is lower quality the situation logically switches such that the sub-representation of the lower quality portions would be selected based on geometric restrictions of the least restrictive set that match the portion of the second set of geometric restrictions and representation which can be merged). Regarding claims 14-18, the instant claims recite the invention as an “alignment device” which comprises “a processor” and “memory storing instructions that, when executed by the processor, cause the alignment device to” perform the same functions as recited in claims 1-5, respectively. 
Price teaches such functions and teaches such an alignment device as explained above in the computer system already addressed above which teaches such a processor and memory arrangement in an alignment device. In light of this, the limitations of claims 14-18 correspond to the limitations of claims 1-5, respectively; thus they are rejected on the same grounds as claims 1-5, respectively. Regarding claim 19, Price teaches all that is required as applied to claim 14 above and further teaches to perform the restrictive determination only for data, within a 3D data structure, that is within a threshold distance of an estimated position of the second capturing device (see Price, paragraph 0055 teaching that more restrictive data may be tied to capturing distance as “levels of detail may be influenced by longer scanned depth distances (i.e. the distance between the scanning camera(s) and the object being scanned) and even to reduced sensor qualities” and see also paragraph 0066 teaching “often the orientation and distance of the scanning hardware (i.e. during the 3D capture) can impact the quality of the capture. For instance, if the capture is performed at a far distance, the physical space between pixels and depth accuracy may be compromised” and here then in order to use the data from a secondary device its distance would affect the quality such that the portions of the second set that qualify as high quality could be those of a capture device near to the second capturing device, meaning that the restrictive determination is performed only for data within a 3D data structure corresponding to the second spatial mapping where the data is high quality given its closer proximity to the second capturing device which is within the threshold distance that allows the high quality capture of that portion). 
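The distance-based restriction discussed for claim 19 can be illustrated with a short sketch. This is a hypothetical illustration only — the function name, point-list format, and threshold parameter are assumptions for exposition, not taken from Price or the claims:

```python
import math

def restrict_by_distance(points, device_pos, threshold):
    """Keep only the 3D points within `threshold` of the estimated
    position of the (second) capturing device, so that any restrictive
    determination is performed on nearby, higher-quality data only."""
    return [p for p in points if math.dist(p, device_pos) <= threshold]
```

With a threshold tied to the distance at which capture quality degrades, only the near-field portion of the 3D data structure is retained for the determination.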
Regarding claim 20, Price teaches all that is required as applied to claim 14 above and further teaches wherein the instructions to trigger restrictive determination comprise instructions that, when executed by the processor, cause the alignment device to perform the restrictive determination for a plurality of different ways of applying the most restrictive set of geometric restrictions (see Price, paragraphs 0063-0069 teaching as above “a first spatial mapping 1005, a second spatial mapping 1010, a third spatial mapping 1015, a fourth spatial mapping 1020, and a fifth spatial mapping 1025, all of these different spatial mappings have been merged together to thereby form an overall spatial mapping for the environment 1000. As shown by the differing tightness of the cross hatchings in each of the different spatial mappings, each spatial mapping is of a different quality level” and in this example “spatial mappings 1010, 1015, 1020, and 1025 are all “augmenting” spatial mapping 1005. As will described in more detail in connection with FIG. 11, the augmenting process may include “supplementing” data with additional data (i.e. adding to the existing data) or, alternatively, “replacing” data with other data. In FIG. 10, the data from spatial mappings 1010 and 1015 have replaced the corresponding data in spatial mapping 1005 (as shown by the fact that spatial mapping 1005 is not included in the background of these two spatial mappings). In contrast, the data from spatial mappings 1020 and 1025 has been selected to supplement (i.e. 
add to) the data included in spatial mapping 1005 (as shown by the fact that spatial mapping 1005 is present in the background of these two spatial mappings)” and “a single combined coherent spatial mapping may describe an environment in varying degrees of detail resulting in varying degrees of quality” and it may be determined that either a first or second or any set is least or most restrictive as explained where “it may be beneficial to combine spatial mappings of different quality levels is that in some cases, more or less quality is desired when a mixed-reality system is operating in a particular area of an environment” and for example a first set of geometric restrictions may be considered to be least restrictive when “it may be advantageous to use a spatial mapping that is somewhat lower in quality. By using a lower quality spatial mapping, it means that the resulting resolution of the thematic type of hologram will be projectable on all of the guests' mixed-reality systems, even in light of the fact that those systems have different rendering capabilities” such that here it can be seen that there are different ways of applying the most restrictive set of geometric restrictions based on the number of sources and the desired outcome and quality level of the spatial mapping). Regarding claim 21, Price teaches all that is required as applied to claim 20 above and further teaches wherein the plurality of different ways of applying the most restrictive set of geometric restrictions are mutually exclusive (see Price, paragraphs 0044-0062 and figures 7A-9, wherein for example one way of applying the most restrictive set of geometric restrictions is to use a lower quality level as the basis for merger and another way is to use a higher quality level as the basis for merger and if one or the other is chosen then these are mutually exclusive as you cannot use both ways of combining in that case). 
Regarding claim 22, Price teaches all that is required as applied to claim 20 above and further teaches wherein the plurality of different ways of applying the most restrictive set of geometric restrictions overlap (see Price, paragraphs 0044-0062 and figures 7A-9 wherein a plurality of different ways of applying the most restrictive set of geometric restrictions could comprise that data to be merged is of a high enough quality level in a same overlapping region and satisfies both some object proximity restriction and ambient light restriction for example as in paragraphs 0082-0104). Regarding claim 23, Price teaches all that is required as applied to claim 14 above and further teaches to store a result of the restrictive determination in the map (see Price, paragraphs 0058-0059 teaching to store the spatial mapping representation which contains the restrictive determinations as it is modified and served to others for example where “spatial mapping 705 may be stored locally and/or remotely on one device, and spatial mapping 710 may be stored locally and/or remotely on another device”). Regarding claim 25, Price teaches all that is required as applied to claim 14 above and further teaches wherein the first set of geometric restrictions and the second set of geometric restrictions comprise restrictions based on field of view (see Price, paragraph 0079 teaching “Differences in a 3D sensor's field of view (FOV) 1225 may also cause conflicts. For example, one mixed-reality system may use 3D sensing cameras with a wider or narrower lens. Similarly, the sensor's pixels size (i.e. angular resolution) 1230 may also cause conflicts. In some cases, one 3D sensing technology may have more or less pixel noise (i.e. measurement noise) 1235 than another spatial mapping, thereby resulting in conflicts” such that this impacts the quality and thus the geometric restrictions of combining the representations). 
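A filter corresponding to geometric capturing restrictions such as field of view (and, analogously, a vertical capture range) might be sketched as follows. Everything here is an illustrative assumption — the device is assumed to face along +x, and the parameter names and point format are hypothetical, not drawn from Price or the claims:

```python
import math

def crop_to_restrictions(points, device_pos, fov_deg, vertical_range):
    """Keep points inside the device's horizontal field of view (device
    assumed to face the +x direction) and inside its vertical capture
    range (z_min, z_max) -- one simple 'geometric restriction' filter."""
    z_min, z_max = vertical_range
    half_fov = math.radians(fov_deg) / 2.0
    kept = []
    for x, y, z in points:
        dx, dy = x - device_pos[0], y - device_pos[1]
        if not (z_min <= z <= z_max):
            continue                      # outside vertical range
        if dx <= 0:
            continue                      # behind the device
        if abs(math.atan2(dy, dx)) <= half_fov:
            kept.append((x, y, z))        # inside horizontal FOV
    return kept
```

Cropping the least restrictive representation with the most restrictive device's parameters in this way would leave only the region both devices can capture.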
Regarding claim 26, Price teaches all that is required as applied to claim 14 above and further teaches wherein the first set of geometric restrictions and the second set of geometric restrictions comprise restrictions based on vertical range (note that the claim does not say what the vertical range is in reference to nor how the vertical range necessarily is a geometric restriction and thus in the context of the claim if for example a capture device has some vertical range associated with it then it must be considered to be a geometric restriction; see Price, paragraph 0079 teaching “Differences in a 3D sensor's field of view (FOV) 1225 may also cause conflicts. For example, one mixed-reality system may use 3D sensing cameras with a wider or narrower lens. Similarly, the sensor's pixels size (i.e. angular resolution) 1230 may also cause conflicts. In some cases, one 3D sensing technology may have more or less pixel noise (i.e. measurement noise) 1235 than another spatial mapping, thereby resulting in conflicts” such that this means of course that the capture device has some resolution in a horizontal and vertical range meaning that the vertical range is one of the possible geometric restrictions). Regarding claim 27, as rendered definite as explained above, the instant claim is directed toward an apparatus in the form of “A non-transitory computer readable medium comprising a computer program” which functions to cause an alignment device (which is not positively recited nor required by the claim language as the computer program must merely be capable of causing some alignment device to perform some actions) to perform a set of actions such as in claim 1 above. 
Price teaches such an apparatus with a computer program (see Price, paragraphs 0120-0121 teaching “The disclosed embodiments may comprise or utilize a special-purpose or general-purpose computer including computer hardware, such as, for example, one or more processors (such as processor 1705) and system memory (such as storage 1715), as discussed in greater detail below. Embodiments also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general-purpose or special-purpose computer system” and “any other medium that can be used to store desired program code means in the form of computer-executable instructions, data, or data structures and that can be accessed by a general-purpose or special-purpose computer” can be used) and teaches the computer program performing the acts as recited as already addressed in claim 1 above. In light of this, the limitations of claim 27 correspond to the limitations of claim 1 above; thus it is rejected on the same grounds as claim 1 above. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. 
Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claim(s) 24 is/are rejected under 35 U.S.C. 103 as being unpatentable over Price in view of Minear et al2 (“Minear”). Regarding claim 24, Price teaches all that is required as applied to claim 14 above and further teaches wherein each one of the first representation and second representation comprises a plurality of 3D segments (see Price, paragraphs 0050-0053 teaching “After generating the spatial mapping, objects associated with that environment can be identified. For example, FIG. 5 shows an environment 500 that is an example representation of environment 400 from FIG. 4. Objects included within the resulting spatial mapping have been segmented”). Price fails to teach specifically that “each 3D segment is comprising centroid coordinates and a descriptor defining extension of the 3D segment around the centroid.” Note that the claim does not require that such representation comprising centroid coordinates affects the previous claim limitations in any manner but rather simply specifies some data format for the representations. 
In the same field of endeavor (3D data processing and representation/reconstruction), Minear teaches that associating a defined 3D segment with a calculated centroid coordinate value is a known geometric characterization of reconstructed objects and further associating such a segment with a descriptor defining its extension around the centroid (e.g., via some bounding box or shape or via some surface description that extends around the centroid) is also a known technique in the art for characterizing a segments size and shape (see Minear, abstract, “registration of n frames 3D point cloud data” and “sub-volumes (702) within each frame are defined” and “sub-volumes are identified in which the 3D point cloud data has a blob-like structure. A location of a centroid associated with each of the blob-like objects is also determined. Correspondence points between frame pairs are determined using the locations of the centroids in corresponding sub-volumes of different frames. Thereafter, the correspondence points are used to simultaneously calculate for all n frames, global translation and rotation vectors for registering all points in each frame. 
Data points in the n frames are then transformed using the global translation and rotation vectors to provide a set of n coarsely adjusted frames” such that here 3D segmented portions such as “blob-like objects” are determined and a “location of a centroid” of each object is determined which are centroid coordinates and these describe the centroid of the sub-volume of the blob-like objects; see also paragraphs 0045-0055 teaching the objects captured segmented into 3D segments and that ““qualifying sub-volumes” refers to those sub-volumes that contain a predetermined number of data points (to avoid sparsely populated sub-volumes) and which contain a blob-like point cloud structure” and “centroids are identified for each of the blob-like objects contained in each of the qualifying sub-volumes” and “centroids of blob-like objects for each sub-volume identified in step 312 are used to determine correspondence points between the frame pairs selected in step 304” and “finding a location of a centroid (centroid location) of a blob-like structure contained in a particular sub-volume from a frame”). Thus Minear teaches that not only is representation of 3D segmented data using a centroid location and a descriptor defining the extension of the 3D segment around the centroid known, it is further also known to be used for registering and aligning 3D segmented objects in different 3D datasets of the same subject matter. Thus, the prior art contains each element claimed, though not necessarily in a single reference. Therefore it would have been obvious for one of ordinary skill in the art before the effective filing date of the invention to modify Price’s representations by characterizing the disclosed 3D segments using Minear’s technique above. 
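A Minear-style characterization of a 3D segment — centroid coordinates plus a descriptor of the segment's extension around the centroid — can be sketched as below. The bounding-sphere radius is just one possible descriptor chosen for illustration; Minear's actual descriptor may differ, and the function name and data format are hypothetical:

```python
import math

def segment_descriptor(points):
    """Return (centroid, radius): the centroid coordinates of a 3D
    segment and, as one simple extension descriptor, the maximum
    point-to-centroid distance (a bounding-sphere radius)."""
    n = len(points)
    centroid = tuple(sum(p[axis] for p in points) / n for axis in range(3))
    radius = max(math.dist(p, centroid) for p in points)
    return centroid, radius
```

Such compact (centroid, extension) pairs are what make centroid-based correspondence matching across frames tractable in registration schemes of this kind.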
Furthermore, the results of such a modification would be predictable, as applying these known geometric characterizations to the 3D data segments in Price would predictably result in the representations having the centroid and descriptors as in Minear where such data would then still be subject to the filtering and further segmentation via object labeling as in Price once the proper data has been determined for the combined representation of both data types for both sets of geometric restrictions. The motivation would be to provide a standard, quantitative parametrization of the segments to facilitate comparison, analysis, identification, or fusion tasks such as those mentioned and taught in Price as explained above where Minear suggests that use of such a technique can be helpful when registering data from multiple views for example (see Minear, paragraph 0007, teaching “it will be appreciated that a registration process is required for assembling the multiple views or frames into a composite image that combines all of the data. The registration process aligns 3D point clouds from multiple scenes (frames) so that the observable fragments of the target represented by the 3D point cloud are combined together into a useful image”). Response to Arguments Applicant's arguments filed 12/4/2024 as “REMARKS” have been fully considered but they are not persuasive. 
Applicant argues on pages 8-9 of “REMARKS” that Price does not teach the amended claim limitations of “applying a filter corresponding to the most restrictive set of geometric restrictions to the 3D data structure to crop the 3D data structure.” Applicant first argues that in “Price, the selection of data is driven by quality thresholds and conflict resolution…not by a filter corresponding to a set of geometric restrictions.” The Examiner respectfully disagrees with this characterization as it does not consider fully the teachings of Price nor does it consider the breadth of “filter corresponding to a set of geometric restrictions.” In Price, while quality thresholds may be involved, the data involved in such quality evaluations is of a certain quality based on the geometric restrictions of the capturing devices that are capturing the same scene. Thus when Price’s data selection filters the representations this is based on the geometric restrictions of the capturing devices as the geometric restrictions can result in known effects to the quality of the data where Price teaches examples of a first camera meant to image a broad area and another camera meant to image a narrower area where the data from the narrower area is used to filter the data from the broader area as explained above. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “cropping a large mapping purely to match another device’s capturing restrictions (e.g., Field of View or Vertical Range)” and “cropping a least restrictive dataset to match a most restrictive one for alignment”) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). 
As seen above Applicant continues to reference geometric restrictiveness as equivalent to a “field of view” or “wide view” or “narrow view” without appreciating that these are subsets of a much broader category of “geometric restrictions” where a geometric restriction can be anything related to the geometry being captured being restricted by any aspect of the camera setup or scene. The data collected by any of the multiple devices will have geometric restrictions as explained above which will result in certain qualities of data in the representation captured by each device. This may include geometric restrictions relating more specifically to field of view of the devices, but may also relate to where objects are positioned relative to other devices as well as to scene and device characteristics. Thus in a simple setup as in Price as disclosed in figures 7A-7C, a device with a wider view obtains a representation of data and a device with a smaller view obtains a representation of the data, then a filter is applied so that data from the wide view is used for the areas outside the view captured by the narrower view, such that the data matches from both views. Thus Price also teaches a “geometry-dependent directionality” as in such a setup the geometry causes the quality of the data to be obtained at that level which then leads to the restrictive determination and filtering of the data from the first wider view (in the example of Price). Thus Applicant’s arguments are not persuasive in this respect as Price fully teaches the limitations of the claim language as explained in the rejections above taking into account the amended claim limitations. 
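The replacement-style merge the Examiner maps to FIGS. 7A-7C of Price — narrow-view data superseding wide-view data inside the overlap, wide-view data kept elsewhere — can be sketched generically. The dict-of-voxels representation and the coverage predicate are illustrative assumptions, not Price's actual data structures:

```python
def merge_mappings(wide, narrow, in_narrow_view):
    """Merge two spatial mappings keyed by voxel/cell identifier.

    Inside the narrow device's coverage (where in_narrow_view(key) is
    True) the narrow, higher-detail data replaces the wide data;
    outside it, the wide mapping's data is kept unchanged."""
    merged = {k: v for k, v in wide.items() if not in_narrow_view(k)}
    merged.update(narrow)
    return merged
```

The same skeleton covers the “supplementing” variant by keeping all wide entries and only adding narrow entries for cells the wide mapping lacks.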
Applicant then argues on page 9 of “REMARKS” that in “Price, segmentation is performed on the spatial mapping to identify object types, not as a step in the restrictive determination process that follows the application of a geometric filter to crop a 3D data structure” whereas the claimed invention requires “first a filter defined…and second, segmentation is performed based on the filtered 3D data structure.” The Examiner respectfully disagrees. In Price, for example as disclosed in relation to figures 5-7C, data is obtained from a wider view device and a smaller view device which captures detail of a smaller area which are both geometric capturing restrictions and a filter is defined to replace the portion of the first 3D data with data from the other device to obtain a final spatial mapping and then from this mapping the semantic segmentation of objects may take place on what is the restrictive determination. Thus the 3D data structure is modified and then the segmentation is able to take place as the data of the spatial mapping is now more accurate than capture from just a single device or representation. Applicant asserts without argument that claim 24 is allowable. The Examiner respectfully disagrees as the combination of Price and Minear renders the claim obvious as explained above. Note that discussion of previous application of Official Notice is moot as Official Notice is no longer used and a new round of prosecution has begun. No further detailed arguments are presented with regard to any specific claim or issue, with all claims standing rejected as fully explained above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See Lindner et al (US PGPUB No. 20160212411), paragraphs 0044-0065 teaching use of the FOV of multiple devices being used to combine data from multiple devices with different fields of view and geometric capturing restrictions. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT E SONNERS whose telephone number is (571)270-7504. The examiner can normally be reached Mon-Friday 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Xiao Wu can be reached at (571) 272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /SCOTT E SONNERS/Examiner, Art Unit 2613 /XIAO M WU/Supervisory Patent Examiner, Art Unit 2613 1 US PGPUB No. 20200035020 2 US PGPUB No. 20090232355

Prosecution Timeline

Apr 07, 2023
Application Filed
Apr 18, 2025
Non-Final Rejection — §102, §103, §112
Jul 31, 2025
Response Filed
Oct 07, 2025
Final Rejection — §102, §103, §112
Dec 04, 2025
Response after Non-Final Action
Jan 05, 2026
Request for Continued Examination
Jan 12, 2026
Response after Non-Final Action
Feb 04, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561816
MOTION CAPTURE USING CONCAVE REFLECTOR STRUCTURES
2y 5m to grant Granted Feb 24, 2026
Patent 12561845
DISTORTION INFORMATION FOR EACH ITERATION OF VERTICES RECONSTRUCTION
2y 5m to grant Granted Feb 24, 2026
Patent 12524957
METHOD OF GENERATING THREE-DIMENSIONAL MODEL AND DATA PROCESSING DEVICE PERFORMING THE SAME
2y 5m to grant Granted Jan 13, 2026
Patent 12518408
VIDEO-BASED TRACKING SYSTEMS AND METHODS
2y 5m to grant Granted Jan 06, 2026
Patent 12519919
METHOD AND SYSTEM FOR CONVERTING SINGLE-VIEW IMAGE TO 2.5D VIEW FOR EXTENDED REALITY (XR) APPLICATIONS
2y 5m to grant Granted Jan 06, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
81%
With Interview (+12.0%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 375 resolved cases by this examiner. Grant probability derived from career allow rate.
