Prosecution Insights
Last updated: April 19, 2026
Application No. 18/044,494

METHOD AND DEVICE FOR MAPPING A DEPLOYMENT ENVIRONMENT FOR AT LEAST ONE MOBILE UNIT AND FOR LOCATING AT LEAST ONE MOBILE UNIT IN A DEPLOYMENT ENVIRONMENT, AND LOCATING SYSTEM FOR A DEPLOYMENT ENVIRONMENT

Non-Final OA (§103, §112)
Filed: Mar 08, 2023
Examiner: MCCLEARY, CAITLIN RENEE
Art Unit: 3669
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Robert Bosch GmbH
OA Round: 3 (Non-Final)
Grant Probability: 57% (Moderate)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 57% (54 granted / 95 resolved; +4.8% vs TC avg)
Interview Lift: +32.0% (allow rate of resolved cases with interview vs. without)
Typical Timeline: 2y 11m avg prosecution; 56 currently pending
Career History: 151 total applications across all art units

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§102: 14.0% (-26.0% vs TC avg)
§112: 27.4% (-12.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 95 resolved cases.
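The headline figures above are simple ratios over the examiner's resolved cases and can be checked directly. A quick sketch (the Tech Center baseline is inferred from the stated +4.8% delta, not reported separately):

```python
# Recompute the dashboard's career allow rate from the reported counts.
granted, resolved = 54, 95

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 56.8%, shown rounded as 57%

# The "+4.8% vs TC avg" delta implies a Tech Center baseline near 52%.
# (This baseline is an inference, not a figure stated in the dashboard.)
tc_avg_estimate = allow_rate - 4.8
print(f"Implied TC average: {tc_avg_estimate:.1f}%")
```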

Office Action

Grounds: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 16-29 were previously pending. Claims 16-17, 19, 24, and 26-29 have been amended. Thus, claims 16-29 remain pending and have been examined in this application.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/11/2026 has been entered.

Examiner's Note

Examiner has cited particular paragraphs/columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested from the applicant, in preparing the responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner. Applicant is reminded that the Examiner is entitled to give the broadest reasonable interpretation to the language of the claims. Furthermore, the Examiner is not limited to Applicant's definition which is not specifically set forth in the disclosure.

Claim Objections

Claims 24 and 26-29 are objected to because of the following informalities: Claims 24 and 26-29 recite “the positions being determined by according to” but should instead recite --the positions being determined --. 
Appropriate correction is required.

Claim Interpretation

Use of the word "means" (or "step for") in a claim with functional language creates a rebuttable presumption that the claim element is to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is invoked is rebutted when the function is recited with sufficient structure, material, or acts within the claim itself to entirely perform the recited function. Absence of the word "means" (or "step for") in a claim creates a rebuttable presumption that the claim element is not to be treated in accordance with 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph). The presumption that 35 U.S.C. 112(f) (pre-AIA 35 U.S.C. 112, sixth paragraph) is not invoked is rebutted when the claim element recites function but fails to recite sufficiently definite structure, material or acts to perform that function. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “image acquisition apparatus” in claims 16-29, “data processing apparatus” in claims 20 and 28, and “device” in claim 28. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The above-referenced claim limitations has/have been interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because: “image acquisition apparatus” in claims 16-29, “data processing apparatus” in claims 20 and 28, and “device” in claim 28 all use a generic placeholder (apparatus/device) coupled with functional language without reciting sufficient structure to achieve the function. Furthermore, the generic placeholder is not preceded by a structural modifier. Since the claim limitation(s) invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, the claims have been interpreted to cover the corresponding structure described in the specification that achieves the claimed function, and equivalents thereof. A review of the specification shows that the following appears to be the corresponding structure described in the specification for the 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph limitation:

Image acquisition apparatus: Page 6, lines 5-6 “camera”
Data processing apparatus: Page 12, line 16 - page 13, line 10 “…for example, a signal processor, a microcontroller, or the like…”
Device: Page 13, lines 11-22 “…may take the form of hardware and/or software… integrated circuits… a microcontroller”

For all the units corresponding to a computer (hardware) the software (steps in an algorithm/flowchart) should be included to indicate proper support. If applicant wishes to provide further explanation or dispute the examiner's interpretation of the corresponding structure, applicant must identify the corresponding structure with reference to the specification by page and line number, and to the drawing, if any, by reference characters in response to this Office action. If applicant does not intend to have the claim limitation(s) treated under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may amend the claim(s) so that it/they will clearly not invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, or present a sufficient showing that the claim recites/recite sufficient structure, material, or acts for performing the claimed function to preclude application of 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. For more information, see MPEP § 2173 et seq. and Supplementary Examination Guidelines for Determining Compliance With 35 U.S.C. 112 and for Treatment of Related Issues in Patent Applications, 76 FR 7162, 7167 (Feb. 9, 2011).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 
112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 17-18 and 24-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 17 (and similarly claim 24) recites “the positions being determined by using a random process and/or according to a predefined distribution scheme”; however, in the Arguments/Remarks filed 2/11/2026, Applicant indicates that all independent claims should have been similarly amended (to remove the limitation of using a random process). It is unclear whether claim 17 is also intended to include the same amendment. The metes and bounds of the claim limitation are vague and ill-defined, rendering the claims indefinite. As best understood, claim 17 will be interpreted similarly to the other independent claims, such that the limitation of using a random process has been removed.

Claims 18 and 25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being dependent on rejected claims 17 and 24 and for failing to cure the deficiencies listed above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 16-19 and 21-29 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang (High-Precision Localization Using Ground Texture, a copy of which was provided with the IDS dated 3/8/2023 and is being relied upon) in view of Xu (CN 111754408 A, a machine translation is attached and is being relied upon).

Regarding claim 16, Zhang discloses a method for providing mapping data for a map of a deployment environment for at least one mobile unit (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”), the method comprising the following steps: reading in reference image data from an interface to an image acquisition apparatus of the at least one mobile unit (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – camera… images), wherein the reference image data represents a plurality of reference images captured using the image acquisition apparatus from subportions specific to each reference image of the ground of the deployment environment, wherein adjacent subportions partially overlap (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – ground texture… image stitching); extracting a plurality of reference image features from each of the reference images using the reference image data (Chapter III.A. “Mapping”, Chapter III.B. 
“Localization” – extracting features from images… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database); and producing the mapping data, wherein, using the reference image data, a reference feature descriptor is ascertained at the position of each of the reference image features, and wherein the mapping data includes the reference image data, the positions of the reference image features, and the reference feature descriptors (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – feature descriptors… building a map). Zhang does not appear to explicitly disclose wherein positions of the reference image features in each of the reference images are determined independently of a specific image content of the reference image data, the positions being determined according to a predefined distribution scheme. Xu, in the same field of endeavor, teaches the following limitations: wherein positions of the image features in each of the images are determined independently of a specific image content of the image data, the positions being determined according to a predefined distribution scheme (see at least [0011, 0063, 0108, 0160] – extract feature point sets of the first and second images, wherein each feature point in the feature point set is uniformly distributed on the corresponding image). It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Xu into the invention of Zhang with a reasonable expectation of success for the purpose of reducing computational load while improving accuracy and real-time performance of image stitching (Xu – [0108, 0160]).

Regarding claim 17, Zhang discloses a method for creating a map of a deployment environment for at least one mobile unit (Abstract, Figure 1, Chapter III.A. 
“Mapping”, Chapter III.B. “Localization”), the method comprising the following steps: receiving mapping data from a communication interface to the at least one mobile unit (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – building a map), the mapping data being produced by: reading in reference image data from an interface to an image acquisition apparatus of the at least one mobile unit (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – camera…images), wherein the reference image data represents a plurality of reference images captured using the image acquisition apparatus from subportions specific to each reference image of the ground of the deployment environment, wherein adjacent subportions partially overlap (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – ground texture… image stitching), extracting a plurality of reference image features from each of the reference images using the reference image data (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – extracting features from images… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database), and producing the mapping data, wherein, using the reference image data, a reference feature descriptor is ascertained at the position of each of the reference image features, and wherein the mapping data includes the reference image data, the positions of the reference image features, and the reference feature descriptors (Chapter III.A. “Mapping”, Chapter III.B. 
“Localization” – feature descriptors… building a map); determining a reference pose of the image acquisition apparatus for each of the reference images relative to a reference coordinate system using the mapping data and as a function of correspondences between the reference image features of overlapping ones of reference images ascertained using the reference feature descriptors (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – building a map… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database… image stitching pipeline consists of frame-to-frame matching followed by global optimization, leveraging extensive loop closures provided by the zig-zag path); and combining the reference images as a function of the reference poses, the positions of the reference image features, and the reference feature descriptors, to create the map of the deployment environment (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – building a map… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database… image stitching pipeline consists of frame-to-frame matching followed by global optimization, leveraging extensive loop closures provided by the zig-zag path). Zhang does not appear to explicitly disclose wherein positions of the reference image features in each of the reference images are determined independently of a specific image content of the reference image data, the positions being determined by using a random process and/or according to a predefined distribution scheme. 
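The distinction this ground of rejection turns on (Zhang's keypoints come from a content-driven detector, with only the stored subset chosen randomly, while Xu is cited for positions fixed independently of image content) can be illustrated with a short sketch. The function names, grid step, and subset size below are assumptions for illustration, not details taken from either reference:

```python
import random

def uniform_grid_positions(height, width, step=50):
    """A predefined distribution scheme: positions on a fixed grid.
    Depends only on the image dimensions, never on pixel content."""
    return [(x, y)
            for y in range(step // 2, height, step)
            for x in range(step // 2, width, step)]

def random_keypoint_subset(detected_keypoints, k=50, seed=0):
    """Zhang-style storage: keypoints come from a content-driven detector
    (e.g., SIFT DoG); only the stored subset is selected randomly."""
    rng = random.Random(seed)
    return rng.sample(detected_keypoints, min(k, len(detected_keypoints)))

# Grid positions are identical for any two images of the same size, which
# is the sense of "independent of a specific image content" at issue here.
assert uniform_grid_positions(200, 300) == uniform_grid_positions(200, 300)
```

Under the content-independent reading, descriptors would then be computed at those fixed positions rather than at detected keypoints.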
Xu, in the same field of endeavor, teaches the following limitations: wherein positions of the reference image features in each of the reference images are determined independently of a specific image content of the reference image data, the positions being determined by using a random process and/or according to a predefined distribution scheme (see at least [0011, 0063, 0108, 0160] – extract feature point sets of the first and second images, wherein each feature point in the feature point set is uniformly distributed on the corresponding image). The motivation to combine Zhang and Xu is the same as in the rejection of claim 16 above.

Regarding claim 18, Zhang discloses wherein, in the determining step, each of the reference poses is determined as a function of correspondences between reference image features for which reference feature descriptors satisfying a similarity criterion with regard to one another have been ascertained in the overlapping reference images (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – building a map… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database… image stitching pipeline consists of frame-to-frame matching followed by global optimization, leveraging extensive loop closures provided by the zig-zag path).

Regarding claim 19, Zhang discloses a method for determining localization data for localizing at least one mobile unit in a deployment environment (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. 
“Localization”), the method comprising the following steps: reading in image data from an interface to an image acquisition apparatus of the at least one mobile unit, wherein the image data represents at least one image, which is captured using the image acquisition apparatus, of a subportion of a ground of the deployment environment (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – camera… test image… ground texture); extracting a plurality of image features for the at least one image using the image data (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… a voting approach); generating a feature descriptor at the position of each of the image features using the image data to determine the localization data, wherein the localization data includes the positions of the image features and the feature descriptors (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… after voting, the cell with the highest score is very likely to contain the true origin of the test image). Zhang does not appear to explicitly disclose wherein positions of the image features in the at least one image are determined independently of a specific image content of the reference image data, the positions being determined according to a predefined distribution scheme. 
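Zhang's localization, as characterized in the citations above (a nearest-neighbor search per descriptor followed by voting, with the highest-scoring cell taken as the test image's origin), can be reduced to a toy sketch. Real SIFT descriptors are high-dimensional and the search uses an index; the scalar descriptors, cell size, and function name here are simplifying assumptions:

```python
from collections import Counter

def localize_by_voting(query_features, database, cell_size=10):
    """query_features: (position, descriptor) pairs from the test image.
    database: (map_position, descriptor) pairs stored during mapping.
    Each query descriptor votes for the map cell its nearest-neighbor
    match implies; the cell with the highest score wins."""
    votes = Counter()
    for (qx, qy), q_desc in query_features:
        # Nearest neighbor by absolute descriptor distance (toy metric).
        (mx, my), _ = min(database, key=lambda entry: abs(entry[1] - q_desc))
        # The match implies the test image's origin lies near (mx - qx, my - qy).
        cell = ((mx - qx) // cell_size, (my - qy) // cell_size)
        votes[cell] += 1
    return votes.most_common(1)[0][0]

# Three consistent matches all imply an origin near (100, 100).
database = [((100, 100), 1.0), ((105, 100), 2.0), ((100, 105), 3.0)]
query = [((0, 0), 1.0), ((5, 0), 2.0), ((0, 5), 3.0)]
print(localize_by_voting(query, database))  # (10, 10), the cell covering (100, 100)
```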
Xu, in the same field of endeavor, teaches the following limitations: wherein positions of the image features in the at least one image are determined independently of a specific image content of the reference image data, the positions being determined according to a predefined distribution scheme (see at least [0011, 0063, 0108, 0160] – extract feature point sets of the first and second images, wherein each feature point in the feature point set is uniformly distributed on the corresponding image). The motivation to combine Zhang and Xu is the same as in the rejection of claim 16 above.

Regarding claim 21, Zhang discloses further comprising: eliciting correspondences between image features of the localization data and reference image features of a preceding image using the feature descriptors of the localization data and reference feature descriptors of the preceding image (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… match with the database features); and determining a pose of the image acquisition apparatus for the at least one image relative to a reference coordinate system as a function of the correspondences elicited in the step of eliciting correspondences, in order to carry out localization (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – for each descriptor, search for the nearest neighbor… match with the database features… pose in world coordinates).

Regarding claim 22, Zhang discloses in which the random process and/or the predefined distribution scheme is used in the extracting step: (i) in which a list with all possible image positions of reference image features is produced and the list is pseudorandomly shuffled or positions are pseudorandomly selected from the list (Chapter III.A. “Mapping”, Chapter III.B. 
“Localization” – extracting features from images… for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database), and/or (ii) in which a fixed pattern of positions or one of a number of pseudorandomly created patterns of positions is used (due to the and/or language, BRI only requires one of these limitations).

Regarding claim 23, Zhang discloses wherein in which the random process and/or the predefined distribution scheme is used in the extracting step, in which a variable or defined number of positions is used (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – extracting features from images… for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database) and/or in which different distribution densities of positions are defined for different subregions of each reference image (due to the and/or language, BRI only requires one of these limitations).

Regarding claim 24, Zhang discloses a method for localizing at least one mobile unit in a deployment environment (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”), the method comprising the following steps: receiving localization data from a communication interface to the at least one mobile unit (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – online localization… pose), wherein the localization data is determined by: reading in image data from an interface to an image acquisition apparatus of the at least one mobile unit, wherein the image data represent at least one image, which is captured using the image acquisition apparatus, of a subportion of a ground of the deployment environment (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – camera… test image… ground texture), extracting a plurality of image features for the at least one image using the image data (Abstract, Figure 1, Chapter III.A. 
“Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… a voting approach), generating a feature descriptor at the position of each of the image features using the image data to determine the localization data, wherein the localization data includes the positions of the image features and the feature descriptors (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… after voting, the cell with the highest score is very likely to contain the true origin of the test image); ascertaining correspondences between the image features of the localization data and reference image features of a map using the feature descriptors of the localization data and reference feature descriptors of the map (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… match with the database features), the map being generated by: receiving mapping data from the communication interface to the at least one mobile unit (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – building a map), the mapping data being produced by: reading in reference image data from the communication interface to an image acquisition apparatus of the at least one mobile unit, wherein the reference image data represents a plurality of reference images captured using the image acquisition apparatus from subportions specific to each reference image of the ground of the deployment environment, wherein adjacent subportions partially overlap (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. 
“Localization” – ground texture… image stitching), extracting a plurality of reference image features from each of the reference images using the reference image data (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – extracting features from images… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database), and producing the mapping data, wherein, using the reference image data, a reference feature descriptor is ascertained at the position of each of the reference image features, and wherein the mapping data includes the reference image data, the positions of the reference image features, and the reference feature descriptors (Chapter III.A. “Mapping”, Chapter III.B. “Localization” – feature descriptors… building a map); determining a reference pose of the image acquisition apparatus for each of the reference images relative to a reference coordinate system using the mapping data and as a function of correspondences between the reference image features of overlapping ones of reference images ascertained using the reference feature descriptors (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – building a map… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database… image stitching pipeline consists of frame-to-frame matching followed by global optimization, leveraging extensive loop closures provided by the zig-zag path); and combining the reference images as a function of the reference poses, the positions of the reference image features, and the reference feature descriptors, to create the map of the deployment environment (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. 
“Localization”, Chapter IV. “Evaluation” – building a map… use the SIFT scale-space DoG detector and gradient orientation histogram descriptor, for each image in the map we typically find 1000 to 2000 SIFT keypoints, and randomly select 50 of them to be stored in the database… image stitching pipeline consists of frame-to-frame matching followed by global optimization, leveraging extensive loop closures provided by the zig-zag path); determining a pose of the image acquisition apparatus for the at least one image relative to the reference coordinate system as a function of the correspondences ascertained in the ascertaining step and using the reference poses of the map to generate pose information which represents the pose (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… match with the database features… pose in world coordinates); and outputting the pose information at the communication interface to the at least one mobile unit, to carry out localization (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization” – online localization… pose). Zhang does not appear to explicitly disclose wherein positions of the image features in the at least one image are determined independently of a specific image content of the reference image data, the positions being determined by according to a predefined distribution scheme; and wherein positions of the reference image features in each of the reference images are determined independently of a specific image content of the reference image data, the positions being determined by using a random process and/or according to a predefined distribution scheme. 
Xu, in the same field of endeavor, teaches the following limitations: wherein positions of the reference image features in each of the reference images are determined independently of a specific image content of the reference image data, the positions being determined by using a random process and/or according to a predefined distribution scheme (see at least [0011, 0063, 0108, 0160] – extract feature point sets of the first and second images, wherein each feature point in the feature point set is uniformly distributed on the corresponding image). The motivation to combine Zhang and Xu is the same as in the rejection of claim 16 above.

Regarding claim 25, Zhang discloses wherein, in the determining step, weighting values and/or confidence values are applied to the correspondences ascertained in the ascertaining step to generate scored correspondences, wherein the pose is determined as a function of the scored correspondences (Abstract, Figure 1, Chapter III.A. “Mapping”, Chapter III.B. “Localization”, Chapter IV. “Evaluation” – extract SIFT features from the test image… for each descriptor, search for the nearest neighbor… match with the database features… after voting the cell with the highest score is very likely to contain the true origin of the test image… pose of the image).

Regarding claim 26, all the limitations have been analyzed in view of claim 16, and it has been determined that claim 26 does not teach or define any new limitations beyond those previously recited in claim 16; therefore, claim 26 is also rejected over the same rationale as claim 16.

Regarding claim 27, all the limitations have been analyzed in view of claim 17, and it has been determined that claim 27 does not teach or define any new limitations beyond those previously recited in claim 17; therefore, claim 27 is also rejected over the same rationale as claim 17.
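The Zhang passage cited for claim 25 ("for each descriptor, search for the nearest neighbor… match with the database features") reduces to brute-force nearest-neighbor matching over descriptor vectors. A hedged NumPy sketch with synthetic data (the 128-dimensional size mirrors a SIFT descriptor; the descriptors themselves are random, not taken from Zhang):

```python
import numpy as np

def nearest_neighbor_matches(query_desc, db_desc):
    """For each query descriptor, return the index of its nearest
    database descriptor by Euclidean distance (brute force)."""
    # Pairwise distance matrix of shape (n_query, n_db)
    dists = np.linalg.norm(query_desc[:, None, :] - db_desc[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

rng = np.random.default_rng(0)
db = rng.normal(size=(50, 128))      # e.g. 50 stored descriptors per map image
# Slightly perturbed copies of three database entries stand in for a test image
query = db[[3, 17, 42]] + 0.01 * rng.normal(size=(3, 128))
print(nearest_neighbor_matches(query, db).tolist())  # -> [3, 17, 42]
```

The matched indices would then feed a scoring step such as Zhang's cell voting, where the highest-scoring cell indicates the likely origin of the test image.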
Regarding claim 28, all the limitations have been analyzed in view of claim 24, and it has been determined that claim 28 does not teach or define any new limitations beyond those previously recited in claim 24; therefore, claim 28 is also rejected over the same rationale as claim 24.

Regarding claim 29, all the limitations have been analyzed in view of claim 16, and it has been determined that claim 29 does not teach or define any new limitations beyond those previously recited in claim 16; therefore, claim 29 is also rejected over the same rationale as claim 16.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Xu and Schonberger (US 2020/0372672 A1).

Regarding claim 20, Zhang does not appear to explicitly disclose further comprising: outputting the localization data at the interface to a data processing apparatus, wherein the localization data is output in a plurality of data packets, wherein each of the data packets includes at least one position of an image feature and at least one feature descriptor, wherein a data packet is output as soon as at least one feature descriptor is generated.

Schonberger, in the same field of endeavor, teaches the following limitations: outputting the localization data at the interface to a data processing apparatus, wherein the localization data is output in a plurality of data packets, wherein each of the data packets includes at least one position of an image feature and at least one feature descriptor ([0032, 0038] – image features may be SIFT features which include a feature descriptor… information may be divided between any suitable number of discrete data packets or transmissions), wherein a data packet is output as soon as at least one feature descriptor is generated ([0017, 0033] – transmit image features to the remote device as soon as even a single image feature is detected in the first image).
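Schonberger's cited teaching, emitting a packet as soon as even a single feature descriptor exists rather than batching the whole image's features, maps naturally onto a generator. A sketch under assumptions (the packet is a plain dict with hypothetical field names; the reference does not specify a wire format):

```python
def stream_feature_packets(features):
    """Yield one packet per feature the moment its descriptor is available,
    instead of waiting for all features of the image to be computed."""
    for position, descriptor in features:
        # Each packet carries at least one position and one descriptor,
        # matching the claim 20 limitation.
        yield {"position": position, "descriptor": descriptor}

# Toy features: (pixel position, descriptor vector)
features = [((12, 34), [0.1, 0.9]), ((56, 78), [0.4, 0.2])]
packets = list(stream_feature_packets(features))
print(len(packets), packets[0]["position"])  # -> 2 (12, 34)
```

The asserted benefit (Schonberger [0017]) is latency: the remote device can begin matching against the map before local feature extraction finishes.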
It would have been obvious to one of ordinary skill in the art before the effective filing date to have incorporated the teachings of Schonberger into the invention of Zhang with a reasonable expectation of success in order to reduce the time required for the image-based localization (Schonberger – [0017]).

Response to Arguments

In light of the amendments to the claims, the previous claim objections and the previous 35 U.S.C. 112(b) rejections have been overcome and withdrawn. However, a new 35 U.S.C. 112(b) rejection is presented above, necessitated by the amendments.

Applicant's arguments, see page 11, filed 2/11/2026, with respect to the previous prior art rejections have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CAITLIN MCCLEARY, whose telephone number is (703) 756-1674. The examiner can normally be reached Monday - Friday, 10:00 am - 7:00 pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Navid Z Mehdizadeh, can be reached at (571) 272-7691. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/C.R.M./
Examiner, Art Unit 3669

/NAVID Z. MEHDIZADEH/
Supervisory Patent Examiner, Art Unit 3669

Prosecution Timeline

Mar 08, 2023
Application Filed
Jan 22, 2025
Non-Final Rejection — §103, §112
Apr 28, 2025
Response Filed
Jul 31, 2025
Final Rejection — §103, §112
Jan 12, 2026
Response after Non-Final Action
Feb 11, 2026
Request for Continued Examination
Feb 23, 2026
Response after Non-Final Action
Mar 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589771
VEHICLE CONTROL DEVICE, STORAGE MEDIUM FOR STORING COMPUTER PROGRAM FOR VEHICLE CONTROL, AND METHOD FOR CONTROLLING VEHICLE
2y 5m to grant · Granted Mar 31, 2026
Patent 12583670
LIFT ARM ASSEMBLY FOR A FRONT END LOADING REFUSE VEHICLE
2y 5m to grant · Granted Mar 24, 2026
Patent 12552379
STAGGERING DETERMINATION DEVICE, STAGGERING DETERMINATION METHOD, AND STORAGE MEDIUM
2y 5m to grant · Granted Feb 17, 2026
Patent 12539840
SYSTEM AND METHOD FOR PROBING PROPERTIES OF A TRAILER TOWED BY A TOWING VEHICLE IN A HEAVY-DUTY VEHICLE COMBINATION
2y 5m to grant · Granted Feb 03, 2026
Patent 12509934
Sensor Device
2y 5m to grant · Granted Dec 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
89%
With Interview (+32.0%)
2y 11m
Median Time to Grant
High
PTA Risk
Based on 95 resolved cases by this examiner. Grant probability derived from career allow rate.
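The projection figures reconcile as simple arithmetic: the 57% baseline is the career allow rate (54 granted of 95 resolved), and the with-interview figure applies the lift as an additive 32-percentage-point bump. That additive reading is our interpretation of the dashboard, not a documented formula:

```python
granted, resolved = 54, 95
allow_rate = granted / resolved      # career allow rate: ~0.568 -> "57%"
interview_lift = 0.32                # assumed additive lift in percentage points
with_interview = allow_rate + interview_lift

print(f"{allow_rate:.0%} baseline, {with_interview:.0%} with interview")
# -> 57% baseline, 89% with interview
```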
