DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on March 4, 2026 has been entered.
Response to Arguments
Applicant amended claims 1, 4, and 9.
The pending claims are 1 – 9 [Page 9 lines 1 – 4].
Applicant provides their summary of the previous Office Action [Page 9 lines 5 – 8].
Applicant amended the Drawings to address the Examiner’s Drawing Objections [Page 9 lines 9 – 21].
Applicant amends the claims and provides comments regarding the Examiner’s 112(b) Rejections [Page 10 lines 1 – 17].
Applicant does not comment on the Examiner’s determination that the claim terms do NOT invoke a Functional Analysis. In view of the apparent agreement, and in the interest of brevity, the Examiner removes the Functional Analysis section.
Applicant's arguments filed March 4, 2026 have been fully considered but they are not persuasive.
First, the Applicant cites the references against the claims [Page 8 lines 18 – 20].
Second, the Applicant recites portions of amended independent claim 1 [Page 9 lines 4 – 11] and then cites Specification support for the amendments to the claims [Page 9 lines 12 – 19].
Third, the Applicant broadly dismisses Kume and Gomita as not germane to the Applicant’s alleged problem solved [Page 9 lines 20 – 23].
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “a countermeasure for abnormal time” [Page 9 line 22], although the Examiner notes there is no mention of such a benefit in the Original Specification as filed) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Fourth, the Applicant contends Kato does not render obvious the claimed features because Kato does not perform continuous updating [Page 9 lines 24 – 26]. However, Kato, in at least Paragraphs 27 (continuous operation is a benefit of the invention of Kato), 115, and 124 – 125, renders obvious continuously updating the image capture and the estimation of the self / own position. Further, Kato Figure 14 and its associated description render obvious the claimed features, including the feature point detection thresholds.
Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections.
Fifth, the Applicant broadly contends Zhang does not disclose the self-position estimation as claimed [Page 10 lines 1 – 2]. The Examiner disagrees, as at least the cited portions of Zhang, in combination with the other references (e.g. Kato), render obvious the iterative / continuous processing allegedly claimed, and Zhang teaches environments in which to apply the teachings of Kato, Kume, and Gomita.
In response to applicant's arguments against the references individually, one cannot show nonobviousness by attacking references individually where the rejections are based on combinations of references. See In re Keller, 642 F.2d 413, 208 USPQ 871 (CCPA 1981); In re Merck & Co., 800 F.2d 1091, 231 USPQ 375 (Fed. Cir. 1986).
Sixth, the Applicant contends the combination of references does not render obvious the alleged benefits claimed in continuous operation, alleging Kato has fixed relationships [Page 10 lines 3 – 8]; however, nothing in Kato is “fixed” as alleged [Kato Figure 14 as well as Paragraphs 115 – 127, at least in combination with Kume and Gomita]. The Applicant further alleges the references lack normal / abnormal times of operation [Page 10 lines 9 – 13].
In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., “preparation for normal times” [Page 10 lines 9 – 13], although the Examiner notes there is no mention of such a benefit in the Original Specification as filed) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Seventh, the Applicant contends claim 1 is allowable and argues similarly for claim 9 and contends the dependent claims are allowable [Page 10 lines 14 – 19].
While the Applicant’s points are understood, the Examiner respectfully disagrees, and the Rejection is maintained.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on September 8, 2023 and December 12, 2024 were filed before the mailing date of the First Action on the Merits (mailed July 1, 2025). The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the Examiner.
Claim Objections
Claims 1 and 4 are objected to because of the following informalities:
Regarding claim 1, the claim terms “unacquirable” and “cannot be acquired” raise potential indefiniteness issues because the two phrasings do not necessarily impose the same requirement (a number of feature points / landmarks detected); only “unacquirable” is defined. Further, “acquirable” has no expressly stated condition and thus may raise indefiniteness issues regarding the metes and bounds of the claims.
Regarding claim 4, see claim 1 for similar reasoning with respect to at least the “cannot be acquired” limitation, as the Specification provides no explicit definition of the term or requirement for making such a determination.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 4 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
Regarding claim 4, the amended portion of amended independent claim 1 recites the same or similar limitations; thus, claim 4 merely reiterates the coordinate-conversion steps of claim 1 without further limitation.
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1 – 9 are rejected under 35 U.S.C. 103 as being unpatentable over Kato, et al. (US PG PUB 2018/0266830 A1, referred to as “Kato” throughout), and further in view of Gomita (US PG PUB 2022/0198697 A1, referred to as “Gomita” throughout) [Cited in Applicant’s September 8, 2023 IDS], Kume, et al. (WO2019/186677 A1, referred to as “Kume” throughout) [First Cited in the Office Action mailed July 1, 2025], and Zhang, et al. (US PG PUB 2023/0331485 A1, referred to as “Zhang” throughout, where the Examiner notes the US Application on which the PG PUB is based has been patented, affording Zhang its Foreign Priority date).
Regarding claim 9, see claim 1, which is the apparatus performing the steps of the claimed method.
Regarding claim 1, Kato teaches an arrangement of imagers around a body in a fixed arrangement, with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras to further render obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation, with additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
a first sensor facing in a first direction and acquires first environment information which is information about an object within a field of view of the first sensor [Kato Figures 11 – 13 (see at least reference character 31 or 12 (stereo camera on the bottom / obvious to arrange elsewhere on a vehicle)) as well as Paragraphs 80 – 89 (road surface images at least as an object to compute environment information including coordinate determinations such as in Paragraphs 86 – 87), 92 – 98 (at least Paragraph 96 suggests a forward facing camera / sensor arrangement), 106 and 117 – 120 (road surface as an object for environment / coordinate determinations); Gomita Figures 1 – 2 (see at least reference characters 5 and h1) and 13 – 14 (method and directions of imagers) as well as Paragraphs 37 – 44 (SLAM to generate the environment), 50 – 57 (camera in a first direction imaging a first target object), 62 – 63 (coordinates determined with first / second object), 77 – 87 (imaging using SLAM to generate first environment / coordinate system information using at least a first object imaged), and 126 – 130 (coordinates computed from first / second image); Zhang Paragraphs 28 – 30 (sensors with FOV considerations) and 35 – 37 (imaging target objects in FOV)];
a second sensor facing in a second direction and acquires second environment information which is information about an object within a field of view of the second sensor [Kato Figures 1 – 2 and 11 – 13 (see at least reference character 31 or 12 (stereo camera on the bottom / obvious to arrange elsewhere on a vehicle for the second imager)) as well as Paragraphs 80 – 89 (road surface images at least as an object to compute environment information including coordinate determinations such as in Paragraphs 86 – 87), 92 – 98 (at least Paragraph 96 suggests a rearward or bottom facing camera / sensor arrangement – combine with forward facing imagers / sensors), 106 and 117 – 120 (road surface as an object for environment / coordinate determinations); Gomita Figures 1 – 2 (see at least reference characters 6 and h2) and 13 – 14 (method and directions of imagers) as well as Paragraphs 37 – 44 (SLAM to generate the environment), 50 – 57 (camera in a second direction imaging a second target object), 62 – 63 (coordinates determined with first / second object), 77 – 87 (imaging using SLAM to generate first environment / coordinate system information using at least a second object imaged), 126 – 130 and 145 – 148 (coordinates computed from first / second image); Zhang Paragraphs 28 – 30 (sensors with FOV considerations where the types of sensors (e.g. binocular, depth, laser) render obvious first and second sensors) and 35 – 37 (imaging target objects in FOV)];
a memory [Kato Figure 15 (see at least reference characters 100, 102, 103, and 111 (medium / ROM / RAM)) as well as Paragraphs 129 – 134 (memory implementation)], and
a controller [Kato Figure 15 (see at least reference characters 100 and 101) as well as Paragraphs 129 – 134 (CPU as a processor / controller, an obvious variant to one of ordinary skill in the art)] configured to:
estimate a self-position based only on the second environment information of the first environment information and the second environment information, when the second environment information is acquirable [Kato Figures 7, 9, and 13 – 14 (see determinations of “first self-position” and “second self-position” as the labels may be swapped as an obvious variant to one of ordinary skill in the art (selection of finite number of elements in KSR Rationale (E))) as well as Paragraphs 99 – 106 and 116 – 124 (first self-position determinations using first image in real space / geographical coordinates and second self-position determinations using second image in real space / geographical coordinates combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127) where the techniques are combinable with Zhang Figures 2 – 3 as well as Paragraphs 28 – 32, 35 – 36, and 55 – 60 (basic environmental map generation which may include semantic map information in a workplace)], and
estimate the self-position based only on the first environment information of the first environment information and the second environment information [See previous limitation (regarding self-position determinations) and the next limitation (for the conditional) and additionally Kato Figure 14 (see at least reference character S53 determining the availability of the second image); Gomita Paragraphs 145 – 150 (mapping coordinates into a fixed / universal / global system) and 205 – 208 (adjustments / estimates made based on validity of images (can combine / modify Kato Figure 14 reference character S53 at least)); Kume Figures 2 and 6 as well as Page 2 Last Paragraph – Page 3 Fourth Full Paragraph (asynchronous images combined / aligned into a common coordinates using SLAM / vSLAM in the position estimation unit (reference character 101)) and Page 7 Fourth Full Paragraph – Page 8 Second Paragraph (estimating position into a common / global coordinate system based on asynchronous image / sensor data); Zhang Figures 2 – 3 as well as Paragraphs 29 – 32, 35 – 36, and 55 – 62 (basic environmental map generation which may include semantic map or without the second / semantic information (e.g. Paragraph 60 rendering obvious the auxiliary semantic information (e.g. 
Paragraphs 28 – 32) is not acquired / available) information in a workplace)], when the second environment information becomes unacquirable due to a number of feature points being no more than a threshold [See previous limitation (regarding self-position determinations) and additionally Kato Figure 14 (see at least reference character S53 determining the availability of the second image based on number of landmarks / feature points (obvious variant to one of ordinary skill in the art) as the labels of “first” and “second” may be swapped as an obvious variant to one of ordinary skill in the art (selection of finite number of elements in KSR Rationale (E))) as well as Paragraphs 116 – 124 (using different information for current environment information based on the number of detected landmarks / feature points)],
continue [Kato Figure 14 as well as Paragraphs 27 (continuous operation / repeated iterations of Figure 14 / processing of environment), 115 and 124 – 125 (rendering obvious continuous updating the image capture and estimation of the self / own position estimates)], when both the first environment information and the second environment information are acquirable, to calculate conversion information between a first coordinate system and a second coordinate system, and estimates self-position based only on the second environment information [Kato Figures 7, 9, and 13 – 14 (see determinations of “first self-position” and “second self-position” as the labels may be swapped as an obvious variant to one of ordinary skill in the art (selection of finite number of elements in KSR Rationale (E))) as well as Paragraphs 99 – 106 (corrections to each self-position to combine and see at least equation (1)) and 114 – 124 (first / second self-position determinations using first / second image in real space / geographical coordinates and second self-position determinations using second image in real space / geographical coordinates combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127) where the techniques are combinable with Zhang Figures 2 – 4 as well as Paragraphs 28 – 32, 35 – 36 (acquiring images / environment information), 42 – 45 and 54 – 60 (basic environmental map generation which may include semantic map information in a workplace including sensor / robot coordinate systems); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63, 127 – 131 (combining pose estimates into a common XYZ coordinate system, 139 – 144 and 147 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured)], and
estimate, when the second environment information cannot be acquired [Kato Figure 14 (see at least reference character S53 determining the availability of the second image based on number of landmarks / feature points (obvious variant to one of ordinary skill in the art) as the labels of “first” and “second” may be swapped as an obvious variant to one of ordinary skill in the art (selection of finite number of elements in KSR Rationale (E))) as well as Paragraphs 116 – 124 (using different information for current environment information based on the number of detected landmarks / feature points)], self-position based only on the first environment information, and perform coordinate conversion using latest conversion information which was calculated [See the “estimate the self-position based only on the first environment information …” limitation and additionally Kato Figures 7, 9, and 12 – 14 (see determinations of estimates of position combining first / second self-position determinations and see at least reference character S53 determining the availability of the second image) as well as Paragraphs 99 – 106 (corrections to each self-position to combine and see at least equation (1)) and 114 – 124 (second self-position determinations using second image in real space / geographical coordinates combined with the first combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63, 127 – 131 (combining pose estimates into a common XYZ coordinate system, 139 – 144, 145 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured), and 205 – 208 (adjustments / estimates made based on validity of images (can combine / modify Kato Figure 14 reference character S53 at least)); Kume Figures 2 and 6 as well as Page 2 Last Paragraph – Page 3 Fourth Full Paragraph (asynchronous images combined / aligned into a 
common coordinates using SLAM / vSLAM in the position estimation unit (reference character 101)) and Page 7 Fourth Full Paragraph – Page 8 Second Paragraph (estimating position into a common / global coordinate system based on asynchronous image / sensor data); Zhang Figures 2 – 4 (especially method of Figure 3) as well as Paragraphs 28 – 32, 35 – 36 (acquiring images / environment information in combination with Paragraphs 55 – 62 where basic environmental map generation which may include semantic map or without the second / semantic information (e.g. Paragraph 60 rendering obvious the auxiliary semantic information (e.g. Paragraphs 28 – 32) is not acquired / available) information in a workplace), and 42 – 45 and 54 – 62 (sensor and robot coordinate systems based on maps generated)].
The motivation to combine Gomita with Kato is to combine features in the same / related field of invention of using SLAM to estimate self-positions [Gomita Paragraph 2] in order to improve robustness / accuracy of the estimations [Gomita Paragraphs 2 and 4 – 5 where the Examiner observes at least KSR Rationales (D) or (F) are also applicable].
The motivation to combine Kume with Gomita and Kato is to combine features in the same / related field of invention of position / orientation measuring systems [Kume Page 1 First – Second Paragraphs] in order to improve alignment of images into a common system / reference frame with better accuracy [Kume Page 1 Second through Fourth Paragraph where the Examiner observes KSR Rationales (D) or (F) are also applicable].
The motivation to combine Zhang with Kume, Gomita, and Kato is to combine features in the same / related field of invention of robot location using sensors and environmental maps [Zhang Paragraphs 2 – 3] in order to improve performance of the robot and less reliance on a-priori knowledge of the area to image / map [Zhang Paragraphs 3 and 9 where the Examiner observes at least KSR Rationales (D) or (F) are also applicable].
This is the motivation to combine Kato, Gomita, Kume, and Zhang which will be used throughout the Rejection.
Regarding claim 2, Kato teaches an arrangement of imagers around a body in a fixed arrangement, with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras to further render obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation, with additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
the second sensor moves integrally with the first sensor [Kato Figures 1 and 11 (see at least reference characters 12 and 31) as well as Paragraphs 94 – 96 (cameras / imagers attached to a vehicle); Alternatively Gomita Figures 1 – 3 (see at least reference characters 5, 6, and 20) as well as Paragraphs 73 – 85 (imagers in a portable device)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 3, Kato teaches an arrangement of imagers around a body in a fixed arrangement, with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras to further render obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation, with additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
the first direction and the second direction are different directions [Kato Figures 11 – 13 (see at least reference characters 12 and 31) as well as Paragraphs 94 – 98 (forward / rearward / bottom orientation); Alternatively Gomita Figures 1 – 3 (see at least reference characters 5, 6, and 20 and directions h1 and h2) as well as Paragraphs 73 – 85 (h1 and h2 are in different directions)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 4, Kato teaches an arrangement of imagers around a body in a fixed arrangement, with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras to further render obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation, with additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
the controller [See claim 1 for citations]:
calculates conversion information for converting the first coordinate system and the second coordinate system when the first environment information and the second environment information are acquirable [Kato Figures 7, 9, and 12 – 14 (see determinations of estimates of position combining first / second self-position determinations) as well as Paragraphs 99 – 106 (corrections to each self-position to combine and see at least equation (1)) and 114 – 124 (second self-position determinations using second image in real space / geographical coordinates combined with the first combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63, 127 – 131 (combining pose estimates into a common XYZ coordinate system, 139 – 144 and 147 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured); Zhang Figures 2 – 4 (especially method of Figure 3) as well as Paragraphs 28 – 32, 35 – 36 (acquiring images / environment information in combination with Paragraphs 55 – 60), and 42 – 45 and 54 – 62 (sensor and robot coordinate systems)], and
performs a coordinate conversion using the conversion information which has been calculated when the second environment information becomes unacquirable [Kato Figures 7, 9, and 12 – 14 (see determinations of estimates of position combining first / second self-position determinations and see at least reference character S53 determining the availability of the second image) as well as Paragraphs 99 – 106 (corrections to each self-position to combine and see at least equation (1)) and 114 – 124 (second self-position determinations using second image in real space / geographical coordinates combined with the first combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63, 127 – 131 (combining pose estimates into a common XYZ coordinate system, 139 – 144, 145 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured), and 205 – 208 (adjustments / estimates made based on validity of images (can combine / modify Kato Figure 14 reference character S53 at least)); Kume Figures 2 and 6 as well as Page 2 Last Paragraph – Page 3 Fourth Full Paragraph (asynchronous images combined / aligned into a common coordinates using SLAM / vSLAM in the position estimation unit (reference character 101)) and Page 7 Fourth Full Paragraph – Page 8 Second Paragraph (estimating position into a common / global coordinate system based on asynchronous image / sensor data); Zhang Figures 2 – 4 (especially method of Figure 3) as well as Paragraphs 28 – 32, 35 – 36 (acquiring images / environment information in combination with Paragraphs 55 – 62 where basic environmental map generation which may include semantic map or without the second / semantic information (e.g. Paragraph 60 rendering obvious the auxiliary semantic information (e.g. 
Paragraphs 28 – 32) is not acquired / available) information in a workplace), and 42 – 45 and 54 – 62 (sensor and robot coordinate systems based on maps generated)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 5, Kato teaches an arrangement of imagers around a body in a fixed arrangement, with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras to further render obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation and additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
a projector that projects an auxiliary image that assists a work onto the workplace [In view of the amendments to the claim, and in the sole interest of expediting prosecution regardless of the patentable weight of intended use / functional claim language (e.g. “that assists”), citing Kato Figures 1 and 8 (see at least reference characters 13 and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on)) and 142 – 146; Zhang Figures 1 – 2 and 4 (warehouse application) as well as Paragraph 28 (assisting the user in operating the robot to perform tasks)],
wherein the controller creates the auxiliary image according to a position in the workplace and transmits the auxiliary image to the projector [Kato Figures 1 and 8 (see at least reference characters 13, 21, and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on) and rendering obvious the auxiliary image (texture pattern imaged as an auxiliary image)), 86 – 87, and 142 – 146; Gomita Paragraphs 127 – 138 (imaging / incorporation of the projection image into the universal / XYZ coordinate plane combining self-position estimates); Zhang Figures 1 – 2 and 4 (warehouse application) as well as Paragraph 28 (assisting the user in operating the robot to perform tasks)], and
the controller creates the auxiliary image that assists the work [Kato Figures 1 and 8 (see at least reference characters 13, 21, and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on) and rendering obvious the auxiliary image (texture pattern imaged as an auxiliary image)), 86 – 87, and 142 – 146; Gomita Paragraphs 127 – 138 (imaging / incorporation of the projection image into the universal / XYZ coordinate plane combining self-position estimates); Zhang Figures 1 – 2 and 4 (warehouse application) as well as Paragraph 28 (assisting the user in operating the robot to perform tasks, with display of images to assist the user)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 6, Kato teaches an arrangement of imagers around a body in a fixed arrangement with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras, further rendering obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation and additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
an optical axis of the first sensor overlaps with a range in which the projector can project the auxiliary image [Kato Figures 1 and 8 (see at least reference characters 13, 21, and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on) and rendering obvious the auxiliary image (texture pattern imaged as an auxiliary image), where the arrangement renders obvious the claimed projector / imaging axis orientation), 86 – 87, and 142 – 146; Kume Figures 1 and 4 – 5 as well as Page 3 Second Paragraph (renders obvious the use of patterns in workplace / factory applications)],
the first sensor is a camera [Kato Figures 11 – 13 (see at least reference character 31 or 12 (stereo camera on the bottom / obvious to arrange elsewhere on a vehicle)) as well as Paragraphs 92 – 98 (at least Paragraph 96 suggests a forward-facing camera / sensor arrangement)] that captures an image of an area including a marker in the workplace [Kato Figures 1 and 8 (see at least reference characters 13, 21, and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on) and rendering obvious the auxiliary image (texture pattern imaged as an auxiliary image)), 86 – 87, and 142 – 146; Kume Figures 1 and 4 – 5 as well as Page 3 Second Paragraph (renders obvious the use of patterns in workplace / factory applications); Zhang Figures 1 – 2 and 4 as well as Paragraphs 3 (reflective strips / 2D codes as markers) and 28 – 32 (target objects in the workplace to identify, combinable with Paragraph 3)], and
the second sensor is a three-dimensional measurement sensor that acquires a shape and a position of an object within the field of view of the second sensor [Kato Figures 1 – 2, 6 – 8, and 11 – 13 (see at least reference character 31 or 12 (stereo camera on the bottom / obvious to arrange elsewhere on a vehicle for the second imager)) as well as Paragraphs 67 – 74 (shape determinations with stereo camera), 80 – 89 (road surface images at least as an object to compute environment information including shape determinations such as in Paragraphs 86 – 87), 92 – 98 (at least Paragraph 96 suggests a rearward or bottom facing camera / sensor arrangement – combine with forward facing imagers / sensors), 106 and 117 – 120 (road surface as an object for environment / coordinate determinations); Gomita Figures 1 – 2 (see at least reference characters 6 and h2) and 13 – 14 (method and directions of imagers) as well as Paragraphs 37 – 44 (SLAM to generate the environment), 50 – 57 (camera in a second direction imaging a second target object), 65 – 68 (shape of object / space imaged computed), 77 – 87 (imaging using SLAM to generate first environment / coordinate system information using at least a second object imaged), 126 – 130 and 145 – 148 (coordinates computed from first / second image); Zhang Figures 1 – 2 and 4 as well as Paragraphs 26 – 28 (sensor types and capturing shape / location semantics)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 7, Kato teaches an arrangement of imagers around a body in a fixed arrangement with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras, further rendering obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation and additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
the controller calculates conversion information for converting the first coordinate system and the second coordinate system when the first environment information and the second environment information are acquirable while the projector is projecting the auxiliary image [Kato Figures 7, 9, and 12 – 14 (see determinations of estimates of position combining first / second self-position determinations) as well as Paragraphs 65 – 68, 75 – 80, and 86 – 87 (auxiliary image generation and combining the image into the common coordinate system), 99 – 106 (corrections to each self-position to combine; see at least equation (1)), and 114 – 124 (second self-position determinations using the second image in real space / geographical coordinates, combined with the first and combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63 and 127 – 131 (combining pose estimates into a common XYZ coordinate system) and 139 – 144 and 147 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured)] onto the workplace during the work [Kato Figures 1 and 8 (see at least reference characters 13, 21, and 23 (light projection as an obvious variant of the claimed projector)) as well as Paragraphs 76 – 80 (light projections onto a road surface rendering obvious the workplace (driving) / work (road driven on) and rendering obvious the auxiliary image (texture pattern imaged as an auxiliary image)), 86 – 87, and 142 – 146; Kume Figures 1 and 4 – 5 as well as Page 3 Second Paragraph (renders obvious the use of patterns in workplace / factory applications) and Page 4 (see the description of Figures 4 and 5 with associated equations, rendering obvious the use of inner / dot products in position determinations for workplace / work applications); Zhang Figures 2 – 4 (especially the method of Figure 3) as well as Paragraphs 28 – 32 and 35 – 36 (acquiring images / environment information, in combination with Paragraphs 55 – 62 where basic environmental map generation may proceed with the semantic map or without the second / semantic information in a workplace (e.g. Paragraph 60 renders obvious operation when the auxiliary semantic information (e.g. Paragraphs 28 – 32) is not acquired / available)), and 42 – 45 and 54 – 62 (sensor and robot coordinate systems based on the maps generated)].
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Regarding claim 8, Kato teaches an arrangement of imagers around a body in a fixed arrangement with a projector to project alignment / calibration marks for alignment / coordinate / position estimation determinations. Gomita teaches image processing techniques, suggests arrangements of cameras, and teaches processing valid images and estimating with partial information. Kume teaches position / pose estimation using asynchronous cameras, further rendering obvious the teachings of Gomita. Zhang teaches a robot in a factory / workplace setting with imagers to assist workers in a workplace environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kato’s imager arrangements in the manners suggested by Gomita with image processing and image analysis algorithms for position estimation and additional image processing considerations as taught by Kume (e.g. for asynchronous image capture), and to apply such imaging arrangements to the robot taught by Zhang to map environments in workplaces. The combination teaches
the conversion information is calculated based on an equation that calculates an inner product of a vector from a position of the first sensor to a position of the second sensor and a vector that indicates an orientation of the second sensor [Kato Figures 7, 9, and 12 – 14 (see determinations of estimates of position combining first / second self-position determinations) as well as Paragraphs 65 – 68, 75 – 80, and 86 – 87 (auxiliary image generation and combining the image into the common coordinate system), 99 – 106 (corrections to each self-position to combine; see at least equation (1), rendering obvious the use of dot / inner products), and 114 – 124 (second self-position determinations using the second image in real space / geographical coordinates, combined with the first and combinable with Gomita’s coordinate systems per imager in Paragraphs 56 – 63 and 127); Gomita Figures 5 – 7 and 9 – 12 (see at least ST316 – ST320) as well as Paragraphs 56 – 63 and 127 – 131 (combining pose estimates into a common XYZ coordinate system) and 139 – 150 (transforms to a common coordinate system based on detected feature points / images of objects captured using dot / inner products, which may also be written out explicitly and are thus obvious variants to one of ordinary skill in the art)].
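For clarity of the limitation at issue, the claimed inner-product computation may be sketched as follows, using hypothetical symbols not drawn from the claims or the cited references: $\mathbf{p}_1$ and $\mathbf{p}_2$ for the positions of the first and second sensors, and unit vector $\hat{\mathbf{d}}_2$ for the orientation of the second sensor:

\[
c = \left(\mathbf{p}_2 - \mathbf{p}_1\right) \cdot \hat{\mathbf{d}}_2
\]

Here the scalar $c$ is the projection of the inter-sensor displacement onto the second sensor's orientation, i.e., an inner product of the vector from the first sensor's position to the second sensor's position and a vector indicating the second sensor's orientation, consistent with the claim language.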
See claim 1 for the motivation to combine Kato, Gomita, Kume, and Zhang.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Oba, et al. (US PG PUB 2021/0097707 A1, referred to as “Oba” throughout) has teachings similar to those of Kato, including the use of a projector for features in claims 5 – 7. Sankai (US PG PUB 2019/0192361 A1, referred to as “Sankai” throughout) teaches in Figures 2 and 5 multiple cameras on a cart / object for imaging and creating a global environment / map.
References found in the updated search and consideration include: Yamakura (US PG PUB 2023/0256614 A1, referred to as “Yamakura” throughout), where Figures 1 and 18 render obvious features for assisting the user, but in a robotic arm application; and Su, et al. (CN-108256574 A, referred to as “Su” throughout), in which Figure 7 as well as Paragraphs 30 – 31 and 95 – 100 render obvious thresholding the number of feature / matching points used for updating camera parameter estimation.
A reference found during the ODP search and consideration which may raise issues based on the amendments to the claims: Kurashima, et al. (US Patent #12,280,501 B2, referred to as “Kurashima” throughout).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tyler W Sullivan whose telephone number is (571)270-5684. The examiner can normally be reached IFP.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Czekaj, can be reached at (571) 272-7327. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/TYLER W. SULLIVAN/Primary Examiner, Art Unit 2487