DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This action is responsive to the amendments and remarks received 18 September 2025. Claims 1, 3 - 6, 8 - 11, 14, 15, 17, 24, 25, 28 and 29 are currently pending.
Claim Objections
Claim 10 is objected to because of the following informalities: Lines 22 - 24 of claim 10 recite, in part, “the second laser image and the second image; an object identification module, wherein” which appears to contain a grammatical error and/or a minor informality. The Examiner suggests amending the claim to --the second laser image and the second image; and an object identification module, wherein-- in order to improve the clarity and precision of the claim. Appropriate correction is required.
The objections to claims 1, 3, 6 and 24, due to minor informalities, are hereby withdrawn in view of the amendments and remarks received 18 September 2025.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “left line laser emitter emits”, “right line laser emitter emits”, “light-compensating device emits”, “ranging module obtains”, “object identification module identifies” and “driving device drives” in claims 10 and 11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 3 - 6, 8 - 11, 14, 15, 17, 24, 25, 28 and 29 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “the target object with the size exceeding a preset threshold” (emphasis added) in lines 24 - 25. There is insufficient antecedent basis for this limitation in the claim.
Claim 1 recites the limitation “the cleaning device” in line 25. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation “the target object with the size exceeding a preset threshold” (emphasis added) in lines 21 - 22. There is insufficient antecedent basis for this limitation in the claim.
Claim 6 recites the limitation “the cleaning device” in line 22. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation “the target object with the size exceeding a preset threshold” (emphasis added) in lines 27 - 28. There is insufficient antecedent basis for this limitation in the claim.
Claim 10 recites the limitation “the cleaning device” in line 28. There is insufficient antecedent basis for this limitation in the claim.
Claim 15 recites the limitation “the distance between the self-propelled equipment and the target object” (emphasis added) in lines 6 - 7. There is insufficient antecedent basis for this limitation in the claim.
Claim 28 recites the limitation “the preset distance between the cleaning robot and the first target object” (emphasis added) in line 7. There is insufficient antecedent basis for this limitation in the claim.
Claim 29 recites the limitation “the preset distance between the cleaning robot and the first target object” (emphasis added) in line 7. There is insufficient antecedent basis for this limitation in the claim.
Claims 3 - 5, 8, 9, 11, 14, 17, 24 and 25 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, due to their dependency upon a rejected base claim; these rejections would be withdrawn if their respective base claims were to overcome the rejection.
Response to Arguments
Applicant’s arguments with respect to claim(s) 1, 3 - 6, 8 - 11, 14, 15, 17, 24, 25, 28 and 29 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Applicant's arguments filed 18 September 2025 have been fully considered but they are not persuasive.
On pages 13 - 14 of the remarks, the Applicant’s Representative argues that the previously cited prior art references fail to disclose, teach or suggest “obtaining a size of the target object based on the first laser image and the second laser image”. The Applicant’s Representative argues that Jeong et al. fail to disclose the aforementioned disputed claim limitation(s) at least because “Jeong does not explicitly disclose obtaining the size of the target object from laser images.”
The Examiner respectfully disagrees.
Initially, the Examiner asserts that Applicant's arguments fail to comply with 37 CFR 1.111(b) because they amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references.
Furthermore, the Examiner asserts that, at least, Jeong et al. disclose the aforementioned disputed claim limitation(s), see at least the rejections pertaining to claims 1, 6 and 10 included herein below in section 24 of the instant Office Action and figures 4 - 10 and 19 - 23, page 1 paragraphs 0011 and 0013 - 0016, page 2 paragraph 0028, page 3 paragraphs 0035 and 0044, page 5 paragraphs 0087 - 0094, page 6 paragraphs 0097 - 0098 and 0103 - 0104, page 9 paragraphs 0170 - 0178, page 10 paragraphs 0212 - 0217, page 11 paragraphs 0224 - 0233 and page 13 paragraph 0276 of Jeong et al., wherein they disclose “an object information acquiring apparatus which acquires distance information and type information related to an object using a single sensor” [0011], that “a multi-channel lidar sensor module includes at least one pair of light emitting units configured to emit laser beams and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by target objects” [0013], that the “at least one pair of light emitting units may be provided with a plurality of pairs of light emitting units, each of the pairs of light emitting units may be disposed around the light receiving unit and may face the light receiving unit, and the light emitting units provided with the plurality of pairs of light emitting units may be controlled such that emission periods thereof do not overlap each other” [0016], that “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” [0098], that “it is possible to sense a plurality of target objects A and measure distances to the target objects A existing on a plurality of light source optical axes using one multi-channel lidar sensor module” [0104], that the “first image may include laser beam images corresponding to laser beams that are emitted from the laser module 1000, reflected from a plurality of targets ta1, ta2, and ta3, and then received by the camera module 3000” [0213], that “the object information acquiring apparatus 10000 may generate a traveling signal of the moving body in consideration of a height, width, and size of an object included in an image acquired through the camera module 3000” [0227] and that “an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000” [0276].
The Examiner asserts that, as shown herein above and in the cited portions, Jeong et al. disclose that a laser image may be captured for each pair of a plurality of pairs of light emitting units, that the plurality of pairs of light emitting units may comprise left and right line light emitting units, that the light emitting units may generate and emit laser beams and that images captured at a same time point when a laser beam(s) is emitted from their laser module and light is emitted from their light-emitting diode (LED) module may be used to acquire information related to an object. Furthermore, the Examiner asserts that Jeong et al. disclose that object information may be acquired from a first image, that their first image may include laser beam images and that a traveling signal may be generated in consideration of a height, width, and size of an object included in an image, e.g., the first image/laser beam images. The Examiner asserts that, in order for Jeong et al. to generate a traveling signal in consideration of a height, width, and size of an object included in an image, the height, width, and size of the object included in the image must first be obtained. Thus, the Examiner asserts that, at least, Jeong et al. disclose obtaining a size of the target object based on the first laser image and the second laser image. Therefore, the Examiner asserts that, at least, Jeong et al. disclose the aforementioned disputed claim limitation(s).
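By way of illustration only, the following sketch shows one way a size estimate of the kind discussed above could be computed from a line-laser image using a standard pinhole/triangulation model. It is a minimal sketch under assumed conditions: the function names, geometry and numeric values are hypothetical and are not drawn from Jeong et al., the claims, or the record.

    # Minimal sketch: estimating depth and object size from a line-laser image.
    # All names and numbers are hypothetical illustration values.

    def depth_from_laser_offset(focal_px: float, baseline_m: float, offset_px: float) -> float:
        # Triangulation with a laser emitter offset from the camera:
        # depth = focal_length * baseline / pixel_disparity.
        return focal_px * baseline_m / offset_px

    def size_from_pixels(extent_px: float, depth_m: float, focal_px: float) -> float:
        # Back-project a pixel extent to a metric size at the estimated depth.
        return extent_px * depth_m / focal_px

    depth = depth_from_laser_offset(focal_px=600.0, baseline_m=0.05, offset_px=25.0)  # 1.2 m
    height = size_from_pixels(extent_px=80.0, depth_m=depth, focal_px=600.0)          # 0.16 m
    print(f"depth: {depth:.2f} m, object height: {height:.2f} m")

Under such a model, the same laser-line pixels that yield the distance also yield the object's extent, which is consistent with the observation that a traveling signal generated in consideration of height, width and size presupposes that those dimensions have been obtained.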
On pages 14 - 16 of the remarks, the Applicant’s Representative argues that the previously cited prior art references fail to disclose, suggest or teach “acquiring a first laser image captured by the camera of the imaging device, wherein the first laser image is captured when a first laser light with a first predetermined wavelength is emitted by the left line laser emitter and a light with a second predetermined wavelength is emitted by the light-compensating device; acquiring a second laser image captured by the camera of the imaging device, wherein the second laser image is captured when a second laser light with the first predetermined wavelength is emitted by the right line laser emitter and the light with the second predetermined wavelength is emitted by the light-compensating device; wherein the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength have different wavelengths.” In particular, the Applicant’s Representative argues that Jeong et al. fail to disclose the aforementioned disputed claim limitation(s) at least because “although Jeong allows simultaneous emission of the laser beam and LED light, it requires reliance on increasing a threshold to remove interference caused by the LED” and thus “Jeong actually suggests avoiding simultaneous activation of the LED module 2000 and the laser module 1000 during distance measurement.” Furthermore, the Applicant’s Representative argues that Jeong et al. fail to disclose the aforementioned disputed claim limitation(s) at least because although Jeong et al. mention “that the light source unit 1100 may generate laser beams at various wavelengths (e.g., 850 nm, 905 nm, and 1550 nm), and that the LED module 2000 may also emit light beams of various wavelengths, such disclosure merely indicates a range of selectable wavelengths for each source” and because Jeong et al. do “not explicitly disclose or suggest that, in system configuration, the laser light and the LED light are configured to have different wavelengths in order to avoid interference.”
The Examiner respectfully disagrees.
The Examiner asserts that, at least, Jeong et al. disclose the aforementioned disputed claim limitation, see at least the rejections pertaining to claims 1, 6 and 10 included herein below in section 24 of the instant Office Action and figures 4 - 7, 9, 10, 27 and 28, page 1 paragraphs 0011 and 0013 - 0016, page 2 paragraph 0028, page 3 paragraph 0044, page 5 paragraphs 0081 - 0083 and 0087 - 0094, page 6 paragraphs 0097 - 0098 and 0103 - 0104, page 7 paragraphs 0112 and 0125 - 0129, page 9 paragraphs 0170 - 0178, page 10 paragraphs 0212 - 0216, page 11 paragraphs 0228 - 0233, page 12 paragraphs 0242 - 0246 and 0255 - 0256 and page 13 paragraphs 0276 - 0279 of Jeong et al., wherein they disclose “an object information acquiring apparatus which acquires distance information and type information related to an object using a single sensor” [0011], that “a multi-channel lidar sensor module includes at least one pair of light emitting units configured to emit laser beams and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by target objects” [0013], that the “at least one pair of light emitting units may be provided with a plurality of pairs of light emitting units, each of the pairs of light emitting units may be disposed around the light receiving unit and may face the light receiving unit, and the light emitting units provided with the plurality of pairs of light emitting units may be controlled such that emission periods thereof do not overlap each other” [0016], that “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” [0098], that “light source unit 1100 may generate laser beams having various wavelengths. For example, the light source unit 1100 may generate laser beams having wavelengths of 850 nm, 905 nm, and 1,550 nm” [0125], that “LED module 2000 may emit light beams having various wavelengths. For example, the LED module 2000 may emit light beams having wavelengths of 850 nm, 905 nm, and 1,550 nm” [0129], that “an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000” [0276], that “as shown in FIG. 28, the sensing unit 3100 may be divided into a first region and a second region. For example, a first sensor 3110 configured to acquire a reflection image may be provided in the first region, and a second sensor 3120 configured to acquire a laser beam image may be provided in the second region” [0277], that “Light receiving sensitivity of the first sensor according to a wavelength may be different from light receiving sensitivity of the second sensor according to a wavelength. For example, the light receiving sensitivity of the first sensor may be maximized in a visible light band, and the light receiving sensitivity of the second sensor may be maximized in an infrared band” [0278] and that the “first sensor may include an infrared ray (IR) filter for blocking an infrared ray. The second sensor may include a filter for blocking visible light” [0279].
The Examiner asserts that, as shown herein above and in the cited portions, Jeong et al. disclose that a laser image may be captured for each pair of a plurality of pairs of light emitting units, that the plurality of pairs of light emitting units may comprise left and right line light emitting units, that the light emitting units may generate and emit laser beams, that images may be captured at a same time point at which a laser beam(s) is emitted from their laser module and light is emitted from their light-emitting diode (LED) module and that such images may be used to acquire distance information and type information related to an object. Thus, the Examiner asserts that Jeong et al. disclose a technical solution wherein laser images are captured when both the laser emitting device and the light-compensating device are enabled. Furthermore, the Examiner asserts that, at least in view of paragraph 0276 of Jeong et al., images may be captured at points in time when both laser beams and light are emitted toward the target object(s). The Examiner asserts that Jeong et al. do not suggest avoiding simultaneous activation of the LED module 2000 and the laser module 1000 during distance measurement at least because they expressly disclose at paragraph 0276 that images captured at a point in time when both a laser beam is emitted from the laser module 1000 and light is emitted from the LED module 2000 may be utilized to acquire distance information and type information related to an imaged object. Moreover, the Examiner asserts that “the prior art’s mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed….” In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). See MPEP § 2123, MPEP § 2141.02(VI) and MPEP § 2143.01(I). In addition, the Examiner asserts that Jeong et al. disclose “wherein the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength have different wavelengths” at least because Jeong et al. disclose that “light source 1100 may generate laser beams having wavelengths of 850 nm, 905 nm, and 1,550 nm”, that “LED module 2000 may emit light beams having wavelengths of 850 nm, 905 nm, and 1,550 nm”, that “the object information acquiring apparatus 10000 may include an optical filter which transmits only light corresponding to a wavelength band of a laser beam emitted from the laser module 1000 and blocks light having other wavelength bands”, that “the sensing unit 3100 may be divided into a first region and a second region. For example, a first sensor 3110 configured to acquire a reflection image may be provided in the first region, and a second sensor 3120 configured to acquire a laser beam image may be provided in the second region” and that light “receiving sensitivity of the first sensor according to a wavelength may be different from light receiving sensitivity of the second sensor according to a wavelength. For example, the light receiving sensitivity of the first sensor may be maximized in a visible light band, and the light receiving sensitivity of the second sensor may be maximized in an infrared band.” The Examiner asserts that Jeong et al. do not disclose that the wavelength of the laser beams and the wavelength of the light beams must be the same, but merely disclose that the laser beams and the light beams may have various wavelengths. Furthermore, the Examiner asserts that, in view of paragraphs 0277 - 0281, Jeong et al. at least suggest that the wavelength of the laser beams and the wavelength of the light beams may be different. Thus, the Examiner asserts that one of ordinary skill in the art would understand that Jeong et al. disclose that the wavelength of their laser beams and the wavelength of their light beams may be the same or different. Therefore, the Examiner asserts that, at least, Jeong et al. disclose the aforementioned disputed claim limitation(s).
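By way of illustration only, the following sketch models the wavelength-separation rationale discussed above: if the laser light and the compensating light occupy different bands and each sensor region carries a band-selective filter, simultaneous emission need not cause interference. This is a minimal sketch; the wavelength values and the 700 nm visible/infrared boundary are assumed illustration values, not taken from Jeong et al.

    # Minimal sketch: band-selective filtering under simultaneous emission.
    # Wavelengths and the 700 nm band boundary are illustration values.

    LASER_NM = 905  # hypothetical infrared line laser
    LED_NM = 550    # hypothetical visible-band compensating light

    def passes_ir_filter(wavelength_nm: int) -> bool:
        # Second-sensor filter: blocks visible light, passes infrared.
        return wavelength_nm >= 700

    def passes_visible_filter(wavelength_nm: int) -> bool:
        # First-sensor filter: blocks infrared, passes visible light.
        return wavelength_nm < 700

    # Each sensor region sees only "its" source even when both emit at once.
    assert passes_ir_filter(LASER_NM) and not passes_ir_filter(LED_NM)
    assert passes_visible_filter(LED_NM) and not passes_visible_filter(LASER_NM)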
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 4, 6, 10, 28 and 29 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. (U.S. Publication No. 2019/0293765 A1) in view of Izawa et al. (U.S. Publication No. 2018/0289225 A1) and further in view of Gil et al. (U.S. Publication No. 2018/0353042 A1).
- With regards to claim 1, Jeong et al. disclose a target detection method, (Jeong et al., Figs. 16, 29 & 30, Pg. 1 ¶ 0010 - 0012, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0069 and 0074 - 0075, Pg. 8 ¶ 0144 - 0146, Pg. 9 ¶ 0165 - 0169, Pg. 10 ¶ 0217 - Pg. 11 ¶ 0227) applied to a cleaning robot, (Jeong et al., Pg. 11 ¶ 0220 - 0222 [“when the object information acquiring apparatus 10000 is installed in the AGV or the robot cleaner, it is possible for the AGV or the robot cleaner to travel efficiently”]) wherein an imaging device, (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 13 ¶ 0276 - 0281) a light-compensating device (Jeong et al., Fig. 15, Pg. 4 ¶ 0070, Pg. 7 ¶ 0117 and 0128 - 0130, Pg. 8 ¶ 0141, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0166, Pg. 9 ¶ 0190 - Pg. 10 ¶ 0195, Pg. 12 ¶ 0242 - 0274 and 0269 - 0271, Pg. 13 ¶ 0273 - 0276) and a laser emitting device are provided, (Jeong et al., Figs. 2 - 12, 15, 17 & 19, Pg. 1 ¶ 0013 - 0016, Pg. 2 ¶ 0022 - 0023 and 0025 - 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083 and 0085 - 0090, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0117 - 0125) the laser emitting device comprises a left line laser emitter and a right line laser emitter disposed side by side in a horizontal direction, (Jeong et al., Figs. 4 & 6 - 11, Pg. 1 ¶ 0014 - 0016, Pg. 2 ¶ 0022 and 0025 - 0028, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0087 and 0090 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 [“a multi-channel lidar sensor module may be provided. The module may comprise a light emitting unit including at least one pair of emitting units for emitting laser beams; and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams that are emitted from the at least one pair of emitting units and reflected by a target object”, “the at least one pair of light emitting units may be disposed in a vertical direction or in parallel in a horizontal direction with respect to the ground”, “light receiving unit 200 is disposed between the first light emitting unit 110 and the second light emitting unit 120. The first light emitting unit 110 and the second light emitting unit 120 may be disposed in a vertical direction or disposed in parallel in a horizontal direction with respect to the ground... when the first light emitting unit 110 and the second light emitting unit 120 are disposed in the horizontal direction, a left region and a right region may be sensed and measured with respect to the same height” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the imaging device comprises one camera, (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 13 ¶ 0276 - 0281 [“a multi-channel lidar sensor module capable of measuring two target objects using one image sensor”, “an object information acquiring apparatus which acquires distance information and type information related to an object using a single sensor”, “the camera module may include a sensing unit including a plurality of sensing elements arranged in an array form the direction of the perpendicular axis”, “The sensing unit may be divided into a first region and a second region different from the first region and may include a first sensor, which is provided in the first region and acquires a laser beam image, and a second sensor which is provided in the second region and acquires a reflection image” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the method comprising: acquiring a first laser image captured by the camera of the imaging device, wherein the first laser image is captured when a first laser light with a first predetermined wavelength is emitted by the left line laser emitter and a light with a second predetermined wavelength is emitted by the light-compensating device; (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202 and 0209 - 0216, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) acquiring a second laser image captured by the camera of the imaging device, wherein the second laser image is captured when a second laser light with the first predetermined wavelength is emitted by the right line laser emitter and the light with the second predetermined wavelength is emitted by the light-compensating device; (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 4 ¶ 0067 and 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202, Pg. 11 ¶ 0229 - 0234, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) wherein the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength have different wavelengths; (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) acquiring a second image captured by the camera of the imaging device, wherein the second image is captured when the light with the second predetermined wavelength is emitted by the light-compensating device; (Jeong et al., Figs. 4 - 10, 15, 19 - 23 & 27, Pg. 1 ¶ 0016, Pg. 2 ¶ 0022 - 0024 and 0028, Pg. 3 ¶ 0044 - 0047, 0058 and 0064, Pg. 4 ¶ 0066 - 0070, Pg. 5 ¶ 0086 - 0092, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 8 ¶ 0158 and 0163, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0274, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) obtaining a distance between a target object and the imaging device based on the first laser image, the second laser image and the second image; (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0216, Pg. 11 ¶ 0228 - 0233, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. would capture four laser images of a target object(s) from which the distance between the imaging device and the target object(s) would be measured. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that distance and type information related to an object may be acquired from those captured laser images.]) obtaining a size of the target object based on the first laser image and the second laser image; (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0224 - 0233, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”, “The first image may include laser beam images corresponding to laser beams that are emitted from the laser module 1000, reflected from a plurality of targets ta1, ta2, and ta3, and then received by the camera module 3000”, “the object information acquiring apparatus 10000 may generate a traveling signal of the moving body in consideration of a height, width, and size of an object included in an image acquired through the camera module 3000” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. would capture four laser images of a target object(s) from which a height, width, and size of an object would be obtained. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that information related to an object may be acquired from those captured laser images.]) identifying a type of the target object based on the second image; (Jeong et al., Figs. 16, 25, 26, 29 & 30, Pg. 1 ¶ 0011, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0071 and 0073 - 0076, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0167, Pg. 10 ¶ 0218, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.])
Jeong et al. fail to disclose explicitly the target object with the size exceeding a preset threshold; and wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Izawa et al. disclose obtaining a size of the target object; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and in response to that a distance between the target object with the size exceeding a preset threshold (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Gil et al. disclose identifying a type of the target object; (Gil et al., Abstract, Figs. 3B, 4, 5C, 6C, 7B & 11, Pg. 1 ¶ 0013, Pg. 4 ¶ 0067 - 0070, Pg. 5 ¶ 0085 - 0089, Pg. 6 ¶ 0093, 0097 and 0102, Pg. 7 ¶ 0116 - 0118, Pg. 8 ¶ 0131, Pg. 9 ¶ 0159 - 0162) and in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object; (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 6 ¶ 0108 - Pg. 7 ¶ 0110, Pg. 7 ¶ 0120 - 0125, Pg. 8 ¶ 0134 - 0135) wherein the preset distance is related to the type of the target object. (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 7 ¶ 0109 - 0110 and 0120 - 0125, Pg. 8 ¶ 0135)
Jeong et al. and Izawa et al. are combinable because they are both directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jeong et al. with the teachings of Izawa et al. This modification would have been prompted in order to enhance the base device of Jeong et al. with the well-known and applicable technique Izawa et al. applied to a comparable device. Determining target objects whose sizes exceed a preset threshold, as taught by Izawa et al., would enhance the base device of Jeong et al. by improving its ability to efficiently and reliably navigate the cleaning robot throughout its environment since detected objects exceeding a certain size, which may cause it to get stuck and thus need to be avoided, would be quickly identified while detected objects of a smaller size would not undergo further processing nor affect its traveling path, thereby enabling the cleaning robot to more completely and thoroughly cover and clean its environment. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that target objects whose sizes exceed a preset threshold would be determined in order to allow the cleaning robot of the base device of Jeong et al. to efficiently, reliably and completely navigate its environment since detected objects exceeding a certain size would be quickly identified so that they may be avoided while detected objects of a smaller size would not undergo further processing nor affect its traveling path, thereby enabling the cleaning robot to more completely and thoroughly explore and clean its environment.
In addition, Jeong et al. in view of Izawa et al. and Gil et al. are combinable because they are all directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. with the teachings of Gil et al.
This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. with the well-known and applicable technique Gil et al. applied to a comparable device. Determining a preset distance between the cleaning robot and the target object according to a type of the target object, as taught by Gil et al., would enhance the combined base device by helping ensure that the cleaning robot of the combined base device is able to robustly, reliably and more completely navigate and clean its environment since it would be permitted to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al. Moreover, this modification would have been prompted by the teachings and suggestions of Izawa et al. that changes of traveling direction for avoidance operations may be based on shape information of detected objects and that different avoidance operations may be provided in accordance with shape information of detected objects, see at least page 6 paragraph 0060 and page 7 paragraphs 0072 - 0077 of Izawa et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a preset distance between the cleaning robot and the target object would be determined according to a type of the target object in order to permit the cleaning robot of the combined base device to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle, so as to help improve its ability to robustly, reliably and more completely navigate and clean its environment. Therefore, it would have been obvious to combine Jeong et al. with Izawa et al. and Gil et al. to obtain the invention as specified in claim 1.
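By way of illustration only, the following sketch shows a type-dependent bypass rule of the kind the combination with Gil et al. is said to render obvious. It is a minimal sketch; the object types and distance values are invented for illustration and are not taken from the cited references.

    # Minimal sketch: a preset bypass distance that depends on the object type.
    # Type names and distances are hypothetical illustration values.

    PRESET_DISTANCE_M = {
        "cable": 0.30,      # entanglement hazard: keep a larger margin
        "pet_waste": 0.50,  # keep the largest margin
        "furniture": 0.10,  # rigid obstacle: cleaning may approach closely
    }

    def should_bypass(object_type: str, measured_distance_m: float) -> bool:
        # Bypass when the measured distance is at or below this type's preset.
        preset = PRESET_DISTANCE_M.get(object_type, 0.30)  # default margin
        return measured_distance_m <= preset

    print(should_bypass("cable", 0.25))      # True: within the cable preset
    print(should_bypass("furniture", 0.25))  # False: furniture preset is smaller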
- With regards to claim 4, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 1, wherein the first laser image, the second laser image and the second image are captured alternately by the imaging device. (Jeong et al., Figs. 3 - 12, 20 - 23 & 27, Pg. 1 ¶ 0013 - 0016, Pg. 2 ¶ 0022 - 0024 and 0028, Pg. 3 ¶ 0044 - 0047, 0058 and 0064, Pg. 4 ¶ 0068 - 0070, Pg. 5 ¶ 0088 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 8 ¶ 0158 and 0163, Pg. 10 ¶ 0212 - 0213, Pg. 11 ¶ 0229 - 0231, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0274, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”, “in order to improve accuracy of the distance information, the controller 4000 may acquire a second image in which intensity of the noise is reduced from the first image”, “the controller 4000 may generate the second image using the first image by adjusting a threshold value of the sensing unit 3100” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.”])
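By way of illustration only, the following sketch shows one scheduling pattern consistent with the alternating capture recited in claim 4 (first laser image, second laser image, then light-only second image, repeating). It is a minimal sketch; the emitter flags and controller behavior are hypothetical, not drawn from Jeong et al. or the claims.

    # Minimal sketch: cycling through the three capture states of claim 4.
    from itertools import cycle

    SCHEDULE = cycle([
        {"left_laser": True,  "right_laser": False, "light_comp": True},   # first laser image
        {"left_laser": False, "right_laser": True,  "light_comp": True},   # second laser image
        {"left_laser": False, "right_laser": False, "light_comp": True},   # second image
    ])

    for _, state in zip(range(6), SCHEDULE):
        # A real controller would set the emitters, then trigger the camera.
        print(state)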
- With regards to claim 6, Jeong et al. disclose a target detection control method, (Jeong et al., Figs. 16, 29 & 30, Pg. 1 ¶ 0010 - 0012, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0069 and 0074 - 0075, Pg. 8 ¶ 0144 - 0146, Pg. 9 ¶ 0165 - 0169, Pg. 10 ¶ 0217 - Pg. 11 ¶ 0227) applied to a cleaning robot, (Jeong et al., Pg. 11 ¶ 0220 - 0222 [“when the object information acquiring apparatus 10000 is installed in the AGV or the robot cleaner, it is possible for the AGV or the robot cleaner to travel efficiently”]) wherein an imaging device, (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 13 ¶ 0276 - 0281) a light-compensating device (Jeong et al., Fig. 15, Pg. 4 ¶ 0070, Pg. 7 ¶ 0117 and 0128 - 0130, Pg. 8 ¶ 0141, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0166, Pg. 9 ¶ 0190 - Pg. 10 ¶ 0195, Pg. 12 ¶ 0242 - 0274 and 0269 - 0271, Pg. 13 ¶ 0273 - 0276) and a laser emitting device are provided, (Jeong et al., Figs. 2 - 12, 15, 17 & 19, Pg. 1 ¶ 0013 - 0016, Pg. 2 ¶ 0022 - 0023 and 0025 - 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083 and 0085 - 0090, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0117 - 0125) the laser emitting device comprises a left line laser emitter and a right line laser emitter disposed side by side in a horizontal direction, (Jeong et al., Figs. 4 & 6 - 11, Pg. 1 ¶ 0014 - 0016, Pg. 2 ¶ 0022 and 0025 - 0028, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0087 and 0090 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 [“a multi-channel lidar sensor module may be provided. The module may comprise a light emitting unit including at least one pair of emitting units for emitting laser beams; and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams that are emitted from the at least one pair of emitting units and reflected by a target object”, “the at least one pair of light emitting units may be disposed in a vertical direction or in parallel in a horizontal direction with respect to the ground”, “light receiving unit 200 is disposed between the first light emitting unit 110 and the second light emitting unit 120. The first light emitting unit 110 and the second light emitting unit 120 may be disposed in a vertical direction or disposed in parallel in a horizontal direction with respect to the ground... when the first light emitting unit 110 and the second light emitting unit 120 are disposed in the horizontal direction, a left region and a right region may be sensed and measured with respect to the same height” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the imaging device comprises one camera, (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 
13 ¶ 0276 - 0281 [“a multi-channel lidar sensor module capable of measuring two target objects using one image sensor”, “an object information acquiring apparatus which acquires distance information and type information related to an object using a single sensor”, “the camera module may include a sensing unit including a plurality of sensing elements arranged in an array form the direction of the perpendicular axis”, “The sensing unit may be divided into a first region and a second region different from the first region and may include a first sensor, which is provided in the first region and acquires a laser beam image, and a second sensor which is provided in the second region and acquires a reflection image” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the method comprising: turning on the left line laser emitter and the right line laser emitter alternately, (Jeong et al., Figs. 4 - 7 & 9, Pg. 1 ¶ 0013 - 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0044 and 0047, Pg. 5 ¶ 0088 - 0095, Pg. 6 ¶ 0098 and 0103 - 0104 [“the at least one pair of light emitting units may be provided with a plurality of pairs of light emitting units, each of the pairs of light emitting units is disposed around the light receiving unit and faces the light receiving unit, and the light emitting units provided with the plurality of pairs of light emitting units may be controlled such that emission periods thereof do not overlap each other” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) wherein a first laser image is captured by the camera of the imaging device when the left line laser emitter and the light-compensating device are turned on, (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202 and 0209 - 0216, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. 
In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) a second laser image is captured by the camera of the imaging device when the right line laser emitter and the light-compensating device are turned on, (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 4 ¶ 0067 and 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202, Pg. 11 ¶ 0229 - 0234, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and a second image is captured by the camera of the imaging device when the light-compensating device is turned on; (Jeong et al., Figs. 4 - 10, 15, 19 - 23 & 27, Pg. 1 ¶ 0016, Pg. 2 ¶ 0022 - 0024 and 0028, Pg. 3 ¶ 0044 - 0047, 0058 and 0064, Pg. 4 ¶ 0066 - 0070, Pg. 5 ¶ 0086 - 0092, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 8 ¶ 0158 and 0163, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0274, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) the left line laser emitter and the right line laser emitter are configured to emit a first laser light with a first predetermined wavelength, (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) and the light-compensating device is configured to emit a light with a second predetermined wavelength, (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) and the first laser light with the first predetermined wavelength and the light with the second predetermined wavelength have different wavelengths; (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg.
13 ¶ 0276 - 0281) obtaining a distance between a target object and the imaging device based on the first laser image, the second laser image and the second image; (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0216, Pg. 11 ¶ 0228 - 0233, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. would capture four laser images of a target object(s) from which the distance between the imaging device and the target object(s) would be measured. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that distance and type information related to an object may be acquired from those captured laser images.]) obtaining a size of the target object based on the first laser image and the second laser image; (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0224 - 0233, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”, “The first image may include laser beam images corresponding to laser beams that are emitted from the laser module 1000, reflected from a plurality of targets ta1, ta2, and ta3, and then received by the camera module 3000”, “the object information acquiring apparatus 10000 may generate a traveling signal of the moving body in consideration of a height, width, and size of an object included in an image acquired through the camera module 3000” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. 
In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. would capture four laser images of a target object(s) from which a height, width, and size of an object would be obtained. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that information related to an object may be acquired from those captured laser images.]) identifying a type of the target object based on the second image; (Jeong et al., Figs. 16, 25, 26, 29 & 30, Pg. 1 ¶ 0011, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0071 and 0073 - 0076, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0167, Pg. 10 ¶ 0218, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) Jeong et al. fail to disclose explicitly the target object with the size exceeding a preset threshold; and wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Izawa et al. disclose obtaining a size of the target object; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and in response to that a distance between the target object with the size exceeding a preset threshold (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 
6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Gil et al. disclose identifying a type of the target object; (Gil et al., Abstract, Figs. 3B, 4, 5C, 6C, 7B & 11, Pg. 1 ¶ 0013, Pg. 4 ¶ 0067 - 0070, Pg. 5 ¶ 0085 - 0089, Pg. 6 ¶ 0093, 0097 and 0102, Pg. 7 ¶ 0116 - 0118, Pg. 8 ¶ 0131, Pg. 9 ¶ 0159 - 0162) and in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object; (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 6 ¶ 0108 - Pg. 7 ¶ 0110, Pg. 7 ¶ 0120 - 0125, Pg. 8 ¶ 0134 - 0135) wherein the preset distance is related to the type of the target object. (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 7 ¶ 0109 - 0110 and 0120 - 0125, Pg. 8 ¶ 0135) Jeong et al. and Izawa et al. are combinable because they are both directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jeong et al. with the teachings of Izawa et al. This modification would have been prompted in order to enhance the base device of Jeong et al. with the well-known and applicable technique Izawa et al. applied to a comparable device. Determining target objects whose sizes exceed a preset threshold, as taught by Izawa et al., would enhance the base device of Jeong et al. by improving its ability to efficiently and reliably navigate the cleaning robot throughout its environment since detected objects exceeding a certain size, which may cause it to get stuck and thus need to be avoided, would be quickly identified while detected objects of a smaller size would not undergo further processing nor affect its traveling path thereby enabling the cleaning robot to more completely and thoroughly cover and clean its environment. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. 
that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, an autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that target objects whose sizes exceed a preset threshold would be determined in order to allow for the cleaning robot of the base device of Jeong et al. to efficiently, reliably and completely navigate its environment since detected objects exceeding a certain size would be quickly identified so that they may be avoided while detected objects of a smaller size would not undergo further processing nor affect its traveling path thereby enabling the cleaning robot to more completely and thoroughly explore and clean its environment. In addition, Jeong et al. in view of Izawa et al. and Gil et al. are combinable because they are all directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. with the teachings of Gil et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. with the well-known and applicable technique Gil et al. applied to a comparable device. Determining a preset distance between the cleaning robot and the target object according to a type of the target object, as taught by Gil et al., would enhance the combined base device by helping ensure that the cleaning robot of the combined base device is able to robustly, reliably and more completely navigate and clean its environment since it would be permitted to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, an autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al. Moreover, this modification would have been prompted by the teachings and suggestions of Izawa et al. that changes of traveling direction for avoidance operations may be based on shape information of detected objects and that different avoidance operations may be provided in accordance with shape information of detected objects, see at least page 6 paragraph 0060 and page 7 paragraphs 0072 - 0077 of Izawa et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a preset distance between the cleaning robot and the target object would be determined according to a type of the target object in order to permit the cleaning robot of the combined base device to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle so as to help improve its ability to robustly, reliably and more completely navigate and clean its environment. Therefore, it would have been obvious to combine Jeong et al. with Izawa et al. and Gil et al. to obtain the invention as specified in claim 6.
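For illustration only, the alternating capture sequence recited in claim 6 (a first laser image under the left line laser plus the light-compensating device, a second laser image under the right line laser plus the light-compensating device, and a second image under the light-compensating device alone) can be sketched in Python as follows; the Emitter and Camera classes and the capture_cycle function are hypothetical stand-ins and are not structures disclosed by Jeong et al. or recited in the claims:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Emitter:
    """Hypothetical stand-in for a line laser emitter or LED fill light."""
    name: str
    on: bool = False

    def turn_on(self) -> None:
        self.on = True

    def turn_off(self) -> None:
        self.on = False

class Camera:
    """Hypothetical stand-in camera; a 'frame' here is simply the list of
    sources that were active when the frame was captured."""
    def capture(self, *sources: Emitter) -> List[str]:
        return [s.name for s in sources if s.on]

def capture_cycle(left: Emitter, right: Emitter, fill: Emitter, cam: Camera):
    """One cycle of the claimed sequence: the left and right line lasers are
    turned on alternately while the fill light stays on, then a frame is
    taken under the fill light alone."""
    fill.turn_on()

    left.turn_on()   # first laser image: left line laser + fill light
    first_laser_image = cam.capture(left, right, fill)
    left.turn_off()

    right.turn_on()  # second laser image: right line laser + fill light
    second_laser_image = cam.capture(left, right, fill)
    right.turn_off()

    second_image = cam.capture(left, right, fill)  # fill light only
    fill.turn_off()
    return first_laser_image, second_laser_image, second_image

print(capture_cycle(Emitter("left_line_laser"), Emitter("right_line_laser"),
                    Emitter("fill_light"), Camera()))
# (['left_line_laser', 'fill_light'], ['right_line_laser', 'fill_light'], ['fill_light'])
```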
- With regards to claim 10, Jeong et al. disclose a target detection system (Jeong et al., Figs. 15, 16, 29 & 30, Pg. 1 ¶ 0010 - 0012, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0069 and 0074 - 0075, Pg. 7 ¶ 0116 - 0117, Pg. 8 ¶ 0144 - 0146 and 0154 - 0156, Pg. 9 ¶ 0165 - 0169, Pg. 10 ¶ 0217 - Pg. 11 ¶ 0227) for a cleaning robot, (Jeong et al., Pg. 11 ¶ 0220 - 0222 [“when the object information acquiring apparatus 10000 is installed in the AGV or the robot cleaner, it is possible for the AGV or the robot cleaner to travel efficiently”]) comprising a laser emitting device, (Jeong et al., Figs. 2 - 12, 15, 17 & 19, Pg. 1 ¶ 0013 - 0016, Pg. 2 ¶ 0022 - 0023 and 0025 - 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083 and 0085 - 0090, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0117 - 0125) a light-compensating device, (Jeong et al., Fig. 15, Pg. 4 ¶ 0070, Pg. 7 ¶ 0117 and 0128 - 0130, Pg. 8 ¶ 0141, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0166, Pg. 9 ¶ 0190 - Pg. 10 ¶ 0195, Pg. 12 ¶ 0242 - 0274 and 0269 - 0271, Pg. 13 ¶ 0273 - 0276) an imaging device (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 13 ¶ 0276 - 0281) and a target detection device, (Jeong et al., Figs. 15 & 16, Pg. 1 ¶ 0010 - 0011, Pg. 2 ¶ 0022, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0069, Pg. 7 ¶ 0116 - 0117, Pg. 8 ¶ 0144 - 0146 and 0154 - 0156, Pg. 9 ¶ 0165 - 0169, Pg. 10 ¶ 0195, Pg. 10 ¶ 0217 - Pg. 11 ¶ 0227) wherein: the imaging device comprises one camera; (Jeong et al., Abstract, Figs. 2 - 4, 6 - 10, 12 & 15, Pg. 1 ¶ 0002, 0010 - 0011 and 0015 - 0016, Pg. 2 ¶ 0022 and 0027 - 0028, Pg. 3 ¶ 0064 - Pg. 4 ¶ 0066, Pg. 5 ¶ 0083 - 0087 and 0089 - 0091, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 7 ¶ 0117 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 13 ¶ 0276 - 0281 [“a multi-channel lidar sensor module capable of measuring two target objects using one image sensor”, “an object information acquiring apparatus which acquires distance information and type information related to an object using a single sensor”, “the camera module may include a sensing unit including a plurality of sensing elements arranged in an array form the direction of the perpendicular axis”, “The sensing unit may be divided into a first region and a second region different from the first region and may include a first sensor, which is provided in the first region and acquires a laser beam image, and a second sensor which is provided in the second region and acquires a reflection image” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the laser emitting device comprises a left line laser emitter and a right line laser emitter disposed side by side in a horizontal direction (Jeong et al., Figs. 4 & 6 - 11, Pg. 1 ¶ 0014 - 0016, Pg. 2 ¶ 0022 and 0025 - 0028, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0087 and 0090 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 [“a multi-channel lidar sensor module may be provided. 
The module may comprise a light emitting unit including at least one pair of emitting units for emitting laser beams; and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams that are emitted from the at least one pair of emitting units and reflected by a target object”, “the at least one pair of light emitting units may be disposed in a vertical direction or in parallel in a horizontal direction with respect to the ground”, “light receiving unit 200 is disposed between the first light emitting unit 110 and the second light emitting unit 120. The first light emitting unit 110 and the second light emitting unit 120 may be disposed in a vertical direction or disposed in parallel in a horizontal direction with respect to the ground... when the first light emitting unit 110 and the second light emitting unit 120 are disposed in the horizontal direction, a left region and a right region may be sensed and measured with respect to the same height” and “a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”]) the left line laser emitter emits a first laser light with a first predetermined wavelength; (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) the right line laser emitter emits a second laser light with the first predetermined wavelength; (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) the light-compensating device emits a light with a second predetermined wavelength, (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) and the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength have different wavelengths; (Jeong et al., Pg. 7 ¶ 0125 - 0129, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 - 0281) the camera of the imaging device captures a first laser image when the first laser light with the first predetermined wavelength is emitted by the left line laser emitter and the light with the second predetermined wavelength is emitted by the light-compensating device, (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202 and 0209 - 0216, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. 
In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) captures a second laser image when the second laser light with the first predetermined wavelength is emitted by the right line laser emitter and the light with the second predetermined wavelength is emitted by the light-compensating device (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 4 ¶ 0067 and 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0087 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202, Pg. 11 ¶ 0229 - 0234, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and captures a second image when the light with the second predetermined wavelength is emitted by the light-compensating device; (Jeong et al., Figs. 4 - 10, 15, 19 - 23 & 27, Pg. 1 ¶ 0016, Pg. 2 ¶ 0022 - 0024 and 0028, Pg. 3 ¶ 0044 - 0047, 0058 and 0064, Pg. 4 ¶ 0066 - 0070, Pg. 5 ¶ 0086 - 0092, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 8 ¶ 0158 and 0163, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0274, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and the target detection device comprises: a ranging module, (Jeong et al., Figs. 15 & 16, Pg. 2 ¶ 0022, Pg. 3 ¶ 0035 and 0064, Pg. 4 ¶ 0068, Pg. 7 ¶ 0117, Pg. 8 ¶ 0154 - 0158, Pg. 10 ¶ 0209 and 0215 - 0216) wherein the ranging module obtains a distance between a target object and the imaging device based on the first laser image, the second laser image and the second image; (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0216, Pg. 11 ¶ 0228 - 0233, Pg. 12 ¶ 0242 - 0248, Pg. 
13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. would capture four laser images of a target object(s) from which the distance between the imaging device and the target object(s) would be measured. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that distance and type information related to an object may be acquired from those captured laser images.]) an object identification module, (Jeong et al., Figs. 15 & 16, Pg. 2 ¶ 0022, Pg. 3 ¶ 0064, Pg. 4 ¶ 0069 - 0071, Pg. 7 ¶ 0117, Pg. 8 ¶ 0144 - 0146, Pg. 9 ¶ 0166 - 0169, Pg. 10 ¶ 0195 and 0217 - 0218) wherein the object identification module obtains a size of the target object based on the first laser image and the second laser image (Jeong et al., Figs. 4 - 10 & 19 - 23, Pg. 1 ¶ 0011, 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 5 ¶ 0086 - 0094, Pg. 6 ¶ 0098 and 0103 - 0104, Pg. 10 ¶ 0212 - 0217, Pg. 11 ¶ 0224 - 0233, Pg. 13 ¶ 0276 [“a plurality of pairs of light emitting units opposite to each other may be disposed around one light receiving unit 200, and emission periods of the pairs of light emitting units may be controlled to implement a multi-channel lidar sensor module. FIGS. 9 to 10 illustrate various application examples of the multi-channel lidar sensor module according to the embodiment of the present invention”, “The first image may include laser beam images corresponding to laser beams that are emitted from the laser module 1000, reflected from a plurality of targets ta1, ta2, and ta3, and then received by the camera module 3000”, “the object information acquiring apparatus 10000 may generate a traveling signal of the moving body in consideration of a height, width, and size of an object included in an image acquired through the camera module 3000” and “On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000.” The Examiner asserts that, for example, the application example of the multi-channel lidar sensor module illustrated in figure 9 of Jeong et al. 
would capture four laser images of a target object(s) from which a height, width, and size of an object would be obtained. In addition, Jeong et al. explicitly state in paragraph 0276 that laser images may be captured while laser light from their laser module and light from their LED module are both emitted and that information related to an object may be acquired from those captured laser images.]) and identifies a type of the target object based on the second image; (Jeong et al., Figs. 16, 25, 26, 29 & 30, Pg. 1 ¶ 0011, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0071 and 0073 - 0076, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0167, Pg. 10 ¶ 0218, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) wherein in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) Jeong et al. fail to disclose explicitly the target object with the size exceeding a preset threshold; and wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Izawa et al. disclose obtaining a size of the target object; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and in response to that a distance between the target object with the size exceeding a preset threshold (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. 
In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly wherein the preset distance is related to the type of the target object. Pertaining to analogous art, Gil et al. disclose identifying a type of the target object; (Gil et al., Abstract, Figs. 3B, 4, 5C, 6C, 7B & 11, Pg. 1 ¶ 0013, Pg. 4 ¶ 0067 - 0070, Pg. 5 ¶ 0085 - 0089, Pg. 6 ¶ 0093, 0097 and 0102, Pg. 7 ¶ 0116 - 0118, Pg. 8 ¶ 0131, Pg. 9 ¶ 0159 - 0162) and in response to that a distance between the target object and the cleaning device is less than or equal to a preset distance, controlling the cleaning robot to bypass the target object; (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 6 ¶ 0108 - Pg. 7 ¶ 0110, Pg. 7 ¶ 0120 - 0125, Pg. 8 ¶ 0134 - 0135) wherein the preset distance is related to the type of the target object. (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 7 ¶ 0109 - 0110 and 0120 - 0125, Pg. 8 ¶ 0135) Jeong et al. and Izawa et al. are combinable because they are both directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Jeong et al. with the teachings of Izawa et al. This modification would have been prompted in order to enhance the base device of Jeong et al. with the well-known and applicable technique Izawa et al. applied to a comparable device. Determining target objects whose sizes exceed a preset threshold, as taught by Izawa et al., would enhance the base device of Jeong et al. by improving its ability to efficiently and reliably navigate the cleaning robot throughout its environment since detected objects exceeding a certain size, which may cause it to get stuck and thus need to be avoided, would be quickly identified while detected objects of a smaller size would not undergo further processing nor affect its traveling path thereby enabling the cleaning robot to more completely and thoroughly cover and clean its environment. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, an autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al.
This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that target objects whose sizes exceed a preset threshold would be determined in order to allow for the cleaning robot of the base device of Jeong et al. to efficiently, reliably and completely navigate its environment since detected objects exceeding a certain size would be quickly identified so that they may be avoided while detected objects of a smaller size would not undergo further processing nor affect its traveling path thereby enabling the cleaning robot to more completely and thoroughly explore and clean its environment. In addition, Jeong et al. in view of Izawa et al. and Gil et al. are combinable because they are all directed towards autonomous cleaning robots that employ image processing systems to detect and avoid obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. with the teachings of Gil et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. with the well-known and applicable technique Gil et al. applied to a comparable device. Determining a preset distance between the cleaning robot and the target object according to a type of the target object, as taught by Gil et al., would enhance the combined base device by helping ensure that the cleaning robot of the combined base device is able to robustly, reliably and more completely navigate and clean its environment since it would be permitted to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that objects classified as obstacles may be reclassified according to their characteristics, that traveling signals to bypass obstacles may be generated based on type information of the obstacles and that a traveling signal of their moving body, an autonomous vehicle, may be generated in consideration of a height, width, and size of an object, see at least page 10 paragraph 0217 - page 11 paragraph 0219 and page 11 paragraphs 0224 - 0227 of Jeong et al. Moreover, this modification would have been prompted by the teachings and suggestions of Izawa et al. that changes of traveling direction for avoidance operations may be based on shape information of detected objects and that different avoidance operations may be provided in accordance with shape information of detected objects, see at least page 6 paragraph 0060 and page 7 paragraphs 0072 - 0077 of Izawa et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that a preset distance between the cleaning robot and the target object would be determined according to a type of the target object in order to permit the cleaning robot of the combined base device to move/clean closer to certain types of obstacles as compared to other types of obstacles while always ensuring that it maintains a safe distance away from every obstacle so as to help improve its ability to robustly, reliably and more completely navigate and clean its environment. Therefore, it would have been obvious to combine Jeong et al. with Izawa et al. and Gil et al.
to obtain the invention as specified in claim 10.
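By way of a generic illustration of how a ranging module may recover distance from a line-laser image, the sketch below applies the standard laser-triangulation relation distance = focal length × baseline / pixel offset; the function name and the numeric values are assumptions for illustration and are not the specific computation disclosed by Jeong et al.:

```python
def stripe_distance_m(pixel_offset_px: float,
                      focal_length_px: float,
                      baseline_m: float) -> float:
    """Generic line-laser triangulation: with the emitter displaced from the
    camera by a known baseline, the image position of the reflected stripe
    shifts in proportion to 1/distance, so
    distance = focal_length * baseline / pixel_offset."""
    if pixel_offset_px <= 0.0:
        raise ValueError("stripe offset must be positive (object in range)")
    return focal_length_px * baseline_m / pixel_offset_px

# Example: a stripe displaced 24 px, with a 600 px focal length and a 4 cm
# baseline, gives 600 * 0.04 / 24 = 1.0 m to the target.
print(stripe_distance_m(24.0, 600.0, 0.04))  # 1.0
```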
- With regards to claim 28, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 1, wherein the left line laser emitter, the imaging device and the right line laser emitter are disposed side by side in the horizontal direction; (Jeong et al., Figs. 4, 6, 7, 9 & 10, Pg. 1 ¶ 0014 - 0016, Pg. 2 ¶ 0022 and 0025 - 0028, Pg. 4 ¶ 0079 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0087 and 0090 - 0095, Pg. 6 ¶ 0098 - 0104 [“a multi-channel lidar sensor module may be provided. The module may comprise a light emitting unit including at least one pair of emitting units for emitting laser beams; and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams that are emitted from the at least one pair of emitting units and reflected by a target object”, “the at least one pair of light emitting units may be disposed in a vertical direction or in parallel in a horizontal direction with respect to the ground” and “light receiving unit 200 is disposed between the first light emitting unit 110 and the second light emitting unit 120. The first light emitting unit 110 and the second light emitting unit 120 may be disposed in a vertical direction or disposed in parallel in a horizontal direction with respect to the ground... when the first light emitting unit 110 and the second light emitting unit 120 are disposed in the horizontal direction, a left region and a right region may be sensed and measured with respect to the same height”]) and the method further comprises: determining the target object as a first target object; (Jeong et al., Figs. 16, 25, 26, 29 & 30, Pg. 1 ¶ 0011, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0071 and 0073 - 0076, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0167, Pg. 10 ¶ 0218 - Pg. 11 ¶ 0220, Pg. 11 ¶ 0224 - 0227, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276) determining the preset distance between the cleaning robot and the first target object; (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) Jeong et al. fail to disclose explicitly determining the target object with the size exceeding the preset threshold as a first target object; and determining the preset distance according to the type of the first target object. Pertaining to analogous art, Izawa et al. disclose determining the target object with the size exceeding the preset threshold as a first target object; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 
6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) determining the preset distance between the cleaning robot and the first target object; (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037, Pg. 5 ¶ 0053 - 0054, Pg. 6 ¶ 0060, Pg. 7 ¶ 0072 - 0075) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly determining the preset distance according to the type of the first target object. Pertaining to analogous art, Gil et al. disclose determining the preset distance between the cleaning robot and the first target object according to the type of the first target object; (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 7 ¶ 0109 - 0110 and 0120 - 0125, Pg. 8 ¶ 0135) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 6 ¶ 0108 - Pg. 7 ¶ 0110, Pg. 7 ¶ 0120 - 0125, Pg. 8 ¶ 0134 - 0135)
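The combined behavior attributed above to Izawa et al. (a size threshold for treating an object as an obstacle) and Gil et al. (a preset distance that depends on the identified type) can be illustrated with the following Python sketch; the table entries, threshold value, and function name are placeholder assumptions for illustration, not values taken from either reference:

```python
# Placeholder type-to-preset-distance table (cf. Gil et al.) and size
# threshold (cf. Izawa et al.); all numbers are illustrative assumptions.
PRESET_DISTANCE_M = {"cable": 0.30, "shoe": 0.10, "furniture": 0.05}
DEFAULT_PRESET_M = 0.15
SIZE_THRESHOLD_M = 0.02

def should_bypass(obj_type: str, obj_size_m: float, distance_m: float) -> bool:
    """Bypass only a target object whose size exceeds the preset threshold
    and that lies within the preset distance for its identified type."""
    if obj_size_m <= SIZE_THRESHOLD_M:
        return False  # smaller objects do not alter the traveling path
    preset_m = PRESET_DISTANCE_M.get(obj_type, DEFAULT_PRESET_M)
    return distance_m <= preset_m

print(should_bypass("cable", 0.05, 0.25))      # True: large and within 0.30 m
print(should_bypass("furniture", 0.05, 0.25))  # False: outside 0.05 m preset
```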
- With regards to claim 29, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 6, wherein the left line laser emitter, the imaging device and the right line laser emitter are disposed side by side in the horizontal direction; (Jeong et al., Figs. 4, 6, 7, 9 & 10, Pg. 1 ¶ 0014 - 0016, Pg. 2 ¶ 0022 and 0025 - 0028, Pg. 4 ¶ 0079 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0087 and 0090 - 0095, Pg. 6 ¶ 0098 - 0104 [“a multi-channel lidar sensor module may be provided. The module may comprise a light emitting unit including at least one pair of emitting units for emitting laser beams; and a light receiving unit formed between the at least one pair of emitting units and configured to receive at least one pair of reflected laser beams that are emitted from the at least one pair of emitting units and reflected by a target object”, “the at least one pair of light emitting units may be disposed in a vertical direction or in parallel in a horizontal direction with respect to the ground” and “light receiving unit 200 is disposed between the first light emitting unit 110 and the second light emitting unit 120. The first light emitting unit 110 and the second light emitting unit 120 may be disposed in a vertical direction or disposed in parallel in a horizontal direction with respect to the ground... when the first light emitting unit 110 and the second light emitting unit 120 are disposed in the horizontal direction, a left region and a right region may be sensed and measured with respect to the same height”]) and the method further comprises: determining the target object as a first target object; (Jeong et al., Figs. 16, 25, 26, 29 & 30, Pg. 1 ¶ 0011, Pg. 2 ¶ 0022 - 0024, Pg. 3 ¶ 0064, Pg. 4 ¶ 0068 - 0071 and 0073 - 0076, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0167, Pg. 10 ¶ 0218 - Pg. 11 ¶ 0220, Pg. 11 ¶ 0224 - 0227, Pg. 12 ¶ 0242 - 0248, Pg. 13 ¶ 0276) determining the preset distance between the cleaning robot and the first target object; (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Jeong et al., Pg. 2 ¶ 0023, Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) Jeong et al. fail to disclose explicitly determining the target object with the size exceeding the preset threshold as a first target object; and determining the preset distance according to the type of the first target object. Pertaining to analogous art, Izawa et al. disclose determining the target object with the size exceeding the preset threshold as a first target object; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 
6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) determining the preset distance between the cleaning robot and the first target object; (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037, Pg. 5 ¶ 0053 - 0054, Pg. 6 ¶ 0060, Pg. 7 ¶ 0072 - 0075) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly determining the preset distance according to the type of the first target object. Pertaining to analogous art, Gil et al. disclose determining the preset distance between the cleaning robot and the first target object according to the type of the first target object; (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 7 ¶ 0109 - 0110 and 0120 - 0125, Pg. 8 ¶ 0135) and in response to that a distance between the cleaning robot and the first target object is less than or equal to the preset distance, controlling the cleaning robot to bypass the first target object. (Gil et al., Pg. 5 ¶ 0089 - Pg. 6 ¶ 0092, Pg. 6 ¶ 0108 - Pg. 7 ¶ 0110, Pg. 7 ¶ 0120 - 0125, Pg. 8 ¶ 0134 - 0135)
Claims 3 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 as applied to claims 1 and 6 above, and further in view of Hickerson et al. U.S. Publication No. 2015/0168954 A1.
- With regards to claim 3, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 1, further comprising: acquiring a third image captured by the imaging device, (Jeong et al., Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248) wherein the third image is captured when emitting the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength is stopped, (Jeong et al., Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248 [“controller 4000 may acquire a fourth image captured by the camera module 3000 at a non-emission timing of the laser module 1000 and a non-emission timing of the LED module 2000. The non-emission timing of the laser module 1000 may refer to a time point at which a laser beam is not emitted from the laser module 1000. In addition, the non-emission timing of the LED module 2000 may refer to a time point at which light is not emitted from the LED module 2000”]) and wherein the obtaining the distance between the target object and the imaging device further comprises: obtaining a corrected laser image; (Jeong et al., Pg. 11 ¶ 0229 - 0234) and obtaining the distance between the target object and the imaging device based on the corrected laser image. (Jeong et al., Pg. 11 ¶ 0229 - 0234, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0294) Jeong et al. fail to disclose explicitly obtaining a corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image. Pertaining to analogous art, Hickerson et al. disclose acquiring a third image captured by the imaging device, (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0032 - 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) wherein the third image is captured when emitting the first and second laser light with the first predetermined wavelength and the light with the second predetermined wavelength is stopped, (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) and wherein the obtaining the distance between the target object and the imaging device further comprises: obtaining a corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image; (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) and obtaining the distance between the target object and the imaging device based on the corrected laser image. (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0032 - 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054 - Pg. 6 ¶ 0056, Pg. 6 ¶ 0059 - 0062 and 0064) Jeong et al. in view of Izawa et al. in view of Gil et al. and Hickerson et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Hickerson et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Hickerson et al. applied to a comparable device.
Obtaining the corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image, as taught by Hickerson et al., would enhance the combined base device by further reducing the amount of noise and artifacts present in initially acquired laser images and thereby improving its ability to accurately and reliably calculate distances to objects in an environment and thus robustly navigate the environment while avoiding obstacles. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that to improve accuracy of the distance information they may obtain corrected laser images by processing the first laser images, see at least page 11 paragraphs 0228 - 0234 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the corrected laser image would be obtained by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image so as to further reduce the amount of noise and artifacts present in initially acquired laser images and thereby improve the ability of the autonomous vehicle of the combined base device to accurately and reliably calculate distances to objects in an environment and robustly navigate the environment effectively while avoiding obstacles. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Hickerson et al. to obtain the invention as specified in claim 3.
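For orientation only, the pixel-wise differencing on which this rationale relies can be expressed as a minimal sketch. The sketch assumes 8-bit grayscale frames of identical size captured from a stationary camera; the function name and these assumptions are illustrative and are not drawn from Jeong et al. or Hickerson et al.

```python
import numpy as np

def corrected_laser_image(laser_frame: np.ndarray, background_frame: np.ndarray) -> np.ndarray:
    """Subtract an ambient-only frame from a laser-illuminated frame.

    Both inputs are assumed to be 8-bit grayscale frames of identical size,
    captured without camera motion between exposures, so that subtraction
    removes the ambient light and sensor offset common to both frames and
    (ideally) leaves only the reflected laser line.
    """
    # Promote to a signed type so the difference cannot wrap around,
    # then clamp negative residues (ambient flicker, noise) to zero.
    diff = laser_frame.astype(np.int16) - background_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)
```

Under these assumptions the helper would be applied once per laser frame against the shared third image, e.g. corrected_first = corrected_laser_image(first_laser, third_image) and corrected_second = corrected_laser_image(second_laser, third_image).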
- With regards to claim 8, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 6, further comprising: turning off the left line laser emitter, the right line laser emitter and the light-compensating device; (Jeong et al., Figs. 4 & 6 - 10, Pg. 1 ¶ 0013 - 0016, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085, 0087 - 0090 and 0094 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248 [“controller 4000 may acquire a fourth image captured by the camera module 3000 at a non-emission timing of the laser module 1000 and an emission timing of the LED module 2000. The non-emission timing of the laser module 1000 may refer to a time point at which a laser beam is not emitted from the laser module 1000. In addition, the emission timing of the LED module 2000 may refer to a time point at which light is not emitted from the LED module 2000”]) wherein a third image is captured by the imaging device when the left line laser emitter, the right line laser emitter and the light-compensating device are turned off, (Jeong et al., Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248 [“controller 4000 may acquire a fourth image captured by the camera module 3000 at a non-emission timing of the laser module 1000 and an emission timing of the LED module 2000. The non-emission timing of the laser module 1000 may refer to a time point at which a laser beam is not emitted from the laser module 1000. In addition, the emission timing of the LED module 2000 may refer to a time point at which light is not emitted from the LED module 2000”]) and wherein the obtaining the distance between the target object and the imaging device comprises: obtaining a corrected laser image; (Jeong et al., Pg. 11 ¶ 0229 - 0234) and obtaining the distance between the target object and the imaging device based on the corrected laser image. (Jeong et al., Pg. 11 ¶ 0229 - 0234, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0294) Jeong et al. fail to disclose explicitly obtaining a corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image. Pertaining to analogous art, Hickerson et al. disclose turning off the laser emitters and the light-compensating device; (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) wherein a third image is captured by the imaging device when the laser emitters and the light-compensating device are turned off, (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) and wherein the obtaining the distance between the target object and the imaging device comprises: obtaining a corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image; (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) and obtaining the distance between the target object and the imaging device based on the corrected laser image. (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0032 - 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054 - Pg. 6 ¶ 0056, Pg. 6 ¶ 0059 - 0062 and 0064) Jeong et al. in view of Izawa et al. in view of Gil et al. and Hickerson et al. 
are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Hickerson et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Hickerson et al. applied to a comparable device. Obtaining the corrected laser image by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image, as taught by Hickerson et al., would enhance the combined base device by further reducing the amount of noise and artifacts present in initially acquired laser images and thereby improving its ability to accurately and reliably calculate distances to objects in an environment and thus robustly navigate the environment while avoiding obstacles. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that to improve accuracy of the distance information they may obtain corrected laser images by processing the first laser images, see at least page 11 paragraphs 0228 - 0234 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that the corrected laser image would be obtained by calculating a difference between pixel points in the first laser image as well as the second laser image and pixel points at corresponding positions in the third image so as to further reduce the amount of noise and artifacts present in initially acquired laser images and thereby improve the ability of the autonomous vehicle of the combined base device to accurately and reliably calculate distances to objects in an environment and robustly navigate the environment effectively while avoiding obstacles. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Hickerson et al. to obtain the invention as specified in claim 8.
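The emitter sequencing recited in claim 8 can likewise be sketched, again for orientation only. The driver objects and their capture()/on()/off() methods are placeholders invented for the sketch, not an interface disclosed by any cited reference.

```python
import numpy as np

def capture_corrected_pair(camera, left_laser, right_laser, fill_light):
    """Sequence the emitters around three exposures and difference the frames.

    `camera`, `left_laser`, `right_laser` and `fill_light` are hypothetical
    driver objects assumed to expose capture()/on()/off().
    """
    left_laser.on(); fill_light.on()
    first = camera.capture()            # first laser image (left laser + fill light)
    left_laser.off(); right_laser.on()
    second = camera.capture()           # second laser image (right laser + fill light)
    right_laser.off(); fill_light.off()
    third = camera.capture()            # all emitters off: ambient background frame
    # Difference each laser frame against the shared background frame,
    # clamping negative residues to zero as in the sketch above.
    as16 = lambda frame: frame.astype(np.int16)
    corrected_first = np.clip(as16(first) - as16(third), 0, 255).astype(np.uint8)
    corrected_second = np.clip(as16(second) - as16(third), 0, 255).astype(np.uint8)
    return corrected_first, corrected_second
```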
Claims 5 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 as applied to claims 1 and 6 above, and further in view of Dooley et al. U.S. Publication No. 2021/0164785 A1.
- With regards to claim 5, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 1, wherein the first laser image is captured by the imaging device under preset first exposure parameters; (Jeong et al., Pg. 3 ¶ 0064, Pg. 4 ¶ 0067, Pg. 8 ¶ 0142, Pg. 8 ¶ 0158 - 0160, Pg. 10 ¶ 0212 - 0216, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) the second image is captured by the imaging device under second exposure parameters; (Jeong et al., Pg. 3 ¶ 0064, Pg. 4 ¶ 0070, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0165, Pg. 11 ¶ 0238 - Pg. 12 ¶ 0246, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) wherein the exposure parameters comprise at least one of an exposure time or an exposure gain. (Jeong et al., Pg. 2 ¶ 0022 - 0023, Pg. 3 ¶ 0064, Pg. 4 ¶ 0067 - 0070, Pg. 8 ¶ 0142, Pg. 8 ¶ 0158 - Pg. 9 ¶ 0165, Pg. 10 ¶ 0212 - 0213, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0243, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) Jeong et al. fail to disclose explicitly wherein the second exposure parameters are obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame. Pertaining to analogous art, Dooley et al. disclose wherein the first image is captured by the imaging device under preset first exposure parameters; (Dooley et al., Figs. 3A - 3D, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0009, Pg. 3 ¶ 0025, Pg. 7 ¶ 0086, Pg. 8 ¶ 0090 - 0091, Pg. 8 ¶ 0095 - Pg. 9 ¶ 0096) the second image is captured by the imaging device under second exposure parameters, (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077) and the second exposure parameters are obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame; (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077 [“the at least one processor is configured to (i) evaluate exposure of images captured by the first camera, and (ii) vary the exposure interval of the first camera responsive to the evaluation of exposure of the images”]) wherein the exposure parameters comprise at least one of an exposure time or an exposure gain. (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077 [“the at least one processor is configured to (i) evaluate exposure of images captured by the first camera, and (ii) vary the exposure interval of the first camera responsive to the evaluation of exposure of the images”]) Jeong et al. in view of Izawa et al. in view of Gil et al. and Dooley et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Dooley et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Dooley et al. applied to a comparable device. 
Obtaining second exposure parameters for the second image according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame, as taught by Dooley et al., would enhance the combined base device by improving its ability to acquire high quality second images and thus help ensure it is able to reliably and accurately recognize objects from the acquired second images. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that their controller may control light receiving sensitivity of the camera module by controlling a gain value, that at night a size of an optical signal may not be sufficient to recognize an object, that an LED module may emit light towards the object to increase pixel values and increase the rate of object recognition and that an exposure time of their sensors may be controlled, see at least page 8 paragraphs 0141 - 0144, page 8 paragraph 0162 - page 9 paragraph 0165 and page 13 paragraphs 0280 - 0282 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that second exposure parameters for the second image would be obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame so as to help ensure the combined base device is able to reliably and accurately recognize objects from acquired second images by improving the quality of the second images it acquires. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Dooley et al. to obtain the invention as specified in claim 5.
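The exposure-feedback loop described in this rationale admits a short illustrative sketch. The target brightness, the ceilings and the proportional update rule below are assumptions chosen for the sketch; neither Jeong et al. nor Dooley et al. is represented as using these values.

```python
def next_exposure(prev_mean_gray: float, prev_exposure_us: float, prev_gain: float,
                  target_mean: float = 110.0, max_exposure_us: float = 33000.0,
                  max_gain: float = 8.0) -> tuple[float, float]:
    """Derive the next frame's exposure time and gain from the measured
    quality (here simply the mean gray level) of the previous frame.

    The target level, ceilings and proportional rule are invented for the
    sketch; real auto-exposure loops are usually damped and metered more
    carefully.
    """
    if prev_mean_gray <= 0.0:
        return max_exposure_us, max_gain          # previous frame was black; open fully
    scale = target_mean / prev_mean_gray          # brightness correction factor
    needed = prev_exposure_us * prev_gain * scale # total light gathering required
    exposure = min(needed, max_exposure_us)       # prefer longer exposure over gain
    gain = min(max(needed / exposure, 1.0), max_gain)
    return exposure, gain
```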
- With regards to claim 9, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 6, wherein the first laser image is captured by the imaging device under preset first exposure parameters; (Jeong et al., Pg. 3 ¶ 0064, Pg. 4 ¶ 0067, Pg. 8 ¶ 0142, Pg. 8 ¶ 0158 - 0160, Pg. 10 ¶ 0212 - 0216, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) the second image is acquired by the imaging device under second exposure parameters; (Jeong et al., Pg. 3 ¶ 0064, Pg. 4 ¶ 0070, Pg. 8 ¶ 0158, Pg. 8 ¶ 0162 - Pg. 9 ¶ 0165, Pg. 11 ¶ 0238 - Pg. 12 ¶ 0246, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) wherein the exposure parameters comprise at least one of an exposure time or an exposure gain. (Jeong et al., Pg. 2 ¶ 0022 - 0023, Pg. 3 ¶ 0064, Pg. 4 ¶ 0067 - 0070, Pg. 8 ¶ 0142, Pg. 8 ¶ 0158 - Pg. 9 ¶ 0165, Pg. 10 ¶ 0212 - 0213, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0243, Pg. 12 ¶ 0269 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0281) Jeong et al. fail to disclose explicitly wherein the second exposure parameters are obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame. Pertaining to analogous art, Dooley et al. disclose wherein the first image is captured by the imaging device under preset first exposure parameters; (Dooley et al., Figs. 3A - 3D, Pg. 1 ¶ 0008 - Pg. 2 ¶ 0009, Pg. 3 ¶ 0025, Pg. 7 ¶ 0086, Pg. 8 ¶ 0090 - 0091, Pg. 8 ¶ 0095 - Pg. 9 ¶ 0096) the second image is acquired by the imaging device under second exposure parameters, (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077) and the second exposure parameters are obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame; (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077 [“the at least one processor is configured to (i) evaluate exposure of images captured by the first camera, and (ii) vary the exposure interval of the first camera responsive to the evaluation of exposure of the images”]) wherein the exposure parameters comprise at least one of an exposure time or an exposure gain. (Dooley et al., Abstract, Figs. 3A - 3D, Pg. 2 ¶ 0015 - 0016, Pg. 6 ¶ 0077 [“the at least one processor is configured to (i) evaluate exposure of images captured by the first camera, and (ii) vary the exposure interval of the first camera responsive to the evaluation of exposure of the images”]) Jeong et al. in view of Izawa et al. in view of Gil et al. and Dooley et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Dooley et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Dooley et al. applied to a comparable device. 
Obtaining second exposure parameters for the second image according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame, as taught by Dooley et al., would enhance the combined base device by improving its ability to acquire high quality second images and thus help ensure it is able to reliably and accurately recognize objects from the acquired second images. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that their controller may control light receiving sensitivity of the camera module by controlling a gain value, that at night a size of an optical signal may not be sufficient to recognize an object, that an LED module may emit light towards the object to increase pixel values and increase the rate of object recognition and that an exposure time of their sensors may be controlled, see at least page 8 paragraphs 0141 - 0144, page 8 paragraph 0162 - page 9 paragraph 0165 and page 13 paragraphs 0280 - 0282 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that second exposure parameters for the second image would be obtained according to imaging quality of a captured previous second image frame and exposure parameters when capturing the previous second image frame so as to help ensure the combined base device is able to reliably and accurately recognize objects from acquired second images by improving the quality of the second images it acquires. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Dooley et al. to obtain the invention as specified in claim 9.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 as applied to claim 10 above, and further in view of Asatani et al. U.S. Publication No. 2008/0310705 A1.
- With regards to claim 11, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the target detection system according to claim 10 ([The Examiner asserts that Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the target detection system according to claim 10, see analysis of claim 10 included herein above.]) and self-propelled equipment (Jeong et al., Pg. 4 ¶ 0071 - 0074 and 0076, Pg. 11 ¶ 0220 - 0227, Pg. 12 ¶ 0260 - 0264, Pg. 14 ¶ 0300) comprising the target detection system (Jeong et al., Pg. 4 ¶ 0071 - 0074 and 0076, Pg. 11 ¶ 0220 - 0226, Pg. 12 ¶ 0260 - 0264, Pg. 14 ¶ 0300) according to claim 10, ([The Examiner asserts that Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the target detection system according to claim 10, see analysis of claim 10 included herein above.]) wherein the self-propelled equipment further comprises a driving device, (Jeong et al., Pg. 4 ¶ 0071 - 0074, Pg. 4 ¶ 0076, Pg. 11 ¶ 0220 - 0226, Pg. 12 ¶ 0260 - 0264, Pg. 14 ¶ 0300) and the driving device drives the self-propelled equipment along a working surface. (Jeong et al., Pg. 4 ¶ 0071 - 0074, Pg. 4 ¶ 0076, Pg. 11 ¶ 0220 - 0226, Pg. 12 ¶ 0260 - 0264, Pg. 14 ¶ 0300) Jeong et al. fail to disclose explicitly driving the self-propelled equipment to walk. Pertaining to analogous art, Asatani et al. disclose self-propelled equipment, (Asatani et al., Abstract, Figs. 1, 2, 5 & 7, Pg. 1 ¶ 0009 - 0010, Pg. 4 ¶ 0053, Pg. 5 ¶ 0057 - 0064, Pg. 16 ¶ 0189) wherein the self-propelled equipment further comprises a driving device, (Asatani et al., Abstract, Figs. 1, 2, 5 & 7, Pg. 1 ¶ 0009 - 0010, Pg. 4 ¶ 0053, Pg. 5 ¶ 0057 - 0064, Pg. 16 ¶ 0189) and the driving device drives the self-propelled equipment to walk along a working surface. (Asatani et al., Abstract, Figs. 1, 2, 5 & 7, Pg. 1 ¶ 0008 - 0011, Pg. 4 ¶ 0053, Pg. 5 ¶ 0057 - 0069, Pg. 16 ¶ 0189) Jeong et al. in view of Izawa et al. in view of Gil et al. and Asatani et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems for navigation planning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Asatani et al. This modification would have been prompted in order to substitute the self-propelled equipment of Asatani et al. for the autonomous vehicle of Jeong et al. The self-propelled equipment of Asatani et al. could be substituted in place of the autonomous vehicle of Jeong et al. utilizing well-known techniques in the art and would likely yield predictable results, in that the self-propelled equipment of Asatani et al. that walks along a working surface would be utilized as the autonomous vehicle of the combined base device. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Asatani et al. to obtain the invention as specified in claim 11.
Claims 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 in view of Asatani et al. U.S. Publication No. 2008/0310705 A1 as applied to claim 11 above, and further in view of Chamberlain et al. U.S. Patent No. 10,029,804.
- With regards to claim 14, Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. disclose a method for controlling the self-propelled equipment according to claim 11, comprising: acquiring first laser images captured by the imaging device disposed on the self-propelled equipment at a plurality of first time points, (Jeong et al., Abstract, Figs. 5, 7 - 11, 15 & 27, Pg. 1 ¶ 0016, Pg. 2 ¶ 0022 - 0025 and 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0071 - 0072, 0074 and 0076, Pg. 5 ¶ 0086 - 0090 and 0093 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0117 - 0125 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 9 ¶ 0186 - 0191, Pg. 10 ¶ 0212 - 0214, Pg. 11 ¶ 0220 - 0226, Pg. 12 ¶ 0260, Pg. 12 ¶ 0268 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0282, Pg. 14 ¶ 0300) wherein the first laser images are captured when the first laser light with the first predetermined wavelength and the light with the second predetermined wavelength are emitted; (Jeong et al., Abstract, Fig. 15, Pg. 1 ¶ 0011 and 0013, Pg. 2 ¶ 0022 - 0025, Pg. 3 ¶ 0064, Pg. 7 ¶ 0117 - 0125, 0129 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 10 ¶ 0212 - 0214, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) acquiring second laser images captured by the imaging device disposed on the self-propelled equipment at a plurality of second time points, (Jeong et al., Abstract, Figs. 5, 7 - 11, 15 & 27, Pg. 1 ¶ 0016, Pg. 2 ¶ 0022 - 0025 and 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0071 - 0072, 0074 and 0076, Pg. 5 ¶ 0086 - 0090 and 0093 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0117 - 0125 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 9 ¶ 0186 - 0191, Pg. 10 ¶ 0212 - 0214, Pg. 11 ¶ 0220 - 0226, Pg. 12 ¶ 0260, Pg. 12 ¶ 0268 - Pg. 13 ¶ 0276, Pg. 13 ¶ 0280 - 0282, Pg. 14 ¶ 0300) wherein the second laser images are captured when the second laser light with the first predetermined wavelength and the light with the second predetermined wavelength are emitted; (Jeong et al., Abstract, Fig. 15, Pg. 1 ¶ 0011 and 0013, Pg. 2 ¶ 0022 - 0025, Pg. 3 ¶ 0064, Pg. 7 ¶ 0117 - 0125, 0129 and 0131 - 0134, Pg. 8 ¶ 0158, Pg. 10 ¶ 0212 - 0214, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and conducting a navigation planning for the self-propelled equipment. (Jeong et al., Pg. 4 ¶ 0071 - 0072, 0074 and 0076, Pg. 11 ¶ 0220 - 0227) Jeong et al. 
fail to disclose explicitly acquiring a plurality of positions where the self-propelled equipment is located when respective images are captured by the imaging device at the plurality of first and second time points; obtaining a point cloud according to the first laser images and the second laser images captured by the imaging device at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located; and clustering the point cloud and conducting navigation planning according to a clustering result. Pertaining to analogous art, Chamberlain et al. disclose acquiring first laser images captured by the imaging device disposed on the self-propelled equipment at a plurality of first time points, (Chamberlain et al., Col. 3 Lines 10 - 19, Col. 3 Line 36 - Col. 4 Line 13, Col. 11 Line 39 - Col. 12 Line 7) wherein the first laser images are captured when the laser light with the first predetermined wavelength is emitted; (Chamberlain et al., Fig. 1, Col. 2 Lines 27 - 51, Col. 5 Lines 33 - 38, Col. 7 Lines 7 - 30, Col. 11 Lines 42 - 65) acquiring second laser images captured by the imaging device disposed on the self-propelled equipment at a plurality of second time points, (Chamberlain et al., Col. 3 Lines 10 - 19, Col. 3 Line 36 - Col. 4 Line 13, Col. 11 Line 39 - Col. 12 Line 7) wherein the second laser images are captured when the laser light with the first predetermined wavelength is emitted; (Chamberlain et al., Fig. 1, Col. 2 Lines 27 - 51, Col. 5 Lines 33 - 38, Col. 7 Lines 7 - 30, Col. 11 Lines 42 - 65) acquiring a plurality of positions where the self-propelled equipment is located when respective images are captured by the imaging device at the plurality of first and second time points; (Chamberlain et al., Col. 2 Line 45 - Col. 3 Line 19, Col. 3 Line 36 - Col. 4 Line 8, Col. 10 Line 34 - Col. 12 Line 7) obtaining a point cloud according to the first laser images and the second laser images captured by the imaging device at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located; (Chamberlain et al., Fig. 2, Col. 3 Lines 10 - 19, Col. 3 Line 36 - Col. 4 Line 8, Col. 4 Line 54 - Col. 5 Line 51, Col. 10 Line 34 - Col. 11 Line 65) and clustering the point cloud (Chamberlain et al., Col. 3 Line 54 - Col. 4 Line 8, Col. 7 Line 49 - Col. 8 Line 5, Col. 11 Lines 10 - 65) and conducting a navigation planning for the self-propelled equipment according to a clustering result. (Chamberlain et al., Fig. 1, Col. 3 Line 54 - Col. 4 Line 64, Col. 6 Line 65 - Col. 8 Line 58, Col. 11 Line 66 - Col. 12 Line 7) Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. and Chamberlain et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems for navigation planning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. with the teachings of Chamberlain et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. with the well-known and applicable technique Chamberlain et al. applied to a comparable device. 
Acquiring a plurality of positions where the self-propelled equipment is located when respective images are captured by the imaging device at the plurality of first and second time points, obtaining a point cloud according to the first and second laser images and conducting navigation planning according to a result of clustering the point cloud, as taught by Chamberlain et al., would enhance the combined base device by improving its ability to accurately and robustly acquire distance and type information for objects included in an environment, and thus help ensure that the autonomous vehicle is able to reliably navigate the environment effectively while avoiding obstacles, since the distance and type information would be able to be generated from a more complete representation of the environment with fewer errors obtained by registering, aligning and merging a plurality of lidar images together. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that a traveling signal of the moving body may be generated in consideration of a height, width, and size of an object included in an acquired image, see at least page 11 paragraph 0227 of Jeong et al. Moreover, this modification would have been prompted by the teachings and suggestions of Chamberlain et al. that higher quality data and better object detection certainty can be obtained by continuously refining, updating and correcting lidar images to form a more dense and accurate complete representation of the autonomous vehicle’s surroundings by merging multiple lidar images collected over time together, see at least column 3 lines 10 - 19, column 4 line 64 - column 5 line 38 and column 6 lines 23 - 64 of Chamberlain et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that navigation planning would be conducted according to a result of clustering a point cloud obtained from a plurality of first and second laser images acquired at a plurality of different positions of the self-propelled equipment in order to improve the ability of the self-propelled equipment to reliably navigate its environment effectively while avoiding obstacles since its navigation planning would be able to be conducted from distance and type information of objects acquired from a more complete and accurate representation of the environment with fewer errors obtained by registering, aligning and merging a plurality of lidar images together. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. with Chamberlain et al. to obtain the invention as specified in claim 14.
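For orientation, the pose-compensated accumulation and clustering described in this rationale might look as follows in outline. The planar (x, y, heading) pose model, the greedy single-linkage clustering and the 5 cm threshold are all assumptions of the sketch, not details taken from Chamberlain et al.

```python
import math
import numpy as np

def to_world(points_xy: np.ndarray, pose: tuple[float, float, float]) -> np.ndarray:
    """Rotate and translate sensor-frame points (metres, shape (N, 2)) into the
    world frame using the robot pose (x, y, heading) recorded at the capture
    instant; applying this per frame merges all frames into one point cloud."""
    x, y, theta = pose
    c, s = math.cos(theta), math.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T + np.array([x, y])

def cluster(points: np.ndarray, eps: float = 0.05) -> list[np.ndarray]:
    """Greedy single-linkage clustering: sweeping along x, a point joins the
    current cluster while it lies within `eps` metres of the previous point.
    A stand-in for whatever clustering the combined system would actually use."""
    if len(points) == 0:
        return []
    pts = points[np.argsort(points[:, 0])]
    clusters, current = [], [pts[0]]
    for p in pts[1:]:
        if np.linalg.norm(p - current[-1]) <= eps:
            current.append(p)
        else:
            clusters.append(np.array(current))
            current = [p]
    clusters.append(np.array(current))
    return clusters
```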
- With regards to claim 15, Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. disclose the method according to claim 14, wherein the conducting the navigation planning for the self-propelled equipment according to the clustering result comprises: controlling the self-propelled equipment to bypass when the distance between the self-propelled equipment and the target object is less than or equal to the preset distance; (Jeong et al., Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) wherein a value of the preset distance is greater than 0. (Jeong et al., Pg. 4 ¶ 0068 - 0072, 0074 and 0076, Pg. 9 ¶ 0186 - 0189, Pg. 10 ¶ 0215 - Pg. 11 ¶ 0227, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0295, Pg. 14 ¶ 0300 [Jeong et al. disclose bypassing objects classified as obstacles and that only when an object is within a preset distance is type information of the object recognized, i.e., only obstacles within the preset distance are bypassed.]) Jeong et al. fail to disclose explicitly obtaining the clustering result, wherein the clustering result comprises the target object of which the size exceeds the preset threshold; and controlling the self-propelled equipment to bypass the target object of which the size exceeds the preset threshold. Pertaining to analogous art, Izawa et al. disclose wherein the conducting the navigation planning for the self-propelled equipment according to the clustering result comprises: wherein the clustering result comprises the target object of which the size exceeds the preset threshold; (Izawa et al., Pg. 4 ¶ 0037, Pg. 5 ¶ 0054, Pg. 6 ¶ 0058 - 0060 [“the discrimination part 64 discriminates whether or not the object is an obstacle based on the height dimension of the object acquired by the shape acquisition part 63. In more detail, when the height dimension of the object acquired by the shape acquisition part 63 is equal to or higher than a specified height, the discrimination part 64 discriminates that the object is an obstacle” and “in step 10, upon discriminating that the object is an obstacle (the height dimension of the object is more than a specified height dimension), since it is assumed that there is an object to be avoided or a narrow space which the vacuum cleaner 11 (main casing 20) cannot enter ahead of the vacuum cleaner 11 (main casing 20), the discrimination part 64 changes the traveling direction of the vacuum cleaner 11 (main casing 20) by the control means 27 (travel control part 66) (step 12), and processing is returned to step 1”]) and controlling the self-propelled equipment to bypass when the distance between the self-propelled equipment and the target object of which the size exceeds the preset threshold is less than or equal to the preset distance; wherein a value of the preset distance is greater than 0. (Izawa et al., Figs. 5 & 9, Pg. 4 ¶ 0037 - 0038, Pg. 5 ¶ 0052, Pg. 6 ¶ 0060, Pg. 7 ¶ 0070 - 0077) Izawa et al. fail to disclose explicitly obtaining the clustering result. Pertaining to analogous art, Chamberlain et al.
disclose wherein the conducting the navigation planning for the self-propelled equipment according to the clustering result comprises: obtaining the clustering result, (Chamberlain et al., Fig. 2, Col. 3 Line 48 - Col. 4 Line 8, Col. 7 Line 49 - Col. 8 Line 5) wherein the clustering result comprises the target object of which the size exceeds the preset threshold; (Chamberlain et al., Abstract, Figs. 1 & 2, Col. 3 Line 48 - Col. 4 Line 8, Col. 5 Line 46 - Col. 6 Line 64, Col. 7 Line 49 - Col. 8 Line 44) and controlling the self-propelled equipment to bypass the target object of which size exceeds the preset threshold. (Chamberlain et al., Figs. 1 & 2, Col. 3 Line 48 - Col. 4 Line 53, Col. 6 Line 33 - Col. 7 Line 6, Col. 7 Line 49 - Col. 8 Line 44, Col. 11 Line 30 - Col. 12 Line 7) The motivation to combine Jeong et al., Izawa et al., Gil et al., Asatani et al. and Chamberlain et al. remains as set forth in the rejections of claims 1, 6 and 14 above. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. with Chamberlain et al. to obtain the invention as specified in claim 15.
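The bypass decision recited in claim 15 reduces to a simple predicate, sketched below for orientation. The size and distance thresholds are invented placeholders; the claims require only that the preset distance be greater than 0.

```python
def should_bypass(cluster_width_m: float, cluster_height_m: float,
                  distance_m: float, size_threshold_m: float = 0.02,
                  preset_distance_m: float = 0.30) -> bool:
    """Bypass only obstacles that are both large enough to matter and close
    enough to act on. The 2 cm size threshold and 30 cm preset distance are
    illustrative values, not figures taken from the claims or references."""
    too_big = max(cluster_width_m, cluster_height_m) > size_threshold_m
    close_enough = 0.0 < distance_m <= preset_distance_m
    return too_big and close_enough
```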
Claim 17 is rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 in view of Asatani et al. U.S. Publication No. 2008/0310705 A1 in view of Chamberlain et al. U.S. Patent No. 10,029,804 as applied to claim 14 above, and further in view of Hickerson et al. U.S. Publication No. 2015/0168954 A1.
- With regards to claim 17, Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. disclose the method according to claim 14, further comprising: acquiring a third image captured by the imaging device, (Jeong et al., Pg. 12 ¶ 0243) wherein the third image is captured when emission of the first laser light with the first predetermined wavelength and of the second laser light with the first predetermined wavelength is stopped; (Jeong et al., Figs. 4 & 6 - 10, Pg. 1 ¶ 0013 - 0016, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085, 0087 - 0090 and 0094 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 11 ¶ 0235 - Pg. 12 ¶ 0248 [“controller 4000 may acquire a fourth image captured by the camera module 3000 at a non-emission timing of the laser module 1000 and an emission timing of the LED module 2000. The non-emission timing of the laser module 1000 may refer to a time point at which a laser beam is not emitted from the laser module 1000. In addition, the emission timing of the LED module 2000 may refer to a time point at which light is not emitted from the LED module 2000”]) obtaining a corrected laser image; (Jeong et al., Pg. 11 ¶ 0229 - 0234) and wherein obtaining the point cloud according to the first laser images and the second laser images captured by the imaging device at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located comprises: obtaining the distance between the target object and the imaging device according to a plurality of corrected laser images corresponding to the first laser image and the second laser image captured where the self-propelled equipment is located. (Jeong et al., Figs. 4 & 6 - 10, Pg. 1 ¶ 0013 - 0016, Pg. 4 ¶ 0071 - 0072, 0074 and 0076, Pg. 4 ¶ 0078 - Pg. 5 ¶ 0083, Pg. 5 ¶ 0085 - 0090 and 0093 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 9 ¶ 0186 - 0191, Pg. 11 ¶ 0220 - Pg. 12 ¶ 0248, Pg. 12 ¶ 0260 - 0265, Pg. 13 ¶ 0290 - Pg. 14 ¶ 0294, Pg. 14 ¶ 0300) Jeong et al. fail to disclose explicitly obtaining a corrected laser image by calculating a difference between pixel points in the first laser image and pixel points at corresponding positions in the third image; and obtaining the distance according to a plurality of corrected laser images corresponding to the first laser images and the second laser images captured at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located. Pertaining to analogous art, Hickerson et al. disclose acquiring a third image captured by the imaging device, (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0032 - 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) wherein the third image is captured when emission of the first laser light with the first predetermined wavelength and of the second laser light with the first predetermined wavelength is stopped; (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0054) obtaining a corrected laser image by calculating a difference between pixel points in the first laser image and pixel points at corresponding positions in the third image; (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 4 ¶ 0040 - 0044, Pg.
5 ¶ 0054) and wherein obtaining the point cloud according to the first laser images and the second laser images captured by the imaging device at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located comprises: obtaining the distance between the target object and the imaging device according to a plurality of corrected laser images corresponding to the first laser images and the second laser images captured at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located. (Hickerson et al., Abstract, Figs. 1, 3 & 5, Pg. 2 ¶ 0022 - Pg. 3 ¶ 0025, Pg. 3 ¶ 0032 - 0034, Pg. 4 ¶ 0040 - 0044, Pg. 5 ¶ 0046 - 0047, Pg. 5 ¶ 0054 - Pg. 6 ¶ 0056, Pg. 6 ¶ 0059 - 0062 and 0064 [“An obstacle detector for a mobile robot while the robot is in motion”, “at least one pulsed light source configured to project light in the path of the robot; a visual sensor for capturing images including a subset of images showing the light reflected from the floor or obstacle; a microprocessor or equivalent processing unit configured to subtract or difference pairs of images to extract the reflected light and to add or otherwise combine two or more pairs of images after subtraction to average out and suppress the background. With this technique, obstacle detection may be implemented while the robot is in motion without the need to stop” and “capturing a plurality of images of light reflected from the path of the robot; generating two or more difference images by subtracting pairs of the plurality of images where each pair of images comprising a first image with the at least one light source on and a second image with the light source off; combining two or more difference images to enhance the reflected light relative to the background; and detecting the obstacle in the path of the robot based on a location of the reflected light in the combined difference images. The method may also determine the location of the reflected light in the combined difference images is determined by triangulation”]) Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. and Hickerson et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems for navigation planning. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. with the teachings of Hickerson et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. with the well-known and applicable technique Hickerson et al. applied to a comparable device. 
Obtaining the corrected laser image by calculating a difference between pixel points in the first laser image and pixel points at corresponding positions in the third image and obtaining the distance according to a plurality of corrected laser images corresponding to the first laser images and the second laser images captured at the plurality of first and second time points and the plurality of positions where the self-propelled equipment is located, as taught by Hickerson et al., would enhance the combined base device by further reducing the amount of noise and artifacts present in initially acquired laser images and by improving its ability to accurately and robustly compute the positions and heights of obstacles in three-dimensional space, since they would be computed from a sequence of multiple images, thereby helping ensure that the autonomous vehicle is able to reliably navigate its environment effectively while avoiding obstacles. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that multiple laser images may be obtained and that to improve accuracy of the distance information they may obtain corrected laser images by processing laser images, see at least figures 4 - 11, page 5 paragraph 0086 - page 6 paragraph 0104 and page 11 paragraphs 0228 - 0234 of Jeong et al. Moreover, this modification would have been prompted by the teachings and suggestions of Chamberlain et al. to continuously refine, update and correct a registered point cloud by merging multiple lidar images together, see at least column 4 line 64 - column 5 line 38 of Chamberlain et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that corrected laser images would be obtained by calculating a difference between pixel points in a first laser image and pixel points at corresponding positions in a corresponding third image and in that the distance between the target object and the imaging device would be obtained according to a plurality of corrected laser images corresponding to first and second laser images so as to improve the ability of the autonomous vehicle of the combined base device to accurately and robustly calculate distances to objects in its environment and reliably navigate the environment effectively while avoiding obstacles. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. in view of Asatani et al. in view of Chamberlain et al. with Hickerson et al. to obtain the invention as specified in claim 17.
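The image-combination idea quoted above from Hickerson et al. (differencing laser-on/laser-off pairs and then combining several difference images to suppress the background) can be sketched as follows. This is an illustration of the general technique under assumed 8-bit grayscale frames, not Hickerson et al.'s actual implementation.

```python
import numpy as np

def combined_difference(frame_pairs: list[tuple[np.ndarray, np.ndarray]]) -> np.ndarray:
    """Average several (laser-on, laser-off) difference images so that the
    reflected laser line reinforces from frame to frame while uncorrelated
    background noise averages out."""
    diffs = [
        np.clip(on.astype(np.int16) - off.astype(np.int16), 0, 255)
        for on, off in frame_pairs
    ]
    return np.mean(diffs, axis=0).astype(np.uint8)
```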
Claims 24 and 25 are rejected under 35 U.S.C. 103 as being unpatentable over Jeong et al. U.S. Publication No. 2019/0293765 A1 in view of Izawa et al. U.S. Publication No. 2018/0289225 A1 in view of Gil et al. U.S. Publication No. 2018/0353042 A1 as applied to claims 1 and 6 above, and further in view of Shen et al. U.S. Publication No. 2022/0287533 A1.
- With regards to claim 24, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 1, wherein the first laser image is captured by irradiating the target object with the first laser light with the first predetermined wavelength emitted by the left line laser emitter at a first angle and the light with the second predetermined wavelength emitted by the light-compensating device, (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085, 0087 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202 and 0209 - 0216, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) and the second laser image is captured by irradiating the same target object with the second laser light with the first predetermined wavelength emitted by the right line laser emitter at a second angle and the light with the second predetermined wavelength emitted by the light-compensating device; (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 4 ¶ 0067 and 0078 - 0080, Pg. 5 ¶ 0083, 0085, 0087 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202, Pg. 11 ¶ 0229 - 0234, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) wherein the first angle is an angle between a direction of the first laser light emitted by the left line laser emitter and an optical axis of the imaging device, (Jeong et al., Figs. 2, 3, 7 - 12, 17 & 19, Pg. 1 ¶ 0013 - 0014, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0126 - 0127, Pg. 10 ¶ 0198 - 0202) and the second angle is an angle between a direction of the second laser light emitted by the right line laser emitter and the optical axis of the imaging device; (Jeong et al., Figs. 2, 3, 7 - 12, 17 & 19, Pg. 1 ¶ 0013 - 0014, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0126 - 0127, Pg. 10 ¶ 0198 - 0202) wherein the obtaining the distance between the target object and the imaging device based on the first laser image and the second laser image comprises: calculating coordinates, relative to the imaging device, of points at which the first laser light and the second laser light irradiate the target object at the first angle and the second angle, respectively, based on the first laser image and the second laser image. (Jeong et al., Figs. 20, 21, 25, 26, 29 & 30, Pg. 3 ¶ 0064, Pg. 4 ¶ 0067 - 0068 and 0078 - 0080, Pg. 
8 ¶ 0150 - 0153 and 0157 - 0160, Pg. 9 ¶ 0170 - 0177, Pg. 10 ¶ 0198 - 0201 and 0209 - 0216, Pg. 11 ¶ 0228 - 0234, Pg. 12 ¶ 0260 - 0262, Pg. 13 ¶ 0276) Jeong et al. fail to disclose expressly calculating three-dimensional coordinates. Pertaining to analogous art, Shen et al. disclose irradiating the target object with the first laser light with the first predetermined wavelength emitted by the left line laser emitter at a first angle, (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) and irradiating the same target object with the second laser light with the first predetermined wavelength emitted by the right line laser emitter at a second angle; (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) wherein the first angle is an angle between a direction of the first laser light emitted by the left line laser emitter and an optical axis of the imaging device, (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) and the second angle is an angle between a direction of the second laser light emitted by the right line laser emitter and the optical axis of the imaging device; (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) wherein the obtaining the distance between the target object and the imaging device based on the first laser image and the second laser image comprises: calculating three-dimensional coordinates, relative to the imaging device, of points at which the first laser light and the second laser light irradiate the target object at the first angle and the second angle, respectively. (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0012 - 0013, 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0041 - 0049, Pg. 4 ¶ 0085 - 0087, Pg. 5 ¶ 0098, 0101 - 0108 and 0110, Pg. 6 ¶ 0120 - 0124) Jeong et al. in view of Izawa et al. in view of Gil et al. and Shen et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles and determine distances to the obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Shen et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Shen et al. applied to a comparable device. Calculating three-dimensional coordinates of points at which the first laser light and the second laser light irradiate the target object at the first angle and the second angle, respectively, as taught by Shen et al., would enhance the combined base device by enabling coordinates of obstacles in three-dimensional space to be determined in addition to distances to the obstacles so as to expand the information it has regarding the positioning of obstacles in its environment and thereby improve its ability to reliably and robustly navigate its environment efficiently while avoiding obstacles. 
Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that they may calculate the distance information related to an object through various methods including a triangulation method, and that distance information related to an object may be displayed at a position of the object in an image, see at least figure 26, page 8 paragraphs 0151 - 0157 and page 12 paragraphs 0260 - 0262 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that three-dimensional coordinates of points at which the first laser light and the second laser light irradiate the target object would be calculated so as to improve the ability of the autonomous vehicle of the combined base device to reliably and robustly navigate its environment efficiently while avoiding obstacles. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Shen et al. to obtain the invention as specified in claim 24.
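The triangulation underlying this rationale can be illustrated with a short sketch. The geometric conventions (a pinhole camera at the origin, the emitter offset along +X by a known baseline, and the laser sheet tilted inward by the stated angle) and every parameter name are assumptions of the sketch rather than details from Jeong et al. or Shen et al.

```python
import math

def triangulate_point(u: float, v: float, angle_rad: float, baseline_m: float,
                      fx: float, fy: float, cx: float, cy: float):
    """Triangulate the 3-D camera-frame coordinates of the surface point a
    line laser illuminates, from its pixel position (u, v).

    Assumed geometry: pinhole camera at the origin looking along +Z; the
    emitter sits `baseline_m` metres away along +X and its sheet of light is
    tilted inward by `angle_rad` relative to the optical axis, so every beam
    point satisfies (X - baseline) + Z * tan(angle) = 0.
    """
    xn = (u - cx) / fx                  # normalized image coordinates
    yn = (v - cy) / fy
    denom = xn + math.tan(angle_rad)
    if abs(denom) < 1e-9:
        raise ValueError("viewing ray is parallel to the laser sheet")
    z = baseline_m / denom              # depth along the optical axis
    return xn * z, yn * z, z            # (X, Y, Z) relative to the camera
```

The same routine would be invoked with the first angle for points lit by the left line laser and with the second angle (and the corresponding baseline sign) for points lit by the right line laser.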
- With regards to claim 25, Jeong et al. in view of Izawa et al. in view of Gil et al. disclose the method according to claim 6, wherein the first laser image is captured by irradiating the target object with the first laser light with the first predetermined wavelength emitted by the left line laser emitter at a first angle and the light with the second predetermined wavelength emitted by the light-compensating device, (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 3 ¶ 0064, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085, 0087 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202 and 0209 - 0216, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) the second laser image is captured by irradiating the same target object with a second laser light with the first predetermined wavelength emitted by the right line laser emitter at a second angle and the light with the second predetermined wavelength emitted by the light-compensating device; (Jeong et al., Figs. 2 - 4, 7 - 12, 17 & 19 - 21, Pg. 1 ¶ 0013 - 0014 and 0016, Pg. 2 ¶ 0028, Pg. 4 ¶ 0067 and 0078 - 0080, Pg. 5 ¶ 0083, 0085, 0087 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0125 - 0129, Pg. 10 ¶ 0198 - 0202, Pg. 11 ¶ 0229 - 0234, Pg. 12 ¶ 0255 - 0256, Pg. 13 ¶ 0276 [“On the other hand, an emission timing of the laser module 1000 and an emission timing of the LED module 2000 may overlap each other. That is, at the same time point, a laser beam may be emitted from the laser module 1000, and light may be emitted from the LED module 2000. In this case, the controller 4000 may acquire distance information and type information related to an object included in an image based on the image captured at the same time by the camera module 3000”]) wherein the first angle is an angle between a direction of the first laser light emitted by the left line laser emitter and an optical axis of the imaging device, (Jeong et al., Figs. 2, 3, 7 - 12, 17 & 19, Pg. 1 ¶ 0013 - 0014, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0126 - 0127, Pg. 10 ¶ 0198 - 0202) and the second angle is an angle between a direction of the second laser light emitted by the right line laser emitter and the optical axis of the imaging device; (Jeong et al., Figs. 2, 3, 7 - 12, 17 & 19, Pg. 1 ¶ 0013 - 0014, Pg. 4 ¶ 0078 - 0080, Pg. 5 ¶ 0083, 0085 and 0091 - 0095, Pg. 6 ¶ 0098 - 0104, Pg. 7 ¶ 0119 - 0120 and 0126 - 0127, Pg. 10 ¶ 0198 - 0202) and wherein the obtaining the distance between the target object and the imaging device based on the first laser image and the second laser image comprises: calculating coordinates, relative to the imaging device, of points at which the first laser light with the first predetermined wavelength and the second laser light with the first predetermined wavelength irradiate the target object at the first angle and the second angle, respectively, based on the first laser image and the second laser image. (Jeong et al., Figs. 
20, 21, 25, 26, 29 & 30, Pg. 3 ¶ 0064, Pg. 4 ¶ 0067 - 0068 and 0078 - 0080, Pg. 7 ¶ 0125 - 0129, Pg. 8 ¶ 0150 - 0153 and 0157 - 0160, Pg. 9 ¶ 0170 - 0177, Pg. 10 ¶ 0198 - 0201 and 0209 - 0216, Pg. 11 ¶ 0228 - 0234, Pg. 12 ¶ 0255 - 0256 and 0260 - 0262, Pg. 13 ¶ 0276) Jeong et al. fail to disclose expressly calculating three-dimensional coordinates. Pertaining to analogous art, Shen et al. disclose irradiating the target object with the first laser light with the first predetermined wavelength emitted by the left line laser emitter at a first angle, (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) irradiating the same target object with a second laser light with the first predetermined wavelength emitted by the right line laser emitter at a second angle; (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) wherein the first angle is an angle between a direction of the first laser light emitted by the left line laser emitter and an optical axis of the imaging device, (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) and the second angle is an angle between a direction of the second laser light emitted by the right line laser emitter and the optical axis of the imaging device; (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0044 - 0049, Pg. 4 ¶ 0085 - 0087, 0091 and 0093, Pg. 5 ¶ 0098, 0101 - 0108 and 0110) and wherein the obtaining the distance between the target object and the imaging device based on the first laser image and the second laser image comprises: calculating three-dimensional coordinates, relative to the imaging device, of points at which the first laser light with the first predetermined wavelength and the second laser light with the first predetermined wavelength irradiate the target object at the first angle and the second angle, respectively. (Shen et al., Figs. 2, 3, 10 & 13, Pg. 1 ¶ 0012 - 0013, 0018 and 0021 - 0025, Pg. 2 ¶ 0033 and 0041 - 0049, Pg. 4 ¶ 0085 - 0087 and 0091, Pg. 5 ¶ 0098, 0101 - 0108 and 0110, Pg. 6 ¶ 0120 - 0124) Jeong et al. in view of Izawa et al. in view of Gil et al. and Shen et al. are combinable because they are all directed towards autonomous vehicles that employ image processing systems to detect obstacles and determine distances to the obstacles. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combined teachings of Jeong et al. in view of Izawa et al. in view of Gil et al. with the teachings of Shen et al. This modification would have been prompted in order to enhance the combined base device of Jeong et al. in view of Izawa et al. in view of Gil et al. with the well-known and applicable technique Shen et al. applied to a comparable device. 
Calculating three-dimensional coordinates of points at which the first laser light and the second laser light irradiate the target object at the first angle and the second angle, respectively, as taught by Shen et al., would enhance the combined base device by enabling coordinates of obstacles in three-dimensional space to be determined in addition to distances to the obstacles so as to expand the information it has regarding the positioning of obstacles in its environment and thereby improve its ability to reliably and robustly navigate its environment efficiently while avoiding obstacles. Furthermore, this modification would have been prompted by the teachings and suggestions of Jeong et al. that they may calculate the distance information related to an object through various methods including a triangulation method, and that distance information related to an object may be displayed at a position of the object in an image, see at least figure 26, page 8 paragraphs 0151 - 0157 and page 12 paragraphs 0260 - 0262 of Jeong et al. This combination could be completed according to well-known techniques in the art and would likely yield predictable results, in that three-dimensional coordinates of points at which the first laser light and the second laser light irradiate the target object would be calculated so as to improve the ability of the autonomous vehicle of the combined base device to reliably and robustly navigate its environment efficiently while avoiding obstacles. Therefore, it would have been obvious to combine Jeong et al. in view of Izawa et al. in view of Gil et al. with Shen et al. to obtain the invention as specified in claim 25.
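As a further non-limiting sketch of how such a three-dimensional coordinate calculation might be carried out (the function, parameter names and numeric values below are hypothetical and are not drawn from the claims or from the cited references; a planar pinhole geometry with inward-tilted left and right line laser emitters is assumed):

```python
import math

def triangulate_laser_point(u, v, f, baseline_x, alpha):
    """Estimate the 3-D coordinates, relative to the imaging device, of a
    point at which a line laser irradiates a target object.

    u, v       -- pixel offsets of the detected laser return from the
                  principal point (u horizontal, v vertical)
    f          -- focal length of the imaging device, in pixels
    baseline_x -- signed horizontal offset of the laser emitter from the
                  optical center (negative: left emitter, positive: right)
    alpha      -- angle, in radians, between the direction of the emitted
                  laser light and the optical axis (beam tilted inward)
    """
    inward = -1.0 if baseline_x < 0 else 1.0
    # Camera ray (side view): x = z * u / f.
    # Laser sheet (side view): x = baseline_x - inward * z * tan(alpha).
    # Their intersection gives the depth z of the irradiated point.
    z = baseline_x / (u / f + inward * math.tan(alpha))
    x = z * u / f
    y = z * v / f
    return (x, y, z)

# Hypothetical left and right emitters, each offset 50 mm from the
# optical center and angled 10 degrees to the optical axis.
left_point = triangulate_laser_point(-120.0, 30.0, 800.0, -50.0,
                                     math.radians(10.0))
right_point = triangulate_laser_point(135.0, 30.0, 800.0, 50.0,
                                      math.radians(10.0))
print(left_point, right_point)
```

In this sketch the first and second angles of the claim correspond to the alpha argument for the left and right emitters, respectively; for the left emitter the signed baseline is negative and the beam is tilted toward the optical axis, so the same expression recovers a positive depth for returns imaged on the left half of the sensor.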
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC RUSH whose telephone number is (571) 270-3017. The examiner can normally be reached 9am - 5pm Monday - Friday.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee, can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERIC RUSH/Primary Examiner, Art Unit 2677