DETAILED ACTION
Notice of Pre-AIA or AIA Status
The United States Patent & Trademark Office acknowledges the response for the current application filed on 02/09/2026. The United States Patent & Trademark Office has reviewed the submitted documents and provides the comments below.
AMENDMENTS
Applicant/s submitted arguments and remarks on 01/27/2026. The Examiner acknowledges the arguments and reviewed the claims accordingly.
Applicant/s amended claims 1 – 2, and 5 – 16. Claims 3 – 4 have been cancelled and a new claim 17 has been added. Claims 1 – 2, and 5 – 17 are currently pending.
Response to Arguments
In regards to Argument 1, with respect to the objection to claims 3, 7 and 16, the applicant/s states that claim 3 has been cancelled, rendering the objection to claim 3 moot, and that claims 7 and 16 have been amended to correct the informalities. Therefore, the applicant/s requests withdrawal of the objections to the claims. (See Arguments/Remarks, page 8, dated 01/27/2026)
In response to Argument 1, with respect to the objection to claims 3, 7 and 16, the Examiner states that the applicant/s arguments have been fully considered and are persuasive. Therefore, the Examiner states that the objections to claims 7 and 16 have been withdrawn, and the objection to claim 3 is moot in view of the cancellation of claim 3.
In regards to Argument 2, with respect to the interpretation of claims 1, 3, 7, and 15 under 35 U.S.C. 112(f), the applicant/s states that the claims have been amended. Therefore, the applicant/s requests withdrawal of the interpretation of the claims under 35 U.S.C. 112(f). (See Arguments/Remarks, page 9, dated 01/27/2026)
In response to Argument 2, with respect to the claim interpretation of claims 1, 3, 7, and 15 under 35 U.S.C. 112(f), the Examiner acknowledges the amendments. Therefore, the Examiner states that the interpretation of the claims under 35 U.S.C. 112(f) has been withdrawn.
In regards to Argument 3, with respect to the rejection of claims 7 – 10 and 14 under 35 U.S.C. 112(b) for being indefinite, the applicant/s states that the claims have been amended to overcome the rejections. Therefore, the applicant/s requests withdrawal of the rejection of claims 7 – 10 and 14 under 35 U.S.C. 112(b). (See Arguments/Remarks, page 9, dated 01/27/2026)
In response to Argument 3, with respect to the rejection of claims 7 – 10 and 14 under 35 U.S.C. 112(b) for being indefinite, the Examiner states that the applicant/s arguments have been fully considered and are persuasive. Therefore, the Examiner states that the rejection of claims 7 – 10 and 14 under 35 U.S.C. 112(b) has been withdrawn.
In regards to Argument 4, with respect to the rejection of claim 16 under 35 U.S.C. 101, the applicant/s states that the claim has been amended to overcome the rejection. Therefore, the applicant/s requests withdrawal of the rejection of claim 16 under 35 U.S.C. 101. (See Arguments/Remarks, page 9 - 10, dated 01/27/2026)
In response to Argument 4, with respect to rejection of claim 16 under 35 U.S.C. 101, the Examiner states that the applicant/s arguments have been fully considered and are persuasive. Therefore, the Examiner states that rejection of claim 16 under 35 U.S.C. 101 has been withdrawn.
In regards to Argument 5, with respect to the rejection of claims 1, 3 – 5 and 11 – 16 under 35 U.S.C. 102, the applicant/s states that claim 1 has been amended to incorporate all the limitations of dependent claims 3 and 4. The applicant/s further states that Saito fails to teach all the limitations recited in the amended independent claim 1 and therefore, claim 1 is not anticipated by Saito. The applicant/s states that claims 2 and 5 – 14, as amended, are directly or indirectly dependent on amended independent claim 1. The applicant/s further states that method claim 15 and non-transitory computer-readable medium claim 16 have been amended similarly to claim 1. Therefore, the applicant/s requests withdrawal of the rejection of claims 1, 3 – 5 and 11 – 16 under 35 U.S.C. 102. (See Arguments/Remarks, page 9 - 10, dated 01/27/2026)
In response to Argument 5, with respect to the rejection of claims 1, 3 – 5 and 11 – 16 under 35 U.S.C. 102, the Examiner states that the applicant/s arguments have been fully considered but are rendered moot in view of the amendments made to the claims. The Examiner further states that the amendments changed the scope of the claims. Therefore, the Examiner states that, upon further search and consideration, the following new rejections have been necessitated by the amendments.
In regards to Argument 6, with respect to the rejection of claims 2, and 6 – 8 under 35 U.S.C. 103, the applicant/s states that claims 2 and 5 – 14 are directly or indirectly dependent on claim 1 and therefore overcome the rejections under 35 U.S.C. 102 or 35 U.S.C. 103. Therefore, the applicant/s requests withdrawal of the rejection of the claims under 35 U.S.C. 103. (See Arguments/Remarks, page 12, dated 01/27/2026)
In response to Argument 6, with respect to the rejection of claims 2, and 6 – 8 under 35 U.S.C. 103, the Examiner states that the applicant/s arguments have been fully considered but are rendered moot in view of the amendments made to the independent claims. The Examiner further states that the amendments changed the scope of the claims. Therefore, the Examiner states that, upon further search and consideration, the following new rejections have been necessitated by the amendments.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 13 and 15 – 17 are rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi).
Regarding Claim 1, Saito teaches:
A measuring device comprising:
image sensor (Saito, [0015] “image sensor 26 that photoelectrically converts the subject light”);
circuitry including a CPU that is configured to cause the image sensor to capture an image of an imaging range in water (Saito, [0010] “the photographing apparatus of the present invention includes an imaging unit that captures an image of a subject, a strobe light emitting unit that emits strobe light that illuminates the subject, and an image of the subject imaged by emitting the strobe light as a strobe image”; Saito, [0018] “an image quality correction circuit (W/γ) 34c for performing image quality correction processing”; Saito, [0027] “the CPU 49 executes the underwater shooting mode”); and
measure information regarding a position of a target object in an imaging direction based on the image captured by the image sensor (Saito, [0011] “a distance measuring unit that measures a subject distance, and a subject distance within the range of the strobe light. And a light emission prohibiting means for prohibiting the emission of strobe light when it is determined that the subject distance is outside the range of the strobe light”; Saito, [0040] “Each of the 79 object distances is measured. The subject distance can be obtained from the position of the focus lens when the subject in each divided area is in focus. When the subject distance of each divided region is measured, it is determined whether or not the subject distance of the divided region 75 located at the center of the shooting range is within the strobe light reachable range”);
While Saito teaches a light emission unit (Saito, [0014] “a strobe light emitting unit 22 that emits strobe light as auxiliary light for illuminating the subject”), it fails to explicitly teach:
a light source configured to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths, wherein
the circuitry is further configured to cause the image sensor to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively.
In the same field of endeavor, Satoshi teaches:
a light source configured to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths (Satoshi, [0045] “The mode switching unit 72 appropriately switches the operation in the image processing unit 5 as described later in accordance with the normal mode, the intermediate mode, and the night-vision mode”; Satoshi, [0067] “FIG. 9, the infrared light of wavelength IR1 (780 nm) is irradiated to the subject in the first 1/3 period of one frame. In the next 1/3 period of one frame, infrared light of wavelength IR 2 (940 nm) is irradiated to the subject. In the last 1/3 period of one frame, infrared light of wavelength IR3 (870 nm) is irradiated to the subject”), wherein
the circuitry is further configured to cause the image sensor to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively (Satoshi, [0184] “At step S44, the control unit 7 causes the imaging unit 3 to capture an object. The imaging unit 3 projects the infrared light of the wavelength IR1 associated with R, the infrared light of the wavelength IR2 associated with G, and the infrared light of the wavelength IR3 associated with B. Take a picture of the subject while it is on”; [0186] The frames constituting the video signal generated by the imaging unit 3 imaging the subject in the state where the infrared light of the wavelengths IR1, IR2 and IR3 are respectively projected are referred to as the first frame and the second frame, and the third frame”).
Saito and Satoshi are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito with the teachings of Satoshi by replacing the light source of Saito with the light source of Satoshi, which is configured to switch and emit light of different wavelengths and irradiate the imaging range with the light of the different wavelengths, wherein the circuitry is further configured to cause the image sensor to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively; doing so can efficiently identify objects under different light emitting wavelengths; thus one of ordinary skill in the art would have been motivated to combine the references.
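For illustration only, the combined operation mapped above (switching among light of different wavelengths and capturing one image per wavelength) can be sketched as follows. The class and function names are hypothetical stand-ins and do not appear in either reference; only the example wavelength values are taken from Satoshi [0067].

```python
# Hypothetical sketch of the claimed time-division capture sequence:
# switch the light source to each wavelength in turn, then capture one
# image while that wavelength irradiates the imaging range. The stand-in
# classes return dummy frames so the control flow can be exercised.

WAVELENGTHS_NM = [780, 940, 870]  # example values from Satoshi [0067]

class LightSource:
    def __init__(self):
        self.current_nm = None

    def emit(self, wavelength_nm):
        # Switch to (and emit) light of the given wavelength.
        self.current_nm = wavelength_nm

class ImageSensor:
    def __init__(self, light):
        self.light = light

    def capture(self):
        # A real sensor would return pixel data; here the frame is tagged
        # with the wavelength that was active during the exposure.
        return {"frame_for_nm": self.light.current_nm}

def capture_per_wavelength(light, sensor, wavelengths=WAVELENGTHS_NM):
    """Capture one image per irradiation wavelength, in sequence."""
    images = {}
    for wl in wavelengths:
        light.emit(wl)                  # irradiate with this wavelength
        images[wl] = sensor.capture()   # capture while it is on
    return images
```

A call such as `capture_per_wavelength(light, sensor)` yields one image keyed by each wavelength, mirroring the "respective images ... respectively" limitation.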
Regarding Claim 5, Saito in view of Satoshi teaches the measuring device according to claim 1, wherein the circuitry is further configured to measure a distance to the target object in the imaging direction (Saito, [0011] “a distance measuring unit that measures a subject distance, and a subject distance within the range of the strobe light”).
Regarding Claim 13, Saito in view of Satoshi teaches the measuring device according to claim 1, wherein the circuitry is further configured to cause the light source to move in a case where the target object may not be detected within the imaging range (Saito, [0033] “Strobe light emission is prohibited when there is a lot of marine snow, and strobe light is allowed when there is little marine snow”; Saito, [0034] “the strobe light emission prohibition setting is maintained, but this is maintained, but the power is turned on again in consideration of the case where the user moves from a place where there is a lot of marine snow to a place where there is little”).
Regarding Claim 15, Saito teaches:
A measurement method comprising:
capturing, by an image sensor (Saito, [0015] “image sensor 26 that photoelectrically converts the subject light”), an image of an imaging range in water (Saito, [0010] “the photographing apparatus of the present invention includes an imaging unit that captures an image of a subject, a strobe light emitting unit that emits strobe light that illuminates the subject, and an image of the subject imaged by emitting the strobe light as a strobe image”); and
measuring information regarding a position of a target object in an imaging direction based on the image that was captured (Saito, [0011] “a distance measuring unit that measures a subject distance, and a subject distance within the range of the strobe light. And a light emission prohibiting means for prohibiting the emission of strobe light when it is determined that the subject distance is outside the range of the strobe light”; Saito, [0040] “Each of the 79 object distances is measured. The subject distance can be obtained from the position of the focus lens when the subject in each divided area is in focus. When the subject distance of each divided region is measured, it is determined whether or not the subject distance of the divided region 75 located at the center of the shooting range is within the strobe light reachable range”);
While Saito teaches a light emission unit (Saito, [0014] “a strobe light emitting unit 22 that emits strobe light as auxiliary light for illuminating the subject”), it fails to explicitly teach:
causing a light source to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths, wherein
the capturing is capturing respective images of the imaging range irradiated with the light of the different wavelengths, respectively.
In the same field of endeavor, Satoshi teaches:
causing a light source to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths (Satoshi, [0045] “The mode switching unit 72 appropriately switches the operation in the image processing unit 5 as described later in accordance with the normal mode, the intermediate mode, and the night-vision mode”; Satoshi, [0067] “FIG. 9, the infrared light of wavelength IR1 (780 nm) is irradiated to the subject in the first 1/3 period of one frame. In the next 1/3 period of one frame, infrared light of wavelength IR 2 (940 nm) is irradiated to the subject. In the last 1/3 period of one frame, infrared light of wavelength IR3 (870 nm) is irradiated to the subject”), wherein
the capturing is capturing respective images of the imaging range irradiated with the light of the different wavelengths, respectively (Satoshi, [0184] “At step S44, the control unit 7 causes the imaging unit 3 to capture an object. The imaging unit 3 projects the infrared light of the wavelength IR1 associated with R, the infrared light of the wavelength IR2 associated with G, and the infrared light of the wavelength IR3 associated with B. Take a picture of the subject while it is on”; [0186] The frames constituting the video signal generated by the imaging unit 3 imaging the subject in the state where the infrared light of the wavelengths IR1, IR2 and IR3 are respectively projected are referred to as the first frame and the second frame, and the third frame”).
Saito and Satoshi are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito with the teachings of Satoshi by replacing the light source of Saito with the light source of Satoshi to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths, wherein the capturing is capturing respective images of the imaging range irradiated with the light of the different wavelengths, respectively; doing so can efficiently identify objects under different light emitting wavelengths; thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 16, Saito teaches:
A non-transitory computer-readable medium storing a program (Saito, [0025] “The CPU 49 executes a control program”) configured to cause a measuring device to execute processing of:
causing an image sensor to capture an image of a predetermined imaging range in water (Saito, [0015] “image sensor 26 that photoelectrically converts the subject light”; Saito, [0010] “the photographing apparatus of the present invention includes an imaging unit that captures an image of a subject, a strobe light emitting unit that emits strobe light that illuminates the subject, and an image of the subject imaged by emitting the strobe light as a strobe image”); and
measuring information regarding a position of a target object in an imaging direction based on the image that was captured (Saito, [0011] “a distance measuring unit that measures a subject distance, and a subject distance within the range of the strobe light. And a light emission prohibiting means for prohibiting the emission of strobe light when it is determined that the subject distance is outside the range of the strobe light”; Saito, [0040] “Each of the 79 object distances is measured. The subject distance can be obtained from the position of the focus lens when the subject in each divided area is in focus. When the subject distance of each divided region is measured, it is determined whether or not the subject distance of the divided region 75 located at the center of the shooting range is within the strobe light reachable range”);
While Saito teaches a light emission unit (Saito, [0014] “a strobe light emitting unit 22 that emits strobe light as auxiliary light for illuminating the subject”), it fails to explicitly teach:
causing a light source to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths, wherein
the image sensor is caused to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively.
In the same field of endeavor, Satoshi teaches:
causing a light source to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths (Satoshi, [0045] “The mode switching unit 72 appropriately switches the operation in the image processing unit 5 as described later in accordance with the normal mode, the intermediate mode, and the night-vision mode”; Satoshi, [0067] “FIG. 9, the infrared light of wavelength IR1 (780 nm) is irradiated to the subject in the first 1/3 period of one frame. In the next 1/3 period of one frame, infrared light of wavelength IR 2 (940 nm) is irradiated to the subject. In the last 1/3 period of one frame, infrared light of wavelength IR3 (870 nm) is irradiated to the subject”), wherein
the image sensor is caused to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively (Satoshi, [0184] “At step S44, the control unit 7 causes the imaging unit 3 to capture an object. The imaging unit 3 projects the infrared light of the wavelength IR1 associated with R, the infrared light of the wavelength IR2 associated with G, and the infrared light of the wavelength IR3 associated with B. Take a picture of the subject while it is on”; [0186] The frames constituting the video signal generated by the imaging unit 3 imaging the subject in the state where the infrared light of the wavelengths IR1, IR2 and IR3 are respectively projected are referred to as the first frame and the second frame, and the third frame”).
Saito and Satoshi are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito with the teachings of Satoshi by replacing the light source of Saito with the light source of Satoshi to switch and emit light of different wavelengths, and irradiate the imaging range with the light of the different wavelengths, wherein the image sensor is caused to capture respective images of the imaging range irradiated with the light of the different wavelengths, respectively; doing so can efficiently identify objects under different light emitting wavelengths; thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 17, Saito in view of Satoshi teaches the measuring device according to claim 1, wherein
the light source is further configured to switch and emit the light of the different wavelengths at different timings, and irradiate the imaging range with the light of the different wavelengths at the different timings (Satoshi, [0009] “An imaging step for imaging a subject in a second mode for imaging in each of the third sections for imaging in a state in which the third infrared light is projected, and an imaging signal output from the imaging unit And a video output step of generating and outputting a video signal of a predetermined signal system based on the second mode, the exposure time in the first section switched from the second mode to the first mode is the second time. Exposure time or more in one interval in the Providing a control program of an image pickup apparatus, characterized in that time”; Satoshi, [0067] “the infrared light of wavelength IR1 (780 nm) is irradiated to the subject in the first 1/3 period of one frame. In the next 1/3 period of one frame, infrared light of wavelength IR 2 (940 nm) is irradiated to the subject. In the last 1/3 period of one frame, infrared light of wavelength IR3 (870 nm) is irradiated to the subject. The order of projecting infrared light of the wavelengths IR1 to IR3 is arbitrary”), and
the circuitry is further configured to cause the image sensor to capture the respective images of the imaging range irradiated with the light of the different wavelengths at the different timings, respectively (Satoshi, [0068] “As shown in (b) of FIG. 9, at the timing when the infrared light of the wavelength IR1 is projected, the imaging unit 3 performs the exposure Ex1R having a high correlation with the R light. At the time of projecting infrared light of wavelength IR2, the imaging unit 3 performs exposure Ex1G having high correlation with G light. At the timing at which the infrared light of the wavelength IR3 is projected, the imaging unit 3 performs the exposure Ex1B having a high correlation with the B light”; Satoshi, [0076] “The pre-signal processing in the pre-signal processing unit 52 will be described with reference to FIG. 10A shows an arbitrary frame FmIR1 of video data generated at the timing of projecting infrared light of wavelength IR1. The pixel data of R, B, Gr, and Gb in the frame FmIR1 is attached with a subscript 1 indicating that it is generated in the state of projecting infrared light of the wavelength IR1”; Satoshi, [0077] “(B) of FIG. 10 shows an arbitrary frame FmIR2 of video data generated at the timing of projecting infrared light of wavelength IR2. The pixel data of R, B, Gr, and Gb in the frame FmIR2 is attached with a subscript 2 indicating that it is generated in the state where the infrared light of the wavelength IR2 is projected”; Satoshi, [0078] “(C) of FIG. 10 shows an arbitrary frame FmIR3 of video data generated at the timing of projecting infrared light of wavelength IR3. The pixel data of R, B, Gr, and Gb in the frame FmIR3 is assigned a subscript 3 indicating that it is generated in the state of projecting infrared light of wavelength IR3”).
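The "different timings" limitation mapped above follows Satoshi's scheme of irradiating each wavelength in a successive 1/3 of one frame period ([0067]). A minimal, illustrative timing schedule is sketched below; the function name and the frame-period value used in the example are assumptions, not anything disclosed in the references.

```python
# Illustrative timing schedule: split one frame period into equal
# intervals, one per wavelength, so each wavelength is emitted (and
# captured) at a different timing within the frame.

def irradiation_schedule(frame_period_ms, wavelengths_nm):
    """Return (wavelength, start_ms, end_ms) tuples, one per wavelength."""
    slot = frame_period_ms / len(wavelengths_nm)
    return [(wl, i * slot, (i + 1) * slot)
            for i, wl in enumerate(wavelengths_nm)]
```

For a 30 ms frame and Satoshi's three wavelengths, each wavelength occupies a distinct 10 ms interval, matching the "at the different timings" language of claim 17.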
Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi) further in view of Drazen, David, et al. (Drazen, David, et al. "Toward real-time particle tracking using an event-based dynamic vision sensor." Experiments in Fluids 51.5 (2011): 1465-1469; hereafter referred to as David).
Regarding Claim 2, Saito in view of Satoshi teaches the measuring device according to claim 1, wherein
the image sensor includes a vision sensor (Saito, [0023] “a subject image captured by the CCD image sensor 26 in the photographing mode”) configured to acquire pixel data in accordance with an amount of light incident on each of a plurality of pixels arranged two-dimensionally (Saito, [0017] “The CCD image sensor 26 has millions of unit pixels arranged in a matrix with a photodiode having a color filter as a unit pixel, and generates an analog image signal for one screen from the signal charge obtained for each pixel”).
However, Saito in view of Satoshi fails to explicitly teach:
wherein the image sensor includes a vision sensor configured to acquire pixel data asynchronously in accordance with an amount of light incident on each of a plurality of pixels arranged two-dimensionally.
In the same field of endeavor, David teaches:
wherein the image sensor includes a vision sensor configured to acquire pixel data asynchronously in accordance with an amount of light incident on each of a plurality of pixels arranged two-dimensionally (David, Abstract “A novel camera technology for use in particle tracking velocimetry is presented in this paper. This technology consists of a dynamic vision sensor in which pixels operate in parallel, transmitting asynchronous events only when relative changes in intensity of approximately 10% are encountered with a temporal resolution of 1µs”, page 1466, col. 2, last para, “The DVS has a spatial resolution of 128 x 128 pixels and a temporal resolution of 1µs. Pixels are asynchronous emitters of events, and these events are encoded using the pixel coordinates combined with a timestamp”).
Saito, Satoshi and David are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of David to make the invention that uses a vision sensor to acquire pixel data asynchronously in accordance with an amount of light incident on each of a plurality of pixels arranged two-dimensionally; doing so can efficiently track dense particles with improved processing time (David, Abstract); thus one of ordinary skill in the art would have been motivated to combine the references.
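For illustration only, the asynchronous event emission David describes (pixels independently emitting timestamped events when relative intensity changes by roughly 10%) can be modeled as below. The threshold, data layout, and function name are illustrative assumptions, not the DVS specification.

```python
# Simplified model of an event-based vision sensor pixel array: a pixel
# emits a timestamped event only when its intensity changes by more than
# a relative threshold since the last event it emitted; otherwise it
# stays silent (no frame-synchronous readout).

def dvs_events(samples, threshold=0.10):
    """samples: iterable of (timestamp_us, x, y, intensity) readings.
    Returns (timestamp_us, x, y) events, generated asynchronously per pixel."""
    last = {}     # (x, y) -> intensity at that pixel's last emitted event
    events = []
    for t, x, y, inten in samples:
        ref = last.get((x, y))
        if ref is None or ref == 0 or abs(inten - ref) / ref > threshold:
            events.append((t, x, y))   # event: pixel coordinates + timestamp
            last[(x, y)] = inten
    return events
```

A 5% brightening produces no event, while a 20% change does, which is the contrast with the conventional full-frame CCD readout Saito describes.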
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi) further in view of Fujiwara, Ken (See Machine Translation for WO 2021038753 A1; hereafter referred to as Fujiwara).
Regarding Claim 6, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
wherein the circuitry is further configured to measure a speed of the target object in the imaging direction.
In the same field of endeavor, Fujiwara teaches:
the circuitry is further configured to measure a speed of the target object in the imaging direction (Fujiwara, page 11, para 4, “That is, in two or more images or moving images taken within a predetermined time, the distance traveled, the amount of change in posture, and the like are detected for one detected individual”).
Saito, Satoshi and Fujiwara are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Fujiwara to make the invention that measures a speed of the target object in the imaging direction; doing so can efficiently detect aquatic animals (target objects) in the water (Fujiwara, Abstract); thus one of ordinary skill in the art would have been motivated to combine the references.
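As a sketch of the combination's effect: Saito's ranging yields a subject distance per image, and Fujiwara's change-over-time observation yields a speed along the imaging direction as the change in that distance divided by the elapsed time. The function name is illustrative, not from either reference.

```python
# Speed along the imaging direction from two distance measurements
# (Saito-style subject distances) taken at two capture times.

def speed_in_imaging_direction(d1_m, t1_s, d2_m, t2_s):
    """Return speed in m/s; negative means the object is approaching."""
    return (d2_m - d1_m) / (t2_s - t1_s)
```

For example, a subject measured at 2.0 m and then 1.0 m half a second later is approaching at 2 m/s.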
Claims 7 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi) further in view of Asano et al. (See Machine Translation for WO 2014171052 A1; hereafter referred to as Asano).
Regarding Claim 7, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
the circuitry is further configured to identify a type of the target object based on the image captured by the image sensor, and
measure the information regarding the position of the target object based on the type of the target object that was identified.
In the same field of endeavor, Asano teaches:
the circuitry is further configured to identify a type of the target object based on the image captured by the image sensor (Asano, page 2, summary of the invention, “The distance to the object is calculated based on the length of the object in the captured image and the reference length of the object predetermined in accordance with the type of the object”), and
measure the information regarding the position of the target object based on the type of the target object that was identified (Asano, page 9, para 2, “The object detection unit 1300 extracts the object in the captured image at the request from the distance estimation unit 1200 and calculates the pixel size according to the type of the object and the direction (direction) of the type (the length of the object is The number of pixels indicated), and the position coordinates of the object to the distance estimating unit 1200”; Asano, page 14, para 5, “from an image capturing position to an object in a captured image, the image processing method comprising the steps of: detecting a type of the object in the captured image A predetermined constant obtained by photographing a chart of a predetermined length arranged at a position apart from the imaging position by a predetermined distance, a length of the object in the captured image, And a first distance calculating step of calculating a distance to the object based on a reference length of an object predetermined in accordance with the type detected in the object detecting step”).
Saito, Satoshi, and Asano are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Asano to make the invention that identifies the type of the target object on the basis of the image captured by the imaging unit and measures information regarding a position of the target object; doing so can efficiently detect different types of target objects (Asano, page 2, summary); thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 8, Saito in view of Satoshi further in view of Asano teaches the measuring device according to claim 7, wherein the circuitry is further configured to measure the information regarding the position of the target object based on statistical information for the type of the target object (Asano, page 14, last para, “According to such an image processing method, an image processing apparatus and an image processing program, a reference length (actual length) is previously determined for each type of object and the length of the object in the captured image (the number of pixels Since the distance to the object in the captured image is calculated using the image capturing method, the distance to the object in the captured image can be accurately calculated”; Asano, page 15, para 2, “since the direction of the length used for distance measurement is different for each type of object, by using the length in the direction with little change due to the movement of the object or the like, the distance is calculated more accurately Is possible”).
Claims 9 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi) further in view of Noda et al. (US 20180373942 A1; hereafter referred to as Noda).
Regarding Claim 9, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
wherein the circuitry is further configured to measure information regarding the position of the target object based on a learning result of information regarding a position previously learned for a type of the target object.
In the same field of endeavor, Noda teaches:
wherein the circuitry is further configured to measure information regarding the position of the target object based on a learning result of information regarding a position previously learned for a type of the target object (Noda, [0053] “When a plurality of types of objects are to be detected simultaneously, different neural networks may be trained and used for the respective object types to be detected, or the same neural network may be trained and used. Even when the object to be detected …, different neural networks may be trained for respective types, … and such neural networks may be used in the estimations of the posture or the distance”).
Saito, Satoshi, and Noda are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Noda to make the invention that measures information regarding a position of the target object on the basis of a learning result of information regarding a position previously learned for a type of the target object; doing so can efficiently detect different types of target objects (Noda, [0016]); thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 10, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
wherein the circuitry is further configured to measure information regarding the position of the target object based on a learning result of information regarding a position previously learned regardless of a type of the target object.
In the same field of endeavor, Noda teaches:
wherein the circuitry is further configured to measure information regarding the position of the target object based on a learning result of information regarding a position previously learned regardless of a type of the target object (Noda, [0032] “The detecting function 12 may also be configured to input the entire captured image or a part of the captured image captured by the onboard camera 2 to a neural network having been trained in advance, to obtain only the output of the position of the scanning rectangle, and to further subject the position to non-linear processing performed by a neural network or the like, and to cause the neural network to output likelihood of the object being another object”; [0033] To detect a plurality of types of objects …, the number of variations in the shape or the size of the scanning rectangle may be increased, corresponding to the respective types of objects.”).
Saito, Satoshi, and Noda are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Noda to make the invention that measures information regarding a position of the target object on the basis of a learning result of information regarding a position previously learned regardless of a type of the target object; doing so can efficiently detect different types of target objects (Noda, [0016]); thus one of ordinary skill in the art would have been motivated to combine the references.
Claims 11, 12 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Saito (See Machine Translation for JP 2007189542 A; hereafter referred to as Saito) in view of Satoshi et al. (See Machine Translation for JP 2015126537 A; hereafter referred to as Satoshi) further in view of Shibusawa et al. (US 20150222798 A1; hereafter referred to as Shibusawa).
Regarding Claim 11, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
wherein the circuitry is further configured to cause the light source to temporarily stop emission of the light from the illumination unit in a case where the target object may not be detected within the imaging range.
In the same field of endeavor, Shibusawa teaches:
wherein the circuitry is further configured to cause the light source to temporarily stop emission of the light from the illumination unit in a case where the target object may not be detected within the imaging range (Shibusawa, [0477] “when it is determined by the determiner 15 that the captured image obtained by the second operation of the controller 14 does not include the detection target, since the determining of the light emission intensity (the light emission amount) of the light source 11 during imaging for obtaining the captured image as a light emission intensity to be set is canceled, the imaging at a light emission intensity (a light emission amount) not suitable to determine whether the detection target exists may be stopped”).
Saito, Satoshi, and Shibusawa are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Shibusawa to make the invention that causes the light source to temporarily stop emission of the light from the illumination unit in a case where the target object may not be detected within the imaging range; doing so can efficiently detect target objects/subjects while reducing power consumption needed to illuminate the subject (Shibusawa, [0004]); thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 12, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
wherein the circuitry is further configured to cause the light source to change a wavelength of light emitted from the light source in a case where the target object may not be detected within the imaging range.
In the same field of endeavor, Shibusawa teaches:
wherein the circuitry is further configured to cause the light source to change a wavelength of light emitted from the light source in a case where the target object may not be detected within the imaging range (Shibusawa, [0076] “the controller may be further configured to, when the image area determined by the determiner to include the detection target does not exist, perform the light emission control so that the plurality of light sources emit light in a pre-determined light emission order, and when the image area determined by the determiner to include the detection target exists, perform the light emission control so that a light emission frequency of the light source corresponding to the image area determined by the determiner to include the detection target from among the plurality of light sources is increased”; Shibusawa, [0085] “when the image area determined by the determiner to include the detection target does not exist, perform the light emission control so that the plurality of light sources emit light in a pre-determined light emission order and a turn-on order, and when the image area determined by the determiner to include the detection target exists, perform the light emission control so that a frequency at which the light source corresponding to the image area determined by the determiner to include the detection target from among the plurality of light sources is turned on in a turn-on period corresponding to the image area determined by the determiner to include the detection target is increased”).
Saito, Satoshi, and Shibusawa are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Shibusawa to make the invention that causes the light source to change a wavelength of light emitted from the light source in a case where the target object may not be detected within the imaging range; doing so can efficiently detect target objects/subjects while reducing power consumption needed to illuminate the subject (Shibusawa, [0004]); thus one of ordinary skill in the art would have been motivated to combine the references.
Regarding Claim 14, Saito in view of Satoshi teaches the measuring device according to claim 1, but fails to explicitly teach:
another light source configured to irradiate the imaging range, wherein the circuitry is further configured to cause the another light source to emit light in a case where the target object may not be detected within the imaging range.
In the same field of endeavor, Shibusawa teaches:
another light source configured to irradiate the imaging range, wherein the circuitry is further configured to cause the another light source to emit light in a case where the target object may not be detected within the imaging range (Shibusawa, [0085] “when the image area determined by the determiner to include the detection target does not exist, perform the light emission control so that the plurality of light sources emit light in a pre-determined light emission order and a turn-on order, and when the image area determined by the determiner to include the detection target exists, perform the light emission control so that a frequency at which the light source corresponding to the image area determined by the determiner to include the detection target from among the plurality of light sources is turned on in a turn-on period corresponding to the image area determined by the determiner to include the detection target is increased”).
Saito, Satoshi, and Shibusawa are considered analogous art as they are reasonably pertinent to the same field of endeavor of image processing. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Saito in view of Satoshi with the method of Shibusawa to make the invention include another light source configured to irradiate the imaging range, wherein the circuitry is further configured to cause the another light source to emit light in a case where the target object may not be detected within the imaging range; doing so can efficiently detect target objects/subjects while reducing power consumption needed to illuminate the subject (Shibusawa, [0004]); thus one of ordinary skill in the art would have been motivated to combine the references.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VAISALI RAO KOPPOLU whose telephone number is (571)270-0273. The examiner can normally be reached Monday - Friday 8:30 - 5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
VAISALI RAO KOPPOLU
Examiner
Art Unit 2664
/VAISALI RAO KOPPOLU/Examiner of Art Unit 2664
/XIAO LIU/Primary Examiner, Art Unit 2664