Detailed Action
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 01/30/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 12/30/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3 and 7-10 are rejected under 35 U.S.C. 103 as being unpatentable over Mori (US 20220414920 A1) in view of Mizuno (JP 2020021111 A).
Regarding claim 1, Mori teaches A correction device comprising: (“The update processing unit corrects a position of a detection point” Mori, abstract)
processing circuitry performing: to acquire object detection information regarding an object detected by a sensor in a target area, (“The external information sensor 1 observes an object by detecting any point on a surface of the object as a detection point.” Mori, para. [0034] and fig. 2) the object detection information including information on a state-related item related to a state of the object; (“The millimeter wave radar can measure a distance and a relative speed to an object. The distance and the relative speed to the object are measured by, for example, a frequency modulation continuous wave (FMCW) method.” Mori, para. [0036])
to correct the information on the state-related item included in the object detection information on a basis of the object detection information acquired and correction information for correcting the information on the state-related item, (“the update processing unit 36 corrects the position P of the detection point DP with respect to the external information sensor 1 based on the position HP of the candidate point DPH on the object, and updates the track data TD indicating the track of the object based on the position P of the detection point DP with respect to the external information sensor 1 after the correction.” Mori, para. [0119])
and to generate object information that is the corrected object detection information; and (“When the coordinates of the position P of the rear-portion left end on the vehicle C.sub.model2 are determined, a correction amount from the position P of the rear-portion left end on the vehicle C.sub.model2 to the center point in the vehicle C.sub.model2 is accurately determined through use of the width W and the length L of the vehicle C.sub.model2 detected by the external information sensor 1.” Mori, para. [0123])
However, Mori does not teach to output the object information generated.
Mizuno teaches to output the object information generated. (“in the second embodiment, the image C including the object A1 and the movement history information B of the object A1 are input to the learned model 60, and the probability of each type of the object A1 is output.” Mizuno, p. 7, para. [007])
Mori and Mizuno are combinable because both are from the same field of endeavor, object recognition. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mori in light of Mizuno’s outputting of the generated object information. One would have been motivated to do so because it can be useful for quickly and accurately determining the type of an object. (Mizuno, p. 9, para. [003])
Regarding claim 2, Mori teaches wherein the processing circuitry further performs: to calculate first reliability that is reliability of the information on the state-related item included in the object detection information acquired and second reliability that is reliability of the correction information; (“[t]he reliability DOR(1) is set to 1 when the distance is shorter than the determination threshold distance D.sub… the reliability DOR(2) is set to 0 when the distance is shorter than the determination threshold distance D.sub.TH1” Mori, para. [0113]) and (“the position P of the detection point DP on the vehicle C.sub.model2 is identified, and hence the correction amount for correcting the position P of the detection point DP with respect to the external information sensor 1 to the position P of the center point in the vehicle C.sub.model2 is obtained through the position P of the detection point DP on the vehicle C.sub.model2. The position P of the detection point DP with respect to the external information sensor 1 is corrected by correcting the position P of the detection point DP on the vehicle C.sub.model2 to the position P of the center point in the vehicle C.sub.model2 through use of the correction amount.” Mori, para. [0137])
to compare the first reliability and the second reliability calculated with each other, (“a reliability DOR(1) for the candidate point DPH(1) and a reliability DOR(2) for the candidate point DPH(2) are compared to each other, and one of the candidate point DPH(1) or the candidate point DPH(2) is consequently selected” Mori, para. [0100]) and to determine whether or not to correct the information on the state-related item included in the object detection information with the correction information; and
to correct, on a basis of a determination result as to whether or not to correct the information on the state-related item included in the object detection information with the correction information, the information on the state-related item included in the object detection information on a basis of the correction information, and to generate the object information. (“FIG. 15 is a flowchart for illustrating processing that branches when the determination result of the determination processing in Step S22 of FIG. 13 is “Yes”. In Step S51, the temporary setting unit 33 determines whether or not the detection data dd received from the selected external information sensor includes the speed V of the detection point DP. When the temporary setting unit 33 determines that the detection data dd received from the selected external information sensor 1 includes the speed V of the detection point DP, the process proceeds from Step S51 to Step S52.” Mori, para. [0171])
Regarding claim 3, Mori teaches wherein the processing circuitry corrects the information on the state-related item included in the object detection information on a basis of the object detection information acquired, the correction information, and a history of the object detection information acquired, and generates the object information. (“In the example of FIG. 8, the position P of the detection point DP with respect to the external information sensor 1 of detection data DD.sub.before before the correction is identified based on the position HP of the candidate point DPH(1), and is then corrected to the center point in the object as detection data DD.sub.after after the correction. The track data TD of FIG. 7 is corrected based on the position P of the detection point DP with respect to the external information sensor 1 after the correction included in the detection data DD.sub.after.” Mori, para. [0118])
Regarding claim 7, Mori teaches A non-transitory tangible computer readable storage medium storing a correction program for causing a computer to function as the correction device. (“it can be considered that those programs cause the computer to execute procedures or methods of executing the time measurement unit 31,” Mori, para. [0245])
Regarding claim 8, Mori teaches A correction system comprising: the correction device according to claim 1; and the sensor to detect the object present in the target area. (“when a pedestrian and a vehicle exist as the objects around the own vehicle, there is brought about a state in which there simultaneously exist the pedestrian having low reflection intensity to the laser light irradiated from the LIDAR sensor and the vehicle having high reflection intensity thereto. Even under this state, the reflection light reflected from the pedestrian is not absorbed by the reflection light reflected from the vehicle, and the pedestrian can thus be detected.” Mori, para. [0054], area around the vehicle is the claimed target area)
Regarding claim 9, Mori teaches wherein the sensor is a radio wave sensor to detect the object by detecting reflected light that is obtained by light emitted into the target area being reflected by the object. (“The azimuth angle of the object is measured based on phase differences among the respective radio waves received by the plurality of reception antennas.” Mori, para. [0038]) and (“The light receiving unit of the LIDAR sensor has a function of receiving reflected light from an object during a light receiving time period set in advance.” Mori, para. [0047])
Regarding claim 10, Mori teaches wherein the sensor is a camera to image the target area. (“The monocular camera includes an image pickup element… The monocular camera continuously detects absence or presence of an object” Mori, para. [0055])
Claim(s) 5 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Mori and Mizuno, as applied above, and further in view of Weidenbach et al. (US 20220187832 A1), hereinafter referred to as Weidenbach.
Regarding claim 5, Mori teaches wherein there is a plurality of the state-related items, and the processing circuitry corrects the object detection information by selecting, for each of the state-related items, whether to keep the information on the state-related item of the object detection information as the information on the state-related item as it is. (“FIG. 15 is a flowchart for illustrating processing that branches when the determination result of the determination processing in Step S22 of FIG. 13 is “Yes”. In Step S51, the temporary setting unit 33 determines whether or not the detection data dd received from the selected external information sensor includes the speed V of the detection point DP. When the temporary setting unit 33 determines that the detection data dd received from the selected external information sensor 1 includes the speed V of the detection point DP, the process proceeds from Step S51 to Step S52. When the temporary setting unit 33 determines that the detection data dd received from the selected external information sensor 1 does not include the speed V of the detection point DP, the process proceeds from Step S51 to Step S81 described below with reference to FIG. 16.” Mori, para. [0171])
However, the combination of Mori and Mizuno does not teach or to convert the information on the state-related item of the object detection information into information based on the correction information, and generates the object information.
Weidenbach teaches or to convert the information on the state-related item of the object detection information into information based on the correction information, and generates the object information. (“FIGS. 5A-5B illustrate a homography transformation for converting from an image space 502 to a world space 512. FIG. 5B is an example showing distortion correction applied. The homography transformation uses camera distortion correction, in various embodiments. Without the distortion correction, the transformation shown in FIG. 5A includes taking a position (x, y) in an acquired crop row image, augmenting it to (x, y, 1) and multiplying by a 3×3 matrix that is determined based on the height and pitch of the camera, in an embodiment. This calculation provides a new location (X, Y, Z) that, when divided by Z, results with (u, v, 1) where u=X/Z and v=Y/Z, such that u and v are then the coordinates of the same point projected onto a different plane.” Weidenbach, para. [0044])
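For context, the projective mapping quoted from Weidenbach, para. [0044], can be sketched as follows. This is an illustrative sketch only: an image point (x, y) is augmented to (x, y, 1), multiplied by a 3×3 matrix, and the resulting (X, Y, Z) is divided by Z to obtain projected coordinates (u, v). The matrix values below are hypothetical placeholders; in Weidenbach the matrix is determined based on the height and pitch of the camera.

```python
import numpy as np

# Hypothetical 3x3 homography matrix; Weidenbach derives its values
# from the camera's height and pitch (not reproduced here).
H = np.array([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 3.0],
    [0.0, 0.0, 0.5],
])

def project(x: float, y: float) -> tuple[float, float]:
    """Map an image-space point (x, y) to (u, v) on another plane."""
    # Augment (x, y) to (x, y, 1) and multiply by the 3x3 matrix.
    X, Y, Z = H @ np.array([x, y, 1.0])
    # Divide by Z so that u = X/Z and v = Y/Z, as the quotation describes.
    return X / Z, Y / Z

u, v = project(4.0, 6.0)  # -> (12.0, 18.0) with this placeholder matrix
```

The division by Z is what makes this a projective (rather than affine) transformation, which is why the quoted passage augments the 2-D point to three coordinates before multiplying.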
Mori, Mizuno, and Weidenbach are combinable because all are from the same field of endeavor, object recognition. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mori and Mizuno in light of Weidenbach’s converting of information. One would have been motivated to do so because it can improve the reliability or accuracy (hereinafter, “quality”) of parameters for control of an agricultural machine and thereby enhance the control of the agricultural machine. (Weidenbach, para. [0008])
Regarding claim 6, Mori teaches wherein there is a plurality of the state-related items, and the processing circuitry corrects the object detection information by selecting, for each of the state-related items, whether to keep the information on the state-related item of the object detection information as the information on the state-related item as it is, to convert the information on the state-related item of the object detection information into information based on the correction information, (“FIG. 15 is a flowchart for illustrating processing that branches when the determination result of the determination processing in Step S22 of FIG. 13 is “Yes”. In Step S51, the temporary setting unit 33 determines whether or not the detection data dd received from the selected external information sensor includes the speed V of the detection point DP. When the temporary setting unit 33 determines that the detection data dd received from the selected external information sensor 1 includes the speed V of the detection point DP, the process proceeds from Step S51 to Step S52. When the temporary setting unit 33 determines that the detection data dd received from the selected external information sensor 1 does not include the speed V of the detection point DP, the process proceeds from Step S51 to Step S81 described below with reference to FIG. 16.” Mori, para. [0171])
Mizuno teaches information on the state-related item included in the history of the object detection information, and generates the object information (“The learned model 50 outputs the probability for each type of object on condition that the movement history information B is input. For example, as shown in FIG. 1, the probability that the type of the object is “vehicle” is “0.1 (10%)”, the probability that the type of the object is “person” is “0.1 (10%)”, The probability that the type of the object is “animal” is “0.8 (80%)”.” Mizuno, p. 3, para. [002])
However, the combination of Mori and Mizuno does not teach or to convert the information on the state-related item of the object detection information into information.
Weidenbach teaches or to convert the information on the state-related item of the object detection information into information (“FIGS. 5A-5B illustrate a homography transformation for converting from an image space 502 to a world space 512. FIG. 5B is an example showing distortion correction applied. The homography transformation uses camera distortion correction, in various embodiments. Without the distortion correction, the transformation shown in FIG. 5A includes taking a position (x, y) in an acquired crop row image, augmenting it to (x, y, 1) and multiplying by a 3×3 matrix that is determined based on the height and pitch of the camera, in an embodiment. This calculation provides a new location (X, Y, Z) that, when divided by Z, results with (u, v, 1) where u=X/Z and v=Y/Z, such that u and v are then the coordinates of the same point projected onto a different plane.” Weidenbach, para. [0044])
Mori, Mizuno, and Weidenbach are combinable because all are from the same field of endeavor, object recognition. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Mori and Mizuno in light of Weidenbach’s converting of information. One would have been motivated to do so because it can improve the reliability or accuracy (hereinafter, “quality”) of parameters for control of an agricultural machine and thereby enhance the control of the agricultural machine. (Weidenbach, para. [0008])
Allowable Subject Matter
Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 4, Eimiya (JP 2014115119 A) teaches third reliability that is reliability of the information on the state-related item included in the history of the object detection information; (“An object detector that inputs the positions of detected objects into a tracking filter and tracks the objects, thereby detecting the objects around a vehicle comprises: position detecting means for detecting the position of each object; distance difference calculation means for calculating the respective differences between the tracked positions of arbitrary two objects, calculated by the tracking filter, and the object detectors; and position correcting means for determining whether the two objects are the same object on the basis of the speed obtained from the history of the tracked positions of the two objects” Eimiya, abstract)
However, none of the cited prior art, alone or in combination, teaches or suggests the ordered combination of: “to compare the first reliability, the second reliability, and the third reliability calculated with each other, and to determine whether or not to correct the information on the state-related item included in the object detection information with the correction information or the information on the state-related item included in the history of the object detection information; and to correct, on a basis of a determination result as to whether or not to correct the information on the state-related item included in the object detection information with the correction information or the information on the state-related item included in the history of the object detection information, the information on the state-related item included in the object detection information on a basis of the correction information or the information on the state-related item included in the history of the object detection information, and to generate the object information.”
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY, whose telephone number is (571) 270-0809. The examiner can normally be reached Monday through Friday, 9 am to 6 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PARDIS SOHRABY/ Examiner, Art Unit 2664
/JENNIFER MEHMOOD/Supervisory Patent Examiner, Art Unit 2664