Prosecution Insights
Last updated: April 18, 2026
Application No. 18/559,045

JUDGMENT DEVICE, JUDGMENT METHOD, AND JUDGMENT PROGRAM

Final Rejection — §102, §103
Filed: Nov 04, 2023
Examiner: RHIM, WOO CHUL
Art Unit: 2676
Tech Center: 2600 — Communications
Assignee: NTT, Inc.
OA Round: 2 (Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 11m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (112 granted / 140 resolved; above average, +18.0% vs TC avg)
Interview Lift: +21.4% (strong) among resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 28 currently pending
Career History: 168 total applications across all art units
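The tiles above are simple ratios, so they can be sanity-checked directly. A minimal sketch, using only the counts shown on this page (the computation itself is assumed, not taken from the tool):

```python
# Recompute the headline career allow rate from the raw counts shown above.
granted = 112    # "112 granted / 140 resolved"
resolved = 140

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # 80%

# The "+18.0% vs TC avg" delta implies a Tech Center baseline of:
tc_avg = allow_rate - 0.180
print(f"Implied TC average: {tc_avg:.0%}")      # 62%
```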

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§102: 23.2% (-16.8% vs TC avg)
§112: 19.0% (-21.0% vs TC avg)
Deltas are shown against the Tech Center average estimate • Based on career data from 140 resolved cases
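Each statute row pairs a rate with a delta against the Tech Center average, so the implied baseline can be back-computed from the figures above. Notably, all four rows resolve to the same 40.0% baseline (a sketch; the subtraction is assumed, not documented by the tool):

```python
# Back out the implied Tech Center average from each (rate, delta) pair above.
rows = {
    "§101": (7.4, -32.6),
    "§103": (47.1, +7.1),
    "§102": (23.2, -16.8),
    "§112": (19.0, -21.0),
}
for statute, (rate, delta) in rows.items():
    tc_avg = rate - delta
    print(f"{statute}: implied TC avg = {tc_avg:.1f}%")  # 40.0% for every row
```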

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendments

Submission dated 02/25/2026 amends claims 1 and 6-7. Claims 1-7 are pending. In view of the amendment to claim 7, the previously set forth claim objection has been withdrawn, and in view of the amendments to claims 1, 6 and 7, the previously set forth 101 rejections have been withdrawn.

Response to Arguments

Applicant's arguments for the 102 rejection have been fully considered but they are not persuasive. On pages 8-9, the applicant argues that Nagaoka as applied does not teach the amended claim language in the independent claims. The examiner disagrees because, as shown below, Nagaoka as applied teaches the amended claim language: estimate a speed difference between the target object and the observation vehicle using a time-series change in a region representing the target object captured in the group of time-series images (see, e.g., pars. 53-54 and 75-77 and FIG. 4 of Nagaoka, which teach estimating the relative speed between the vehicle and the monitored object by using a time-series change in a region representing the monitored object in the images) by dynamically comparing regions of images of the group of time-series images through real-time pixel-by-pixel basis (see, e.g., pars. 53-54, 59-64 and 80-84 and FIGS. 4, 5 and 7, which teach calculating the rate of change in size of the image portion of the monitored object in the images by dynamically comparing a size of the image portion of the monitored object in the previous imaging time to a size of the image portion of the monitored object in the current imaging time, wherein, for example, the width is measured in the image space, i.e., in pixels) and further by translating the dynamically compared regions over a time series into the speed difference (see, e.g., pars. 53-54 and 75-77 and FIG. 
4, which teach translating the rate of change in size into the relative speed), wherein the estimating the speed difference further comprises: determining the time-series change of the region over the time series by calculating a ratio between a size of the region representing the target object at a reference position and a size of the region at a respective time of the time series (see, e.g., pars. 48, 53-54, 63-65, and 80-82, which teach calculating the rate of change in size of the image portion of the monitored object between previous and current imaging times), and estimating the speed difference according to a predetermined speed difference by comparing the determined time-series change of the region with a predetermined time-series change of the region at the predetermined speed difference (see, e.g., pars. 80-82, which teach determining the relative speed, which includes calculating the rate of change by comparing sizes of the image portion of the monitored object to sizes of the images multiplied with the predetermined correlation factors, where the difference in size corresponds to the extent of the speed difference). As such, the examiner finds the above arguments unpersuasive.

On pages 9-10, the applicant argues that Nagaoka in view of Oami and Guarneri does not teach the limitations of claim 2 because the combined teaching would not describe “determining a dangerous state of the target object by estimating a speed difference between the target object and the observation vehicle according to a predetermined speed difference as described in the claim” (see page 10 of the submission). The examiner disagrees because 1) the quoted language describes subject matter recited in claim 1, to which Oami and Guarneri are not applied, and 2) as shown above, the quoted language is disclosed by Nagaoka. For these reasons, the examiner finds the above argument unpersuasive. 
Claim Objections

Claim 1 is objected to because of the following informalities: it recites “A determination device that determines a dangerous state of a target object that is capturable by a camera onboard an observation vehicle is in a dangerous state” in lines 1-3. The examiner suggests deleting “is in a dangerous state” in line 3 so that the claim recites “A determination device that determines a dangerous state of a target object that is capturable by a camera onboard an observation vehicle”. Appropriate correction is required.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1 and 3-7 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US Patent Application Publication No. 2007/0171033 to Nagaoka et al. (hereinafter Nagaoka).

For claim 1, Nagaoka as applied discloses a determination device that determines a dangerous state of a target object that is capturable by a camera onboard an observation vehicle is in a dangerous state (see, e.g., pars. 45-48, 51-59 and 79 and FIGS. 1 and 3-4, which teach that the disclosure is directed to determining whether there is a possibility of contact between the vehicle and an object capturable by a camera mounted on the vehicle), the determination device comprising: a memory (see, e.g., par. 
47); and at least one processor coupled to the memory (see, e.g., pars. 28-33, 47 and 49 and FIG. 1), the at least one processor being configured to: acquire a group of time-series images captured by a camera mounted at the observation vehicle (see, e.g., pars. 45-48 and 51-59 and FIGS. 3-4, which teach acquiring images captured by a camera mounted on the vehicle); estimate a speed difference between the target object and the observation vehicle [[by]] using a time-series change in a region representing the target object captured in the group of time-series images (see, e.g., pars. 53-54 and 75-77 and FIG. 4, which teach estimating the relative speed between the vehicle and the monitored object by using a time-series change in a region representing the monitored object in the images) by dynamically comparing regions of images of the group of time-series images through real-time pixel-by-pixel basis (see, e.g., pars. 53-54, 59-64 and 80-84 and FIGS. 4, 5 and 7, which teach calculating the rate of change in size of the image portion of the monitored object in the images by dynamically comparing a size of the image portion of the monitored object in the previous imaging time to a size of the image portion of the monitored object in the current imaging time, wherein, for example, the width is measured in the image space, i.e., in pixels) and further by translating the dynamically compared regions over a time series into the speed difference (see, e.g., pars. 53-54 and 75-77 and FIG. 4, which teach translating the rate of change in size into the relative speed), wherein the estimating the speed difference further comprises: determining the time-series change of the region over the time series by calculating a ratio between a size of the region representing the target object at a reference position and a size of the region at a respective time of the time series (see, e.g., pars. 
48, 53-54, 63-65, and 80-82, which teach calculating the rate of change in size of the image portion of the monitored object between previous and current imaging times), and estimating the speed difference according to a predetermined speed difference by comparing the determined time-series change of the region with a predetermined time-series change of the region at the predetermined speed difference (see, e.g., pars. 80-82, which teach determining the relative speed, which includes calculating the rate of change by comparing sizes of the image portion of the monitored object to sizes of the images multiplied with the predetermined correlation factors, where the difference in size corresponds to the extent of the speed difference); and determine whether the target object or the observation vehicle is in a dangerous state based on the speed difference (see, e.g., pars. 77-79 and 83-84 and FIG. 4, which teach determining whether there is a possibility of contact between the monitored object and the vehicle).

For claim 3, Nagaoka as applied discloses that the at least one processor is further configured to: calculate, for each time, a ratio between a size of a region representing a target object at a reference position and a size of a region at the time (see, e.g., pars. 80-81, which teach calculating a rate of size change of the monitored object between two images), and compare a time-series change of the ratio with a time-series change of the ratio obtained in advance for each speed difference (see, e.g., pars. 81-82, which teach correlating the size of the object in the image with a plurality of the multiplied images; the examiner interprets each of the multiplied images as representing each speed difference because the size of the multiplied image corresponds to the speed difference) and estimate the speed difference between the target object and the observation vehicle (see, e.g., pars. 77-78, which teach estimating the relative speed). 
For claim 4, Nagaoka as applied discloses that the at least one processor is further configured to: calculate, for each time, a distance to the target object at the time using a reference distance to the target object at a reference position, a size of a region representing the target object at the reference position, and a size of a region at the time (see, e.g., pars. 67-76 and FIG. 4, which teach calculating a distance to the monitored object using the rate of size change and the distance in one of the previous and current images as the reference distance), and estimate the speed difference from a time-series change in the distance (see, e.g., pars. 77-78, which teach estimating the relative speed).

For claim 5, Nagaoka as applied discloses that the at least one processor is further configured to calculate the distance to the target object at the time by using a relational expression represented using the reference distance, a ratio between the size of a region representing the target object at the reference position and the size of a region at the time, and the distance to the target object at the time (see, e.g., pars. 67-76 and FIG. 4, which teach calculating a distance to the monitored object using relational expressions, such as equations 7-12, that use the rate of size change and positional changes in the previous and current images, where one of the previous and current images serves as the reference).

For claim 6, Nagaoka as applied discloses a determination method in a determination device that determines a dangerous state of a target object that is capturable by a camera onboard an observation vehicle or the observation vehicle (see, e.g., pars. 45-48, 51-59 and 79 and FIGS. 
1 and 3-4, which teach that the disclosure is directed to determining whether there is a possibility of contact between the vehicle and an object capturable by a camera mounted on the vehicle), the determination method comprising: acquiring a group of time-series images captured by a camera mounted at the observation vehicle (see, e.g., pars. 45-48 and 51-59 and FIGS. 3-4, which teach acquiring images captured by a camera mounted on the vehicle); estimating a speed difference between the target object and the observation vehicle using a time-series change in a region representing the target object captured in the group of time-series images (see, e.g., pars. 53-54 and 75-77 and FIG. 4, which teach estimating the relative speed between the vehicle and the monitored object by using a time-series change in a region representing the monitored object in the images) by dynamically comparing regions of images of the group of time-series images through real-time pixel-by-pixel basis (see, e.g., pars. 53-54, 59-64 and 80-84 and FIGS. 4, 5 and 7, which teach calculating the rate of change in size of the image portion of the monitored object in the images by dynamically comparing a size of the image portion of the monitored object in the previous imaging time to a size of the image portion of the monitored object in the current imaging time, wherein, for example, the width is measured in the image space, i.e., in pixels) and further by translating the dynamically compared regions over a time series into the speed difference (see, e.g., pars. 53-54 and 75-77 and FIG. 4, which teach translating the rate of change in size into the relative speed), wherein the estimating the speed difference further comprises: determining the time-series change of the region over the time series by calculating a ratio between a size of the region representing the target object at a reference position and a size of the region at a respective time of the time series (see, e.g., pars. 
48, 53-54, 63-65, and 80-82, which teach calculating the rate of change in size of the image portion of the monitored object between previous and current imaging times), and estimating the speed difference according to a predetermined speed difference by comparing the determined time-series change of the region with a predetermined time-series change of the region at the predetermined speed difference (see, e.g., pars. 80-82, which teach determining the relative speed, which includes calculating the rate of change by comparing sizes of the image portion of the monitored object to sizes of the images multiplied with the predetermined correlation factors, where the difference in size corresponds to the extent of the speed difference); and determining whether the target object is in a dangerous state based on the speed difference (see, e.g., pars. 77-79 and 83-84 and FIG. 4, which teach determining whether there is a possibility of contact between the monitored object and the vehicle).

For claim 7, Nagaoka as applied discloses a non-transitory computer-readable storage medium storing a program executable by a computer so as to execute determination processing of determining a dangerous state of a target object that is capturable by a camera onboard an observation vehicle (see, e.g., pars. 45, 47-49, 51-59, and 79 and FIGS. 1 and 3-4, which teach that the disclosure is directed to determining whether there is a possibility of contact between the vehicle and the monitored object), the determination processing including: acquiring a group of time-series images captured by a camera mounted at the observation vehicle (see, e.g., pars. 45-48 and 51-59 and FIGS. 3-4, which teach acquiring images captured by a camera mounted on the vehicle); estimating a speed difference between the target object and the observation vehicle by using a time-series change in a region representing the target object captured in the group of time-series images (see, e.g., pars. 
53-54 and 75-77 and FIG. 4, which teach estimating the relative speed between the vehicle and the monitored object by using a time-series change in a region representing the monitored object in the images) by dynamically comparing regions of images of the group of time-series images through real-time pixel-by-pixel basis (see, e.g., pars. 53-54, 59-64 and 80-84 and FIGS. 4, 5 and 7, which teach calculating the rate of change in size of the image portion of the monitored object in the images by dynamically comparing a size of the image portion of the monitored object in the previous imaging time to a size of the image portion of the monitored object in the current imaging time, wherein, for example, the width is measured in the image space, i.e., in pixels) and further by translating the dynamically compared regions over a time series into the speed difference (see, e.g., pars. 53-54 and 75-77 and FIG. 4, which teach translating the rate of change in size into the relative speed), wherein the estimating the speed difference further comprises: determining the time-series change of the region over the time series by calculating a ratio between a size of the region representing the target object at a reference position and a size of the region at a respective time of the time series (see, e.g., pars. 48, 53-54, 63-65, and 80-82, which teach calculating the rate of change in size of the image portion of the monitored object between previous and current imaging times), and estimating the speed difference according to a predetermined speed difference by comparing the determined time-series change of the region with a predetermined time-series change of the region at the predetermined speed difference (see, e.g., pars. 
80-82, which teach determining the relative speed, which includes calculating the rate of change by comparing sizes of the image portion of the monitored object to sizes of the images multiplied with the predetermined correlation factors, where the difference in size corresponds to the extent of the speed difference); and determining whether the target object is in a dangerous state on the basis of the speed difference (see, e.g., pars. 77-79 and 83-84 and FIG. 4, which teach determining whether there is a possibility of contact between the monitored object and the vehicle).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Nagaoka in view of US Patent Application Publication No. 2025/0037477 to Oami et al. (hereinafter Oami) and further in view of US Patent Application Publication No. 2018/0082133 to Guarneri et al. (hereinafter Guarneri).

For claim 2, Nagaoka as applied teaches that the at least one processor is further configured to: acquire a speed of the observation vehicle at a time at which the group of time-series images is captured (see, e.g., pars. 45 and 67-69 and FIG. 4, which teach detecting a traveling speed of the vehicle); and estimate a speed of the target object from a speed of the observation vehicle and the speed difference (see, e.g., pars. 
67 and 91-94 and FIGS. 4, 8 and 9(a), which teach determining a moving speed of the monitored object), and determine whether the target object or the observation vehicle is in a dangerous state by using the speed of the target object instead of the speed difference (see, e.g., pars. 85-93, which teach determining whether there is a possibility of contact between the vehicle and the monitored object using the speed of the monitored object). While Nagaoka as applied teaches determining a speed of the target object, it does not explicitly teach that the speed of the target object is determined from a speed of the observation vehicle and the speed difference. In the analogous art, Oami teaches determining a speed of the object from the speed of the host vehicle and the speed difference (see, e.g., pars. 103-105 of Oami). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nagaoka to determine the speed of the overtaking vehicle as taught by Oami because doing so would yield the predictable result of only needing to determine the speed of the host vehicle, circumventing the need to separately measure the speed of the object (see MPEP 2143(I)(D)). While Nagaoka in view of Oami teaches determining a dangerous state using the speed of the target object, it does not explicitly teach that the speed of the target object is used instead of the speed difference. Guarneri in the analogous art teaches determining whether the target object or the observation vehicle is in a dangerous state by using the speed of the target object in one example, unlike other examples where the speed difference is used (see, e.g., pars. 110-112 of Guarneri, which teach determining the risk level thresholds as a function of the velocity of the overtaking vehicle). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Nagaoka in view of Oami to use the speed of the monitored object instead of the speed difference as taught by Guarneri because doing so would allow dynamically determining thresholds for determining a dangerous state (see, e.g., par. 110 of Guarneri).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WOO RHIM, whose telephone number is (571) 272-6560. The examiner can normally be reached Mon-Fri, 9:30 am - 6:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Henok Shiferaw, can be reached at 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. 
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WOO C RHIM/
Examiner, Art Unit 2676

/Henok Shiferaw/
Supervisory Patent Examiner, Art Unit 2676
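The anticipation theory in this action rests on one geometric idea: under a pinhole camera model, an object's apparent size is inversely proportional to its distance, so the ratio of region sizes across frames yields a distance estimate, and the change in that distance over time yields the relative (closing) speed. A minimal sketch of that idea, with hypothetical helper names and simplified math (this is not Nagaoka's equations 7-12 and not the claimed method):

```python
# Sketch: estimate relative speed from the time-series change in the
# apparent size of a tracked region (pinhole model: size ∝ 1/distance).

def distance_from_size(ref_distance: float, ref_size: float, size: float) -> float:
    """Distance at the current frame, from a known reference distance and
    the ratio of region sizes (sizes in pixels, distances in meters)."""
    return ref_distance * (ref_size / size)

def relative_speed(ref_distance: float, ref_size: float,
                   sizes: list[float], dt: float) -> float:
    """Average closing speed over a series of region sizes sampled every
    dt seconds. Positive means the object is approaching."""
    distances = [distance_from_size(ref_distance, ref_size, s) for s in sizes]
    return (distances[0] - distances[-1]) / (dt * (len(sizes) - 1))

# Object starts 20 m away at 50 px wide and grows to 100 px over 2 s,
# i.e. it closes from 20 m to 10 m, giving a 5 m/s closing speed.
sizes = [50.0, 62.5, 80.0, 90.0, 100.0]
v = relative_speed(ref_distance=20.0, ref_size=50.0, sizes=sizes, dt=0.5)
print(f"Estimated closing speed: {v:.1f} m/s")  # 5.0 m/s
```

A danger determination of the kind recited in claim 1 would then reduce to comparing this estimate (or a predicted time-to-contact) against a threshold.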

Prosecution Timeline

Nov 04, 2023
Application Filed
Nov 21, 2025
Non-Final Rejection — §102, §103
Feb 25, 2026
Response Filed
Apr 02, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12601667
AUTOMATED TURF TESTING APPARATUS AND SYSTEM FOR USING SAME
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12596134
DEVICE, MOVEMENT SPEED ESTIMATION SYSTEM, FEEDING CONTROL SYSTEM, MOVEMENT SPEED ESTIMATION METHOD, AND RECORDING MEDIUM IN WHICH MOVEMENT SPEED ESTIMATION PROGRAM IS STORED
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12591997
ARRANGEMENT DEVICE AND METHOD
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12586169
Mass Image Processing Apparatus and Method
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12579607
DEMOSAICING METHOD AND APPARATUS FOR MOIRE REDUCTION
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 80%
With Interview: 99% (+21.4%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 140 resolved cases by this examiner. Grant probability derived from career allow rate.
