Prosecution Insights
Last updated: April 18, 2026
Application No. 18/274,198

MONITORING APPARATUS, MONITORING SYSTEM, MONITORING METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM

Final Rejection — §103
Filed: Jul 25, 2023
Examiner: RUDOLPH, VINCENT M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 2 (Final)
Grant Probability: 44% (Moderate)
Expected OA Rounds: 3-4
Median Time to Grant: 5y 1m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 44% (114 granted / 260 resolved; -18.2% vs TC avg)
Interview Lift: +42.0% (strong), based on resolved cases with interview
Avg Prosecution: 5y 1m (typical timeline); 37 applications currently pending
Total Applications: 297 across all art units (career history)
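The headline figures above are simple derivations from the career counts. A minimal sketch, assuming the interview lift is an additive percentage-point adjustment (as the 44% and 86% figures suggest):

```python
# Career counts shown above (hypothetical recomputation of the dashboard tiles)
granted, resolved = 114, 260

# Career allow rate: 114 / 260 ≈ 43.8%, displayed rounded to 44%
career_allow_rate = round(granted / resolved * 100)

# Interview lift, assumed here to be additive percentage points
interview_lift = 42.0
with_interview = career_allow_rate + interview_lift

print(career_allow_rate)  # 44
print(with_interview)     # 86.0
```

This reproduces both the 44% career rate and the 86% with-interview figure shown in the tiles.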

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 260 resolved cases
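The per-statute deltas are internally consistent: subtracting each delta from its success rate recovers the same Tech Center baseline. A quick check, assuming the deltas are percentage-point differences against a single TC-wide estimate:

```python
# (success rate %, delta vs TC average in percentage points), from the table above
stats = {
    "§101": (12.4, -27.6),
    "§103": (56.5, 16.5),
    "§102": (17.5, -22.5),
    "§112": (10.9, -29.1),
}

# Implied TC average for each statute: rate - delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)  # every statute implies the same 40.0% TC baseline
```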

Office Action

§103
DETAILED ACTION

Response to Arguments

Claims 5-6, 8-11, 17-18, 23 have been cancelled. Claims 1-4, 7, 12-16, 19-21 are currently pending. Applicant’s arguments, see pages 9-11 filed 10/31/2025, with respect to the 35 USC 101 rejection(s) of the amended claim(s) have been fully considered and are persuasive. Therefore, the 35 USC 101 rejection(s) of the amended claim(s) has been withdrawn. Applicant’s arguments, see pages 11-13 filed 10/31/2025, with respect to the 35 USC 102/103 rejection(s) of the amended claim(s) have been fully considered and are persuasive. Therefore, the 35 USC 102/103 rejection(s) of the amended claim(s) has been withdrawn. However, upon further consideration and as necessitated by amendments, a new ground(s) of rejection is made in view of newly found prior art Tani. Further discussion can be found in the prior art rejection below. As such, this action is FINAL.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 7, 12-13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oami (US 20160078883 A1) in view of Fields (US 10055967 B1) and Tani (WO 2014174738 A1, the attached English language translation is used hereinafter as the Official English language translation of this document).
Regarding claim 1, Oami teaches a monitoring apparatus comprising: at least one memory storing instructions; and at least one processor configured to execute the instructions to (Oami, computer with CPU and corresponding memory, [0012, 0031]): detect an occurrence of an abnormal situation based on a sound or heat that is detected by a sensor provided in a monitoring target area (Oami, Fig. 1, detect the occurrence of the abnormal situation based on a sound detected by a sensor provided in the monitoring area (a microphone, within the area monitored by acoustic and visual sensor, detects an “abnormality,” which the time difference determination unit then uses to determine the timing and positioning (cross-referenced with the abnormality in the video information) of the abnormality), [0034, 0042]); acquire an occurrence position of the abnormal situation in the monitoring target area (Oami, Fig. 1, time difference determination unit acquires occurrence position of an abnormal situation in a monitoring target area (the position of abnormal occurrence, as indicated by sound and image data, within “the position monitored by the camera”), [0007-0008, 0034-0035]); analyze a state of a crowd around the occurrence position of the abnormal situation by performing an analysis process on video data of a camera that images the monitoring target area (Oami, Figs. 1-2, crowd action analysis unit analyzes the crowd around the occurrence position of the abnormal situation (“the abnormal state of the crowd from the video,” with the video being of the position of occurrence) based on video data 31 of a camera that images the monitoring target area, [0034-0035, 0042]); and estimate a severity of the abnormal situation based on a result of the analysis (Oami, Figs. 1-2, analysis result integration unit determines level/”index indicating seriousness of the situation,” corresponding to severity estimation, based on the result of the crowd action analysis, and the resulting crowd action determination result includes the “value indicating the degree of abnormal action,” [0053, 0060]).

However, Oami fails to teach the following limitations, which Fields teaches: wherein the at least one processor is further configured to execute the instructions to: determine whether or not the severity is a predetermined threshold or more (Fields, determine whether severity level, based on crowd behavior and/or abnormal, dangerous occurrences, is a predetermined threshold or more, Col. 6, Lines 41-50, Col. 8, Lines 4-9, Col. 10, Lines 48-59, Col. 15, Line 59-Col. 16, Lines 7, 36-45); and output a predetermined signal when the severity is the predetermined threshold or more (Fields, output a predetermined signal (“alert”) when the severity is the predetermined threshold or more (vibrate, visualize, or “sound the alert only when a crowd has at least a certain threshold number of people [indicating higher severity], [and/or] only when a safety concern is above a threshold severity level”), Col. 6, Lines 41-65, Col. 7, Lines 13-44, Col. 19, Lines 16-26).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Oami using the teachings of Fields to include Fields’ comparison of the severity value to a threshold and outputting a signal according to the comparison to Oami’s severity value. Doing so would improve the severity value by providing a threshold and signal, which would be used to identify the danger level of the situation, as well as provide a warning when the situation is sufficiently severe.
However, the combination of Oami and Fields fails to teach the following, which Tani teaches: to estimate a severity of the abnormal situation based on a result of the analysis, wherein the analysis process is executed when the occurrence of the abnormal situation is detected based on the sound or the heat and is not executed before detecting the occurrence of the abnormal situation based on the sound or the heat (Tani, Fig. 5, estimate severity/degree of abnormality based on the result of the crowd analysis, wherein the crowd analysis is executed when the occurrence of the abnormal situation is detected based on the sound (“sound source identification result”) and is not executed before detecting the occurrence of the abnormal situation based on sound (though it can be performed before the occurrence in an alternative embodiment), pgs. 35-36/42).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Oami, as modified by Fields, using the teachings of Tani to include Tani’s estimation of severity of the abnormal situation after the abnormality is detected using sound to Oami’s, as modified by Fields, estimation of severity of the abnormal situation. Doing so would improve estimation of severity of the abnormal situation by providing estimation after the detection using sound, which would be used to only use these computing resources when an abnormal situation has been confirmed.

Regarding claim 7, the combination of Oami, Fields, and Tani teaches the monitoring apparatus according to claim 1, wherein, in multiple embodiments, in the analyzing the state of the crowd, the analysis process on only video data of a camera that images an area including the occurrence position of the abnormal situation among a plurality of the cameras is executed (Oami, Figs. 1-2, crowd action analysis unit includes an analysis process on only video data (video crowd action analysis unit) of the camera that images the area including the occurrence position of the abnormal situation among a plurality of the cameras (“Any number of cameras may be connected to the action analysis device. The crowd action analysis unit 30 may receive a plurality of pieces of video information from one camera,” with the pertinent information being from the camera with the identified abnormal “video event”); similarly, if the occurrence position spans multiple cameras, then only those that captured the abnormal state are analyzed to save computational resources, [0024-0025; 0082]).

Regarding claims 12-13, the rationale provided in the rejection of claim 1 is incorporated herein. In addition, the method of claim 12 and the non-transitory computer-readable medium of claim 13 (Oami, embodied in computer, [0012, 0031]) correspond to the apparatus of claim 1 and perform the steps disclosed herein.

Claim(s) 2-4, 14-16, 19-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Oami in view of Fields and Tani as applied to claim 1 above, and further in view of Misawa (JP2018148402A, the attached English language translation is used hereinafter as the Official English language translation of this JP document as applied in the previous Office Action).

Regarding claim 2, the combination of Oami, Fields, and Tani teaches the monitoring apparatus according to claim 1. However, the combination of Oami, Fields, and Tani fails to teach the following, which Misawa teaches: wherein the at least one processor is configured to execute the instruction to, as the analysis of the state of the crowd, estimate a line of sight of each of people forming the crowd (Misawa, Figs. 10a-b, analyze crowd (group of people) by estimating line of sight for each of the people in the crowd (“determining the gaze direction of the person for each of the person areas…determining that the gaze directions of a plurality of people are directed toward a specific position”) directed towards the occurrence position of the abnormal situation (e.g. the car crash as depicted in Fig. 10a), [0007, 0048-0049, 0052]) and analyze the number of people whose line of sight is directed to a direction of the occurrence position of the abnormal situation (Misawa, Figs. 10a-b, analyze the number of people whose line of sight is directed toward the position of the abnormal occurrence (“when it is detected that the gaze directions (facial orientations) of a predetermined number or more persons match,” they are looking at the abnormality) [0007, 0048-0049, 0052]) or a ratio of the number of people whose line of sight is directed to a direction of the occurrence position of the abnormal situation to the number of people in the crowd.

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Oami, as modified by Fields and Tani, using the teachings of Misawa to include Misawa’s analysis of each person in the crowd’s line of sight, including analyzing the number whose lines of sight are directed towards the position of the abnormal situation, to Oami’s, as modified by Fields and Tani, crowd analysis including the occurrence position of an abnormal situation. Doing so would improve crowd analysis by providing line of sight estimation and number analysis, which would be used to confirm where the occurrence position is, and assess the severity depending on how many are looking towards the abnormal situation.

Regarding claim 3, the combination of Oami, Fields, and Tani teaches the monitoring apparatus according to claim 1.
However, the combination of Oami, Fields, and Tani fails to teach the following, which Misawa teaches: wherein the at least one processor is configured to execute the instructions to, as the analysis of the state of the crowd, recognize a facial expression of each of people forming the crowd (Misawa, Fig. 10a, recognize facial expressions of each of the people forming the crowd, [0014, 0030, 0048]) and analyze the number of people whose recognized facial expression corresponds to a predetermined facial expression (Misawa, analyze the number of people whose recognized facial expression corresponds to a predetermined facial expression (“negative expression”) by considering “if multiple people have negative expressions,” and which expression is “most frequently occurring,” [0037-0038]) or a ratio of the number of people whose recognized facial expression corresponds to the predetermined facial expression to the number of people in the crowd (Misawa, analyze the ratio/proportion of the number of people whose recognized facial expression corresponds to the predetermined facial expression (“negative expression”) to the number of people in the crowd (“determines the proportion of entries of "negative expressions”…among the extracted entries, and determines whether this proportion is equal to or greater than a predetermined threshold value”), [0034-0035]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Oami, as modified by Fields and Tani, using the teachings of Misawa to include Misawa’s analysis of each person in the crowd’s facial expression, including analyzing the number and/or ratio whose facial expressions are of a particular kind, to Oami’s, as modified by Fields and Tani, crowd analysis. Doing so would improve crowd analysis by providing facial expression recognition and number analysis, which would be used to assess the type and severity of the abnormal situation, depending on the intensity and frequency of particular facial expressions.

Regarding claim 4, the combination of Oami, Fields, and Tani teaches the monitoring apparatus according to claim 1. However, the combination of Oami, Fields, and Tani fails to teach the following, which Misawa teaches: wherein the at least one processor is configured to execute the instructions to acquire the occurrence position of the abnormal situation by estimating a generation source of the sound or the heat that is detected by the sensor provided in the monitoring target area (Misawa, detect the direction/generation source of “peak sound” detected by a sensor provided in the monitoring target area (“surveillance camera” and its monitored area), and use the estimated generation source to acquire the occurrence position of the abnormal situation/”problem,” [0066-0069]).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have further modified Oami, as modified by Fields and Tani, using the teachings of Misawa to include Misawa’s acquisition of the occurrence position of the abnormal situation based on estimation of a generation source of sound as detected by a sensor in the monitoring target area, to Oami’s, as modified by Fields and Tani, detection of an abnormal situation based on sound detected by a sensor in the monitoring target area. Doing so would improve acquisition of the occurrence position of the abnormal situation by providing estimation of the generation source of the sound, which would be used to better locate the abnormality.

Regarding claims 14-16, 19-21, the rationale provided in the rejection of claims 2-4 is incorporated herein.
In addition, the method of claims 14-16 and the non-transitory computer-readable medium of claims 19-21 (Oami, embodied in computer, [0012, 0031]) correspond to the apparatus of claims 2-4, and perform the steps disclosed herein.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KEELY G YEARGIN whose telephone number is (571)272-5126. The examiner can normally be reached M-Th 8am-6pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEELY GWYNNE YEARGIN/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Jul 25, 2023: Application Filed
Jul 29, 2025: Non-Final Rejection — §103
Oct 16, 2025: Examiner Interview Summary
Oct 16, 2025: Applicant Interview (Telephonic)
Oct 31, 2025: Response Filed
Jan 26, 2026: Final Rejection — §103
Mar 30, 2026: Response after Final Action

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12525104: SURVEILLANCE SYSTEM AND SURVEILLANCE DEVICE
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12492533: SYSTEM AND METHOD OF CONTROLLING CONSTRUCTION MACHINERY
Granted Dec 09, 2025 (2y 5m to grant)
Patent 12430871: OBJECT ASSOCIATION METHOD AND APPARATUS AND ELECTRONIC DEVICE
Granted Sep 30, 2025 (2y 5m to grant)
Patent 12333853: FACE PARSING METHOD AND RELATED DEVICES
Granted Jun 17, 2025 (2y 5m to grant)
Patent 12321856: METHOD, COMPUTER PROGRAM AND DEVICE FOR EVALUATING THE ROBUSTNESS OF A NEURAL NETWORK AGAINST IMAGE DISTURBANCES
Granted Jun 03, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 44%
With Interview: 86% (+42.0%)
Median Time to Grant: 5y 1m
PTA Risk: Moderate
Based on 260 resolved cases by this examiner. Grant probability derived from career allow rate.
