Prosecution Insights
Last updated: April 19, 2026
Application No. 18/695,043

MONITORING SYSTEM, MONITORING METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM STORING PROGRAM

Status: Non-Final OA (§102, §103, §112)
Filed: Mar 25, 2024
Examiner: LU, TOM Y
Art Unit: 2667
Tech Center: 2600 — Communications
Assignee: NEC Corporation
OA Round: 1 (Non-Final)

Grant Probability: 88% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 88% (826 granted / 941 resolved; +25.8% vs TC avg, above average)
Interview Lift: +3.0% (minimal lift, measured across resolved cases with interview)
Avg Prosecution: 2y 8m typical timeline; 23 applications currently pending
Total Applications: 964 across all art units (career history)
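The headline figures above are simple ratios over the examiner's resolved cases. A minimal sketch reproducing them (variable names are mine, not the dashboard's):

```python
# Reproduce the examiner-intelligence figures from the raw counts above.
granted = 826          # applications granted by this examiner
resolved = 941         # total resolved cases (granted + abandoned)
interview_lift = 3.0   # percentage-point lift observed with an interview

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.0f}%")                    # ~88%
print(f"With interview:    {allow_rate + interview_lift:.0f}%")   # ~91%
```

This assumes, as the footnotes on this page state, that the 91% "with interview" figure is just the career allow rate plus the observed lift.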

Statute-Specific Performance

§101: 12.6% (-27.4% vs TC avg)
§103: 28.7% (-11.3% vs TC avg)
§102: 37.2% (-2.8% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 941 resolved cases.
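Each statute line pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtracting the delta. A small sketch (the dict layout is my own, not the dashboard's data format):

```python
# Recover the implied Tech Center baseline from each examiner rate and its delta.
stats = {
    "§101": (12.6, -27.4),
    "§103": (28.7, -11.3),
    "§102": (37.2, -2.8),
    "§112": (11.6, -28.4),
}
for statute, (rate, delta_vs_tc) in stats.items():
    tc_avg = rate - delta_vs_tc   # examiner rate = TC avg + delta
    print(f"{statute}: examiner {rate}% vs TC avg ~{tc_avg:.1f}%")
```

Notably, all four deltas shown above imply the same ~40% baseline, which may suggest the dashboard compares every statute against a single Tech Center-wide estimate rather than per-statute averages.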

Office Action

Rejections: §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Preliminary Amendment

The amendment filed 03/05/2024 has been entered. Claims 1-18 and 23 have been amended. Claims 19-22 have been cancelled. Claim 24 has been added. Claims 1-18 and 23-24 are pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/04/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner. The IDS submitted on 03/25/2024 is likewise in compliance with 37 CFR 1.97 and is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-18 and 23-24 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 recites the limitation "the same target" in line 13. There is insufficient antecedent basis for this limitation in the claim. Claims 2-11 and 24 are rejected as being dependent upon claim 1. Claim 12 is rejected for the same reason as claim 1.
Claims 13-18 are rejected as being dependent upon claim 12. Claim 23 is rejected for the same reason as claim 1.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 7, 12-14, 18, 23 and 24 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Sharma et al. ("Sharma" hereinafter, U.S. Publication No. 2021/0407108 A1).
As per claim 1, Sharma discloses a monitoring system (abstract: an object tracking multi-camera system), comprising: a plurality of image capturing devices installed at a plurality of locations within a region being monitored (paragraph [0021] & figures 1 & 2: a plurality of cameras 10s-10e are installed with corresponding viewing spaces in a building); at least one memory storing instructions; and at least one processor (computing devices 12 and server 14) configured to execute the instructions to: identify targets being monitored from videos captured by the plurality of image capturing devices (paragraph [0022]: "tracking objects" in different view spaces in a building, and the object is identified through a pattern recognition 152 in the object tracking system 150); determine whether, of the identified targets being monitored, the same target being monitored is in the videos of the plurality of image capturing devices captured in the same time span (paragraph [0020]: "the fidelity manager module 158 ensures that objects are associated with the correct tracking identifications by performing pattern/facial recognition at such predetermined time intervals. In one implementation, the predetermined time interval may be based on various other factors, such as how many objects are in various view areas 130-138"; paragraphs [0021]-[0022]: once an object, such as a person, John Doe, is identified, the person will be tracked by comparing the person with different images taken by the different cameras at "different angles, distances"); and select, based on a predetermined condition, at least one image capturing device from the image capturing devices that have captured videos including the same target being monitored, and control image capturing of the selected image capturing device or feeding of video from the selected image capturing device (paragraphs [0013] & [0018]: if the object/person has not been identified, at least one high-resolution image is required for pattern recognition, and once the facial pattern recognition is performed, the images from the cameras can be tracked with low-resolution images).

As per claim 2, Sharma discloses wherein the at least one processor is further configured to execute the instructions to, if the video includes the target being monitored, control the selected image capturing device so as to change an image quality of the video (as explained above, when the facial pattern recognition is performed, the tracking video will be captured in a lower resolution).

As per claim 3, Sharma discloses wherein the at least one processor is further configured to execute the instructions to identify a type of a person included in the video, and change the image quality in accordance with the type of the person in the video captured by the selected image capturing device (as explained above in paragraph [0013], if the person's facial recognition has not been performed, the image resolution will be acquired at a higher level, and if no facial recognition is required, the image resolution will be acquired at a lower level to reduce use of computation resources. The examiner notes a type of person may be the type of person that was previously identified).

As per claim 7, Sharma discloses wherein the at least one processor is further configured to execute the instructions to, if a plurality of image capturing devices have been selected, perform control that causes different processes to be performed in the selected plurality of image capturing devices (as explained above, for a person to be identified/pattern recognized, one camera needs to acquire image data of the person in a high resolution, while other cameras can acquire image data in a lower resolution).

As per claim 12, see the explanation for claim 1. As per claim 13, see the explanation for claim 2. As per claim 14, see the explanation for claim 3. As per claim 18, see the explanation for claim 7.

As per claim 23, see the explanation for claim 1; the examiner notes Sharma's system is a computer-like system, which inherently includes a non-transitory computer-readable medium.

As per claim 24, Sharma discloses wherein the region being monitored is a healthcare facility, targets being monitored are persons to be checked by a staff of the healthcare facility, and the at least one processor is further configured to execute the instructions to: identify targets using an image analysis of a machine learning type; acquire a video captured after the image capturing or the feeding of the video has been controlled, and detect a situation of the persons in the acquired video; and make a decision to output notification information to the staff based on the situation of the person (Sharma in paragraph [0021] teaches the facility is a building, which can be any facility including a healthcare facility. Additionally, Sharma in paragraph [0024] teaches an alarm signal may be triggered if a target person is detected and tracked, and the machine learning type can be a neural network in paragraph [0021]).
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-6, 15, 16 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma in view of Mahbub et al. ("Mahbub" hereinafter, U.S. Patent No. 11,381,743 B1).

As per claim 4, Sharma teaches using a multi-camera system for tracking an object/person with different image qualities for person facial recognition. However, Sharma does not explicitly teach setting the image quality of a target region that includes the target being monitored in the video higher than the image quality of a region other than the target region when transmitting the video captured by the image capturing device, by performing control of setting a compression rate of the target region lower than a compression rate of the region other than the target region. Mahbub in figures 2 and 5 teaches a multi-sensor system for tracking objects with different image qualities. In particular, Mahbub in column 33, lines 8-30 teaches using multi-resolution features for object recognition, in which the ROI in a scene is to be captured and stored in high resolution, and background/small objects are captured and stored in low resolution. Sharma and Mahbub are combinable because they are from the same field of endeavor, i.e., object tracking using multi-cameras. At the time of the invention, it would have been obvious to a person of ordinary skill in the art to modify Sharma in light of Mahbub's teaching to use multi-resolution features to capture and store the ROI in high resolution and background/smaller objects in low resolution. One would be motivated to do so because it would further reduce the computation resources in object recognition and tracking.

As per claim 5, Mahbub teaches selecting at least one image capturing device in accordance with an environment surrounding the target being monitored within an image capturing range of each of the plurality of image capturing devices, and performing control of transmitting only a video captured by the selected image capturing device (Mahbub at column 34, lines 41-51, teaches the image frames with no ROI are to be discarded).

As per claim 6, see the explanation for claim 5; Mahbub teaches that if no ROI is detected in the images, they are discarded. As per claim 15, see the explanation for claim 4. As per claim 16, see the explanation for claim 5. As per claim 17, see the explanation for claim 6.

Claims 8-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma in view of Hirose Shota ("Hirose" hereinafter, JP 2021087103 A; a copy of the translation is attached herein).

As per claim 8, Sharma teaches using a multi-camera system for identifying and tracking an object/person.
However, Sharma does not explicitly teach "calculate a risk level based on a condition set in advance for the person being monitored and the situation of the person being monitored; and output notification information to a predetermined notification recipient based on the risk level". Hirose in paragraphs [0034]-[0035] teaches: "The monitoring target person information recording unit 107 stores the monitoring target person information. The monitored person information in this embodiment is information for explaining the monitored person, such as the identification ID of the monitored person. The monitoring target person information is preset by the administrator of this system … The notification determination unit 108 (judgment unit) is based on the human body information acquired from the human body identification unit 106, the walking behavior information acquired from the walking behavior acquisition unit 105, and the monitoring target person information acquired from the monitoring target person information recording unit 107. The degree of risk is calculated according to the notification method determination rule. Then, the notification level is determined based on the degree of risk. The notification method determination rule is a determination formula for determining the degree of risk and the notification level".

At the time of the invention, it would have been obvious to a person of ordinary skill in the art to modify Sharma in light of Hirose's teaching to detect the risk level of an elder based on his/her walking behavior/speed in a health care facility and notify the administrator when the walking speed exceeds a certain threshold. One would be motivated to do so because Sharma already teaches tracking/monitoring people in a building using multiple cameras, and Hirose's teaching would extend Sharma's tracking system into long-term care health facilities, such as nursing homes.
As per claim 9, Sharma already teaches changing image quality when a facial/pattern recognition is required. As per claim 10, Hirose in paragraphs [0098]-[0102] and figure 8 teaches that if a facility staff 801 is detected with a person 802, the walking behavior acquisition unit 105 will lower the risk level. As per claim 11, as explained above, the risk level is lowered in accordance with facility staff 801.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TOM Y LU, whose telephone number is (571) 272-7393. The examiner can normally be reached Monday - Friday, 9AM - 5PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Matthew Bella, can be reached at (571) 272-7778. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TOM Y LU/
Primary Examiner, Art Unit 2667

Prosecution Timeline

Mar 25, 2024
Application Filed
Jan 30, 2026
Non-Final Rejection — §102, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597133: TRAINING END-TO-END WEAKLY SUPERVISED NETWORKS AT THE SPECIMEN (SUPRA-IMAGE) LEVEL (granted Apr 07, 2026; 2y 5m to grant)
Patent 12591967: DISPLACEMENT ESTIMATION OF INTERVENTIONAL DEVICES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591296: REDUCING POWER CONSUMPTION OF EXTENDED REALITY DEVICES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12573037: LEARNING APPARATUS, LEARNING METHOD, TRAINED MODEL, AND PROGRAM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12564867: METHOD AND DEVICE FOR DETECTING CONTAINERS WHICH HAVE FALLEN OVER AND/OR ARE DAMAGED IN A CONTAINER MASS FLOW (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 91% (+3.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low

Based on 941 resolved cases by this examiner. Grant probability derived from career allow rate.
