Prosecution Insights
Last updated: April 19, 2026
Application No. 18/692,368

METHOD, APPARATUS, SYSTEM AND NON-TRANSITORY COMPUTER READABLE MEDIUM FOR ADAPTIVELY ADJUSTING DETECTION AREA

Non-Final OA §102

Filed: Mar 15, 2024
Examiner: NOH, JAE NAM
Art Unit: 2481
Tech Center: 2400 — Computer Networks
Assignee: NEC Corporation
OA Round: 1 (Non-Final)
Grant Probability: 86% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 2m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 86% — above average (382 granted / 445 resolved; +27.8% vs TC avg)
Interview Lift: -10.0% — minimal (based on resolved cases with interview)
Avg Prosecution: 2y 2m — fast prosecutor (26 currently pending)
Career History: 471 total applications across all art units

Statute-Specific Performance

§101: 14.2% (-25.8% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 31.5% (-8.5% vs TC avg)
§112: 7.8% (-32.2% vs TC avg)

Compared against a Tech Center average estimate • Based on career data from 445 resolved cases

Office Action

§102
DETAILED ACTION

This action is in response to the application filed on 3/15/2024. Claims 1-16 and 18 are pending. Acknowledgment is made of a claim for foreign priority. All of the certified copies of the priority documents have been received.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The references listed on the Information Disclosure Statement submitted on 4/15/2024 have been considered by the examiner (see attached PTO-1449).

Claim Mapping Notation

In this office action, the following notations are used to refer to the paragraph numbers or column numbers and lines of portions of the cited reference:

[0005] (Paragraph number [0005])
C5 (Column 5)
Pa5 (Page 5)
S5 (Section 5)

Furthermore, unless necessary to distinguish from other references in this action, “et al.” will be omitted when referring to the reference.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-16 and 18 are rejected under 35 U.S.C. 102(a)(1) and (a)(2) as being anticipated by Tsunematsu (US 20160142680 A1).

1. A method executed by a computer for adaptively adjusting a detection area of an image to be captured by an image capturing device including:

detecting appearances of one or more persons in the detection area from each of a plurality of input images previously captured by the image capturing device over a period of time;

“[0040] The monitoring target may be at least one of a moving body, a human body, and a face of a subject. Alternatively the monitoring target may be any moving object, such as a vehicle for example, other than the moving human body as described above.”

“[0046] The encoding unit 122 encodes the digital image signal input from the image acquisition unit 121, by setting a framerate and performing compression for network distribution.”

generating a first map corresponding to the detection area based on the respective appearances of the one or more persons,

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

wherein the first map comprises a first measure of the appearances of the one or more persons detected in each of a plurality of portions of the detection area across the plurality of the input images;

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

determining if a ratio of unutilized portions to the plurality of portions of the detection area exceeds a threshold ratio, wherein each of the unutilized portions is a portion of the plurality of portions of the detection area associated with no first measure of appearances or a first measure of appearances being zero, indicating no appearances of the one or more persons were detected in that portion across the plurality of input images; and

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

Note: In the reference, the stated score determines a ratio of unused portion of the image.

adjusting the detection area such that a focus area within the detection area which comprises at least a part of utilized portions of the plurality of portions is positioned at or near to a center of the adjusted detection area of the image to be captured by the image capturing device in response to the determination.

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14.
More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

2. The method of claim 1, further including:

generating, from each of the plurality of input images, more than one second map,

“[0040] The monitoring target may be at least one of a moving body, a human body, and a face of a subject. Alternatively, the monitoring target may be any moving object, such as a vehicle for example, other than the moving human body as described above.”

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

wherein each of more than one second map corresponds to the detection area and comprises a different second measure of the appearances of the one or more persons detected in each of the plurality of portions of the detection area from the each of the plurality of input images based on at least one of a facial feature, a body part, a characteristic or a motion of the one or more persons, and the

“[0040] The monitoring target may be at least one of a moving body, a human body, and a face of a subject. Alternatively, the monitoring target may be any moving object, such as a vehicle for example, other than the moving human body as described above.”

“[0109] The score is the number of human body detections according to the present exemplary embodiment. Alternatively, the score may be the number of moving body detections or the number of facial recognitions.”

“[0119] In the score tabulation in step S14 described above, the scores obtained with the camera orientations are simply merged. For example, in the score tabulation, an image analysis result smaller than a predetermined size and a facial recognition result with likelihood lower than predetermined likelihood may be filtered out. Thus, the effect caused by a faulty image analysis result becomes less critical. Information, such as a position of the monitoring target in the captured image, as well as the size and the detected likelihood of the monitoring target, may be weighted, and the score may be calculated further taking such information into consideration.”

generating the first map includes determining the first measure of the appearances of the one or more persons in the first map detected in each of the plurality of portions of the detection area across the plurality of the input images based on the respective second measures of the appearances of the one or more persons detected in the each of the plurality of portions of the detection area in relation to the more than one second map generated from the each of the plurality of input image.

“[0040] The monitoring target may be at least one of a moving body, a human body, and a face of a subject. Alternatively the monitoring target may be any moving object, such as a vehicle for example, other than the moving human body as described above.”

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

3.
The method of claim 2, further including:

determining the second measure of the appearances of the person based on a count of the appearances of the one or more persons detected in the each of the plurality of portions of the detection area from the each of the plurality of input images based on the at least one of the body part, the characteristic or the motion of the one or more persons, and

“[0040] The monitoring target may be at least one of a moving body, a human body, and a face of a subject. Alternatively the monitoring target may be any moving object, such as a vehicle for example, other than the moving human body as described above.”

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

a detection weightage pre-configured for an appearance detected based on the at least one of the facial feature, the body part, the characteristic or the motion by the image capturing device.

“[0119] In the score tabulation in step S14 described above, the scores obtained with the camera orientations are simply merged. For example, in the score tabulation, an image analysis result smaller than a predetermined size and a facial recognition result with likelihood lower than predetermined likelihood may be filtered out. Thus, the effect caused by a faulty image analysis result becomes less critical.
Information, such as a position of the monitoring target in the captured image, as well as the size and the detected likelihood of the monitoring target, may be weighted, and the score may be calculated further taking such information into consideration.”

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

4. The method of claim 2, further including:

generating a third map corresponding to the detection area, wherein the third map comprises a third measure of the appearances of the one or more persons detected in each of the plurality of portions of the detection area in the each of the plurality of input images,

“[0110] Next, in step S14, the server apparatus 40 tabulates the scores calculated in step S13 for each of the camera orientations, and the processing proceeds to step S15. FIG. 14 is a diagram illustrating an example where the scores illustrated in FIGS. 11 to 13 each corresponding to a different one of the camera orientations are tabulated. By thus merging the scores obtained for each of the camera orientations, the human body detection frequency map in the monitoring target range can be generated.”

the third measure is a sum of the respective second measures of the appearances of the one or more persons of the more than one second map detected in the each of the plurality of portions of the detection area from the each of the plurality of input images; and

“[0110] Next, in step S14, the server apparatus 40 tabulates the scores calculated in step S13 for each of the camera orientations, and the processing proceeds to step S15. FIG. 14 is a diagram illustrating an example where the scores illustrated in FIGS. 11 to 13 each corresponding to a different one of the camera orientations are tabulated. By thus merging the scores obtained for each of the camera orientations, the human body detection frequency map in the monitoring target range can be generated.”

wherein the generating the first map includes determining the first measure of the appearances of the one or more persons in the first map detected in each of the plurality of portions of the detection area across the plurality of the input images based on the respective third measures of the appearances of the one or more persons detected in the each of the plurality of portions of the detection area in relation to the third map generated from the plurality of input images.

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

“[0110] Next, in step S14, the server apparatus 40 tabulates the scores calculated in step S13 for each of the camera orientations, and the processing proceeds to step S15. FIG. 14 is a diagram illustrating an example where the scores illustrated in FIGS. 11 to 13 each corresponding to a different one of the camera orientations are tabulated. By thus merging the scores obtained for each of the camera orientations, the human body detection frequency map in the monitoring target range can be generated.”

5. The method of claim 1, further including:

determining if each of the at least a part of the utilized portions is associated with a measure of the appearances of the one or more persons that is equal or greater than a threshold measure of the appearances of the one or more persons.

“[0101] Thus, the server apparatus 40 generates a human body detection frequency map (a map indicating an area, in the monitoring target range, where a human body detection frequency is high) in the monitoring target range based on the image analysis result obtained by the camera 20, and sets the preset information suitable for monitoring and image analysis based on the human body detection frequency map.”

6. The method of claim 5, further including:

determining a highest measure of the appearances of the one or more persons among the first measures of the appearances of the one or more persons generated in the first map; and

“[0116] Then, in step S17, the server apparatus 40 determines an image capturing order, which is one of the setting items for the preset tour, based on the score tabulation result obtained in step S14. The image capturing order is set in such manner that the image capturing is performed in descending order of the score from an area with a highest score. The image capturing order may be registered in advance.”

calculating the threshold measure of the appearances of the person based on the highest measure of the appearances of the one or more persons.
“[0132] For example, in a case where the score is set to be higher in an area with a higher monitoring target detection frequency, the preset information is determined in such a manner that the tour image capturing is performed on a path where a monitoring target (for example, a person) often passes by and the like. Thus, the crowdedness checking, monitoring of the action of a suspicious person, and the like can be appropriately performed. On the other hand, in a case where the score is set to be high for an area with a low monitoring target detection frequency, the preset information is determined in such a manner that the preset tour image capturing is performed on a location where a person rarely appears and the like.”

7. The method of claim 1, wherein the adjusting the detection area includes at least one of:

rotating the image capturing device around a horizontal and/or vertical axis to change a device angle of the image capturing device in relation to the detection area; and

“[0032]…More specifically, the information related to the preset image capturing position indicates an image capturing angle of view (pan, tilt, and zoom positions).”

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

increasing or decreasing a magnification of the image capturing device such that the focus area takes up a pre-configured center portion around the center of the adjusted detection area of the image to be captured by the image capturing device.

“[0032]…More specifically, the information related to the preset image capturing position indicates an image capturing angle of view (pan, tilt, and zoom positions).”

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

8. The method of claim 1, further including:

calculating an amount of a detection area adjustment required to move the focus area to a center of the detection area; and

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

determining if the amount of the detection area adjustment is greater than a pre-configured minimum adjustment threshold,

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

It is inherent that there will be a minimum amount of adjustment needed in order to move the camera view.

wherein the adjusting the detection area is carried out in response to the determination of the amount of the detection being greater than the pre-configured minimum adjustment threshold.

“[0111] In step S15, the server apparatus 40 determines the angle of view, which is one of the setting items for the preset tour, based on the result of tabulating the scores in step S14. More specifically, a position (area) with the score higher than a predetermined threshold is selected, and the angle of view is selected to set the position as the center of the captured image. The angle of view may be set in such a manner that the selected position is included within a predetermined range from the center of the captured image, instead of strictly matching the center of the captured image.”

It is inherent that there will be a minimum amount of adjustment needed in order to move the camera view.
Regarding claims 9-16 and 18, they recite elements that are at least included in claims 1-8 above but in a different claim form. Therefore, the same rationale for the rejection of the claims applies. Regarding the processor, memory and storage medium in the claims, see the reference [154].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. XIAO (US 20220001891 A1) and Pan et al. (US 20220157081 A1) disclose relevant art related to the subject matter of the present invention.

A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. An extension of time may be obtained under 37 CFR 1.136(a). However, in no event will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAE N. NOH whose telephone number is (571) 270-0686. The examiner can normally be reached on Mon-Fri 8:30AM-5PM. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William Vaughn, can be reached on (571) 272-3922. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JAE N NOH/
Primary Examiner
Art Unit 2481
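For orientation, the logic recited in claim 1 (accumulate per-portion appearance counts into a first map, test the ratio of unutilized portions against a threshold ratio, then re-center the detection area on the utilized portions) can be sketched as follows. This is an illustrative sketch only: the grid size, threshold values, and function names are assumptions, not drawn from the application or from Tsunematsu.

```python
# Hypothetical sketch of claim 1's detection-area adjustment.
# Names and parameters are illustrative assumptions.

def build_first_map(detections, rows, cols):
    """Accumulate per-portion appearance counts across the input images.
    `detections` is an iterable of (row, col) portions where a person
    appeared in some previously captured image."""
    first_map = [[0] * cols for _ in range(rows)]
    for r, c in detections:
        first_map[r][c] += 1
    return first_map

def should_adjust(first_map, threshold_ratio):
    """True when the ratio of unutilized portions (appearance count of
    zero) to all portions exceeds the threshold ratio."""
    cells = [v for row in first_map for v in row]
    return cells.count(0) / len(cells) > threshold_ratio

def focus_center(first_map):
    """Count-weighted centroid of the utilized portions: the point the
    adjusted detection area would be re-centered on (e.g. via pan/tilt)."""
    total = wr = wc = 0
    for r, row in enumerate(first_map):
        for c, v in enumerate(row):
            total += v
            wr += r * v
            wc += c * v
    return wr / total, wc / total
```

For example, detections [(0, 0), (0, 1), (0, 0)] on a 2x2 grid yield the first map [[2, 1], [0, 0]]; half the portions are unutilized, so a threshold ratio of 0.4 would trigger an adjustment toward the count-weighted centroid (0.0, 1/3).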

Prosecution Timeline

Mar 15, 2024
Application Filed
Dec 01, 2025
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604025
METHOD FOR VERIFYING IMAGE DATA ENCODED IN AN ENCODER UNIT
2y 5m to grant Granted Apr 14, 2026
Patent 12593071
ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
2y 5m to grant Granted Mar 31, 2026
Patent 12587679
LOW-LATENCY MACHINE LEARNING-BASED STEREO STREAMING
2y 5m to grant Granted Mar 24, 2026
Patent 12574571
FRAME SELECTION FOR STREAMING APPLICATIONS
2y 5m to grant Granted Mar 10, 2026
Patent 12574529
IMAGE ENCODING AND DECODING METHOD AND APPARATUS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 86%
With Interview: 76% (-10.0%)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 445 resolved cases by this examiner. Grant probability derived from career allow rate.
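The headline figures follow from the career data shown above; note that applying the -10.0% interview lift as a simple additive offset is an assumption about how the tool combines the numbers, not a documented formula:

```python
# Reproducing the dashboard's headline figures from the examiner's
# career data (382 granted of 445 resolved). Treating the interview
# lift as an additive offset is an assumption about the tool's method.
granted, resolved = 382, 445
allow_rate = granted / resolved        # 0.8584... -> displayed as 86%
with_interview = allow_rate - 0.10     # 0.7584... -> displayed as 76%

print(round(allow_rate * 100))         # 86
print(round(with_interview * 100))     # 76
```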
