Prosecution Insights
Last updated: April 19, 2026
Application No. 18/274,633

AREA DETERMINATION APPARATUS, CONTROL METHOD, COMPUTER READABLE MEDIUM, MONITORING SYSTEM, MONITORING METHOD

Final Rejection — §102, §103
Filed
Jul 27, 2023
Examiner
GARCIA, CARLOS E
Art Unit
2686
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
2 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 2m
With Interview: 94%

Examiner Intelligence

Career Allow Rate: 77% (above average; 683 granted / 889 resolved; +14.8% vs TC avg)
Interview Lift: +16.8% (strong; resolved cases with interview)
Typical Timeline: 2y 2m average prosecution; 32 currently pending
Career History: 921 total applications across all art units
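The headline rate above is a straight ratio of the counts shown (683 granted out of 889 resolved). A minimal sketch of that arithmetic; the `allow_rate` helper is illustrative, not part of any real tool:

```python
# Illustrative only: recompute the career allow rate from the counts
# shown on this page (683 granted / 889 resolved).

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

rate = allow_rate(683, 889)
print(f"Career allow rate: {rate:.1f}%")  # 76.8%, displayed as 77%
```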

Statute-Specific Performance

§101: 1.6% (-38.4% vs TC avg)
§103: 49.2% (+9.2% vs TC avg)
§102: 34.3% (-5.7% vs TC avg)
§112: 12.7% (-27.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 889 resolved cases
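The deltas above are stated against the Tech Center average, so subtracting each delta from its statute rate recovers the implied baseline. A quick sketch, assuming the deltas are plain arithmetic differences:

```python
# Illustrative only: rate minus "vs TC avg" delta gives the implied
# Tech Center baseline for each statute.
stats = {
    "§101": (1.6, -38.4),
    "§103": (49.2, +9.2),
    "§102": (34.3, -5.7),
    "§112": (12.7, -27.3),
}
tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(tc_avg)  # every statute implies the same 40.0% baseline
```

Notably, all four deltas point to the same 40% figure, consistent with a single black-line Tech Center estimate in the chart.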

Office Action

§102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments, see pages 7-9, filed 11/03/2025, with respect to the rejection(s) of claim(s) 1-15 rejected under 35 U.S.C. 102(a)(2) as being anticipated by TSUBOTA (JP 2018147015 A) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of newly found prior art applied to address the amended limitations. TSUBOTA is maintained as the primary reference, given that the amended limitations read on the references as modified below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors.
In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1-15 is/are rejected under 35 U.S.C. 103 as being unpatentable over TSUBOTA (JP 2018147015 A) in view of JANES (US 20040042588 A1).

[Claim 1] TSUBOTA discloses (abstract) an area determination apparatus (FIG.1-4) comprising: at least one memory 13 that is configured to store instructions; and at least one processor 12 that is configured to execute the instructions to: acquire three-dimensional facility data (i.e. 3D information from camera – as explained below – cameras obtain images) of a target facility that is constituted by a plurality of constituent components (emphasized phrases are not considered part of the apparatus since they are drawn to a target facility, not the actual structure or functions of the apparatus, and thus are not given patentable weight), the three-dimensional facility data including three-dimensional data of the plurality of constituent components of the target facility (i.e. components can be any of the areas used during monitoring – gaze region, detection region, gaze area, monitoring area, etc. – given that constituent components are not part of the apparatus structure, any 3D data of monitored areas could include any type of internal components of the target facility); determine, as a monitoring area or a non-monitoring area (i.e. monitoring area limited to gaze region – such that any area outside the gaze region is a non-monitored area – to reduce processing and increase speed), a three-dimensional area that is determined based on three-dimensional data of the one or more designated components (i.e. monitoring area limited to gaze area).

According to a first aspect of the present invention for solving the above-described problems, three-dimensional information of a monitoring area is acquired from a plurality of camera images obtained by photographing the monitoring area with at least two cameras that are spaced apart, and the three-dimensional information is obtained. A three-dimensional intrusion detection system that detects an object that has entered the monitoring area based on the image acquisition unit that acquires the plurality of camera images, and based on the plurality of camera images, 3D measurement unit that measures 3D position and outputs 3D information of the monitoring area, and intrusion detection that detects an object that has entered the monitoring area based on the change status of the 3D information A map image that visualizes the three-dimensional information, and a mark image that indicates an object that has entered the monitoring area, and the camera image and the image selected by the user's input operation are generated. A structure in which and a screen generating section for outputting a monitor screen displaying said marked image and at least one image of the map image.

The region setting unit 21 sets a detection region and a gaze region in accordance with a user input operation at the operation input unit 15. Here, the range of the detection area and the gaze area may be individually designated by the user, or the user may designate the range of the detection area and the range of the gaze area may be set by the region setting unit 21 based on the range of the detection area.
According to a first aspect of the present invention for solving the above-described problems, three-dimensional information of a monitoring area is acquired from a plurality of camera images obtained by photographing the monitoring area with at least two cameras that are spaced apart, and the three-dimensional information is obtained. A three-dimensional intrusion detection system that detects an object that has entered the monitoring area based on the image acquisition unit that acquires the plurality of camera images, and based on the plurality of camera images, 3D measurement unit that measures 3D position and outputs 3D information of the monitoring area, and intrusion detection that detects an object that has entered the monitoring area based on the change status of the 3D information and parts, in accordance with the input operation of the user, and the area setting unit for setting a fixation region on the camera image, the map image to visualize the three-dimensional information, and the object that has entered the monitored area And generates to mark images, and a screen generating section for outputting a monitor screen displaying the at least one image the mark image of the camera image and the map image selected by the input operation of the user. The area setting unit sets the measurement area to be a target of the three-dimensional measurement to a range that includes a detection area to be an intrusion detection target and is the same as the gaze area, and the screen generation unit displays at least one image of the map image and the camera image on the monitoring screen with the display range limited to the gaze area. According to this, the monitor can confirm whether or not it is a false detection by a camera image that captures the actual situation of the monitoring area limited to the gaze area that is important in the monitoring work. With the map image that visualizes the three-dimensional information, the monitor can check whether or not the intrusion detection based on the three-dimensional information is normal, and the monitoring work can be performed efficiently. In particular, since the measurement region is set to include the detection region, intrusion detection can be appropriately performed based on the three-dimensional information generated by the three-dimensional measurement, and since the measurement region is set to the same range as the gaze region, only the map image of the gaze area needs to be calculated and displayed, so the load of the three-dimensional information processing can be reduced, the screen display processing can be speeded up, and the cost of the apparatus can be reduced.

However, TSUBOTA fails to explicitly disclose: acquire an input operation on an input screen, the input operation designating an area of an image of the target facility; and determine, as one or more designated components, one or more of the constituent components corresponding to the area designated by the input operation.

JANES teaches (abstract), in a similar field of invention, a camera-type system (FIG.1) capable of processing three-dimensional image data used in a medical facility, including [0034] acquire an input operation on an input screen 28 [0042], the input operation designating an area of an image of the target facility, and determine, as one or more designated components [0039, 0040, 0043], one or more of the constituent components corresponding to the area designated by the input operation ([0042] i.e. based on three-dimensional map – selecting to view a specific device).

[0042] The operator may also select an image section on the display device 28 for viewing a specific portion of the base three-dimensional image exposure, e.g., for viewing the angioplasty device moving through the vein or artery. The image selection module may allow the operator to select more than one image sections.
The image section may correspond to the area where the procedure is taking place. The image selection module may divide up the base three-dimensional image exposures into different image sections which may be indicated on a display of the computing device. In one embodiment of the present invention, the user may select one or a plurality of the image sections for updating. In one embodiment, the user may select all of the image sections for updating. Unlike prior art systems, the updating of the selected imaging sections, even if all of the imaging sections are selected, may occur in real-time or continuously.

Under broadest reasonable interpretation, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to try using an input screen so that an operator could select a device or constituent component to be viewed and monitored with an operation of the medical facility, in order to provide a means for the operator to control how and what three-dimensional image data is prioritized.

[Claim 2] TSUBOTA discloses the area determination apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions further to: determine the one or more of the constituent components, at least a part of which overlaps the area that is designated in the input screen, as the one or more designated components.

According to a first aspect of the present invention for solving the above-described problems, three-dimensional information of a monitoring area is acquired from a plurality of camera images obtained by photographing the monitoring area with at least two cameras that are spaced apart, and the three-dimensional information is obtained. A three-dimensional intrusion detection system that detects an object that has entered the monitoring area based on the image acquisition unit that acquires the plurality of camera images, and based on the plurality of camera images, 3D measurement unit that measures 3D position and outputs 3D information of the monitoring area, and intrusion detection that detects an object that has entered the monitoring area based on the change status of the 3D information and parts, in accordance with the input operation of the user, and the area setting unit for setting a fixation region on the camera image, the map image to visualize the three-dimensional information, and the object that has entered the monitored area And generates to mark images, and a screen generating section for outputting a monitor screen displaying the at least one image the mark image of the camera image and the map image selected by the input operation of the user. The area setting unit sets the measurement area to be a target of the three-dimensional measurement to a range that includes a detection area to be an intrusion detection target and is the same as the gaze area, and the screen generation unit displays at least one image of the map image and the camera image on the monitoring screen with the display range limited to the gaze area.

[Claim 3] TSUBOTA discloses (FIG.2) the area determination apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions further to: determine a three-dimensional area obtained by adding a margin (i.e. gaze area defined as a shape) around the one or more designated components as the monitoring area or the non-monitoring area. In the example shown in FIG. 2, the rectangular gaze area is set so as to share the left and right sides of the camera image, but this gaze area can be set at an arbitrary position on the camera image.
Further, the shape of the gaze area is not limited to a rectangle, and the gaze area can be set to an arbitrary shape.

[Claim 4] TSUBOTA discloses the area determination apparatus according to claim 3, wherein information indicating an association (i.e. by depth map) between an attribute of each constituent component and a value of the margin is stored in a storage device, and wherein the attribute of each constituent component represents a type of each constituent component or characteristics of a substance handled by each constituent component (i.e. three-dimensional information acquired by the three-dimensional measurement, a partial depth map (map image) – such data would contain some type of data representing the type of constituent), wherein the at least one processor is configured to execute the instructions further to: acquire a value of the margin corresponding to the attribute of the one or more designated components from the storage device; and determine a three-dimensional area obtained by adding the margin of the acquired value around the one or more designated components as the monitoring area or the non-monitoring area.

The storage unit 13 stores a camera image input to the image input unit 11, a depth map generated by the control unit 12, and the like. The storage unit 13 stores a program executed by the control unit 12. Next, based on the three-dimensional information acquired by the three-dimensional measurement, a partial depth map (map image) that visualizes the three-dimensional information of the gaze area is generated. Further, based on the position information of the intruding object acquired by intrusion detection, a frame image (mark image) surrounding the intruding object is generated, and image composition is performed to superimpose the frame image on the position of the intruding object in the partial camera image. And the monitoring screen which displays the partial camera image and partial depth map after image composition side by side is generated.

On the monitoring screen in the two-split display mode, the partial camera image 41 and the partial depth map 42 (map image) are displayed side by side on the image display unit 35. The partial camera image 41 is obtained by cutting out the gaze area from the camera image acquired from the camera 1. In this partial camera image 41, an intruding object that has entered the monitoring area is shown, and a frame image 43 (mark image) indicating the intruding object is displayed based on the detection result of the intrusion detection. The partial depth map 42 is obtained by visualizing the three-dimensional information of the gaze area generated by the three-dimensional measurement unit 22 and is displayed in a state limited to the gaze area, like the partial camera image 41.

[Claim 5] TSUBOTA discloses the area determination apparatus according to claim 1, wherein the at least one processor is configured to execute the instructions further to: output area information indicating the determined monitoring area or non-monitoring area.

According to a first aspect of the present invention for solving the above-described problems, three-dimensional information of a monitoring area is acquired from a plurality of camera images obtained by photographing the monitoring area with at least two cameras that are spaced apart, and the three-dimensional information is obtained. A three-dimensional intrusion detection system that detects an object that has entered the monitoring area based on the image acquisition unit that acquires the plurality of camera images, and based on the plurality of camera images, 3D measurement unit that measures 3D position and outputs 3D information of the monitoring area, and intrusion detection that detects an object that has entered the monitoring area based on the change status of the 3D information and parts, in accordance with the input operation of the user, and the area setting unit for setting a fixation region on the camera image, the map image to visualize the three-dimensional information, and the object that has entered the monitored area And generates to mark images, and a screen generating section for outputting a monitor screen displaying the at least one image the mark image of the camera image and the map image selected by the input operation of the user. The area setting unit sets the measurement area to be a target of the three-dimensional measurement to a range that includes a detection area to be an intrusion detection target and is the same as the gaze area, and the screen generation unit displays at least one image of the map image and the camera image on the monitoring screen with the display range limited to the gaze area.

[Claim 6] As applied for claim 1, given the similarities in structures and functions.
[Claim 7] [Claim 12] As for [Claim 2].
[Claim 8] [Claim 13] As for [Claim 3].
[Claim 9] [Claim 14] As for [Claim 4].
[Claim 10] [Claim 15] As for [Claim 5].
[Claim 11] As applied for claim 1, given the similarities in structures and functions.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CARLOS E GARCIA, whose telephone number is (571) 270-1354. The examiner can normally be reached M-Th 9-6pm, F 9-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Zimmerman, can be reached at (571) 272-3059. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Carlos Garcia/
Primary Examiner, Art Unit 2686
11/28/2025

Prosecution Timeline

Jul 27, 2023
Application Filed
Jul 30, 2025
Non-Final Rejection — §102, §103
Nov 03, 2025
Response Filed
Nov 28, 2025
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597310
METHOD AND DEVICES FOR CONFIGURING ELECTRONIC LOCKS
2y 5m to grant Granted Apr 07, 2026
Patent 12594905
CONTROL SYSTEM AND METHOD
2y 5m to grant Granted Apr 07, 2026
Patent 12597305
LOCKING SYSTEM
2y 5m to grant Granted Apr 07, 2026
Patent 12583417
SMART KEY SYSTEM FOR VEHICLE AND METHOD OF CONTROLLING THE SAME
2y 5m to grant Granted Mar 24, 2026
Patent 12579856
ULTRA-WIDEBAND-BASED METHOD FOR ACTIVATING A FUNCTION OF A VEHICLE WITH A PORTABLE USER EQUIPMENT ITEM, ASSOCIATED SYSTEM AND DEVICE FOR ACTIVATING A FUNCTION
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 77%
With Interview: 94% (+16.8%)
Median Time to Grant: 2y 2m
PTA Risk: Moderate
Based on 889 resolved cases by this examiner. Grant probability derived from career allow rate.
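The with-interview figure works out as the baseline grant probability plus the interview lift. A minimal sketch of that arithmetic; the additive model is an assumption on my part, not documented behavior:

```python
# Illustrative only: combine the baseline grant probability with the
# interview lift shown above; the additive model is an assumption.
BASE_GRANT_PCT = 77.0      # career allow rate
INTERVIEW_LIFT_PCT = 16.8  # lift for resolved cases with interview

with_interview = min(BASE_GRANT_PCT + INTERVIEW_LIFT_PCT, 100.0)
print(f"With interview: {with_interview:.0f}%")  # 93.8, displayed as 94%
```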
