Prosecution Insights
Last updated: April 19, 2026
Application No. 18/538,401

OBJECT DETECTION DEVICE, OBJECT DETECTION METHOD, AND OBJECT DETECTION PROGRAM

Non-Final OA: §103, §112
Filed
Dec 13, 2023
Examiner
CHEN, JOSHUA NMN
Art Unit
2665
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
1 (Non-Final)
Grant Probability: 85% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% (34 granted / 40 resolved), +23.0% vs TC avg (above average)
Interview Lift: +26.1% (strong), measured across resolved cases with vs. without an interview
Avg Prosecution: 2y 11m (typical timeline)
Currently Pending: 20
Total Applications: 60 across all art units (career history)
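The headline numbers above reduce to simple per-case arithmetic. A minimal sketch follows; the 34 granted / 40 resolved totals come from this panel, but the with/without-interview split is a purely illustrative assumption, since the underlying case data is not shown here.

```python
# Minimal sketch of the arithmetic behind the examiner metrics above.
# The 34 granted / 40 resolved totals come from the panel; the
# with/without-interview split below is a hypothetical assumption.
granted, resolved = 34, 40
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # 85%

# Interview lift: allowance rate among resolved cases that had an
# examiner interview minus the rate among those that did not.
with_interview = (23, 24)      # (granted, resolved) -- hypothetical split
without_interview = (11, 16)   # hypothetical split; sums match 34/40
lift = with_interview[0] / with_interview[1] \
     - without_interview[0] / without_interview[1]
print(f"Interview lift: {lift:+.1%}")           # +27.1% with this split
```

Note that the page's +26.1% figure implies a slightly different split than the illustrative one used here; the exact split is not recoverable from the summary alone.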

Statute-Specific Performance

§101: 18.7% (-21.3% vs TC avg)
§103: 52.0% (+12.0% vs TC avg)
§102: 15.7% (-24.3% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)
Deltas measured against a Tech Center average estimate • Based on career data from 40 resolved cases
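Each "vs TC avg" delta above is simply the examiner's per-statute rate minus the Tech Center average estimate, so the implied TC baseline can be recovered directly. A quick check (numbers taken from the panel; how the per-statute rate itself is defined is not stated on this page):

```python
# Recover the implied Tech Center average for each statute from the
# examiner rate and the "vs TC avg" delta shown above (delta = rate - avg).
examiner_rate = {"101": 18.7, "103": 52.0, "102": 15.7, "112": 12.0}
delta_vs_tc   = {"101": -21.3, "103": +12.0, "102": -24.3, "112": -28.0}

tc_average = {s: round(examiner_rate[s] - delta_vs_tc[s], 1)
              for s in examiner_rate}
print(tc_average)   # every statute implies the same 40.0% TC baseline
```

All four deltas resolve to the same 40.0% baseline, consistent with a single "Tech Center average estimate" being used rather than per-statute baselines.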

Office Action

§103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/13/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Status

Claims 1-17 are pending in the present application. Claims 9-13, 15, and 17 are rejected under 35 U.S.C. 112(b). Claims 1-3, 6-8, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over HISADA (US 2019/0392606 A1, hereinafter Hisada) in view of Gaidon et al. (US 9,443,320 B1, hereinafter Gaidon). Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over Hisada in view of Gaidon and Dye (US 2003/0235338 A1, hereinafter Dye). No prior art rejections were applied to claims 9-13, 15, and 17 due to the multiple 112(b) rejections of those claims. Even under the broadest possible interpretation, it remains unclear how the claims should be interpreted, and a complete search for the claims cannot be achieved. As such, no prior art rejection is made for claims 9-13, 15, and 17.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9-13, 15, and 17 are rejected under 35 U.S.C.
112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 9, the limitation “detect an object by performing object detection on the generated composite image” is indefinite because there are two different composite images in claim 9. It is unclear to the examiner which composite image is being used for object detection.

In addition, regarding claims 9, 15, and 17, the limitation “generate one composite image by compositing the processed object presence area and the generated object presence partial area” is indefinite because there are two different “generated object presence partial area” limitations: at least one “generated object presence partial area” generated in claims 1, 14, and 16 (upon which claims 9, 15, and 17 respectively depend), and another “generated object presence partial area” generated within claims 9, 15, and 17 themselves. It is unclear to the examiner which “generated object presence partial area” is used to generate the composite image.

Claims 9, 15, and 17 recite the limitation “the multiple predicted object presence areas” (lines 4-5 of claim 9, lines 2-3 of claim 15, and lines 3-4 of claim 17). There is insufficient antecedent basis for this limitation in the claims.

For the above reasons, claims 9, 15, and 17 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite. Claims 10-13 are also rejected for depending upon claim 9.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 6-8, 14, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over HISADA (US 2019/0392606 A1, hereinafter Hisada) in view of Gaidon et al. (US 9,443,320 B1, hereinafter Gaidon).

Regarding claims 1, 14, and 16, Hisada discloses:

Claim 1: An object detection device comprising: a memory (Para [0144]: “The secondary storage device 1003 is an example of a non-transitory tangible medium.
Other examples of such a non-transitory tangible medium include a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, and the like connected via the interface 1004.”) configured to store instructions; and a processor (Para [0148]: “Further, some or all components of each device are implemented with general-purpose or dedicated circuitry, a processor and the like, or a combination thereof.”) configured to execute the instructions to:

Claim 14: An object detection method comprising:

Claim 16: A computer-readable recording medium (Para [0144]: “The secondary storage device 1003 is an example of a non-transitory tangible medium. Other examples of such a non-transitory tangible medium include a magnetic disk, a magneto-optical disk, a CD-ROM, a DVD-ROM, a semiconductor memory, and the like connected via the interface 1004.”) recording an object detection program causing a computer to execute:

generate an object presence partial area that includes a partial area of the predicted object presence area together with partial information, which is information indicating a position of the partial area in the predicted object presence area and a size of the partial area (Fig. 2, Fig. 6, Para [0103]: “In the sliding window processing on the partial region of the image, the object detection unit 4 first sets the coordinates (x, y) of the detection region to (xc, yc) as shown in FIG. 10 (step S521). Herein, the coordinates (x, y) denotes center coordinates of the detection region, but the center coordinates have an error equivalent to the shift threshold R.”, Para [0106]: “Next, the object detection unit 4 determines whether x is greater than xc + W (step S524). When x is equal to or less than xc + W (No in step S524), the processing returns to step S522, and a confidence value with updated coordinates (x, y) is acquired.
On the other hand, when x is greater than xc + W (Yes in step S524), the processing proceeds to step S525 to perform a sliding direction in the vertical direction.”);

detect partial objects that are part of the object by performing object detection on the generated object presence partial area (Para [0104]: “Next, the object detection unit 4 passes the coordinates (x, y) to the degree-of-certainty computation unit 5 and acquires a confidence value of the detection region (step S522).”); and

restore results of object detection in the current image by placing the partial objects detected using the generated partial information in the predicted object presence area (Para [0109]: “In step S527, the object detection unit 4 outputs, as a detection result, sets of the coordinates (x, y) of the detection region and the degrees of certainty acquired so far.”).

However, Hisada does not disclose: predict an object presence area, which is an area where an object is present in a current image, which is an image of the object captured at a predetermined time, based on results of object detection in a past image, which is an image of the object captured earlier than the predetermined time.

Gaidon teaches predicting an object presence area, which is an area where an object is present in a current image, which is an image of the object captured at a predetermined time, based on results of object detection in a past image, which is an image of the object captured earlier than the predetermined time (Fig. 3, Col. 5 Lns. 39-50: “With reference also to FIG. 3, at runtime, the proposal extractor 40 takes as input a first frame 14 corresponding to a respective time t of the input video sequence 12. The frame may include one or more objects 70, 72, 74, 76, 78, etc. The proposal extractor 40 generates generic object proposals (i.e., predicts locations of the objects).
The proposals may be in the form of a list of windows (bounding boxes) that are likely to contain any kind of object in the frame. For example, a set of windows 80, 82, 84, 86, 88, etc., is generated, each locating a candidate object. As is evident, some of the windows may locate objects which are not in any of the categories of interest.”).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hisada with proposing possible locations of objects based on past object detections, as in Gaidon, to effectively reduce the computational resources needed when performing object detection.

Regarding claim 2, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. Hisada further discloses generate the object presence partial area that includes at least two partial areas with two different vertices each located on the diagonal of the object presence area (Fig. 2 Detection Region).

Regarding claim 3, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. Hisada further discloses generate the object presence partial area that includes at least four partial areas, each with four different edges of the object presence area (Fig. 2 Detection Region).

Regarding claim 6, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. Hisada further discloses generate the object presence partial area that includes at least a plurality of partial areas located on the diagonal of the object presence area among N areas that make up the object presence area N-divided in a grid-like pattern (Fig. 2 Detection Region, Fig. 6 Granularity).

Regarding claim 7, dependent upon claim 6: Hisada in view of Gaidon teaches everything regarding claim 6. Hisada further discloses determine N according to object detection results in the past image (Fig.
6 Granularity, Para [0082]: “Next, the parameter setting unit 9 determines granularity tj that serves as a level boundary for the detection granularity j (step S203). The parameter setting unit 9 may compute tj (j = 1, 2, ..., Dt-1), for example, by equally dividing the total ∑count of the detection counts for the granularity t into Dt.”).

Regarding claim 8, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. Hisada further discloses perform object detection on multiple object presence partial areas in a batch (Para [0088]: “Next, the object detection unit 4 acquires a confidence value for the target object of each detection region from the degree-of-certainty computation unit 5 while shifting the detection region by SW and SH within a range indicated by scope across the detection image thus received (step S302: sliding window processing).”).

Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over HISADA (US 2019/0392606 A1, hereinafter Hisada) in view of Gaidon et al. (US 9,443,320 B1, hereinafter Gaidon) and Dye (US 2003/0235338 A1, hereinafter Dye).

Regarding claim 4, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. However, Hisada in view of Gaidon does not explicitly teach generate the object presence partial area that includes at least one partial area with one vertex of the object presence area and two partial areas with two different edges each of the object presence area without the vertex.

Dye teaches generate the object presence partial area that includes at least one partial area with one vertex of the object presence area and two partial areas with two different edges each of the object presence area without the vertex (Figs. 6-7, Para [0138]: “FIG. 6A represents a typical image example containing a background object 970 and three foreground objects 940, 945, 950.
Information regarding each of these 4 objects may be transported through the transport medium in a compressed fashion as described herein.”, Para [0139]: “Each of the objects may be broken down into a plurality of blocks. FIG. 6A shows a single block 942 on the grid of the pyramid object 940. This single block 942 is used to illustrate the process of object recognition and depth determination for each block of pixels on each object as represented in FIGS. 6B and 6C.”).

It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Hisada in view of Gaidon with dividing the window into a grid-like structure with more than 4 regions, as in Dye, because Hisada suggests a finer granularity of division of the sliding window in Fig. 6 and Para [0040]. Dye provides an illustration of how the granularity of Hisada can be realized, and it would thus have been reasonable for a person of ordinary skill in the art to combine Dye with Hisada in view of Gaidon.

Regarding claim 5, dependent upon claim 1: Hisada in view of Gaidon teaches everything regarding claim 1. However, Hisada in view of Gaidon does not explicitly teach generate the object presence partial area that includes at least two partial areas, each with two different vertices of the object presence area, and a partial area with one edge of the object presence area without those vertices.

Dye teaches generate the object presence partial area that includes at least two partial areas, each with two different vertices of the object presence area, and a partial area with one edge of the object presence area without those vertices (Figs. 6-7, Para [0138]: “FIG. 6A represents a typical image example containing a background object 970 and three foreground objects 940, 945, 950.
Information regarding each of these 4 objects may be transported through the transport medium in a compressed fashion as described herein.”, Para [0139]: “Each of the objects may be broken down into a plurality of blocks. FIG. 6A shows a single block 942 on the grid of the pyramid object 940. This single block 942 is used to illustrate the process of object recognition and depth determination for each block of pixels on each object as represented in FIGS. 6B and 6C.”).

Relevant Prior Art Directed to State of Art

YANG et al. (Visual Tracking With Long-Short Term Based Correlation Filter, hereinafter Yang) is prior art not applied in the rejection(s) above. Yang discloses a real-time long-short-term multi-model based tracking method.

Nishino et al. (US 8,548,202 B2, hereinafter Nishino) is prior art not applied in the rejection(s) above. Nishino discloses an apparatus for detecting movement of an object captured by an imaging device. The apparatus includes a moving object detection unit that is (1) operable to detect movement of an object based on a first moving object detecting process, and (2) operable to detect movement of the object based on a second moving object detecting process. The apparatus also includes an output unit operable to generate an output based on the detection by the moving object detection unit based on at least one of the first and second moving object detecting processes.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA CHEN, whose telephone number is (703) 756-5394. The examiner can normally be reached M-Th 9:30 am - 4:30 pm ET and F 9:30 am - 2:30 pm ET.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, STEPHEN R KOZIOL, can be reached at (408) 918-7630. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J. C./ Examiner, Art Unit 2665
/Stephen R Koziol/ Supervisory Patent Examiner, Art Unit 2665
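The sliding-window processing quoted from Hisada's paragraphs [0103]-[0109] (steps S521-S527) amounts to a nested horizontal/vertical scan of a detection region around a center point, collecting confidence values. The sketch below is an illustrative paraphrase only, not Hisada's implementation: `confidence` stands in for the degree-of-certainty computation unit 5, and the stride values, scan range, and thresholding are assumptions.

```python
def sliding_window_scan(confidence, xc, yc, W, H, sx=8, sy=8, threshold=0.5):
    """Rough sketch of the quoted sliding-window steps: start the
    detection region at (xc, yc), slide horizontally until x exceeds
    xc + W (cf. steps S522-S524), then step vertically (cf. step S525),
    and output (coordinates, confidence) pairs (cf. step S527).
    Strides and thresholding here are illustrative assumptions."""
    detections = []
    y = yc
    while y <= yc + H:              # vertical sliding
        x = xc
        while x <= xc + W:          # horizontal sliding
            c = confidence(x, y)    # stand-in for degree-of-certainty unit 5
            if c >= threshold:
                detections.append(((x, y), c))
            x += sx
        y += sy
    return detections

# Illustrative use with a toy confidence function peaking at (100, 60):
peak = lambda x, y: max(0.0, 1.0 - (abs(x - 100) + abs(y - 60)) / 64)
hits = sliding_window_scan(peak, xc=80, yc=40, W=40, H=40)
```

The nested-loop structure makes the claimed batching point (claim 8) easy to see: the inner confidence calls are independent of one another, so multiple detection regions can be scored in a single batch.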

Prosecution Timeline

Dec 13, 2023
Application Filed
Jan 10, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602747: METHOD AND APPARATUS FOR DENOISING A LOW-LIGHT IMAGE
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12592090: COMPENSATION OF INTENSITY VARIANCES IN IMAGES USED FOR COLONY ENUMERATION
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12579614: IMAGING DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12579678: INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12573065: Vision Sensing Device and Method
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+26.1%)
Median Time to Grant: 2y 11m
PTA Risk: Low
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
