Prosecution Insights
Last updated: April 19, 2026
Application No. 18/018,802

PRODUCT DETECTION DEVICE, PRODUCT DETECTION METHOD, AND RECORDING MEDIUM

Final Rejection §103
Filed
Jan 30, 2023
Examiner
BILODEAU, DUSTIN E
Art Unit
2664
Tech Center
2600 — Communications
Assignee
NEC Corporation
OA Round
2 (Final)
Grant Probability: 88% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 88% (above average; 71 granted / 81 resolved; +25.7% vs TC avg)
Interview Lift: +5.2% (moderate) among resolved cases with interview
Avg Prosecution: 3y 3m (typical timeline)
Total Applications: 111 across all art units (30 currently pending)

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§102: 9.9% (-30.1% vs TC avg)
§103: 75.7% (+35.7% vs TC avg)
§112: 2.8% (-37.2% vs TC avg)
Baselines are Tech Center average estimates • Based on career data from 81 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s response to the last Office Action, filed 8/6/2025, has been entered and made of record. Applicant has amended claims 1, 9, and 16. Claims 1-7, 9-16, and 23-27 are currently pending.

Applicant’s arguments, filed 8/6/2025, with respect to the rejection of claim 16 under 35 U.S.C. 101 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn.

Applicant’s arguments, filed 8/6/2025, with respect to the rejection of claims 1, 9, and 16 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground of rejection is made in view of Skaff (U.S. Patent Pub. No. 2022/0083959).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-7, 9-16, and 23-27 are rejected under 35 U.S.C. 103 as being unpatentable over Akatsuka (U.S. Patent Pub. No. 2020/0394599) in view of Skaff (U.S. Patent Pub. No. 2022/0083959) in view of Yanagi (U.S. Patent Pub. No. 2020/0334620).

Regarding Claim 1, Akatsuka teaches a product detection device comprising: one or more memories storing instructions; and one or more processors configured to execute the instructions to (¶38 the processor 1001 reads a program (program code), a software module, and data from the storage 1003 and/or the communication device 1004 into the memory 1002 and executes various processes) acquire an image of a shelf on which a product is displayed (¶45 The image acquiring unit 11 acquires an image acquired by imaging product display shelves on which a plurality of products are arranged) determine, from the image, product display information including a shape of the product (¶49 The detection unit 12 detects a product area image representing products from an image of product display shelves acquired by the image acquiring unit 11. More specifically, the detection unit 12, for example, recognizes each object extracted using a technique such as a known edge detection technique or the like for an image of product display shelves as a product area image representing products.
In addition, the detection unit 12, for example, has learned a shape of each product in advance using a technique of known deep learning or the like and detects a product area image representing products from the image of product display shelves using learned data) select a model to be used for detecting the image based on the determined product display information; and (¶51 The product recognizing unit 13 recognizes a product represented by a product area image detected by the detection unit 12 on the basis of information relating (model) to images of products stored in advance. In this embodiment, the information relating to images of products used for recognition of products is stored in the product data storing unit 30).

Akatsuka does not explicitly disclose: determine, from the image, a shape of the shelf, a shape of the product, and a condition of a display of the product; select a model to be used for detecting the image based on a combination of the shape of the shelf, the shape of the product, and a condition of a display of the product; detect, from the image, a display state of the product displayed on the shelf by using the selected model.

Skaff is in the same field of art of image analysis. Further, Skaff teaches determine, from the image, a shape of the shelf (Fig. 4; ¶75 the image 300 is processed by a classifier 414 that classifies each pixel of the image 300 to determine if the pixel is part of a shelf or not part of a shelf, to produce a binary mask, having pixels located on shelves flagged as a binary “1” and pixels not located on shelves flagged as a binary “0” ... The mask in FIG. 8 is the result of processing the image of FIG. 3 with the shelf segment classifier 414) a shape of the product (¶65 Product detector 402 produces, as an output, the image with a bounding box as shown in FIG. 5. In some embodiments, the bounding box is represented in a data structure as a tuple of data of the form BB={x, y, w, h}.
A tuple may comprise, for example, the x,y coordinates of a corner of the bounding box as well as the width (w) and the height (h) of the bounding box. Other information may be included in the tuple, for example, depth information) and a condition of a display of the product (¶76 The binary mask showing the location of the shelves may be used to determine which of the shelf labels identified by shelf/section label classifier 408 are shelf labels representing product sitting on a shelf (condition) or are peg labels representing products hanging from a peg (condition)); select a model to be used for detecting the image based on a combination of the shape of the shelf, the shape of the product, and a condition of a display of the product (Fig. 4, 430 and 436; ¶86 At box 430 in pipeline 400, it is determined which shelf products are out-of-stock. This happens by consulting the dictionary of shelf label tuples and associated product tuples and determining which shelf label tuples have no associated product tuples. That is, which shelf labels have no products associated therewith; ¶87 at box 436, it is determined if products which have been classified as peg products are out-of-stock. In box 424, the peg labels were associated with peg products. In box 436, those peg label tuples in the dictionary having no associated product tuple; the model that determines the out-of-stock state for the product is determined from the features detected in the image.)
Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Akatsuka by detecting the shape of the shelf, the product, and the condition of the product, and selecting a model to be used based on those features, as taught by Skaff; one of ordinary skill in the art would be motivated to combine the references to be able to automate the tracking of inventory to determine when various products are out-of-stock, have been repositioned, or are otherwise not where they are expected to be (Skaff ¶9). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.

Yanagi is in the same field of art of image analysis. Further, Yanagi teaches detect, from the image, a display state of the product displayed on the shelf by using the selected model (¶137 When the background image appearing in monitoring area MA corresponds with the template image by pattern matching, lacking detector 213 detects that product 70 is not displayed in a region where the background image appears. In other words, shortage or lacking of product 70 corresponding to shelf label 51 is detected).

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Akatsuka in view of Skaff by detecting a display state as taught by Yanagi; one of ordinary skill in the art would be motivated to combine the references to monitor a display condition of goods or articles on the display shelf accurately (Yanagi ¶11). Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention.
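The Skaff out-of-stock check quoted above (¶¶86-87) reduces to consulting a dictionary of shelf-label tuples and flagging any label that has no associated product tuple. A minimal sketch of that idea, for orientation only; the function name and all values are illustrative and do not appear in the record:

```python
# Sketch of the dictionary-based out-of-stock check the rejection
# attributes to Skaff ¶¶86-87: a shelf label with no associated
# product tuple is treated as out of stock.

def detect_out_of_stock(shelf_labels, label_to_products):
    """shelf_labels: label id -> bounding-box tuple (x, y, w, h).
    label_to_products: label id -> list of product bounding boxes.
    Returns the label ids that have no associated product."""
    return [lid for lid in shelf_labels if not label_to_products.get(lid)]

# Hypothetical example: two shelf labels, one with no product behind it.
labels = {"L1": (10, 40, 30, 12), "L2": (60, 40, 30, 12)}
assoc = {"L1": [(12, 20, 25, 18)], "L2": []}
print(detect_out_of_stock(labels, assoc))  # ['L2']
```

Skaff's ¶87 applies the same test to peg-label tuples at box 436.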
Regarding Claim 2, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 1, wherein the one or more memories store one or more models learned for detecting the product from the image, the one or more models related to the product display information, and wherein the one or more processors are configured to execute the instructions to: select the model matching the product display information from the one or more memories (Akatsuka, ¶51 The product recognizing unit 13 recognizes a product represented by a product area image detected by the detection unit 12 on the basis of information relating to images of products stored in advance. In this embodiment, the information relating to images of products used for recognition of products is stored in the product data storing unit 30.)

Regarding Claim 3, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 1, wherein the shape of the product includes a shape of the product imaged from a plurality of angles (Yanagi, ¶182 In addition, as described above, the recognition model appropriate for video recognition changes depending on the imaging direction, and thus, the recognition model may be prepared in advance for each of different imaging directions.)

Regarding Claim 4, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 1, wherein the shape of the product includes a shape of the product placed on one stage and a shape of the products placed on a plurality of stages in a stacking manner (Akatsuka, ¶59 the planogram analyzing unit 14 acquires planogram data on the basis of a positional relation between the position of a shelf board acquired from the image of product display shelves and the position of a product area image detected by the detection unit and information of products represented by a product area image recognized by the product recognizing unit 13 ...
The number of stacking stages is the number of products of the same kind stacked at a certain position on a shelf board.)

Regarding Claim 5, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 1, wherein one or more models includes a first model, for a certain product, in which a first difference between a displayable region, in a first image of a shelf on which the product is displayed, in which the product is allowed to be displayed and the displayable region in a second image acquired after acquisition of the first image is learned (Yanagi, ¶96 Lacking detector 213 is an example of a monitoring section and, for example, monitors a stock (or may be referred to as “display condition”) of product 70 in display shelf 50 based on a change in video corresponding to the presence or absence of a product in the monitoring area set by monitoring area setter 212; ¶101 a template image (i.e., “recognition model”) used for pattern matching of video recognition corresponding to the camera position may be prepared. In this embodiment, the recognition model is a template image, and the shape of shelf label and/or the background image is recognized by pattern matching. However, other implementation methods are possible. For example, a trained model generated by machine learning of each of the shelf label and/or the background image may be used as the recognition model to recognize the shelf label and/or the background image.)
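The Yanagi passages cited for claims 1 and 5 describe detecting a "lacking" state by pattern-matching the monitoring area against a template image of the empty-shelf background. A minimal sketch of that idea follows; the mean-squared-difference score and threshold are my own stand-ins for whatever matcher Yanagi actually uses, and all values are hypothetical:

```python
# Sketch of template matching for "lacking" detection in the spirit
# of Yanagi ¶137: if the monitoring-area crop matches the empty-shelf
# background template, the product is flagged as lacking.

def matches_background(region, template, threshold=0.05):
    """Compare two equal-length intensity vectors by mean squared
    difference; a low score means the empty-shelf background shows."""
    mse = sum((r - t) ** 2 for r, t in zip(region, template)) / len(template)
    return mse <= threshold

background = [0.10, 0.10, 0.10, 0.10]  # learned empty-shelf appearance
current = [0.10, 0.12, 0.09, 0.10]     # crop from the live image
print("lacking" if matches_background(current, background) else "stocked")  # prints "lacking"
```

Yanagi ¶101 notes the template could equally be replaced by a trained machine-learning model; the thresholded comparison above is only the simplest variant.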
Regarding Claim 6, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 5, wherein the one or more models includes a second model, for the certain product, in which association between the first difference and a second difference between the number of the products appearing in the first image and the number of the products appearing in the second image is learned (Yanagi, ¶168 since the number of products 70, which are monitoring targets in monitoring area MA, can be known, for example, lacking detector 213 can more accurately perform stepwise detecting of lacking. Examples of the stepwise detecting of lacking will be described later.)

Regarding Claim 7, Akatsuka in view of Skaff in view of Yanagi discloses the product detection device according to claim 1, wherein the one or more processors are configured to execute the instructions to: notify an external terminal of a result of the detection when an anomaly in a display state of the product is detected (Yanagi, ¶103 Output section 214 is an example of a notification information generator that generates and outputs information to be presented (e.g., sent as a notification) to, for example, the stock manager. Output section 214, for example, generates notification information including a detection result of lacking detector 213 and/or information based on the detection result and outputs the notification information to output device 203 and/or communicator 206.)

Regarding Claim 9, claim 9 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above, as well as in accordance with Akatsuka further teaching a product detection method (¶151 a method described in the present disclosure, elements of various steps are presented in an exemplary order, and the method is not limited to the presented specific order.)

Claim 10 recites limitations similar to claim 2 and is rejected under the same rationale and reasoning.
Claim 11 recites limitations similar to claim 3 and is rejected under the same rationale and reasoning.
Claim 12 recites limitations similar to claim 4 and is rejected under the same rationale and reasoning.
Claim 13 recites limitations similar to claim 5 and is rejected under the same rationale and reasoning.
Claim 14 recites limitations similar to claim 6 and is rejected under the same rationale and reasoning.
Claim 15 recites limitations similar to claim 7 and is rejected under the same rationale and reasoning.

Regarding Claim 16, claim 16 has been analyzed with regard to claim 1 and is rejected for the same reasons of obviousness as used above, as well as in accordance with Akatsuka further teaching a recording medium storing a product detection program (¶39 The memory 1002 is a computer-readable recording medium and, for example, may be configured by at least one of a read only memory (ROM).)

Claim 23 recites limitations similar to claim 3 and is rejected under the same rationale and reasoning.
Claim 24 recites limitations similar to claim 4 and is rejected under the same rationale and reasoning.
Claim 25 recites limitations similar to claim 5 and is rejected under the same rationale and reasoning.
Claim 26 recites limitations similar to claim 6 and is rejected under the same rationale and reasoning.
Claim 27 recites limitations similar to claim 7 and is rejected under the same rationale and reasoning.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DUSTIN BILODEAU, whose telephone number is (571) 272-1032. The examiner can normally be reached 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DUSTIN BILODEAU/
Examiner, Art Unit 2664

/CHARLOTTE M BAKER/
Primary Examiner, Art Unit 2664

Prosecution Timeline

Jan 30, 2023
Application Filed
Apr 30, 2025
Non-Final Rejection — §103
Aug 06, 2025
Response Filed
Oct 14, 2025
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602802
ELECTRONIC DEVICE FOR GENERATING DEPTH MAP AND OPERATING METHOD THEREOF
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597293
System and Method for Authoring Human-Involved Context-Aware Applications
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12592084
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR IDENTIFYING STATE OF LIGHTING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12591959
METHOD, APPARATUS, AND DEVICE FOR PROCESSING IMAGE, AND STORAGE MEDIUM
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12581041
INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND INFORMATION COLLECTION SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 88%
With Interview: 93% (+5.2%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 81 resolved cases by this examiner. Grant probability derived from career allow rate.
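As a sanity check, the headline figures follow directly from the stated counts, assuming the interview lift is added in percentage points (my assumption; the tool does not state its formula):

```python
# Reproduce the dashboard's headline numbers from the raw counts:
# 71 granted of 81 resolved, plus the reported +5.2% interview lift.
granted, resolved = 71, 81
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")         # 87.7%, shown rounded as 88%
print(f"With interview:    {allow_rate + 0.052:.1%}")  # 92.9%, shown rounded as 93%
```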
