Prosecution Insights
Last updated: April 19, 2026
Application No. 18/512,206

AUTOMATIC CHECKOUT SYSTEM, METHOD FOR CONTROLLING AUTOMATIC CHECKOUT SYSTEM, AND STORAGE MEDIUM STORING INSTRUCTIONS TO PERFORM METHOD FOR CONTROLLING AUTOMATIC CHECKOUT SYSTEM

Non-Final Office Action (§103)
Filed: Nov 17, 2023
Examiner: JONES, RAVEN SIMONE
Art Unit: 2665
Tech Center: 2600 — Communications
Assignee: Research & Business Foundation Sungkyunkwan University
OA Round: 1 (Non-Final)
Grant Probability: 100% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 99%

Examiner Intelligence

Grants 100% — above average
Career Allow Rate: 100% (1 granted / 1 resolved), +38.0% vs TC avg
Interview Lift: +100.0% (strong lift; based on resolved cases with interview)
Typical Timeline: 2y 4m average prosecution, 5 currently pending
Career History: 6 total applications across all art units

Statute-Specific Performance

§101: 12.5% (-27.5% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 12.5% (-27.5% vs TC avg)
§112: 18.8% (-21.2% vs TC avg)
Comparisons are against the Tech Center average estimate. Based on career data from 1 resolved case.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 11/07/2023 has been made of record and considered by the examiner. The submission is in compliance with the provisions of 37 CFR 1.97.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 2, 3, 9, 10, 11, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over McDaniel et al. (US20230252443 A1) in view of Chaubard (US11481751 B1).

Regarding claim 1, McDaniel et al. teaches an automatic checkout system comprising a storage medium (¶ 36; each transaction terminal comprises at least one processor and a non-transitory computer-readable storage medium) configured to store a product detection model trained based on training images in which parameters of a plurality of sample images are manipulated, and one or more commands (¶ 38; a machine learning model is trained as an item/product segmentation model), and a processor configured to execute the one or more commands stored in the storage medium, wherein the instructions, when executed by the processor, cause the processor to sequentially load a plurality of frame images and input the plurality of loaded frame images into the product detection model (¶ 36; the executable instructions, when provided to or obtained by the processor from the medium, cause the processor to perform operations), detect at least one product from each of the plurality of frame images, track at least one product detected in each frame image, and count at least one product detected based on a tracking result (¶ 20; accuracy of a total item count for the multiple items and item recognition for each of the multiple items is improved through processing the pipeline of operations). McDaniel et al. is silent on the remaining limitations of claim 1.

However, Chaubard discloses tracking at least one product detected in each frame image (col. 10, lines 10-42; it is often the case that the products can be tracked from frame to frame to help smooth predictions and carry over information as the system calculates more Fts). The McDaniel et al. reference teaches an automatic checkout system employing a trained product detection model that processes a plurality of frame images and counts multiple products during a transaction; however, McDaniel fails to teach tracking detected products in each frame image. In a similar field of endeavor, Chaubard teaches tracking products from frame to frame. It would have been obvious to a person having ordinary skill in the art at the time the invention was filed to incorporate the frame-to-frame tracking techniques taught by Chaubard into the system of McDaniel in order to improve detection stability, reduce misclassification, and enhance counting accuracy.
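For orientation, the sketch below shows the detect, track, and count loop that claim 1 recites and that the McDaniel/Chaubard combination is mapped against. It is a minimal Python sketch under assumed interfaces: detect_products, the Track container, and the IoU-based frame-to-frame association are illustrative stand-ins, not code taken from the application or from either reference.

    # Illustrative only: a detect -> track -> count loop over sequentially loaded frames.
    from dataclasses import dataclass
    from itertools import count as id_gen

    @dataclass
    class Track:
        track_id: int
        box: tuple  # (x1, y1, x2, y2) of the most recent matched detection

    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / union if union else 0.0

    def count_products(frames, detect_products, iou_threshold=0.5):
        """Detect products per frame, associate detections across frames by IoU
        (a stand-in for any frame-to-frame tracker), and count unique tracks."""
        tracks, next_id = [], id_gen(1)
        for frame in frames:                        # sequentially load frame images
            for box in detect_products(frame):      # product detection model output
                best = max(tracks, key=lambda t: iou(t.box, box), default=None)
                if best is not None and iou(best.box, box) >= iou_threshold:
                    best.box = box                  # same product carried into this frame
                else:
                    tracks.append(Track(next(next_id), box))  # new product instance
        return len(tracks)                          # count based on the tracking result

Because each physical product keeps one track identifier across frames, repeated per-frame detections of the same item contribute a single unit to the count, which is the role the tracking limitation plays in claim 1.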
Regarding claim 2, Chaubard teaches wherein the parameters of the plurality of sample images include at least one of a number of products per training image, a rotation angle of each sample image, a scale ratio of each sample image, and a gamma adjustment value of each sample image (col. 14, lines 5-28; these parameters can be used to compute an Extrinsic Matrix, H, per camera, which includes the following transformations: Rotation, Translation, and Scale).

Regarding claim 3, McDaniel teaches that the training images are generated by manipulating the parameter of each sample image to which a binary mask is applied (¶ 41; generates masks for each unique item in the scene associated with the images, each mask corresponding to a unique item).

The proposed combination, as well as the motivation for combining the McDaniel et al. and Chaubard references in the rejection of claim 1, also applies to claim 9 and is incorporated herein by reference. The proposed teachings from Chaubard presented for claim 2 apply to claim 10 and are incorporated herein by reference. The proposed teachings from McDaniel presented for claim 3 apply to claim 11 and are incorporated herein by reference.

Claims 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over McDaniel et al. and Chaubard, in further view of Yang et al. (US20220343308 A1).

Regarding claim 4, McDaniel and Chaubard teach the limitations of claim 1 as described above; however, they fail to teach the remaining limitations of claim 4. Yang et al., in view of McDaniel and Chaubard, teaches these limitations as follows: wherein the training images are generated by placing sample images to which the binary mask is applied on a checkout counter image, and placing a degree of overlap between products of each sample image within a predetermined cluster ratio range (Abstract; for example, the item recognition system may apply an overlap detection model to a top image and a pixel-wise mask for the top image to detect whether an item is overlapping with another in the top image). It would have been obvious to one of ordinary skill in the art at the time the invention was filed to combine Yang et al. with McDaniel et al. and Chaubard as described above to achieve the known and expected uses and benefits of improving recognition performance in scenarios involving overlapping products, which are common in automatic checkout systems. The proposed combination, as well as the motivation for combining the McDaniel et al. and Chaubard references with the Yang et al. reference in the rejection of claim 4, also applies to claim 12 and is incorporated herein by reference.
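Read together, claims 2 through 4 describe a synthetic training-data recipe: binary-masked product crops are varied in rotation angle, scale ratio, and gamma, then composited onto a checkout-counter image while the overlap between products stays within a cluster ratio. Below is a minimal sketch of that recipe assuming a PIL-based workflow; every function name and default value is an illustrative assumption, not an implementation drawn from the application or the cited art.

    # Illustrative only: augment masked product crops and compose them onto a counter image.
    import random
    from PIL import Image

    def augment_sample(sample: Image.Image, mask: Image.Image,
                       angle: float, scale: float, gamma: float):
        """Apply a rotation angle, scale ratio, and gamma adjustment to one masked crop."""
        sample, mask = sample.rotate(angle, expand=True), mask.rotate(angle, expand=True)
        size = (max(1, int(sample.width * scale)), max(1, int(sample.height * scale)))
        sample, mask = sample.resize(size), mask.resize(size)
        sample = sample.point(lambda p: int(255 * (p / 255) ** gamma))  # gamma adjustment
        return sample, mask

    def box_overlap(a, b):
        """Fraction of box a covered by box b; boxes are (x1, y1, x2, y2)."""
        ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        return (ix * iy) / float((a[2] - a[0]) * (a[3] - a[1]))

    def compose_training_image(counter: Image.Image, samples, max_cluster_ratio=0.3):
        """Paste masked samples onto the counter image, rejecting placements that
        overlap an already-placed product by more than max_cluster_ratio."""
        canvas, placed = counter.copy(), []
        for sample, mask in samples:
            if sample.width > canvas.width or sample.height > canvas.height:
                continue                                  # crop larger than the background
            for _ in range(20):                           # a few random placement attempts
                x = random.randint(0, canvas.width - sample.width)
                y = random.randint(0, canvas.height - sample.height)
                box = (x, y, x + sample.width, y + sample.height)
                if all(box_overlap(box, other) <= max_cluster_ratio for other in placed):
                    canvas.paste(sample, (x, y), mask)    # binary mask gates the paste
                    placed.append(box)
                    break
        return canvas

In this reading, the number of entries passed in samples plays the role of the products-per-training-image parameter recited in claim 2.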
Claims 5, 6, 13, 14, 18, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over McDaniel et al. and Chaubard, in further view of Fisher et al. (WO2019032305 A2).

Regarding claim 5, McDaniel and Chaubard teach the limitations of claim 1 as described above; however, they fail to teach the remaining limitations of claim 5. Fisher et al., in view of McDaniel et al. and Chaubard, teaches these limitations as follows: wherein the storage medium is configured to store a hand detection model trained to detect a hand region from a plurality of input images, and wherein the processor is configured to input the plurality of frame images into the hand detection model to predict at least one hand region from each frame image (¶ 26; the classifications of the hand from a sequence of images can be processed, using a convolutional neural network in some embodiments, to identify an action by the subject; the actions can be puts and takes of inventory items as set out in embodiments described herein, or other types of actions decipherable by processing images of hands).

Regarding claim 6, Fisher et al. teaches that the processor is configured to identify a product held in the hand (¶ 160; the third image processors subsystem further comprises foreground image recognition engines; in one embodiment, the foreground image recognition engines recognize semantically significant objects in the foreground, i.e., shoppers, their hands, and inventory items) among the detected at least one product based on the predicted hand region (¶ 25; the classification identifies whether the identified subject is holding an inventory item; the classification also identifies whether a hand of the identified subject is near a shelf or whether a hand of the identified subject is near the identified subject).

As described above, McDaniel and Chaubard collectively teach a system that detects, tracks, and counts products from frame images using trained machine learning models, but they fail to teach detecting hand regions. A person of ordinary skill in the art would have been motivated to incorporate Fisher's hand detection and hand-based product identification techniques into the system of McDaniel et al. and Chaubard in order to improve the accuracy of product identification and counting, particularly in scenarios where products are being picked up or occluded by the shopper's hand. The proposed combination, as well as the motivation for combining the McDaniel et al. and Chaubard references with the Fisher et al. reference in the rejection of claims 5 and 6, also applies to claims 13, 14, 18, and 19 and is incorporated herein by reference.
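Claims 5 and 6 add a second model that predicts hand regions and uses them to single out the product actually being held. One plausible reading, sketched below using the iou() helper defined in the earlier pipeline sketch, is to pick the detected product box that best overlaps each predicted hand box; the model interfaces and the threshold are assumptions for illustration only.

    # Illustrative only: identify held products from predicted hand regions.
    # iou() is the helper defined in the pipeline sketch above.
    def identify_held_products(frames, detect_products, detect_hands, min_iou=0.1):
        """Return, per frame, the product boxes that coincide with a predicted hand region."""
        held_per_frame = []
        for frame in frames:
            products = detect_products(frame)   # product detection model output
            hands = detect_hands(frame)         # hand detection model output
            held = []
            for hand in hands:
                best = max(products, key=lambda p: iou(p, hand), default=None)
                if best is not None and iou(best, hand) >= min_iou:
                    held.append(best)           # product held in the predicted hand region
            held_per_frame.append(held)
        return held_per_frame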
Claims 7 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over McDaniel et al. and Chaubard, in further view of Rhee (KR 20210004818 A).

Regarding claim 7, McDaniel and Chaubard teach the limitations of claim 1 as described above; however, they fail to teach the remaining limitations of claim 7. Rhee, in view of McDaniel and Chaubard, teaches these limitations as follows: the storage medium is configured to include an input queue, a detection queue, and a counting queue (Rhee Abstract: a step of collecting, by a portable terminal, egocentric images; a step of detecting, by at least one apparatus among the plurality of apparatuses, a product to be purchased from the egocentric images; a step of calculating, by the at least one apparatus among the plurality of apparatuses), and wherein the processor is configured to sequentially store the loaded plurality of frame images in the input queue and detect the at least one product from each frame image sequentially stored in the input queue, sequentially store the detected at least one product in the detection queue to track the detected at least one product within the plurality of frame images, and sequentially store the tracking results in the counting queue to count the detected at least one product. A person of ordinary skill in the art at the time of the invention would have been motivated to incorporate the queue-based processing architecture taught by Rhee into the system of McDaniel et al. and Chaubard in order to improve the processing efficiency, scalability, and organization of sequential image-based product detection and counting. The proposed combination, as well as the motivation for combining the McDaniel et al. and Chaubard references with the Rhee reference in the rejection of claim 7, also applies to claim 15 and is incorporated herein by reference.

Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over McDaniel et al. and Chaubard, in further view of Fisher et al. (US20230079018 A1).

Regarding claim 8, McDaniel and Chaubard teach the limitations of claim 1 as described above; however, they fail to teach the remaining limitations of claim 8. Fisher et al., in view of McDaniel and Chaubard, teaches these limitations as follows: the processor is configured to load the plurality of frame images according to a predetermined batch size (¶ 005; the method includes inputting the cropped-out image of the inventory item and another image of an inventory item to a trained size determination model; the trained size determination model determines whether the size of the cropped-out image of the inventory item matches a size of the other image of the inventory item). A person having ordinary skill in the art at the time of the invention would have been motivated to incorporate the batch-based image loading and processing techniques taught by Fisher into the system of McDaniel et al. and Chaubard in order to improve the computational efficiency and accuracy of image-based processing. The proposed combination, as well as the motivation for combining the McDaniel et al. and Chaubard references with the Fisher et al. reference in the rejection of claim 8, also applies to claim 20 and is incorporated herein by reference.
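Claims 7 and 15 arrange the same pipeline around three queues, and claim 8 adds loading frames according to a predetermined batch size. The sketch below shows one way those pieces could fit together using Python's standard queue.Queue; the stage callables detect_products and update_tracks are placeholder assumptions, not the applicant's or any reference's implementation.

    # Illustrative only: input, detection, and counting queues with batched frame loading.
    from queue import Queue

    def run_queued_pipeline(frames, detect_products, update_tracks, batch_size=4):
        input_q, detection_q, counting_q = Queue(), Queue(), Queue()

        for frame in frames:                  # sequentially store loaded frame images
            input_q.put(frame)

        while not input_q.empty():            # detect on batches pulled from the input queue
            batch = [input_q.get() for _ in range(min(batch_size, input_q.qsize()))]
            for frame in batch:
                detection_q.put(detect_products(frame))

        tracks = []
        while not detection_q.empty():        # track detections across frames
            tracks = update_tracks(tracks, detection_q.get())
            counting_q.put(list(tracks))      # store each per-frame tracking result

        count = 0
        while not counting_q.empty():         # count from the latest tracking result
            count = len(counting_q.get())
        return count

Separating the stages with queues is what would let detection, tracking, and counting run as independent workers; the sequential loops here only illustrate the data flow.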
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAVEN S. JONES, whose telephone number is (571) 272-7759. The examiner can normally be reached M-Th, 7:00 a.m. to 5:00 p.m.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Stephen Koziol, can be reached at 571-438-5758. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RAVEN SIMONE JONES/
Examiner, Art Unit 2665

/Stephen R Koziol/
Supervisory Patent Examiner, Art Unit 2665

Prosecution Timeline

Nov 17, 2023
Application Filed
Jan 12, 2026
Non-Final Rejection — §103
Apr 08, 2026
Response Filed

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586386
OBJECT RECOGNITION APPARATUS FOR AN AUTONOMOUS VEHICLE AND AN OBJECT RECOGNITION METHOD THEREFOR
Granted Mar 24, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 100%
With Interview: 99% (+100.0%)
Median Time to Grant: 2y 4m
PTA Risk: Low
Based on 1 resolved case by this examiner. Grant probability derived from career allow rate.
