Prosecution Insights
Last updated: April 19, 2026
Application No. 18/461,279

SYSTEMS AND METHODS FOR ADDING A NEW ITEM TO A CONTACTLESS SALES SYSTEM

Non-Final OA (§103)
Filed: Sep 05, 2023
Examiner: MITCHELL, NATHAN A
Art Unit: 3627
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Alwaysai Inc.
OA Round: 3 (Non-Final)
Grant Probability: 73% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 73% (above average; 689 granted / 940 resolved; +21.3% vs TC avg)
Interview Lift: +10.1% (moderate, roughly +10% on resolved cases with an interview vs. without)
Typical Timeline: 2y 9m average prosecution; 36 applications currently pending
Career History: 976 total applications across all art units
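The headline figures above follow from simple arithmetic on the examiner's career counts. A minimal sketch, assuming the dashboard rounds to whole percentage points and treats the interview lift as additive (variable names are illustrative, not from the source):

```python
# Career counts as shown above: 689 granted out of 940 resolved cases
granted, resolved = 689, 940
allow_rate_pct = granted / resolved * 100   # about 73.3%

# Interview lift shown as +10.1 percentage points (assumed additive)
interview_lift = 10.1
with_interview_pct = allow_rate_pct + interview_lift

print(round(allow_rate_pct))      # 73, matches the Career Allow Rate figure
print(round(with_interview_pct))  # 83, matches the With Interview figure
```

The same arithmetic reappears in the Prosecution Projections section, which suggests both widgets are derived from the one career dataset.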

Statute-Specific Performance

§101: 16.4% (-23.6% vs TC avg)
§103: 44.3% (+4.3% vs TC avg)
§102: 19.9% (-20.1% vs TC avg)
§112: 11.2% (-28.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 940 resolved cases
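Each per-statute figure pairs the examiner's rate with a delta against the Tech Center average, so subtracting the delta recovers the implied baseline. A quick consistency check on the numbers above (a sketch; the dict layout is ours, not the source's):

```python
# (examiner rate %, delta vs Tech Center average, in percentage points)
stats = {
    "§101": (16.4, -23.6),
    "§103": (44.3, +4.3),
    "§102": (19.9, -20.1),
    "§112": (11.2, -28.8),
}

for statute, (rate, delta) in stats.items():
    tc_avg = round(rate - delta, 1)  # implied Tech Center baseline
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg}%")
# Every row implies the same 40.0% baseline, so the deltas are internally consistent.
```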

Office Action

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 2/6/2026 has been entered.

Response to Arguments

Argument: [applicant's argument reproduced as images in the original]

Response: The examiner disagrees. The broadest reasonable interpretation of "zone" is an area, and Chaubard clearly discloses identifying an area within an image that contains an object.

Argument: [applicant's argument reproduced as an image in the original]

Response: The examiner disagrees. Chaubard stores a segmentation mask, which is a representation of the area containing the object. Regardless, as claimed, "metadata including indication of the zone" is not used for anything in claim 1 and would thus be nonfunctional descriptive material that cannot provide an inventive concept. "Claim limitations directed to printed matter are not entitled to patentable weight unless the printed matter is functionally related to the substrate on which the printed matter is applied." Praxair Distribution, Inc. v. Mallinckrodt Hosp. Prods. IP Ltd., 890 F.3d 1024, 1031 (Fed. Cir. 2018) (emphasis added). Our reviewing court has also explained that this printed matter doctrine is not strictly limited to "printed" materials. Mallinckrodt, 890 F.3d at 1032. More specifically, "a claim limitation is directed to printed matter 'if it claims the content of information.'" Mallinckrodt, 890 F.3d at 1032 (quoting In re DiStefano, 808 F.3d 845, 848 (Fed. Cir. 2015)). In addition, it has held that non-functional descriptive material cannot lend patentability to an invention that would have otherwise been unpatentable. See In re Ngai, 367 F.3d 1336, 1339 (Fed. Cir. 2004); see also In re Gulack, 703 F.2d 1381, 1385 (Fed. Cir. 1983) (when descriptive material is not functionally related to the substrate, the descriptive material will not distinguish the invention from the prior art in terms of patentability). The content of non-functional descriptive material is not entitled to weight in the patentability analysis. Cf. In re Lowry, 32 F.3d 1579, 1583 (Fed. Cir. 1994) ("Lowry does not claim merely the information content of a memory."). In Ex parte Nehls, 88 USPQ2d 1883, 1888 (BPAI 2008) (precedential), the Board held that the nature of the information being manipulated by the computer should not be given patentable weight absent evidence that the information is functionally related to the process "by changing the efficiency or accuracy or any other characteristic" of the steps. See also Ex parte Curry, 84 USPQ2d 1272, 1274 (BPAI 2005) (non-precedential) (holding "wellness-related" data stored in a database and communicated over a network was non-functional descriptive material as claimed because the data "does not functionally change" the system).

Argument: [applicant's argument reproduced as an image in the original]

Response: The term "unbiased training dataset" is not used in the spec. The spec does say in [54] "The system can generate datasets such that each object of interest trains the machine learning model equally to prevent biases in item identification." Absent a special definition, claim language is given the broadest reasonable interpretation. Chaubard discloses training on a random orientation of images and therefore is not biased to any particular angle of the product (see paragraphs 51-53).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 6, 7, 11-14, 20, and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Chaubard (US 20200089997 A1) in view of Yang (US 20220269907 A1).

Regarding claim 1, Chaubard discloses:

1. A computer-implemented method comprising: receiving images depicting a retail product from different perspectives (paragraph 4, fig. 2, fig. 5 505); identifying the retail product in the images (fig. 5 510); localizing the retail product in the image by zone (fig. 5 515); storing metadata of the retail product in the image (paragraph 24), the metadata including indication of the zone (abstract, optimized segmentation mask); cropping the images about the retail product such the images are bounded by dimensions of the retail product (paragraph 41); adding the cropped image to an asset bin designated for storing assets pertaining to the retail product (fig. 5 540); adding the asset bin to a repository of asset bins, the repository comprising a database of retail products (paragraph 32); compiling a configuration file based on the cropped images and the metadata (paragraph 24, paragraph 27, paragraph 4).

Chaubard fails to explicitly disclose and Yang discloses: generating an unbiased training dataset based on the compiled configuration file (paragraph 54). It would have been obvious to one of ordinary skill in the art to combine this teaching with Chaubard by including ground truth labels such that a model can be trained for object detection. The motivation for the combination is enhanced functionality (paragraph 2).

Regarding claim 2, Chaubard discloses:

2. The computer-implemented method of claim 1, wherein the images comprise one or more controlled zones that are subsets of the image (fig. 2, staging area).

Regarding claim 3, Chaubard discloses:

3. The computer-implemented method of claim 1, wherein the images comprise a second image (fig. 2, paragraph 4), the method further comprising localizing and identifying the retail product in the second image (fig. 5 515); cropping the second image about the retail product such that only pixels remain that are associated with the retail product (paragraph 41, fig. 5); and adding the cropped second image to the asset bin corresponding to the respective retail product (fig. 5 545, paragraph 32).

Regarding claim 6, Chaubard discloses:

6. The computer-implemented method of claim 1, wherein the images comprise alpha channel pixels (paragraph 41, transparency).

Regarding claim 7, Chaubard discloses:

7. The computer-implemented method of claim 1, wherein cropping the image comprises replacing pixels that do not contain the retail product with zero value alpha channel pixels (paragraph 41, transparency set to zero).

Regarding claim 11, Chaubard discloses:

11. The computer-implemented method of claim 1, wherein adding the asset bin to the repository comprises adding the asset bin to the configuration file (paragraph 32).

Regarding claim 12, Chaubard discloses:

12. A system, comprising: a plurality of cameras (fig. 2); a memory (paragraph 58); and at least one processor configured to execute machine-readable instructions stored in the memory to (paragraph 58): receive images captured from different perspectives, each of the images comprising alpha channel pixels (fig. 2, paragraph 4, fig. 5 505, paragraph 41); localize a retail product depicted in the image (fig. 5 515) according zone (abstract, segmentation mask); crop the image by replacing pixels that do not contain the retail product with zero value alpha channel pixels (fig. 5 520-525, paragraph 41); sort the image to an asset folder corresponding to the retail product; and add the asset folder to a repository (paragraph 32, fig. 5 540); compiling a configuration file based on the cropped images and the metadata (paragraph 24, paragraph 27, paragraph 4).

Chaubard fails to explicitly disclose and Yang discloses: generating an unbiased training dataset based on the compiled configuration file (paragraph 54). It would have been obvious to one of ordinary skill in the art to combine this teaching with Chaubard by including ground truth labels such that a model can be trained for object detection. The motivation for the combination is enhanced functionality (paragraph 2).

Regarding claim 13, Chaubard discloses:

13. The system of claim 12, wherein each of the images comprise one specific zone that is a subset of the image (fig. 2, staging area).

Regarding claim 14, Chaubard discloses:

14. The system of claim 12, wherein the images comprise a second image (paragraph 4); and the machine-readable instructions further cause the at least one processor to: identify the retail product in the second image (fig. 5 510); localize the retail product in the second image (fig. 5 515); generate metadata about the retail product (paragraphs 24, 45); crop the second image about the retail product such that an image size corresponds to the pixels containing the retail product (fig. 5, paragraph 41); and add the cropped second image to an asset bin corresponding to a respective retail product (paragraph 32, fig. 5 540).

Regarding claim 20, Chaubard discloses:

20. The system of claim 12, wherein the machine-readable instructions further cause the at least one processor to add the asset folder to the configuration file in the repository (paragraph 32).

Regarding claim 21, Chaubard further discloses training one or more machine learning components to detect the retail product (abstract).

Claims 4, 5, 8-10, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over Chaubard (US 20200089997 A1) in view of Yang (US 20220269907 A1) and further in view of Turkelson (US 20200210768 A1).

Regarding claims 4 and 5, Chaubard as modified fails to disclose and Turkelson discloses:

4. The computer-implemented method of claim 1, further comprising: receiving an additional plurality of images; and determining that a number of total images exceeds a threshold number of needed images (fig. 2-5, paragraph 42).

5. The computer-implemented method of claim 4, further comprising displaying information on a client device indicating that sufficient images have been received (paragraph 86).

It would have been obvious to one of ordinary skill in the art to combine this teaching with those of Chaubard by providing directions and feedback regarding the collection of images. The motivation for the combination is improved object recognition scope and accuracy (paragraph 30).

Claims 15 and 16 are rejected for the same reasons as above, but applied to claim 12.

Regarding claims 8-10, Chaubard discloses:

8. The computer-implemented method of claim 1, wherein the images comprise the metadata (paragraph 24), wherein the metadata comprises

9. The computer-implemented method of claim 8, wherein the item identifiers comprise at least one of SKU numbers, item name, and item shape (paragraph 24, SKU).

10. The computer-implemented method of claim 8, wherein the item identifiers are determined by receiving information from a client device indicating an item identifier (paragraph 13, manually typed in or received by scanning).

Chaubard as modified fails to disclose and Turkelson discloses the metadata comprises camera identifiers (paragraph 34, paragraph 52 camera pose), subzone identifiers (paragraph 52 lighting conditions). It would have been obvious to one of ordinary skill in the art to combine these teachings with Chaubard by including further information in the image metadata. The motivation for the combination is improved object recognition scope and accuracy (paragraph 30).

Claims 17-18 are rejected for the same reasons as claims 8-10.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Buibas (US 20210174145 A1) discloses a product onboarding system.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NATHAN A MITCHELL whose telephone number is (571)270-3117. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ryan Zeender, can be reached at 571-272-6790. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NATHAN A MITCHELL/
Primary Examiner, Art Unit 3627

Prosecution Timeline

Sep 05, 2023: Application Filed
May 16, 2025: Non-Final Rejection (§103)
Aug 14, 2025: Examiner Interview Summary
Aug 14, 2025: Applicant Interview (Telephonic)
Aug 20, 2025: Response Filed
Nov 05, 2025: Final Rejection (§103)
Feb 06, 2026: Request for Continued Examination
Mar 01, 2026: Response after Non-Final Action
Mar 23, 2026: Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602677: SALES TRANSACTION PROCESSING SYSTEM, SALES TRANSACTION PROCESSING APPARATUS, AND METHOD PERFORMED BY SALES TRANSACTION PROCESSING APPARATUS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12597007: SELF-SERVICE CHECKOUT TERMINAL WITH A SECURITY FUNCTION BASED ON DETECTION OF WEIGHT OF ITEMS
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12591774: ORDERING INFRASTRUCTURE USING APPLICATION TERMS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12579529: Measurement Information Processing Mode Switching System
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12579591: Artificial Intelligence for Vehicular Drive-Through Based Exchanges
Granted Mar 17, 2026 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 73% (83% with interview, +10.1%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 940 resolved cases by this examiner. Grant probability derived from career allow rate.
