Prosecution Insights
Last updated: April 19, 2026
Application No. 17/508,021

VISION PRODUCT INFERENCE BASED ON PACKAGE DETECT AND BRAND CLASSIFICATION WITH ACTIVE LEARNING

Final Rejection §101
Filed: Oct 22, 2021
Examiner: ABOUZAHRA, REHAM K
Art Unit: 3625
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Rehrig Pacific Company
OA Round: 4 (Final)
Grant Probability: 12% (At Risk)
OA Rounds: 5-6
To Grant: 3y 12m
With Interview: 21%

Examiner Intelligence

Career Allow Rate: 12% (grants only 12% of cases; 17 granted / 142 resolved; -40.0% vs TC avg)
Interview Lift: +8.8% (moderate +9% lift in resolved cases with interview)
Typical Timeline: 3y 12m avg prosecution; 39 currently pending
Career History: 181 total applications across all art units

Statute-Specific Performance

§101: 42.3% (+2.3% vs TC avg)
§103: 39.8% (-0.2% vs TC avg)
§102: 2.1% (-37.9% vs TC avg)
§112: 14.0% (-26.0% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 142 resolved cases

Office Action

§101
DETAILED ACTION

Status of Claims

The following is a Final Office Action in response to Applicant's response received 08/19/2025. Claims 1-10 and 28-30 are cancelled. Claims 41-42 are newly added. Claims 11-27 and 31-42 are considered in this Office Action and are currently pending.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 07/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Arguments

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office Action. Applicant's arguments with respect to the 35 U.S.C. §101 rejection have been considered but are found not persuasive.

Applicant asserts that claims 11 and 19 require using the chosen machine learning model to "infer a brand of the first package." Applicant argues that a machine learning model, as claimed, is a computational construct (e.g., a convolutional neural network) trained on many images ([0010]), and that its inference is computationally infeasible for the human mind: a human cannot mentally simulate the matrix multiplications, non-linear activations, or probabilistic computations of a neural network. Applicant further argues that, even if the Examiner posits that choosing a model is mental, "having" the model mentally does not enable executing its inference process, which requires computational resources far beyond human capability; in Applicant's view, this logical inconsistency shows the Examiner's mental-process characterization is flawed, as the steps are interdependent and inherently computational. Applicant cites the 2025 Memorandum for the proposition that AI/ML claims involving computations that cannot practically be performed mentally (e.g., neural network inference) are not mental processes, citing ADASA Inc. v. Avery Dennison Corp., 55 F.4th 900, 909 (Fed. Cir. 2022) (hardware-based data structures not mental processes), and SRI Int'l, Inc. v. Cisco Sys., Inc., 930 F.3d 1295, 1304 (Fed. Cir. 2019), as supporting that complex classifiers are non-abstract. Applicant contends that the specification confirms the models' complexity and that both claim limitations involve complex computational processes: training distinct machine learning models on specific image datasets requires processing thousands of images and applying algorithms (e.g., neural networks) beyond human mental capacity, and "inferring a package type" and "inferring a brand" using these models involves real-time image analysis and probabilistic computations, not mere human observation or judgment.

The examiner respectfully disagrees. The examiner notes that "based upon the package type inferred for each of the plurality of packages of beverage containers, choosing at least one of the plurality of brand machine learning models" is a concept that can be performed in the human mind and/or using pen and paper, and that examples of mental processes include observations, evaluations, judgments, and opinions. The examiner further notes that the claims recite an abstract idea by reciting concepts that can be performed in the human mind, which fall into the "Mental processes" grouping within the enumerated groupings of abstract ideas. Claims can recite a mental process even if they are claimed as being performed on a computer. MPEP 2106.04(a)(2)(II).
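For context on the scale the applicant invokes, a rough operation count makes the "cannot practically be performed mentally" point concrete. This is an editorial illustration only; the layer dimensions below are hypothetical and are not taken from the application or the cited models.

```python
# Illustrative only: rough count of multiply-accumulate (MAC) operations for a
# single convolutional layer. The layer shape is hypothetical.

def conv_layer_macs(h, w, c_in, c_out, k):
    """MACs for one k x k convolution over an h x w x c_in input (same padding)."""
    return h * w * c_out * (k * k * c_in)

# A modest 224x224 RGB input through one 3x3 convolution with 64 output channels:
macs = conv_layer_macs(224, 224, 3, 64, 3)
print(macs)  # 86704128 -- roughly 87 million MACs for a single layer
```

Even this single hypothetical layer requires tens of millions of arithmetic operations, and practical CNNs stack dozens of such layers.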
Although the Examiner acknowledges that "machine learning" itself cannot be performed in the human mind, the Examiner maintains that, but for the generic computing elements (a processor that executes instructions) relied on to perform the "machine learning," the steps recited in the claim can be performed in the human mind and/or using pen and paper. The examiner notes that the machine learning models are recited at a high level of generality and amount to algorithms and/or steps that can be performed in the human mind and/or using pen and paper. The examiner further notes that the "machine learning" is relied on to perform activity that, but for the claimed reliance on a generic computer, mimics human thought processes of evaluating collected data perceptible only in the human mind. See In re TLI Commc'ns LLC Patent Litig., 823 F.3d 607, 611 (Fed. Cir. 2016); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016). The Federal Circuit has held similar concepts to be abstract. For example, the Federal Circuit has held that abstract ideas include the concepts of collecting data, analyzing the data, and reporting the results of the collection and analysis, including when limited to particular content. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340-41 (Fed. Cir. 2017) (identifying the abstract idea of organizing, displaying, and manipulating data); Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (characterizing collecting information, analyzing information by steps people go through in their minds or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, as matters within the realm of abstract ideas). Thus, but for the computing system and its processor relied on to implement the machine learning, the steps involving the generation of a learning model fall within the scope of the abstract idea itself. Accordingly, while the machine learning recited in the claim is not part of the abstract idea itself, this does not negate the finding that the claims recite an abstract idea; instead, it merely shifts the analysis of the recited machine learning (and the processor relied on to implement it) to that of an additional element under Step 2A Prong Two and Step 2B of the eligibility inquiry, which has been carried out as provided in the §101 rejection set forth below.

Applicant asserts that, in regard to claim 27, the steps of "detecting the plurality of package faces," "associating each of the plurality of faces with one of the plurality of packages," "inferring at least one package type … and assigning a confidence level," and "overriding the inferred package type of the second face with the inferred package type of the first face" require processing multiple images, performing geometric and visual analysis, and applying statistical confidence metrics. Applicant argues that these operations are computationally intensive and cannot practically be performed by a human mind, even with pen and paper, citing Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1146-47 (Fed. Cir. 2016) (complex computational processes are not mental processes).

The examiner respectfully disagrees. It is first noted that the claim does not positively recite performing geometric and visual analysis.
The steps of "detecting the plurality of package faces," "associating each of the plurality of faces with one of the plurality of packages," "inferring at least one package type … and assigning a confidence level," and "overriding the inferred package type of the second face with the inferred package type of the first face" are considered mental process steps: collecting information, analyzing information by steps people go through in their minds or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, are matters within the realm of abstract ideas.

Applicant asserts that, in regard to claim 32, the step of "inferring a SKU of each of the plurality of packages" using machine learning models involves integrating package type and brand data through algorithmic processing, which exceeds human mental capabilities. Applicant notes that the Court of Appeals for the Federal Circuit has recognized that processes requiring complex computations, such as machine learning-based image analysis, are not mental processes. See SRI Int'l, Inc. v. Cisco Sys., Inc., 930 F.3d 1295, 1304 (Fed. Cir. 2019) (finding network monitoring using hierarchical classifiers non-abstract). Applicant argues that the Specification describes the complexity of the machine learning models ([0010], [0080], [0086]-[0087]), reinforcing that these are not human-performable tasks. Furthermore, Applicant argues that, per the 2025 Memorandum ("Distinguishing Claims That Recite a Judicial Exception from Those That Merely Involve One"), the claims merely involve any potential exception (e.g., through training neural networks, similar to Example 39 in the AI-SME Update) rather than reciting one, rendering them eligible without further analysis.

The examiner respectfully disagrees.
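The claim 27 confidence-based override at the center of this dispute can be sketched in a few lines. This is a minimal editorial illustration of the recited steps d) and e), not the applicant's implementation; the function name, data shapes, and labels are hypothetical.

```python
# Minimal sketch of claim 27 steps d)-e): when multiple faces of the same
# package yield different package-type inferences, the lower-confidence
# inference is overridden by the highest-confidence one. Hypothetical names.

def resolve_package_type(face_inferences):
    """face_inferences: list of (package_type, confidence) tuples,
    one per detected face of a single package."""
    best_type, _ = max(face_inferences, key=lambda t: t[1])
    # Override every face's inferred type with the highest-confidence type,
    # keeping each face's own confidence value.
    return [(best_type, conf) for _, conf in face_inferences]

faces = [("24-can case", 0.94), ("12-bottle carton", 0.41)]
print(resolve_package_type(faces))
# -> [('24-can case', 0.94), ('24-can case', 0.41)]
```

The sketch shows why both sides can characterize the same limitation differently: the control flow is a one-line comparison, while the confidences it consumes come from upstream model inference.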
The examiner notes that the step of "inferring a SKU of each of the plurality of packages" using machine learning models to integrate package type and brand data through algorithmic processing is a concept that can be performed in the human mind and/or using pen and paper, and that examples of mental processes include observations, evaluations, judgments, and opinions. The examiner further notes that the claims recite an abstract idea by reciting concepts that can be performed in the human mind, which fall into the "Mental processes" grouping within the enumerated groupings of abstract ideas. Claims can recite a mental process even if they are claimed as being performed on a computer. MPEP 2106.04(a)(2)(II). Although the Examiner acknowledges that "machine learning" itself cannot be performed in the human mind, the Examiner maintains that, but for the generic computing elements (a processor that executes instructions) relied on to perform the "machine learning," the steps recited in the claim can be performed in the human mind and/or using pen and paper. The examiner notes that the machine learning models are recited at a high level of generality and amount to algorithms and/or steps that can be performed in the human mind and/or using pen and paper.

The examiner further notes that the "machine learning" is relied on to perform activity that, but for the claimed reliance on a generic computer, mimics human thought processes of evaluating collected data perceptible only in the human mind. See In re TLI Commc'ns LLC Patent Litig., 823 F.3d 607, 611 (Fed. Cir. 2016); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016). The Federal Circuit has held similar concepts to be abstract. For example, the Federal Circuit has held that abstract ideas include the concepts of collecting data, analyzing the data, and reporting the results of the collection and analysis, including when limited to particular content. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340-41 (Fed. Cir. 2017) (identifying the abstract idea of organizing, displaying, and manipulating data); Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1354 (Fed. Cir. 2016) (characterizing collecting information, analyzing information by steps people go through in their minds or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, as matters within the realm of abstract ideas). Thus, but for the computing system and its processor relied on to implement the machine learning, the steps involving the generation of a learning model fall within the scope of the abstract idea itself. Accordingly, while the machine learning recited in the claim is not part of the abstract idea itself, this does not negate the finding that the claims recite an abstract idea; instead, it merely shifts the analysis of the recited machine learning (and the processor relied on to implement it) to that of an additional element under Step 2A Prong Two and Step 2B of the eligibility inquiry, which has been carried out as provided in the §101 rejection set forth below.
Applicant asserts that even if the claims recite an abstract idea, they integrate it into a practical application. Applicant argues that the claims address a specific technical problem in logistics: manual verification of product shipments is time-consuming and error-prone ([0003]), and the claimed system improves this process by using machine vision and machine learning to automate SKU identification in a stack of packages, enhancing accuracy and efficiency ([0005]). In regard to claims 11 and 19, Applicant argues that the specific, sequential use of a package type machine learning model to infer package type (defined as "a number of beverage containers and a form of the beverage containers"), followed by selecting a brand-specific machine learning model (trained on distinct image sets), provides a technical solution to accurately identify SKUs in a complex, real-world logistics environment, and that this is not generic automation but a tailored application to a specific problem. In regard to claim 27, Applicant argues that the multi-step process of detecting package faces, associating them with packages, assigning confidence levels, and overriding lower-confidence inferences ensures robust identification from multiple image perspectives, addressing challenges in warehouse settings where packages may be partially obscured, misoriented, or poorly lit, and where some package faces may be more distinctive than others. In regard to claim 32, Applicant argues that the integration of SKU inference with error notification based on comparison to expected SKUs directly improves inventory management by reducing errors in verification.

The examiner respectfully disagrees.
The examiner notes that, under Step 2A Prong Two, the Examiner has evaluated the impact of "machine learning" in performing the step of "choosing at least one of the plurality of machine learning models," but maintains that the machine learning has not been shown to improve upon any technology or the apparatus itself. Furthermore, the machine learning and the computer-related elements (e.g., a processor, non-transitory computer-readable media, a computing system) fail to provide an improvement to the functioning of a computer or to any other technology or technical field, do not apply the exception with a particular machine, do not apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, do not effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. These elements have been fully considered; however, they are directed to the use of generic computing elements (Applicant's Specification [0211] describes a high-level general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying "apply it" using a general purpose computer. This merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea, which is not sufficient to amount to a practical application.
With respect to claim 27, the examiner notes that associating each of the plurality of faces from the plurality of images with one of the plurality of packages, and then assigning a package type for the first package based upon the confidence levels of the inferred package types, are concepts that can be performed in the human mind and/or using pen and paper, and that examples of mental processes include observations, evaluations, judgments, and opinions. The examiner further notes that the claims recite an abstract idea by reciting concepts that can be performed in the human mind, which fall into the "Mental processes" grouping within the enumerated groupings of abstract ideas. In accordance with MPEP 2106.04(a)(2)(II), claims can recite a mental process even if they are claimed as being performed on a computer. Although the Examiner acknowledges that "machine learning" itself cannot be performed in the human mind, the Examiner maintains that, but for the generic computing elements (a processor that executes instructions) relied on to perform the "machine learning," the steps recited in the claim can be performed in the human mind and/or using pen and paper. Moreover, the examiner notes that the machine learning models are recited at a high level of generality and amount to algorithms and/or steps that can be performed in the human mind and/or using pen and paper.

With respect to claim 32, the examiner notes that using a package type model and a brand model to infer a SKU of each of a plurality of packages, comparing the inferred SKUs to a list of expected SKUs, and determining whether to generate an error based upon the comparison in light of the list of expected SKUs are concepts that can be performed in the human mind and/or using pen and paper, and that examples of mental processes include observations, evaluations, judgments, and opinions.
The examiner further notes that the claims recite an abstract idea by reciting concepts that can be performed in the human mind, which fall into the "Mental processes" grouping within the enumerated groupings of abstract ideas. In accordance with MPEP 2106.04(a)(2)(II), claims can recite a mental process even if they are claimed as being performed on a computer. Although the Examiner acknowledges that "machine learning" itself cannot be performed in the human mind, the Examiner maintains that, but for the generic computing elements (a processor that executes instructions) relied on to perform the "machine learning," the steps recited in the claim can be performed in the human mind and/or using pen and paper. Moreover, the examiner notes that the machine learning models are recited at a high level of generality and amount to algorithms and/or steps that can be performed in the human mind and/or using pen and paper.

The additional elements are directed to: a computing system; at least one processor; at least one non-transitory computer-readable media storing a plurality of machine learning models that have been trained with a plurality of images of packages, the plurality of machine learning models including at least one package type model and a plurality of brand models, the plurality of brand models including a first brand model and a second brand model, wherein the first brand model has been trained with a plurality of images of first packages but not a plurality of images of second packages, and wherein the second brand model has been trained with the plurality of images of second packages but not the plurality of images of first packages (the machine learning models being recited at a high level of generality); instructions that, when executed by the at least one processor, cause the computer system to perform functions; and receiving a plurality of images of the stack of the plurality of packages (recited at a high level of generality and amounting to pre-solution activity) to implement the abstract idea. However, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. These elements have been fully considered; however, they are directed to the use of generic computing elements (Applicant's Specification [0211] describes a high-level general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying "apply it" using a general purpose computer, which merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea.

Applicant argues that the claims include additional elements that amount to significantly more than any alleged abstract idea. Applicant argues that the ordered combination of steps, particularly the use of distinct machine learning models trained on specific image datasets, the sequential process of inferring package type and then selecting a brand model, and the confidence-based overriding in claim 27, is unconventional, and that the Examiner's reliance on the Specification's description of a general-purpose computer and the prior art (e.g., Adato, [0358]) does not address the specific configuration of the claimed models or their application to SKU identification in a stack of packages.
Applicant argues that claims 11, 19, and 32 specify that the first and second brand models are trained on distinct image sets (e.g., first packages but not second packages), which is a specific technical implementation that enhances classification accuracy, and that claim 27 specifies overriding lower-confidence inferences, a technical solution ensuring robust identification from multiple image perspectives and addressing challenges in warehouse settings where packages may be partially obscured, misoriented, or poorly lit. Applicant argues that these configurations, combined with the multi-step process of claims 11, 19, 27, and 32, are not well-understood, routine, or conventional, as evidenced by the Examiner's finding that the claims are allowable over the prior art (Office Action, 12). See BASCOM, 827 F.3d at 1350 (unconventional ordered combination can provide inventive concept); Berkheimer v. HP Inc., 881 F.3d 1360, 1369 (Fed. Cir. 2018) (requiring evidence that additional elements are conventional).

The examiner respectfully disagrees. The examiner notes that in BASCOM, the Federal Circuit found that the claims included a "non-conventional and non-generic arrangement" of the additional elements, including installation of a filtering tool at a specific location, remote from end users, with customizable filtering features specific to each end user. Applicant's claims, however, do not include similar features or provide a non-conventional arrangement of the additional elements, instead merely incorporating elements of a general purpose computer. In addition, the Examiner emphasizes that the claimed filtering technique in BASCOM, similar to the solution discussed in DDR, was rooted in Internet technology (tied to an ISP server), which distinguishes it from Applicant's invention, which is not rooted in Internet technology as was the ISP-based content filtering scheme of BASCOM.
Instead, Applicant's claims merely involve the use of a general purpose computer to collect, arrange, and display information. Accordingly, the reasons for eligibility in the BASCOM decision are not applicable to Applicant's claims. As best understood by the Examiner, Applicant's argument appears to be based on a misunderstanding of the Berkheimer decision, which the Examiner emphasizes is germane only to the Step 2B eligibility inquiry and only to "additional elements" (i.e., not the elements that actually recite the abstract idea). In particular, the Berkheimer memo provides guidelines for evaluating whether certain claim limitations (the "additional elements") are well-understood, routine, and conventional, and describes the evidentiary requirements to support factual findings related thereto. Berkheimer v. HP Inc., 881 F.3d 1360 (Fed. Cir. 2018). Accordingly, the Examiner emphasizes that a §101 rejection, including one based on a judicial exception, does not hinge on whether the entire claimed subject matter is directed to "well-understood, routine, and conventional activities," as suggested by Applicant; notably, a §101 rejection may be proper even if no claim elements are deemed well-understood, routine, and conventional. We may assume that the techniques claimed are "[g]roundbreaking, innovative, or even brilliant," but that is not enough for eligibility. Ass'n for Molecular Pathology v. Myriad Genetics, Inc., 569 U.S. 576, 591 (2013); accord buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1352 (Fed. Cir. 2014). Nor is it enough for subject-matter eligibility that claimed techniques be novel and nonobvious in light of prior art, passing muster under 35 U.S.C. §§ 102 and 103. See Mayo Collaborative Servs. v. Prometheus Labs., Inc., 566 U.S. 66, 89-90 (2012); Synopsys, Inc. v. Mentor Graphics Corp., 839 F.3d 1138, 1151 (Fed. Cir. 2016) ("[A] claim for a new abstract idea is still an abstract idea. The search for a § 101 inventive concept is thus distinct from demonstrating § 102 novelty."); Intellectual Ventures I LLC v. Symantec Corp., 838 F.3d 1307, 1315 (Fed. Cir. 2016) (same for obviousness). Accordingly, the 35 U.S.C. §101 rejection of the claims is maintained, and an updated 35 U.S.C. §101 rejection below addresses Applicant's newly added claims.

Applicant's arguments with respect to the 35 U.S.C. §112(a) rejection (remarks pages 1-3) are found persuasive, and the 35 U.S.C. §112(a) rejection is accordingly withdrawn.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 11-27 and 31-42 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below, in accordance with the Patent Subject Matter Eligibility Guidance.

With respect to Step 1 of the eligibility inquiry (as explained in MPEP 2106), it is first noted that the computing system (claims 11-18, 36-38, and 41), the method (claims 19-26 and 42), the computing system (claims 27, 31, 39, and 40), and the computing system (claims 32-35) are directed to eligible categories of subject matter (i.e., machine, process, machine, and machine, respectively).
Thus, Step 1 is satisfied. With respect to Step 2, and in particular Step 2A Prong II, it is next noted that the claims recite an abstract idea by reciting concepts of performed in the human mind, and examples of mental processes include observations, evaluations, judgments, and opinions, which falls into the “Mental processes” group within the enumerated groupings of abstract ideas. Claims can recite a mental process even if they are claimed as being performed on a computer. MPEP 2106.04(a)(2)(III). The limitations reciting the abstract idea are highlighted in italics and the limitation directed to additional elements highlighted in bold, as set forth in exemplary claim 11, are: A computing system for identifying SKUs in a stack of a plurality of packages of beverage containers comprising: at least one processor; and at least one non-transitory computer-readable media storing: a plurality of machine learning models that have been trained with a plurality of images of packages of beverage containers, wherein the plurality of machine learning models includes a package type machine learning model and a plurality of brand machine learning models, the plurality of brand machine learning models including a first brand machine learning model and a second brand machine learning model, wherein the first brand machine learning model has been trained with a plurality of images of first packages of beverage containers but not a plurality of images of second packages of beverage containers, and wherein the second brand machine learning model has been trained with the plurality of images of second packages of beverage containers but not the plurality of images of first packages of beverage containers; and instructions that, when executed by the at least one processor, cause the computer system to perform the following operations: a) receiving at least one image of the stack of the plurality of packages of beverage containers; b) inferring a package type of each of the plurality 
of packages of beverage containers based upon the at least one image using the package type machine learning model, wherein the package type includes a number of beverage containers and a form of the beverage containers; c) based upon the package type inferred for each of the plurality of packages of beverage containers, choosing at least one of the plurality of brand machine learning models; and d) using the at least one of the plurality of brand machine learning models chosen in step c) for each of the plurality of packages of beverage containers, inferring a brand of each of the plurality of packages of beverage containers based upon the at least one image.

Claim 19 recites substantially the same limitations as claim 11 and is therefore subject to the same rationale.

The limitations reciting the abstract idea are highlighted in italics and the limitations directed to additional elements are highlighted in bold, as set forth in exemplary claim 27:

A computing system for identifying SKUs in a stack of a plurality of packages comprising: at least one processor; and at least one non-transitory computer-readable media storing: a plurality of machine learning models that have been trained with a plurality of images of packages, the plurality of machine learning models including at least one package type model and a plurality of brand models, the plurality of brand models including a first brand model and a second brand model, wherein the first brand model has been trained with a plurality of images of first packages but not a plurality of images of second packages, and wherein the second brand model has been trained with the plurality of images of second packages but not the plurality of images of first packages; and instructions that, when executed by the at least one processor, cause the computer system to perform the following operations:
a) receiving a plurality of images of the stack of the plurality of packages, each of the plurality of packages having a plurality of package faces;
b) detecting the plurality of package faces that are visible in the plurality of images;
c) associating each of the plurality of faces from the plurality of images with one of the plurality of packages, wherein a first package of the plurality of packages is associated with a first face and a second face of the plurality of faces from the plurality of images;
d) using the at least one package type model, inferring at least one package type of each of the first face and the second face based upon the plurality of images and assigning a confidence level to each inferred package type, wherein the confidence level assigned to the inferred package type of the first face is higher than the confidence level assigned to the inferred package type of the second face;
e) overriding the inferred package type of the second face with the inferred package type of the first face;
f) based upon the package type assigned to the first face, choosing the first brand model;
g) using the first brand model, inferring a first brand from the first face at a first confidence level and inferring a second brand from the second face at a second confidence level;
h) choosing either the first brand or the second brand based upon which of the first confidence level or the second confidence level is higher; and
i) associating a SKU with the first package based upon the inferred package type of the first face and the brand chosen in step h).

Claim 32 recites substantially the same limitations as claim 27 and is therefore subject to the same rationale.

With respect to Step 2A Prong Two, the judicial exception is not integrated into a practical application.
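For readers tracing the claim language, the two-stage inference pipeline recited in claim 27 (steps a) through i)) can be sketched in plain Python. This is an illustrative sketch only, not Applicant's actual implementation: the model callables, face records, and confidence values below are hypothetical stand-ins for the trained neural networks described in the specification, and steps a)-c) (image capture and face detection) are assumed to have already run.

```python
# Illustrative sketch of claim 27's two-stage inference (hypothetical names).
# A "model" here is any callable returning a (label, confidence) pair.

def infer_sku(faces, package_type_model, brand_models):
    """Return a SKU as a (package type, brand) pair for one package.

    faces: list of {"image": ...} records for the faces associated with
    the package by the earlier detection/association steps.
    """
    # Step d): infer a package type and confidence level for each face.
    typed = [(face, *package_type_model(face["image"])) for face in faces]
    # Step e): the highest-confidence package type overrides the others.
    _, best_type, _ = max(typed, key=lambda t: t[2])
    # Step f): choose the brand model keyed by the winning package type.
    brand_model = brand_models[best_type]
    # Step g): infer a brand (with confidence) from each face.
    brands = [brand_model(face["image"]) for face in faces]
    # Step h): keep whichever brand was inferred at the higher confidence.
    brand, _ = max(brands, key=lambda b: b[1])
    # Step i): the SKU associated with the package is the type/brand pair.
    return (best_type, brand)

# Hypothetical stub "models" standing in for trained networks.
def type_model(image):
    return ("24-pack cans", 0.9) if image == "front" else ("12-pack cans", 0.4)

brand_models = {
    "24-pack cans": lambda image: ("BrandA", 0.95) if image == "front" else ("BrandB", 0.6),
}

faces = [{"image": "front"}, {"image": "side"}]
print(infer_sku(faces, type_model, brand_models))  # prints ('24-pack cans', 'BrandA')
```

With the stub models above, the side face's lower-confidence package type is overridden by the front face's type (step e)), and the brand inferred at the higher confidence wins (step h)). Note how the choice of brand model depends on the package-type inference, which is the interdependence Applicant's argument emphasizes.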
The additional elements are directed to: a computing system; at least one processor; at least one non-transitory computer-readable media storing a plurality of machine learning models that have been trained with a plurality of images of packages, the plurality of machine learning models including at least one package type model and a plurality of brand models, the plurality of brand models including a first brand model and a second brand model, wherein the first brand model has been trained with a plurality of images of first packages but not a plurality of images of second packages, and wherein the second brand model has been trained with the plurality of images of second packages but not the plurality of images of first packages (a machine learning model recited at a high level of generality); instructions that, when executed by the at least one processor, cause the computer system to perform functions; and receiving a plurality of images of the stack of the plurality of packages (recited at a high level of generality and amounting to pre-solution activity) to implement the abstract idea. However, these elements fail to integrate the abstract idea into a practical application because they fail to provide an improvement to the functioning of a computer or to any other technology or technical field, fail to apply the exception with a particular machine, fail to effect a transformation of a particular article to a different state or thing, and fail to apply or use the abstract idea in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment.
Furthermore, these elements have been fully considered; however, they are directed to the use of generic computing elements (Applicant's Specification [0211] describes a high-level general purpose computer) to perform the abstract idea, which is not sufficient to amount to a practical application and is tantamount to simply saying "apply it" using a general purpose computer. This merely serves to tie the abstract idea to a particular technological environment (a computer-based operating environment) by using the computer as a tool to perform the abstract idea, which is not sufficient to amount to a practical application. While the Examiner notes that "machine learning" itself cannot be performed in the human mind, the Examiner maintains that, but for the generic computing elements (a processor that executes instructions) relied on to perform it, the "machine learning" recited in the claim can be performed in the human mind and/or using pen and paper. The Examiner notes that the machine learning models are recited at a high level of generality that amounts to algorithms and/or steps that can be performed in the human mind and/or using pen and paper. The Examiner notes that the "machine learning" is relied on to perform activity that, but for the claimed reliance on a generic computer, mimics human thought processes of evaluating collected data perceptible only in the human mind. See In re TLI Commc'ns LLC Patent Litig., 823 F.3d 607, 611 (Fed. Cir. 2016); FairWarning IP, LLC v. Iatric Sys., Inc., 839 F.3d 1089, 1093-94 (Fed. Cir. 2016). The Federal Circuit has held similar concepts to be abstract. Thus, for example, the Federal Circuit has held that abstract ideas include the concepts of collecting data, analyzing the data, and reporting the results of the collection and analysis, including when limited to particular content. See, e.g., Intellectual Ventures I LLC v. Capital One Fin. Corp., 850 F.3d 1332, 1340-41 (Fed. Cir.
2017) (identifying the abstract idea of organizing, displaying, and manipulating data); Elec. Power Grp., LLC v. Alstom S.A., 830 F.3d 1350, 1353-54 (Fed. Cir. 2016) (characterizing collecting information, analyzing information by steps people go through in their minds, or by mathematical algorithms, and presenting the results of collecting and analyzing information, without more, as matters within the realm of abstract ideas). Thus, but for the computing system and its processor relied on to implement the machine learning, the steps involving the generation of a learning model fall within the scope of the abstract idea itself. Accordingly, while the machine learning recited in the claim is not part of the abstract idea itself, this does not negate the merit of the finding that the claims recite an abstract idea; instead, this finding merely shifts analysis of the impact of the recited machine learning (and the processor relied on to implement it) to that of an additional element under Step 2A Prong Two and Step 2B of the eligibility inquiry, which has been carried out as provided in the §101 rejection set forth. Accordingly, because the Step 2A Prong One and Prong Two analysis resulted in the conclusion that the claims are directed to an abstract idea, additional analysis under Step 2B of the eligibility inquiry must be conducted in order to determine whether any claim element or combination of elements amounts to significantly more than the judicial exception. With respect to Step 2B of the eligibility inquiry, it has been determined that the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
The additional limitations are directed to: a computing system; at least one processor; at least one non-transitory computer-readable media storing a plurality of machine learning models that have been trained with a plurality of images of packages, the plurality of machine learning models including at least one package type model and a plurality of brand models, the plurality of brand models including a first brand model and a second brand model, wherein the first brand model has been trained with a plurality of images of first packages but not a plurality of images of second packages, and wherein the second brand model has been trained with the plurality of images of second packages but not the plurality of images of first packages (a machine learning model recited at a high level of generality); instructions that, when executed by the at least one processor, cause the computer system to perform functions; and receiving a plurality of images of the stack of the plurality of packages (recited at a high level of generality and amounting to pre-solution activity). These elements have been considered, but merely serve to tie the invention to a particular operating environment (i.e., a computer-based implementation), though at a very high level of generality and without imposing meaningful limitations on the scope of the claim. In addition, Applicant's Specification ([0211]) describes generic off-the-shelf computer-based elements for implementing the claimed invention, which does not amount to significantly more than the abstract idea and is not enough to transform an abstract idea into eligible subject matter. Such generic, high-level, and nominal involvement of a computer or computer-based elements for carrying out the invention merely serves to tie the abstract idea to a particular technological environment, which is not enough to render the claims patent-eligible, as noted at pg. 74624 of Federal Register/Vol. 79, No. 241, citing Alice, which in turn cites Mayo.
In addition, when taken as an ordered combination, the ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements integrates the abstract idea into a practical application. Their collective functions merely provide conventional computer implementation. Therefore, when viewed as a whole, these additional claim elements do not provide meaningful limitations that transform the abstract idea into a practical application, nor does the ordered combination amount to significantly more than the abstract idea itself. The dependent claims have been fully considered as well; however, similar to the finding for the claims above, these claims are similarly directed to the abstract idea of a mental process without integrating it into a practical application and with, at most, a general-purpose computer that serves to tie the idea to a particular technological environment, which does not add significantly more to the claims. The ordered combination of elements in the dependent claims (including the limitations inherited from the parent claim(s)) adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. Accordingly, the subject matter encompassed by the dependent claims fails to amount to significantly more than the abstract idea.

Examiner Notice

Claims 11-27 and 31-42 would be allowable if amended in such a way as to overcome the 35 USC § 101 rejections set forth in this action.
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20200302510 A1 - System, Device, and Method of Augmented Reality based Mapping of a Venue and Navigation within a Venue (Chachek; Amit et al.)
US 20190236531 A1 - Comparing Planogram Compliance to Checkout Data (Adato; Yair et al.)
US 20170323376 A1 - System And Method for Computer Vision Driven Applications Within an Environment (Glaser; William et al.)
US 20090065568 A1 - Systems and Methods for Associating Production Attributes with Products (Grant; Elliott et al.)
US 20200061839 A1 - Inventory Management by Mobile Robot (Deyle; Travis J. et al.)
US 10572854 B2 - Order grouping in warehouse order fulfillment operations (Johnson; Ryan et al.)
US 20190279017 A1 - User Interface for Object Detection and Labeling (Graham; Jamey et al.)
US 20140374478 A1 - System And Method for Providing Real-Time Tracking of Items in A Distribution Network (Dearing; Stephen M. et al.)
US 20190236530 A1 - Product Inventorying Using Image Differences (Cantrell; Robert et al.)
US 9821344 B2 - Systems and methods for scanning information from storage area contents (Zsigmond; Fabio et al.)
US 11482045 B1 - Associating events with actors using digital imagery and machine learning (Kim; Jaechul et al.)

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to REHAM K ABOUZAHRA whose telephone number is (571)272-0419. The examiner can normally be reached M-F 7:00 AM to 5:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Epstein can be reached at (571)-270-5389. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /REHAM K ABOUZAHRA/Examiner, Art Unit 3625 /BRIAN M EPSTEIN/Supervisory Patent Examiner, Art Unit 3625

Prosecution Timeline

Oct 22, 2021
Application Filed
Apr 21, 2022
Non-Final Rejection — §101
Jul 26, 2022
Response Filed
Aug 29, 2022
Final Rejection — §101
Dec 07, 2022
Response after Non-Final Action
Jan 09, 2023
Notice of Allowance
May 09, 2023
Response after Non-Final Action
May 24, 2023
Response after Non-Final Action
Aug 23, 2023
Response after Non-Final Action
Nov 07, 2023
Response after Non-Final Action
Nov 08, 2023
Response after Non-Final Action
Nov 09, 2023
Response after Non-Final Action
Nov 18, 2024
Response after Non-Final Action
Jan 21, 2025
Request for Continued Examination
Jan 22, 2025
Response after Non-Final Action
Mar 12, 2025
Non-Final Rejection — §101
Aug 19, 2025
Response Filed
Nov 29, 2025
Final Rejection — §101 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591904
METHODS AND APPARATUS TO DETERMINE UNIFIED ENTITY WEIGHTS FOR MEDIA MEASUREMENT
2y 5m to grant Granted Mar 31, 2026
Patent 12586127
Stochastic Bidding Strategy for Virtual Power Plants with Mobile Energy Storages
2y 5m to grant Granted Mar 24, 2026
Patent 12419214
UTILITY VEHICLE
2y 5m to grant Granted Sep 23, 2025
Patent 12367506
DATA PROCESSING SYSTEMS AND METHODS FOR CONTROLLING AN AUTOMATED SURVEY SYSTEM
2y 5m to grant Granted Jul 22, 2025
Patent 12079751
CENTRAL PLANT WITH ASSET ALLOCATOR
2y 5m to grant Granted Sep 03, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
12%
Grant Probability
21%
With Interview (+8.8%)
3y 12m
Median Time to Grant
High
PTA Risk
Based on 142 resolved cases by this examiner. Grant probability derived from career allow rate.
