Prosecution Insights
Last updated: April 19, 2026
Application No. 18/400,874

PRODUCT AUTHENTICATION USING PACKAGING

Final Rejection: §101, §103, §112
Filed: Dec 29, 2023
Examiner: HARRINGTON, MICHAEL P
Art Unit: 3628
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Sys-Tech Solutions, Inc.
OA Round: 2 (Final)
Grant Probability: 24% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 7m
Grant Probability With Interview: 41%

Examiner Intelligence

Career Allow Rate: 24% (117 granted / 477 resolved; -27.5% vs TC avg)
Interview Lift: +16.9% (strong; based on resolved cases with interview)
Avg Prosecution: 4y 7m
Currently Pending: 35
Total Applications: 512 (across all art units)
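The headline figures above are simple ratios and can be sanity-checked. Below is a minimal sketch; the function names are illustrative, not from any dashboard API. Note that the page's +16.9% lift is presumably measured against the no-interview subset rather than the overall career rate, so this reconstruction only approximates it:

```python
# Illustrative recomputation of the examiner metrics shown above.
# Inputs: 117 granted of 477 resolved (career), 41% allowance with interview.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift attributed to an examiner interview."""
    return rate_with - rate_without

career = allow_rate(117, 477)        # ~24.5%, displayed rounded as 24%
lift = interview_lift(41.0, career)  # ~16.5 points vs the dashboard's +16.9%

print(f"Career allow rate: {career:.1f}%")
print(f"Interview lift: {lift:+.1f} points")
```

The small gap between the recomputed lift and the reported +16.9% is consistent with the dashboard splitting the baseline into with- and without-interview cohorts rather than using the pooled career rate.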

Statute-Specific Performance

§101: 30.2% (-9.8% vs TC avg)
§103: 40.8% (+0.8% vs TC avg)
§102: 6.9% (-33.1% vs TC avg)
§112: 19.2% (-20.8% vs TC avg)
Tech Center averages are estimates. Based on career data from 477 resolved cases.
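The "vs TC avg" deltas in the table each back out to the same implied baseline, which suggests the dashboard compares every statute against a single pooled Tech Center estimate. A quick sketch (dictionary keys are just the statute numbers from the table):

```python
# Backing out the implied Tech Center average from the table above:
# implied TC avg = examiner rate - delta vs TC avg.

examiner_rate = {"101": 30.2, "103": 40.8, "102": 6.9, "112": 19.2}
delta_vs_tc = {"101": -9.8, "103": 0.8, "102": -33.1, "112": -20.8}

implied_tc_avg = {
    statute: round(examiner_rate[statute] - delta_vs_tc[statute], 1)
    for statute in examiner_rate
}

# Each statute backs out to the same ~40% baseline.
for statute, avg in implied_tc_avg.items():
    print(f"§{statute}: examiner {examiner_rate[statute]}% vs implied TC avg {avg}%")
```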

Office Action

§101 §103 §112
DETAILED ACTION

Status of Claims

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is a non-final, first office action in response to the application filed 29 December 2023. Claims 1-20 are currently pending and have been examined.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 18 June 2024 was filed before the mailing date of the first office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

The information disclosure statement (IDS) submitted on 2 April 2025 was filed before the mailing date of the first office action. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5-7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.
With respect to claim 5, the Applicant claims, "wherein the at least one similarity comprises: a textual similarity between text included on the face of the first packaging and text included on the first face of the second packaging, and a graphical similarity between the reference image and the image." The Applicant has rendered this claim indefinite and unclear for failing to particularly define the invention. In this case, the Applicant has referred to "the image"; however, this recitation lacks antecedent basis. In particular, it is unclear what "the image" refers to, as multiple images are recited in claims 1 and 4, from which claim 5 depends. For the purpose of examination, the Examiner will interpret the claim to read, "wherein the at least one similarity comprises: a textual similarity between text included on the face of the first packaging and text included on the first face of the second packaging, and a graphical similarity between the reference image and the image of the first packaging."

With respect to claim 6, the Applicant claims, "wherein determining whether the first packaging is authentic comprises: in response to selecting the image of the second packaging as the reference image, determining whether the first packaging is authentic based on a comparison between the first packaging in the image and the second packaging in the reference image." The Applicant has rendered this claim indefinite and unclear for failing to particularly define the invention. In this case, the Applicant has referred to "the image" both as the image of the second packaging (also called the reference image) and as the image of the first packaging, rendering the claim unclear because the same term is used for different parameters.
For the purpose of examination, the Examiner will interpret the claim to read, "wherein determining whether the first packaging is authentic comprises: in response to selecting the image of the second packaging as the reference image, determining whether the first packaging is authentic based on a comparison between the image of the first packaging and the second packaging in the reference image."

With respect to claim 7, the Applicant claims, "in response to determining that the image includes the data-encoding symbol, decoding data encoded by the data-encoding symbol, and determining a reference image based on the data, or in response to determining that the image does not include the data-encoding symbol, determining the reference image based on a graphical comparison between the image and the reference image." The Applicant has rendered this claim indefinite and unclear for failing to particularly define the invention. In this case, the Applicant has referred to determining a reference image by comparing the image of the first packaging with the reference image; it is therefore unclear how a reference image can be determined based on itself. For the purpose of examination, the Examiner will interpret the claim to read, "in response to determining that the image does not include the data-encoding symbol, determining a reference image based on a graphical comparison between the image and a second image."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
The claims recite capturing, at a mobile device, an image using a camera of the mobile device; processing, at the mobile device, the image using one or more machine learning models, wherein the one or more machine learning models have been trained to identify a face of first packaging in the image, and determine whether the first packaging in the image satisfies one or more capture conditions; providing, at the mobile device, feedback for image capture based on a first output of the one or more machine learning models relating to the one or more capture conditions; and in response to output of the one or more machine learning models indicating that the one or more capture conditions are satisfied, and in response to the output of the one or more machine learning models indicating that the face of the first packaging is present in the image, sending the image for authentication of the first packaging.

The limitations of capturing an image using a camera of the mobile device, processing the image using a model trained to identify a face of first packaging, determining whether the first packaging in the image satisfies capture conditions, providing feedback for image capture based on an output of the model, and, in response to output of the model indicating that the capture conditions are satisfied and that the face of the first packaging is present in the image, sending the image for authentication of the first packaging, as drafted and under the broadest reasonable interpretation, encompass the management of commercial activity (business relations), managing human behavior and relationships, and mental processes. That is, other than reciting the use of generic computer elements (mobile device, camera, machine learning model, computers), the claims recite an abstract idea.
In particular, capturing an image using a camera of the mobile device, processing the image using a model trained to identify a face of first packaging, determining whether the first packaging in the image satisfies capture conditions, providing feedback for image capture based on an output of the model, and, in response to output of the model indicating that the capture conditions are satisfied and that the face of the first packaging is present in the image, sending the image for authentication of the first packaging, encompasses using a camera to photograph a package of an item, analyzing the photograph to recognize the face of the package, determining whether the photograph satisfies the conditions for correct use, and then authenticating the package. This is the management of commercial activity (business relations) and of human behavior and relationships; thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" grouping of abstract ideas.

In addition, the claims recite processing the image using a model trained to identify a face of first packaging, determining whether the first packaging in the image satisfies capture conditions, providing feedback for image capture based on an output of the model, and responding to output of the model indicating that the capture conditions are satisfied and that the face of the first packaging is present in the image, which encompass elements that can be performed in the human mind (observation, evaluation, opinion, and judgement). Thus, the claims recite elements that fall into the "Mental Processes" grouping of abstract ideas. The claims recite an abstract idea.

This judicial exception is not integrated into a practical application.
The claims do not recite additional elements, taken individually and in an ordered combination with the abstract idea, that improve the functioning of a computer, another technology, or a technical field. The claims do not recite the use of, or apply the abstract idea with, a particular machine, and the claims do not recite the transformation of an article from one state or thing into another. Finally, the claims do not recite additional elements, taken individually and in an ordered combination, that apply or use the abstract idea in some other meaningful way beyond generally linking the use of the abstract idea to a particular technological environment. Instead, the claims recite the use of generic computer elements (mobile device, camera, machine learning model, computers) as tools to carry out the abstract idea. The claims are directed to an abstract idea.

The claims do not include additional elements, taken individually and in an ordered combination with the abstract idea, that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of using generic computer elements and machines to perform the steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claims are directed to non-patent-eligible subject matter.

The dependent claims 2-18, taken individually and in an ordered combination with the abstract idea, do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea.
In particular, the claims further recite training the models to determine a face of the packaging and the type of face, which is merely setting up a model to be used to analyze objects and make determinations. This encompasses the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claims 2 and 3).

In addition, the claims further recite determining whether the first packaging is authentic comprising selecting a first face of a second package based on the face type of the face of the first packaging, determining at least one similarity between the first face and the face of the first packaging, and selecting an image of the second packaging as a reference image based on the at least one similarity, which encompasses determining the authenticity of a package by comparing an image of it to another package's image. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claims 4 and 6).

In addition, the claims further identify the type of similarity determined, which merely narrows the field of use, and thus do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea (claim 5).
In addition, the claims further recite determining whether the packaging is authentic comprising determining whether the image includes a data-encoding symbol; decoding data encoded by the data-encoding symbol and determining a reference image based on the data; or, in response to determining that the image does not include the data-encoding symbol, determining the reference image based on a graphical comparison between the image and the reference image. This encompasses determining whether a symbol is on a package and determining whether the package is authentic by using the symbol or by comparing an image to a reference image, which is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 7).

In addition, the claims further recite determining whether the packaging is authentic comprising receiving the image and processing the image using a model, which encompasses determining that a package is authentic by using a model. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 8). In addition, the claims recite the use of generic computer elements (mobile device, machine learning model) as tools to carry out the abstract idea, and thus do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea (claim 8).
In addition, the claims further recite determining whether the first packaging is authentic comprising determining a textual similarity between text in the image and text in a reference image, determining a graphical similarity between the image and the reference image, determining that the first packaging is not authentic based on the textual similarity or the graphical similarity, and determining a packaging of which the first packaging is a counterfeit. This encompasses determining that a package is counterfeit when text or a graphic in the image of the package is not similar to a reference image, which is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 9).

In addition, the claims further recite the type of capture conditions considered, which merely narrows the field of use, and thus do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea (claim 10). In addition, the claims further recite providing feedback regarding the image capture, including an indication of data and corruption, which merely narrows the field of use, and thus do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea (claims 11 and 12).
In addition, the claims further recite training models including obtaining an image of a reference, modifying capture conditions to generate more images, and training models using the images, which encompasses forming models for analyzing objects using images from different perspectives. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 13).

In addition, the claims further recite determining that a package is authentic comprising comparing a feature of the package to a blueprint that labels components of the package, which encompasses checking an object against a labeled reference. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 14).

In addition, the claims further recite generating blueprints by processing images of reference packages through the model, which encompasses forming reference records for analyzing objects. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 15).
In addition, the claims further recite training the model using images of faces of packaging and data in the images, which encompasses forming a model to conduct analysis using previous data sets. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 16).

In addition, the claims further recite the type of models used to analyze images and make determinations, which merely narrows the field of use, and thus do not recite additional elements that integrate the abstract idea into a practical application or add significantly more to the abstract idea (claim 17).

In addition, the claims further recite training the models comprising providing labels for objects in an image to a user and allowing the user to change the labels, which encompasses forming a model to conduct analysis using data sets and user feedback. This is the management of commercial activity (business relations) and of human behavior and relationships, as well as elements that can be performed in the human mind (observation, evaluation, judgement); thus, the claims recite elements that fall into the "Certain Methods of Organizing Human Activity" and "Mental Processes" groupings of abstract ideas (claim 18).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-3, 7-11, 13-17, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Heikel et al. (US 2021/0374476 A1) (hereinafter Heikel) in view of Ryle et al. (US 2024/0378626 A1) (hereinafter Ryle).
With respect to claims 1, 19, and 20, Heikel teaches:

Capturing, at a mobile device, an image using a camera of the mobile device (see at least paragraphs 135, 138-139, 157-160, 183, and 194, which describe a user using a mobile device to image a package of a product).

Processing the image using one or more machine learning models, wherein the one or more machine learning models have been trained to identify a face of first packaging in the image (see at least paragraphs 129, 154, 194, 195, and 197-199, which describe processing the image using machine learning models in order to identify objects in the image and determine the authenticity of the package).

Determine whether the first packaging in the image satisfies one or more capture conditions (see at least paragraphs 186-191, which describe a user using a camera to image an item and determine its authenticity, wherein the models are used to determine whether capture conditions, including orientation and angle, have been met before processing the image, and wherein instructions are provided to the user to adjust the conditions).

Providing, at the mobile device, feedback for image capture based on a first output of the one or more machine learning models relating to the one or more capture conditions (see at least paragraphs 186-191, which describe a user using a camera to image an item and determine its authenticity, wherein the models are used to determine whether capture conditions, including orientation and angle, have been met before processing the image, and wherein instructions are provided to the user to adjust the conditions).
In response to output of the one or more machine learning models indicating that the one or more capture conditions are satisfied, and in response to the output of the one or more machine learning models indicating that the face of the first packaging is present in the image, sending the image for authentication of the first packaging (see at least paragraphs 129, 154, 194, 195, and 197-199, which describe, in response to capture conditions being satisfied, sending the image to a server and processing the image using machine learning models in order to identify objects in the image and determine the authenticity of the package).

Heikel discloses all of the limitations of claims 1, 19, and 20 as stated above. Heikel does not explicitly disclose the following; however, Ryle teaches:

Processing, at the mobile device, the image using one or more machine learning models (see at least paragraphs 44-50, 54, 56, and 69, which describe a user using their mobile device to image a package in order to determine its authenticity, wherein the mobile device uses machine learning models to analyze the image and determine whether features included in the image indicate authenticity).

It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of Heikel, in which a camera of a mobile device images a package in order to determine its authenticity, the mobile device presents instructions to the user so that the package is imaged under the correct conditions, and, upon the package being imaged correctly, the image is provided to a server to confirm authenticity using machine learning models, with the system and method of Ryle, in which a user uses their mobile device to image a package in order to determine its authenticity and the mobile device uses machine learning models to analyze the image and determine whether features included in the image indicate authenticity.
By processing images of a package on a user device instead of a server, a user will predictably be able to process images of items quickly and efficiently, instead of relying on a network connection to process images.

With respect to claim 2, the combination of Heikel and Ryle discloses all of the limitations of claim 1 as stated above. In addition, Heikel teaches:

Wherein the one or more machine learning models have been trained to determine a face type of the face of the first packaging (see at least paragraphs 155, 173, 175-178, 180, 185, 194, and 195, which describe using trained machine learning models to analyze images and determine authenticity, wherein the models are trained to recognize the sides of a package).

With respect to claim 3, Heikel/Ryle discloses all of the limitations of claims 1 and 2 as stated above. In addition, Heikel teaches:

Wherein the face type comprises a front face or a rear face (see at least paragraphs 155, 173, 175-178, 180, 185, 194, and 195, which describe using trained machine learning models to analyze images and determine authenticity, wherein the models are trained to recognize the sides of a package, including the front or rear of the package).

With respect to claim 7, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
In addition, Ryle teaches:

Determining whether the first packaging is authentic, wherein determining whether the first packaging is authentic comprises: determining whether the image includes a data-encoding symbol; in response to determining that the image includes the data-encoding symbol, decoding data encoded by the data-encoding symbol, and determining a reference image based on the data; or, in response to determining that the image does not include the data-encoding symbol, determining the reference image based on a graphical comparison between the image and the reference image (see at least paragraphs 44-50, 54, 56, and 69, which describe a user using their mobile device to image a package in order to determine its authenticity, wherein it is determined whether the image includes a barcode/QR code, and wherein, if it does, the label is used to identify a reference image used for determining authenticity).

It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of Heikel, in which a camera of a mobile device images a package in order to determine its authenticity, the mobile device presents instructions to the user so that the package is imaged under the correct conditions, and, upon the package being imaged correctly, the image is provided to a server to confirm authenticity using machine learning models, with the system and method of Ryle, in which a user uses their mobile device to image a package in order to determine its authenticity, it is determined whether the image includes a barcode/QR code, and, if it does, the label is used to identify a reference image used for determining authenticity.
By using a barcode included in an image to identify a reference image that can be used for determining authenticity, a system will predictably be able to quickly and efficiently identify what the object being processed is supposed to be, thus making it easier to determine authenticity.

With respect to claim 8, Heikel/Ryle discloses all of the limitations of claim 1 as stated above. In addition, Heikel teaches:

Determining whether the first packaging is authentic, wherein determining whether the first packaging is authentic comprises: receiving the image from the mobile device, and processing the image using a machine learning model distinct from a first machine learning model, of the one or more machine learning models, that has been trained to identify the face of the first packaging in the image (see at least paragraphs 129, 154, 194, 195, and 197-199, which describe, in response to capture conditions being satisfied, sending the image to a server and processing the image using machine learning models in order to identify objects in the image and determine the authenticity of the package, wherein multiple machine learning models are used).

With respect to claim 9, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
In addition, Heikel teaches:

Determining whether the first packaging is authentic, wherein determining whether the first packaging is authentic comprises: determining a textual similarity between text in the image and text in a reference image; determining a graphical similarity between the image and the reference image; determining, based on at least one of the textual similarity or the graphical similarity, that the first packaging is not authentic; and determining, based on the image, a packaging of which the first packaging is a counterfeit (see at least paragraphs 129, 154, 194, 195, and 197-199, which describe, in response to capture conditions being satisfied, sending the image to a server and processing the image using machine learning models in order to identify objects in the image, wherein text and graphics are analyzed with the models and compared to a profile with reference images, and wherein the target is determined to be not authentic when the image and the reference image are not similar).

With respect to claim 10, Heikel/Ryle discloses all of the limitations of claim 1 as stated above. In addition, Heikel teaches:

Wherein the one or more capture conditions are based on at least one of an orientation of the first packaging in the image or a level of corruption in the image (see at least paragraphs 186-191, which describe a user using a camera to image an item and determine its authenticity, wherein the models are used to determine whether capture conditions, including orientation and angle, have been met before processing the image, and wherein instructions are provided to the user to adjust the conditions).

With respect to claim 11, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
In addition, Heikel teaches: Wherein the feedback for image capture comprises at least one of: a graphical bound for placement of the first packaging during image capture, the graphical bound being moved to different locations on a display of the mobile device over capture of multiple images, an indication of whether an orientation of the first packaging satisfies an orientation condition, or a progress indicator that progresses based on satisfaction of the one or more capture conditions (See at least paragraphs 186-191 which describe a user using a camera to image an item and determine the authenticity, wherein the models are used to determine if capture conditions have been met before processing the image, including orientation and angle, and wherein instructions are provided to the user to adjust the conditions, including reframing objects or changing the angle of the camera).

With respect to claim 13, Heikel/Ryle discloses all of the limitations of claim 1 as stated above. In addition, Heikel teaches: Training the one or more machine learning models, wherein training the one or more machine learning models comprises: obtaining an image of reference packaging; generating a plurality of images by modifying at least one of orientation, background, or contrast of the image of the reference packaging; and training the one or more machine learning models using the plurality of images as training data (See at least paragraphs 155, 157-160, 164, 173, and 175-177 which describe training machine learning models in order to recognize packages in images, wherein original images of an original product are collected and used to train a model to recognize the packages, including using different orientations of images).

With respect to claim 14, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
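The claim 13 training step above generates extra training images by varying the orientation and contrast of a reference image. A minimal sketch under simplifying assumptions (a nested-list grayscale "image" and hand-rolled transforms standing in for a real augmentation library):

```python
def rotate_90(img):
    # Rotate the pixel grid 90 degrees clockwise.
    return [list(row) for row in zip(*img[::-1])]

def adjust_contrast(img, factor):
    # Scale each pixel away from mid-gray (128), clamped to 0-255.
    return [[max(0, min(255, int(128 + (p - 128) * factor))) for p in row]
            for row in img]

def augment(reference):
    # Build a training set from one reference image by varying
    # orientation and contrast, as the claimed training step describes.
    out = [reference]
    rotated = reference
    for _ in range(3):
        rotated = rotate_90(rotated)
        out.append(rotated)
    out.append(adjust_contrast(reference, 1.5))  # higher contrast
    out.append(adjust_contrast(reference, 0.5))  # lower contrast
    return out

samples = augment([[0, 255], [128, 64]])
print(len(samples))  # 6 variants, including the original
```

Background substitution, the third variation the claim lists, would follow the same pattern with a compositing transform.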
In addition, Heikel teaches: Determining whether the first packaging is authentic, wherein determining whether the first packaging is authentic comprises: comparing at least one feature of the first packaging to a digital blueprint of reference packaging, the digital blueprint comprising: a label indicating a face type of a face of the reference packaging, a graphical representation of the face of the reference packaging, and text included on the face of the reference packaging (See at least paragraphs 155, 157-160, 164, 173, and 175-177 which describe training machine learning models in order to recognize packages in images, wherein original images of an original product are collected and used to train a model to recognize the packages, including using different orientations of images, wherein the reference images and labels for features in the images are stored in an item profile, including the sides, text, and graphics, which is referenced to check other items for authenticity).

With respect to claim 15, Heikel/Ryle discloses all of the limitations of claims 1 and 14 as stated above.
In addition, Heikel teaches: Generating the digital blueprint, wherein generating the digital blueprint comprises: processing an image of the reference packaging using a machine learning model that has been trained to determine the face type of the face of the reference packaging; and generating the digital blueprint based on an output of the machine learning model that has been trained to determine the face type of the face of the reference packaging (See at least paragraphs 155, 157-160, 164, 173, and 175-177 which describe training machine learning models in order to recognize packages in images, wherein original images of an original product are collected and used to train a model to recognize the packages, including using different orientations of images, wherein the reference images and labels for features in the images are stored in an item profile, including the sides, text, and graphics, which is referenced to check other items for authenticity).

With respect to claim 16, Heikel/Ryle discloses all of the limitations of claim 1 as stated above. In addition, Heikel teaches: Training the one or more machine learning models using as training data, images of faces of a plurality of packaging, and as labels for the training data, data indicative of types of faces of the plurality of packaging portrayed in the images (See at least paragraphs 155, 157-160, 164, 173, and 175-177 which describe training machine learning models in order to recognize packages in images, wherein original images of an original product are collected and used to train a model to recognize the packages, including using different orientations of images, wherein the reference images and labels for features in the images are stored in an item profile, including the sides, text, and graphics, which is referenced to check other items for authenticity).

With respect to claim 17, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
In addition, Heikel teaches: Wherein the one or more machine learning models comprise a first machine learning model that has been trained to identify the face of the first packaging in the image, and a second machine learning model that has been trained to determine whether the first packaging in the image satisfies the one or more capture conditions (See at least paragraphs 155, 173, 175-178, 180, 185, 194, and 195 which describe using trained machine learning models to analyze images and determine authenticity, wherein the models are trained to recognize the sides of a package, which includes the front or rear of the package. Additionally, see at least paragraphs 186-191 which describe a user using a camera to image an item and determine the authenticity, wherein the models are used to determine if capture conditions have been met before processing the image, including orientation and angle, and wherein instructions are provided to the user to adjust the conditions).

Claims 4-6 are rejected under 35 U.S.C. 103 as being unpatentable over Heikel and Ryle as applied to claims 1 and 2 as stated above, and further in view of Yan et al. (US 2024/0412233 A1) (hereinafter Yan).

With respect to claim 4, Heikel/Ryle discloses all of the limitations of claims 1 and 2 as stated above.
Heikel and Ryle do not explicitly disclose the following, however Yan teaches: Determining whether the first packaging is authentic, wherein determining whether the first packaging is authentic comprises: selecting, from two or more faces of second packaging, a first face based on the face type of the face of the first packaging matching a face type of the first face of the second packaging; determining at least one similarity between the first face and the face of the first packaging; and selecting, from among a plurality of images of packaging, an image of the second packaging as a reference image based on the at least one similarity between the first face and the face of the first packaging (See at least paragraphs 26, 28, 29, 36, 40, and 48-50 which describe determining the authenticity of a target item by selecting a face of an item in reference images, determining the similarity between the target and the reference images, and selecting a reference, and comparing the features of the image and reference in order to confirm authenticity). 
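Yan's reference-selection step, as characterized above, filters candidate faces by face type and then scores similarity to pick the reference image. A hedged sketch of that flow, with a placeholder token-overlap similarity standing in for a trained model (the candidate format and scoring are invented for illustration):

```python
def select_reference(target_face_type, target_features, candidates):
    """Pick the reference image whose face type matches the target's
    and whose features are most similar.

    candidates: list of dicts with 'face_type', 'features', 'image_id'.
    """
    best_id, best_score = None, -1.0
    for cand in candidates:
        if cand["face_type"] != target_face_type:
            continue  # only compare like faces (front vs. front, etc.)
        # Placeholder similarity: fraction of shared feature tokens.
        shared = set(target_features) & set(cand["features"])
        score = len(shared) / max(len(target_features), 1)
        if score > best_score:
            best_id, best_score = cand["image_id"], score
    return best_id

candidates = [
    {"face_type": "front", "features": ["logo", "barcode"], "image_id": "ref-1"},
    {"face_type": "back", "features": ["logo", "barcode", "text"], "image_id": "ref-2"},
    {"face_type": "front", "features": ["logo", "barcode", "text"], "image_id": "ref-3"},
]
print(select_reference("front", ["logo", "text"], candidates))  # ref-3
```

The face-type gate is what distinguishes this step from a plain nearest-neighbor search: a back panel is never scored against a front panel, however similar the artwork.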
It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of using a camera of a mobile device to image a package in order to determine its authenticity, wherein the mobile device presents instructions to the user in order to image the package under the correct conditions, wherein upon imaging the package correctly, providing the image to a server to confirm the authenticity using machine learning models of Heikel, with the system and method of a user using their mobile device to image a package in order to determine its authenticity, wherein the mobile device uses machine learning models to analyze the image and determine if included features in the image indicate authenticity or not of Ryle, with the system and method of determining the authenticity of a target item by selecting a face of an item in reference images, determining the similarity between the target and the reference images, and selecting a reference, and comparing the features of the image and reference in order to confirm authenticity of Yan.

By extracting features, such as the sides, from multiple images of another package, finding the similarity to the target image features, and using a correct reference image as the image used to confirm authenticity, a system will predictably be able to quickly and efficiently identify the most appropriate image to use when analyzing products for authenticity, thus making more accurate determinations.

With respect to claim 5, Heikel/Ryle/Yan discloses all of the limitations of claims 1, 2, and 4 as stated above.
In addition, Heikel teaches: Wherein the at least one similarity comprises: a textual similarity between text included on the face of the first packaging and text included on the first face of the second packaging, and a graphical similarity between the reference image and the image (See at least paragraphs 129, 154, 194, 195, and 197-199 which describe, in response to capture conditions being satisfied, sending the image to a server and processing the image using machine learning models in order to identify objects in the image, wherein text or graphics are analyzed with the models and compared to a profile with reference images).

With respect to claim 6, Heikel/Ryle/Yan discloses all of the limitations of claims 1, 2, and 4 as stated above. In addition, Yan teaches: Wherein determining whether the first packaging is authentic comprises: in response to selecting the image of the second packaging as the reference image, determining whether the first packaging is authentic based on a comparison between the first packaging in the image and the second packaging in the reference image (See at least paragraphs 26, 28, 29, 36, 40, and 48-50 which describe determining the authenticity of a target item by selecting a face of an item in reference images, determining the similarity between the target and the reference images, and selecting a reference, and comparing the features of the image and reference in order to confirm authenticity).
It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of using a camera of a mobile device to image a package in order to determine its authenticity, wherein the mobile device presents instructions to the user in order to image the package under the correct conditions, wherein upon imaging the package correctly, providing the image to a server to confirm the authenticity using machine learning models of Heikel, with the system and method of a user using their mobile device to image a package in order to determine its authenticity, wherein the mobile device uses machine learning models to analyze the image and determine if included features in the image indicate authenticity or not of Ryle, with the system and method of determining the authenticity of a target item by selecting a face of an item in reference images, determining the similarity between the target and the reference images, and selecting a reference, and comparing the features of the image and reference in order to confirm authenticity of Yan.

By extracting features, such as the sides, from multiple images of another package, finding the similarity to the target image features, and using a correct reference image as the image used to confirm authenticity, a system will predictably be able to quickly and efficiently identify the most appropriate image to use when analyzing products for authenticity, thus making more accurate determinations.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Heikel and Ryle as applied to claim 1 as stated above, and further in view of Wang et al. (US 2025/0037256 A1) (hereinafter Wang).

With respect to claim 12, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
Heikel and Ryle do not explicitly disclose the following, however Wang teaches: Wherein the feedback for image capture comprises an indicator of a location of corruption in the image (See at least paragraphs 11 and 53-60 which describe using a neural network to analyze an image of an object to recognize features in the image, wherein the user device presents an indication of a location of low quality in the image, including from glare, blurriness, and lack of clarity).

It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of using a camera of a mobile device to image a package in order to determine its authenticity, wherein the mobile device presents instructions to the user in order to image the package under the correct conditions, wherein upon imaging the package correctly, providing the image to a server to confirm the authenticity using machine learning models of Heikel, with the system and method of a user using their mobile device to image a package in order to determine its authenticity, wherein the mobile device uses machine learning models to analyze the image and determine if included features in the image indicate authenticity or not of Ryle, with the system and method of using a neural network to analyze an image of an object to recognize features in the image, wherein the user device presents an indication of a location of corruption in the image of Wang.

By identifying areas of an image that lack quality, a system will predictably be able to ensure that a user uses the best image they can generate in order to confirm the authenticity of an item.

Claim 18 is rejected under 35 U.S.C. 103 as being unpatentable over Heikel and Ryle as applied to claim 1 as stated above, and further in view of Olaleye et al. (US 2025/0068983 A1) (hereinafter Olaleye).

With respect to claim 18, Heikel/Ryle discloses all of the limitations of claim 1 as stated above.
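The claim 12 feedback discussed above indicates where in the image corruption appears. One plausible sketch uses a toy per-tile variance test as the "low quality" detector (glare and blur tend to flatten local variation); a deployed system would use a trained quality model as Wang describes. The tile size and threshold are invented:

```python
def tile_variance(tile):
    # Population variance of all pixels in a rectangular tile.
    flat = [p for row in tile for p in row]
    mean = sum(flat) / len(flat)
    return sum((p - mean) ** 2 for p in flat) / len(flat)

def find_corrupt_tile(img, tile=2, threshold=10.0):
    # Scan non-overlapping tile x tile blocks; very low variance
    # suggests a flat, washed-out (glare/blur) region to flag for
    # the user to re-capture.
    for r in range(0, len(img), tile):
        for c in range(0, len(img[0]), tile):
            block = [row[c:c + tile] for row in img[r:r + tile]]
            if tile_variance(block) < threshold:
                return (r, c)  # top-left corner of the flagged region
    return None

image = [
    [10, 90, 200, 201],
    [80, 15, 199, 200],
    [12, 88, 30, 160],
    [85, 20, 150, 40],
]
print(find_corrupt_tile(image))  # (0, 2) — the flat bright patch
```

Returning a location rather than a pass/fail flag is what lets the UI draw the claimed indicator over the corrupted area.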
In addition, Heikel teaches: Training the one or more machine learning models, wherein training the one or more machine learning models comprises: providing, in a user interface, a display of an image of reference packaging captured by a second mobile device (See at least paragraphs 155, 157-160, 164, 173, and 175-177 which describe training machine learning models in order to recognize packages in images, wherein original images of an original product are collected and used to train a model to recognize the packages, including using different orientations of images, wherein the reference images and labels for features in the images are stored in an item profile, including the sides, text, and graphics, which is referenced to check other items for authenticity).

Heikel and Ryle do not explicitly disclose the following, however Olaleye teaches: Processing the image of the reference packaging using a machine learning model that has been trained to identify a face of the reference packaging in the image of the reference packaging, to obtain, as an output, an auto-annotation indicative of at least one of text included in the face of the reference packaging, or a face type of the face of the reference packaging; Providing, in the user interface, one or more tools usable to manually alter the auto-annotation to obtain a modified annotation; and Training the one or more machine learning models using, as training data, the image of the reference packaging and the modified annotation (See at least paragraphs 38-45 which describe training a machine learning model to classify data using unlabeled training data, wherein the model initially generates labels for objects in the data, wherein the user is able to change the label annotations when determined to be incorrect, and wherein the corrected labels are used to further train the models).
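Olaleye's annotate-correct-retrain loop, as characterized above, can be sketched as follows; the model output and correction format here are invented stand-ins for whatever a trained annotator would actually emit:

```python
def auto_annotate(image_id):
    # Placeholder for a trained model's proposed annotation.
    return {"image": image_id, "face_type": "front", "text": "ACME 500mg"}

def apply_corrections(annotation, corrections):
    # Merge the reviewer's manual edits over the auto-generated
    # annotation; reviewer values win on conflict.
    return {**annotation, **corrections}

training_data = []
proposal = auto_annotate("pkg-001")
# The reviewer spots a mislabeled face type and fixes it via the UI.
corrected = apply_corrections(proposal, {"face_type": "back"})
training_data.append(corrected)  # corrected labels feed the next round
print(corrected["face_type"])  # back
```

The point of the loop is that only human-verified labels re-enter the training set, which is the accuracy rationale the rejection relies on.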
It would have been obvious to one of ordinary skill in the art at the time of filing the claimed invention to combine the system and method of using a camera of a mobile device to image a package in order to determine its authenticity, wherein the mobile device presents instructions to the user in order to image the package under the correct conditions, wherein upon imaging the package correctly, providing the image to a server to confirm the authenticity using machine learning models of Heikel, with the system and method of a user using their mobile device to image a package in order to determine its authenticity, wherein the mobile device uses machine learning models to analyze the image and determine if included features in the image indicate authenticity or not of Ryle, with the system and method of training a machine learning model to classify data using unlabeled training data, wherein the model initially generates labels for objects in the data, wherein the user is able to change the label annotations when determined to be incorrect, and wherein the corrected labels are used to further train the models of Olaleye.

By allowing a user to manually adjust labels of features in an imaged object, generated by using a model on initial training data, and using the new labels to further train the models, a system would predictably increase the accuracy of the models by ensuring the most accurate data is used to train machine learning models.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL P HARRINGTON whose telephone number is (571)270-1365. The examiner can normally be reached Monday-Friday 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If

Prosecution Timeline

Dec 29, 2023
Application Filed
Jun 11, 2025
Non-Final Rejection — §101, §103, §112
Sep 10, 2025
Applicant Interview (Telephonic)
Sep 11, 2025
Examiner Interview Summary
Sep 12, 2025
Response Filed
Dec 18, 2025
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591896
SYSTEM AND METHOD FOR TOKEN-BASED TRADING OF CARBON CREDITS
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12561694
Real Time Channel Affinity Derivation
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12555125
EMISSION DETECTING CAMERA PLACEMENT PLANNING USING 3D MODELS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12525067
System and Method for Toll Transactions Utilizing a Distributed Ledger
Granted Jan 13, 2026 (2y 5m to grant)
Patent 12518582
Methods of Performing a Dispatched Logistics Operation Related to an Item Being Shipped and Using a Modular Autonomous Bot Apparatus Assembly and a Dispatch Server
Granted Jan 06, 2026 (2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
24%
Grant Probability
41%
With Interview (+16.9%)
4y 7m
Median Time to Grant
Moderate
PTA Risk
Based on 477 resolved cases by this examiner. Grant probability derived from career allow rate.
