Prosecution Insights
Last updated: April 19, 2026
Application No. 16/953,387

MACHINE LEARNING BASED IMAGING METHOD OF DETERMINING AUTHENTICITY OF A CONSUMER GOOD

Status: Non-Final OA (§103)
Filed: Nov 20, 2020
Examiner: RAMESH, TIRUMALE K
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Procter & Gamble Company
OA Round: 7 (Non-Final)
Grant Probability: 18% (At Risk)
Predicted OA Rounds: 7-8
Predicted Time to Grant: 4y 5m
Grant Probability With Interview: 20%

Examiner Intelligence

Career Allow Rate: 18% (grants only 18% of cases; 7 granted / 40 resolved), -37.5% vs TC avg
Interview Lift: +2.1% (minimal) across resolved cases with interview
Typical Timeline: 4y 5m average prosecution; 40 currently pending
Career History: 80 total applications across all art units
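The headline figures above are internally consistent; a quick sketch reproducing them from the raw counts (the rounding convention is an assumption):

```python
# Sanity-check the examiner statistics reported above.
granted = 7
resolved = 40
total_applications = 80

allow_rate = granted / resolved * 100   # career allow rate, in percent
pending = total_applications - resolved # applications not yet resolved

print(f"Career allow rate: {allow_rate:.1f}% (~{round(allow_rate)}%)")
print(f"Currently pending: {pending}")
```

The 17.5% raw rate rounds to the displayed 18%, and 80 total minus 40 resolved matches the "40 currently pending" figure.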

Statute-Specific Performance

§101: 30.7% (-9.3% vs TC avg)
§103: 59.1% (+19.1% vs TC avg)
§102: 3.7% (-36.3% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

TC averages are estimates. Based on career data from 40 resolved cases.
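Each statute's rate and delta imply the same Tech Center baseline; backing it out (assuming the deltas are percentage-point differences, which is how the figures read):

```python
# Derive the implied Tech Center average for each statute:
# implied TC average = examiner rate - reported delta.
rates = {"101": (30.7, -9.3), "103": (59.1, +19.1),
         "102": (3.7, -36.3), "112": (5.4, -34.6)}

for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")
```

All four work out to 40.0%, suggesting the estimate is a single TC-wide average rather than a per-statute baseline.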

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Applicant’s arguments with respect to claims 1, 12, 21 and 22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Response to Amendment (Submitted on 1/14/2026)

In regard to the §103 rejections, the examiner first notes that claims 2 and 13 were CANCELED as of the last Office Action.

- On page 9, the applicant argues that a camera bias can affect the training and that, to address this bias, the images can be extracted using at least three different camera types within the context of a plurality of camera types. The applicant further argues that using different camera types can provide a more accurate model that is robust to camera type. The applicant argues that the reference Lau, alone or in combination with Heikel, Plebani and Annun, fails to teach three different types of cameras for amended claims 1, 12, 21 and 22.

Examiner’s Response: Applicant’s arguments with respect to claims 1, 12, 21 and 22 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. The examiner further submits that a new reference, Broyda, teaches three different types of cameras. The rejection now uses Lau in combination with Broyda to teach all the claim limitations of claims 1, 12, 21 and 22.

- On page 11, the applicant argues that claims 6-7 and 15-20 are patentable as a result of claims 1, 12, 21 and 22 being amended, and claim 23 in view of Simske and Blair.
Examiner’s Response: As a result of the new reference (Broyda) and new grounds of rejection, the applicant’s argument for claim 23 is MOOT.

- On page 12, the applicant argues that claim 24 is patentable.

Examiner’s Response: As a result of the new reference (Broyda) and new grounds of rejection, the applicant’s argument for claim 24 is MOOT.

In CONCLUSION, the examiner rejects the independent claims 1, 12, 21 and 22 and all dependent claims 3-8, 10, 14-20 and 23-24 in a NON-FINAL REJECTION under §103 in response to applicant’s RCE.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-5, 8, 10, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Tak Wai Lau et al. (hereinafter Lau), US 2020/0410510 A1, in view of Juliy Broyda et al. (hereinafter Broyda), US 2021/0004949 A1.

In regard to claim 1 (Currently Amended), Lau discloses:

- A machine learning based imaging method for imaging and classifying whether one or more physical and subject consumer goods are authentic or non-authentic, the machine learning based imaging method comprising: in [Abstract]: An authentication apparatus and a method to devise an authentication tool is provided for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of an information bearing device on the article.
The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device using a trained neural network.

- a) obtaining an image of a subject consumer good comprising a subject product specification: in [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed; in [0070]: A direct image of a source means the image is obtained directly from the source without intervening copying, that is, the image is not captured from an image of the source. An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device.

- b) inputting the obtained image into a model, wherein the model is configured to classify the obtained image as authentic or non-authentic: in [0102]: During the forward pass on progressing from an earlier layer to a next layer, each filter is convolved across the spatial dimensions of the input volume; in [0102]: The entries of the filter collectively define a weight matrix and the weight matrix was learned by the CNN during deep learning training of the CNN.

- wherein the model is constructed by a machine learning classifier: in [0075]: An example CNN 30 comprises an input layer 300, an output layer 399, and a plurality of convolutional layers 301-30n interconnecting the input layer 300 and the output layer 399.
CNN is a class of deep, feed-forward, artificial neural networks in machine learning, - wherein the machine learning classifier is trained by a training dataset, in [0187]: The training images include images of authentic and non-authentic information bearing devices, - wherein the authentic product specification comprises at least one steganographic feature having a length greater than 0.01 mm; in [0108]: the example information bearing device 60 is set to have an physical size of a one-cm square, in [0109]: To print the information bearing device using a 1200 DPI printer on a 1 cm× 1 cm medium, the information bearing device 60 need to be resized and quantized. Specifically, the information bearing device is needed to resize to a width and height of 472 pixels in each orthogonal direction, since 472 pixels per cm is equivalent to 1200 DPI. Each pixel of the data-embedded image pattern is a real number and the resized information bearing device is quantized from real number to bi-level, in [0070]: An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device, in [0070] : The covertly coded data is typically not human readable or perceivable and the data coding may be by means of steganographic techniques such as transform domain coding techniques. - and (ii) an associated class definition based on the steganographic feature; In [0080]: An example CNN of an example authentication apparatus comprises a plurality of convolution layers between the input layer and the output layer, as depicted in FIG. 3. The convolution layers of the CNN are serially connected to form an ensemble of serially connected convolution layers. Each convolution layer comprises a plurality of filters, and each filter is a convolution filter which is to operate with an input data file to generate an output data file. 
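The resizing arithmetic in Lau's [0109], quoted above, can be checked directly: 1200 dots per inch corresponds to about 472 pixels per centimeter (1 inch = 2.54 cm), so a 1 cm x 1 cm device printed at 1200 DPI is resized to 472 x 472 pixels:

```python
# Verify Lau's [0109] figure: pixels per cm at 1200 DPI.
DPI = 1200
CM_PER_INCH = 2.54

pixels_per_cm = DPI / CM_PER_INCH  # ~472.44 pixels per cm
side_pixels = int(pixels_per_cm)   # truncate to whole pixels

print(side_pixels)  # 472
```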
A plurality of output data files is generated as a result of convolution operations among the convolution filters and the input data files at the input of a convolution layer. Each output data file is referred to as a feature map in CNN terminology. In [0079]: The fully connected network (“FCN”) is connected to output of the CNN, such that output of the CNN is fed as input to the FCN, as depicted in FIG. 2. The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device. - c) outputting a classification output from the model indicating a likelihood that the image of the subject consumer good is authentic or non-authentic in [0079]: The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device. - wherein the non-authentic product specification is different from the at least one steganographic feature In [0073]: An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate. The target image may be captured by the apparatus or received from an outside source. In [0132]: An image of an authentic authentication device is a primary copy of an authentic information bearing device, while an image of a non-authentic authentication device may be a secondary copy of an authentic information bearing device or a copy of a fake information bearing device. 
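Lau's [0080] and [0102], quoted above, describe the standard convolution mechanics: a filter slides across the spatial dimensions of its input, and each output file is a feature map. A minimal pure-Python sketch of that operation (illustrative only, not Lau's implementation):

```python
def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (no padding, stride 1)
    and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A 3x3 input and a 2x2 summing kernel yield a 2x2 feature map.
feature_map = conv2d([[1, 1, 1],
                      [1, 1, 1],
                      [1, 1, 1]],
                     [[1, 1],
                      [1, 1]])
print(feature_map)  # [[4, 4], [4, 4]]
```

In a real CNN the kernel entries are the learned weight matrix Lau's [0102] refers to; here they are fixed only to make the arithmetic visible.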
In [0071]: An example information bearing device herein comprises a data-encoded image pattern which is encoded with a set of discrete data. The set of data is human non-perceivable in its encoded state such that the data is not readily readable or readily decodable by a human reader looking at the data-encoded image pattern using naked eyes. - and wherein the training dataset further comprises the extracted images of the authentic product augmented with geometric distortion so that the extracted images of the authentic product have a different shape. In [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed. To facilitate verification of authenticity of an article, an article is commonly incorporated with an authentication device which includes an information bearing device such as a label, a tag or an imprint. The information bearing device comprises a data-embedded image pattern and the data-embedded image pattern is covertly encoded with a set of data so that the data is not perceivable by a reasonable person reading the data-embedded image pattern. In example embodiments. Each data is a discrete data having characteristic two- or three-dimensional coordinate values in data domain and the coordinate values are transformed into spatial properties of image-defining elements which cooperate to define the entirety of the data-embedded image pattern. The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operate to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. 
The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. (BRI: transforming data into spatial properties of image-defining elements (such as pixel positions, shapes, or sizes) and spreading coordinate values throughout those elements generally represents geometrical distortion) In [0016] : In some embodiments, the set of data embedded in the data-embedded image pattern comprises a plurality of discrete frequency data, and the discrete frequency data are transformed into spatially distributed pattern defining elements which are spread in the data-embedded image pattern and which are non-human readable or non-human perceivable using naked eyes; and the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0136]: Each pixel has characteristic physical properties including size, shape, color, brightness, etc., and the entirety of pixels collectively define a data-embedded image pattern. (BRI: the process of spatially distributing a pattern that are non-human readable represents a sophisticated steganographic feature) - wherein the training dataset is spatially manipulated by a Spatial Transformer Network before training the machine learning classifier. In [0006]: The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operate to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. 
The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. In [0016]: the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0177]: On defining the CNN structure, the input layer is set to have a single channel since the example information bearing device 60 has a data-embedded image pattern which is defined by pattern defining elements in gray-scale coding.

(BRI: this describes a process that combines frequency-domain data with spatial manipulation, such as those used in spectral-spatial-frequency transformer networks or Fourier-based data augmentation/feature extraction. Specifically, discrete frequency data (e.g., Fourier or Discrete Cosine Transform coefficients) can be transformed back into spatially distributed patterns or maps, which are then used to augment or inform training datasets. When these generated patterns are treated as input images, a Spatial Transformer Network (STN) can be employed to actively manipulate (e.g., rotate, scale, warp) these input features to improve spatial invariance before the data is fed to a classification model.)
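The claim's geometric-distortion limitation (extracted images of the authentic product altered "so that the extracted images have a different shape") can be as simple as rotating each training image; a toy sketch, not taken from either reference:

```python
def rotate90(img):
    """Rotate an image (a list of rows) 90 degrees clockwise.
    A non-square input comes back with a different shape, which
    is the point of shape-changing geometric augmentation."""
    return [list(row) for row in zip(*img[::-1])]

img = [[1, 2, 3],
       [4, 5, 6]]      # shape 2x3
aug = rotate90(img)    # shape 3x2
print(aug)  # [[4, 1], [5, 2], [6, 3]]
```

An STN, as discussed in the examiner's BRI note above, generalizes this idea by learning the rotation/scale/warp parameters instead of fixing them.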
Lau does not explicitly disclose:

- of an authentic product specification comprising an authentic product specification comparable with the subject product specification, wherein the different camera types comprises at least three different camera types,

- wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types.

However, Broyda discloses:

- of an authentic product specification comprising an authentic product specification comparable with the subject product specification, wherein the different camera types comprises at least three different camera types: in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device; in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102. (BRI: client computing devices embodied as a laptop (webcam), smartphone, PDA, and tablet have cameras that teach at least three different camera types.) In [0160]: At 912, in response to determining that the receipt has not been confirmed as a duplicate receipt, a reason for a false-positive duplicate receipt identification is determined. For example, one or more conditions or characteristics of a duplicate receipt, or an existing receipt that had been incorrectly matched to the receipt, can be identified.
In [0161]: At 914, one or more machine learning models are adjusted to prevent (or reduce) future false-positive duplicate receipts for a same reason as why the receipt was incorrectly identified as a duplicate receipt. For instance, a machine learning model can be adjusted to identify information in a receipt that would differentiate the receipt from existing receipts (e.g., where the information may not have been previously identified).

(BRI: the characteristics are a “specification”. The machine learning (ML) models can be adjusted based on identified characteristics of false-positive duplicate receipts to prevent or reduce future occurrences. When a legitimate receipt is incorrectly flagged as a duplicate, this “false positive” can be used as training data to refine the ML model, allowing it to recognize the specific, authentic characteristics.)

- wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types: in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device; in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102. (BRI: client computing devices embodied as a laptop (webcam), smartphone, PDA, and tablet have cameras that teach at least three different camera types.) In [0003]: receiving a request to authenticate an image of a document; preprocessing the image of the document to prepare the image of the document for line orientation analysis;
automatically analyzing the preprocessed image to determine lines in the preprocessed image in [0177]: FIGS. 14A and 14B illustrate examples of a machine-generated receipt image 1402 and an authentic receipt image 1404, respectively. The authentic receipt image 1404 can be an image captured by a camera, in [0234]: Features associated with valid images can be features of images of printed documents that have been captured by a camera, for example. in [0029]: FIG. 21 is a flowchart of an example method for training a neural network model for image classification. In [0182]: The valid electronic documents can be excluded from machine learning training, or can be included in machine learning training In [0217]: At 2110, the network is trained. In general, machine learning algorithms can be trained on a training portion and evaluated on a testing portion. More specifically, the model can be initially fit using a training dataset that is a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The fitted model can be used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset can provide an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g., the number of hidden units in a neural network). In [0215]: A validation data set is a dataset of examples used to tune the hyperparameters of the network. A hyperparameter can be, for example, the number of hidden units in the network. For instance, an example validation data set 2108b can include 300 fake images and 9300 authentic images. In [0216]: A test dataset is a dataset that is independent of the training dataset, but that can follow a same probability distribution as the training dataset. For instance, an example test data set 2108b can include 100 fake images and 100 authentic images. 
A test data set can be used to evaluate and fine-tune a fitted network.

(BRI: a model fitted on a training dataset can predict responses for a separate validation dataset, providing an unbiased evaluation of performance that helps in tuning hyperparameters and detecting overfitting; the validation set acts as a checkpoint for how well the model generalizes to unseen data. In this context, the training set is used to fit the model parameters, often to minimize error. While validation sets are generally used for evaluation, maintaining a balanced dataset (in terms of classes or features) during training is crucial for building a reliable, unbiased model. The test data set is balanced with 100 fake images and 100 authentic images.)

The examiner interprets the core theme of the invention as applying machine learning to extracted images of potentially genuine and counterfeit products to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau and Broyda. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, and provides data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set.
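The "balance" limitation (an equal, or approximately equal, number of training samples per class and camera type) is straightforward to enforce by downsampling every group to the size of the smallest; a sketch under that reading, where the grouping key and helper names are illustrative rather than taken from either reference:

```python
import random
from collections import defaultdict

def balance(samples, seed=0):
    """Downsample so every (label, camera) group contributes the
    same number of training samples (the size of the smallest group)."""
    groups = defaultdict(list)
    for s in samples:
        groups[(s["label"], s["camera"])].append(s)
    n = min(len(g) for g in groups.values())
    rng = random.Random(seed)  # fixed seed for reproducibility
    return [s for g in groups.values() for s in rng.sample(g, n)]

# Hypothetical pool: 5 authentic and 2 fake images per camera type A/B/C.
data = ([{"label": "authentic", "camera": c} for c in "ABC" for _ in range(5)]
        + [{"label": "fake", "camera": c} for c in "ABC" for _ in range(2)])
balanced = balance(data)
print(len(balanced))  # 12: six (label, camera) groups, 2 samples each
```

Note that Broyda's example validation set (300 fake / 9300 authentic) is not balanced; only the 100/100 test set quoted above is, which is why the examiner's mapping leans on the test set.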
One of ordinary skill would have had motivation to combine Lau and Broyda, as the combination can provide increased system accuracy and confidence for product authentication (Broyda [0051]).

In regard to claim 3 (Previously Presented), Lau discloses:

- augmenting the extracted images of the authentic product with color distortion: in [0136]: Modern authentic information bearing devices contain data-embedded image patterns which are digitally formed and consist of pixels. Each pixel has characteristic physical properties including size, shape, color, brightness, etc.; in [0136]: Some of the characteristic physical properties suffer degradation or loss of fidelity during image capture and/or reproduction; in [0138]: When the data-embedded image pattern of the authentic information bearing device is captured by an image capture apparatus, the gray levels of the pixels forming the captured image may be changed. For example, the gray levels may be shifted linearly, non-linearly, randomly or may have an entirely different gray-scale distribution of pixels compared to those of the data-embedded image pattern. The change may be due to internal setting of the image capture apparatus (for example, exposure setting), calibration of the image capture apparatus, ambient illumination, sensitivity and/or linearity of the image sensor of the capture apparatus, angle of image capture, and/or other parameters.

In regard to claim 4 (Original), Lau discloses:

- the at least one steganographic feature has a length from 0.02 mm to 20 mm: in [0109]: To print the information bearing device using a 1200 DPI printer on a 1 cm × 1 cm medium, the information bearing device 60 need to be resized and quantized. Specifically, the information bearing device is needed to resize to a width and height of 472 pixels in each orthogonal direction, since 472 pixels per cm is equivalent to 1200 DPI.
Each pixel of the data-embedded image pattern is a real number and the resized information bearing device is quantized from real number to bi-level, in [0070]: An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device. The authentic authentication device is a target for the purpose of the present disclosure and the source is therefore also a target. The covertly coded data is typically not human readable or perceivable and the data coding may be by means of steganographic techniques such as transform domain coding techniques. In regard to claim 5: (Original) Lau discloses: - the machine learning classifier is validated by a validating dataset, wherein the validating dataset comprises one or more images defining the at least one steganographic feature of the subject product specification in [0006]: To facilitate verification of authenticity of an article, an article is commonly incorporated with an authentication device which includes an information bearing device such as a label, a tag or an imprint. In regard to claim 8: (Original) Lau discloses: - the machine learning classifier is a convolutional neural network (CNN) in [0008]: In some embodiments, the neural network is a convolutional neural network (CNN). In regard to claim 10: (Original) Lau discloses: - the obtained image of the subject consumer good is spatially manipulated before being inputted into the model in [0080]: An input data file presented at the input of a first convolution layer of the CNN is intended to be a data file of a target image containing a plurality of image data representing a plurality of image-defining elements. Each image-defining element has spatial properties and characteristics such that the spatial properties and characteristics of all the image-defining elements of a target image define the entirety of the target image. 
The spatial properties and characteristics include spatial coordinates and signal amplitude or strength of the image-defining elements.

In regard to claim 21 (Currently Amended), Lau discloses:

- imaging and classifying whether one or more physical and subject consumer goods are authentic or non-authentic, that when executed by one or more processors cause the one or more processors to: in [0068] and [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed.

- a) obtain an image of a subject consumer good comprising a subject product specification: in [0070]: A direct image of a source means the image is obtained directly from the source without intervening copying, that is, the image is not captured from an image of the source. An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device.
(BRI: The product specification is the source information on the information bearing device which is covertly coded.)

- b) input the obtained image into a model, wherein the model is configured to classify the obtained image as authentic or non-authentic: in [0102]: During the forward pass on progressing from an earlier layer to a next layer, each filter is convolved across the spatial dimensions of the input volume; in [0102]: The entries of the filter collectively define a weight matrix and the weight matrix was learned by the CNN during deep learning training of the CNN; in [0187]: The training images include images of authentic and non-authentic information bearing devices.

- wherein the model is constructed by a machine learning classifier: in [0075]: An example CNN 30 comprises an input layer 300, an output layer 399, and a plurality of convolutional layers 301-30n interconnecting the input layer 300 and the output layer 399. CNN is a class of deep, feed-forward, artificial neural networks in machine learning.

- wherein the machine learning classifier is trained by a training dataset: in [0183]: The training images are selected according to some selection criteria such that the imperfection values are within the acceptable ranges; in [0187]: The training images include images of authentic and non-authentic information bearing devices.
wherein the training dataset comprises:

- of an authentic product specification comprising an authentic product specification comparable with the subject product specification: in [0072]: Due to its unique properties, for example, a specific or one-to-one correspondence between a set of data and a set of spatial image pattern having spread or distributed pattern defining elements to represent the set of data; in [0072]: the information bearing device can be used as an authentication device, with the encoded coordinate data or encoded set of coordinate data; in [0073]: An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate.

- the authentic product specification comprises at least one steganographic feature having a length greater than 0.01 mm; in [0108]: the example information bearing device 60 is set to have an physical size of a one-cm square; in [0109]: To print the information bearing device using a 1200 DPI printer on a 1 cm × 1 cm medium, the information bearing device 60 need to be resized and quantized. Specifically, the information bearing device is needed to resize to a width and height of 472 pixels in each orthogonal direction, since 472 pixels per cm is equivalent to 1200 DPI. Each pixel of the data-embedded image pattern is a real number and the resized information bearing device is quantized from real number to bi-level; in [0070]: An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device; in [0070]: The covertly coded data is typically not human readable or perceivable and the data coding may be by means of steganographic techniques such as transform domain coding techniques.
- (ii) an associated class definition based on the steganographic feature; In [0080]: An example CNN of an example authentication apparatus comprises a plurality of convolution layers between the input layer and the output layer, as depicted in FIG. 3. The convolution layers of the CNN are serially connected to form an ensemble of serially connected convolution layers. Each convolution layer comprises a plurality of filters, and each filter is a convolution filter which is to operate with an input data file to generate an output data file. A plurality of output data files is generated as a result of convolution operations among the convolution filters and the input data files at the input of a convolution layer. Each output data file is referred to as a feature map in CNN terminology. In [0079]: The fully connected network (“FCN”) is connected to output of the CNN, such that output of the CNN is fed as input to the FCN, as depicted in FIG. 2. The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device.

- c) output a classification output from the model indicating a likelihood that the image of the subject consumer good is authentic or non-authentic: in [0079]: The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device.
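The "likelihood" output in step (c) implies the classifier ends in a probability head. A common choice for a binary authentic/non-authentic decision is a sigmoid over a single logit; this is an assumption for illustration, not a detail specified by Lau or Broyda:

```python
import math

def authenticity_likelihood(logit):
    """Map a raw classifier score (logit) to a probability in (0, 1).
    Positive logits favor 'authentic'. The sigmoid head here is an
    assumed sketch, not taken from either cited reference."""
    return 1.0 / (1.0 + math.exp(-logit))

print(authenticity_likelihood(0.0))        # 0.5: maximally uncertain
print(authenticity_likelihood(2.0) > 0.8)  # True: strongly 'authentic'
```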
- wherein the training dataset further comprises further extracted images, from a plurality of different camera types, of a non-authentic product comprising a non-authentic product specification, wherein the non-authentic product specification is different from the at least one steganographic feature, In [0138]: When the data-embedded image pattern of the authentic information bearing device is captured by an image capture apparatus, the gray levels of the pixels forming the captured image may be changed. In [0138]: The change may be due to internal setting of the image capture apparatus (for example, exposure setting), calibration of the image capture apparatus, ambient illumination, sensitivity and/or linearity of the image sensor of the capture apparatus, angle of image capture, and/or other parameters. In [0140]: When a smart phone having a built-in image capture device is used to capture an image of the information bearing device 60, the resulting captured images have different average brightness levels ranging from 19 to 252. (BRI: modern smartphones (from early 2011) represent a plurality of different camera types integrated into one device. They typically feature multiple lenses—such as wide-angle, ultra-wide, and telephoto—along with dedicated depth or macro sensors, allowing users to capture varied perspectives and improve image quality through computational photography. The prior art was filed on 2018-03-01.) In [0073]: An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate. The target image may be captured by the apparatus or received from an outside source.
In [0132]: An image of an authentic authentication device is a primary copy of an authentic information bearing device, while an image of a non-authentic authentication device may be a secondary copy of an authentic information bearing device or a copy of a fake information bearing device. In [0071]: An example information bearing device herein comprises a data-encoded image pattern which is encoded with a set of discrete data. The set of data is human non-perceivable in its encoded state such that the data is not readily readable or readily decodable by a human reader looking at the data-encoded image pattern using naked eyes. - wherein the training dataset further comprises the extracted images of the authentic product augmented with geometric distortion so that the extracted images of the authentic product have a different shape, In [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed. To facilitate verification of authenticity of an article, an article is commonly incorporated with an authentication device which includes an information bearing device such as a label, a tag or an imprint. The information bearing device comprises a data-embedded image pattern and the data-embedded image pattern is covertly encoded with a set of data so that the data is not perceivable by a reasonable person reading the data-embedded image pattern. In example embodiments, each data is a discrete data having characteristic two- or three-dimensional coordinate values in data domain and the coordinate values are transformed into spatial properties of image-defining elements which cooperate to define the entirety of the data-embedded image pattern. The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operates to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. (BRI: transforming data into spatial properties of image-defining elements (such as pixel positions, shapes, or sizes) and spreading coordinate values throughout those elements generally represents geometrical distortion) In [0016] : In some embodiments, the set of data embedded in the data-embedded image pattern comprises a plurality of discrete frequency data, and the discrete frequency data are transformed into spatially distributed pattern defining elements which are spread in the data-embedded image pattern and which are non-human readable or non-human perceivable using naked eyes; and the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0136]: Each pixel has characteristic physical properties including size, shape, color, brightness, etc., and the entirety of pixels collectively define a data-embedded image pattern. (BRI: the process of spatially distributing a pattern that is non-human readable represents a sophisticated steganographic feature) - and wherein the training dataset is spatially manipulated by a Spatial Transformer Network before training the machine learning classifier. In [0006]: The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operates to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. In [0016]: the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0177]: On defining the CNN structure, the input layer is set to have a single channel since the example information bearing device 60 has a data-embedded image pattern which is defined by pattern defining elements in gray-scale coding. (BRI: this is a process that combines frequency-domain data with spatial manipulation, such as the techniques used in Spectral-Spatial-Frequency Transformer Networks or Fourier-based data augmentation/feature extraction. Specifically, discrete frequency data (e.g., Fourier or Discrete Cosine Transform coefficients) can be transformed back into spatially distributed patterns or maps, which are then used to augment or inform training datasets. When these generated patterns are treated as input images, a Spatial Transformer Network (STN) can be employed to actively manipulate (e.g., rotate, scale, warp) these input features to improve spatial invariance before the data is fed to a classification model.)
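The geometric-distortion augmentation and STN-style spatial manipulation discussed above reduce to applying an affine transform to image coordinates. In a real Spatial Transformer Network the transform parameters are predicted by a small network; in this editor's sketch the rotation angle and shear are hard-coded assumptions purely for illustration.

```python
# Hedged sketch of geometric-distortion augmentation: a fixed affine map
# (rotation plus shear, chosen arbitrarily) warps coordinates so the
# augmented copy of an image footprint has a different shape.
import math

def affine_warp(points, theta_deg=30.0, shear=0.2):
    """Apply a shear along x followed by a rotation to (x, y) pairs."""
    t = math.radians(theta_deg)
    a, b = math.cos(t), -math.sin(t)
    c, d = math.sin(t), math.cos(t)
    out = []
    for x, y in points:
        xs = x + shear * y                      # shear along x
        out.append((a * xs + b * y, c * xs + d * y))  # then rotate
    return out

corners = [(0, 0), (1, 0), (1, 1), (0, 1)]  # unit-square image footprint
warped = affine_warp(corners)               # a distorted quadrilateral
```

An STN would learn the six affine coefficients per input and resample pixels through the warped grid; the coordinate mapping above is the core of that operation.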
Lau does not explicitly disclose: - A tangible, non-transitory computer-readable medium storing instructions for imaging and classifying whether one or more physical and subject consumer goods are authentic or non-authentic, that when executed by one or more processors cause the one or more processors to: - wherein the machine learning classifier is trained by a training dataset, wherein the training dataset comprises: (i) extracted images, from a plurality of different camera types, of an authentic product comprising an authentic product specification comparable with the subject product specification, wherein the plurality of different camera types comprises at least three different camera types, - and wherein the training dataset further comprises the extracted images of the authentic product augmented with geometric distortion so that the extracted images of the authentic product have a different shape. - wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types However, Broyda discloses: - A tangible, non-transitory computer-readable medium storing instructions for imaging and classifying whether one or more physical and subject consumer goods are authentic or non-authentic, that when executed by one or more processors cause the one or more processors to: In [0072], [0051], [0003] - wherein the machine learning classifier is trained by a training dataset, wherein the training dataset comprises: (i) extracted images, from a plurality of different camera types, of an authentic product comprising an authentic product specification comparable with the subject product specification, wherein the plurality of different camera types comprises at least three different camera types, in [0076]:
client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, (BRI: the client computing devices expressly represented, namely a laptop (webcam), smartphone, PDA, and tablet, have cameras that teach at least three different camera types) In [0160]: At 912, in response to determining that the receipt has not been confirmed as a duplicate receipt, a reason for a false-positive duplicate receipt identification is determined. For example, one or more conditions or characteristics of a duplicate receipt, or an existing receipt that had been incorrectly matched to the receipt, can be identified. In [0161]: At 914, one or more machine learning models are adjusted to prevent (or reduce) future false-positive duplicate receipts for a same reason as why the receipt was incorrectly identified as a duplicate receipt. For instance, a machine learning model can be adjusted to identify information in a receipt that would differentiate the receipt from existing receipts (e.g., where the information may not have been previously identified). (BRI: The characteristics are a “specification”. The machine learning (ML) models can be adjusted based on identified characteristics of false-positive duplicate receipts to prevent or reduce future occurrences. When a legitimate receipt is incorrectly flagged as a duplicate, this "false positive" can be used as training data to refine the ML model, allowing it to recognize the specific, authentic characteristics.) - wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, (BRI: the client computing devices expressly represented, namely a laptop (webcam), smartphone, PDA, and tablet, have cameras that teach at least three different camera types) In [0003]: receiving a request to authenticate an image of a document; preprocessing the image of the document to prepare the image of the document for line orientation analysis; automatically analyzing the preprocessed image to determine lines in the preprocessed image in [0177]: FIGS. 14A and 14B illustrate examples of a machine-generated receipt image 1402 and an authentic receipt image 1404, respectively. The authentic receipt image 1404 can be an image captured by a camera, in [0234]: Features associated with valid images can be features of images of printed documents that have been captured by a camera, for example. in [0029]: FIG. 21 is a flowchart of an example method for training a neural network model for image classification.
In [0182]: The valid electronic documents can be excluded from machine learning training, or can be included in machine learning training In [0217]: At 2110, the network is trained. In general, machine learning algorithms can be trained on a training portion and evaluated on a testing portion. More specifically, the model can be initially fit using a training dataset that is a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The fitted model can be used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset can provide an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g., the number of hidden units in a neural network). In [0215]: A validation data set is a dataset of examples used to tune the hyperparameters of the network. A hyperparameter can be, for example, the number of hidden units in the network. For instance, an example validation data set 2108b can include 300 fake images and 9300 authentic images. In [0216]: A test dataset is a dataset that is independent of the training dataset, but that can follow a same probability distribution as the training dataset. For instance, an example test data set 2108b can include 100 fake images and 100 authentic images. A test data set can be used to evaluate and fine tune a fitted network. (BRI: a model fitted on the training dataset can predict responses for a separate validation dataset, providing an unbiased evaluation of performance that helps in tuning hyperparameters and detecting overfitting; the validation set acts as a checkpoint for how well the model generalizes to unseen data. In this context, the training set is used to fit the model parameters, often to minimize error. While validation sets are generally used for evaluation, maintaining a balanced dataset (in terms of classes or features) during training is crucial for building a reliable, unbiased model. The data set is balanced with 100 fake images and 100 authentic images.) The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau and Broyda. Lau teaches using a neural network model to classify the product authenticity after capturing the image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. One of ordinary skill would have had motivation to combine Lau and Broyda because the combination increases system accuracy and confidence for product authentication (Broyda [0051]). In regards to claim 22: (Currently Amended) Lau discloses: - A machine learning based imaging system configured to image and classify whether one or more physical and subject consumer goods are authentic or non-authentic, in [0013]: A self-learning system and methods for automatic document classification, authentication, and information extraction are described. in [0025]: This invention uses unique methods to automatically train classes of documents, i.e., it allows the training subsystem to self-learn optimal parameters for classification and authentication. Thereby, it improves the accuracy and reliability, and shortens the training time.
image and classify whether one or more physical and subject consumer goods are authentic or non-authentic, the machine learning based imaging system comprising in [Abstract]: “An authentication apparatus and a method to devise an authentication tool is provided for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of an information bearing device on the article. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device using a trained neural network; (BRI: Information bearing device is a product) - a) obtain an image of a subject consumer good comprising a subject product specification, the image captured by the mobile device; in [0070] : A direct image of a source means the image is obtained directly from the source without intervening copying, that is, the image is not captured from an image of the source. An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device. 
- b) input the obtained image into a model, wherein the model is configured to classify the obtained image as authentic or non-authentic, in [0102] : During the forward pass on progressing from an earlier layer to a next layer, each filter is convolved across the spatial dimensions of the input volume, in [0102]: The entries of the filter collectively define a weight matrix and the weight matrix was learned by the CNN during deep learning training of the CNN, in [0187]: The training images include images of authentic and non-authentic information bearing devices, - wherein the model is constructed by a machine learning classifier, in [0075] : An example CNN 30 comprises an input layer 300, an output layer 399, and a plurality of convolutional layers 301-30n interconnecting the input layer 300 and the output layer 399. CNN is a class of deep, feed-forward, artificial neural networks in machine learning, - wherein the machine learning classifier is trained by a training dataset, in [0183] : The training images are selected according to some selection criteria such that the imperfection values are within the acceptable ranges, in [0187]: The training images include images of authentic and non-authentic information bearing devices. in [0072]: Due to its unique properties, for example, a specific or one-to-one correspondence between a set of data and a set of spatial image pattern having spread or distributed pattern defining elements to represent the set of data, in [0072]: the information bearing device can be used as an authentication device, with the encoded coordinate data or encoded set of coordinate data, in [0073] : An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate.
- the authentic product specification comprises at least one steganographic feature having a length greater than 0.01 mm; in [0108]: the example information bearing device 60 is set to have a physical size of a one-cm square, in [0109]: To print the information bearing device using a 1200 DPI printer on a 1 cm × 1 cm medium, the information bearing device 60 needs to be resized and quantized. Specifically, the information bearing device needs to be resized to a width and height of 472 pixels in each orthogonal direction, since 472 pixels per cm is equivalent to 1200 DPI. Each pixel of the data-embedded image pattern is a real number and the resized information bearing device is quantized from real number to bi-level, in [0070] : An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device, in [0070] : The covertly coded data is typically not human readable or perceivable and the data coding may be by means of steganographic techniques such as transform domain coding techniques. - (ii) an associated class definition based on the steganographic feature; In [0080]: An example CNN of an example authentication apparatus comprises a plurality of convolution layers between the input layer and the output layer, as depicted in FIG. 3. The convolution layers of the CNN are serially connected to form an ensemble of serially connected convolution layers. Each convolution layer comprises a plurality of filters, and each filter is a convolution filter which is to operate with an input data file to generate an output data file. A plurality of output data files is generated as a result of convolution operations among the convolution filters and the input data files at the input of a convolution layer. Each output data file is referred to as a feature map in CNN terminology.
In [0079]: The fully connected network (“FCN”) is connected to output of the CNN, such that output of the CNN is fed as input to the FCN, as depicted in FIG. 2. The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device. - c) output a classification output from the model indicating a likelihood that the image of the subject consumer good is authentic or non-authentic in [0079]: The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device. - wherein the training dataset further comprises further extracted images, from the plurality of different camera types, of a non-authentic product comprising a non-authentic product specification, wherein the non-authentic product specification is different from the at least one steganographic feature, In [0073]: An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate. The target image may be captured by the apparatus or received from an outside source. In [0132]: An image of an authentic authentication device is a primary copy of an authentic information bearing device, while an image of a non-authentic authentication device may be a secondary copy of an authentic information bearing device or a copy of a fake information bearing device.
In [0071]: An example information bearing device herein comprises a data-encoded image pattern which is encoded with a set of discrete data. The set of data is human non-perceivable in its encoded state such that the data is not readily readable or readily decodable by a human reader looking at the data-encoded image pattern using naked eyes. - and wherein the training dataset further comprises the extracted images of the authentic product augmented with geometric distortion so that the extracted images of the authentic product have a different shape. In [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed. To facilitate verification of authenticity of an article, an article is commonly incorporated with an authentication device which includes an information bearing device such as a label, a tag or an imprint. The information bearing device comprises a data-embedded image pattern and the data-embedded image pattern is covertly encoded with a set of data so that the data is not perceivable by a reasonable person reading the data-embedded image pattern. In example embodiments, each data is a discrete data having characteristic two- or three-dimensional coordinate values in data domain and the coordinate values are transformed into spatial properties of image-defining elements which cooperate to define the entirety of the data-embedded image pattern. The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operates to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. (BRI: transforming data into spatial properties of image-defining elements (such as pixel positions, shapes, or sizes) and spreading coordinate values throughout those elements generally represents geometrical distortion) In [0016] : In some embodiments, the set of data embedded in the data-embedded image pattern comprises a plurality of discrete frequency data, and the discrete frequency data are transformed into spatially distributed pattern defining elements which are spread in the data-embedded image pattern and which are non-human readable or non-human perceivable using naked eyes; and the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0136]: Each pixel has characteristic physical properties including size, shape, color, brightness, etc., and the entirety of pixels collectively define a data-embedded image pattern. (BRI: the process of spatially distributing a pattern that is non-human readable represents a sophisticated steganographic feature) - and wherein the training dataset is spatially manipulated by a Spatial Transformer Network before training the machine learning classifier. In [0006]: The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operates to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths.
The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. In [0016]: the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0177]: On defining the CNN structure, the input layer is set to have a single channel since the example information bearing device 60 has a data-embedded image pattern which is defined by pattern defining elements in gray-scale coding. (BRI: this is a process that combines frequency-domain data with spatial manipulation, such as the techniques used in Spectral-Spatial-Frequency Transformer Networks or Fourier-based data augmentation/feature extraction. Specifically, discrete frequency data (e.g., Fourier or Discrete Cosine Transform coefficients) can be transformed back into spatially distributed patterns or maps, which are then used to augment or inform training datasets. When these generated patterns are treated as input images, a Spatial Transformer Network (STN) can be employed to actively manipulate (e.g., rotate, scale, warp) these input features to improve spatial invariance before the data is fed to a classification model.)
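The Fourier correlation recited in Lau [0016] (discrete frequency data spread into a spatially distributed pattern) can be illustrated with a one-dimensional inverse DFT. This is a hedged sketch of the general principle, not Lau's actual coding scheme; the four-bin frequency vector is invented for the example.

```python
# Editor's sketch: a single active frequency bin "encodes" data, the
# inverse DFT spreads its energy across every spatial sample (so no one
# element is human-readable), and the forward DFT recovers the data.
import cmath

def inverse_dft(freq):
    """Inverse discrete Fourier transform: frequency bins -> spatial samples."""
    n = len(freq)
    return [sum(freq[k] * cmath.exp(2j * cmath.pi * k * x / n)
                for k in range(n)) / n
            for x in range(n)]

def dft(signal):
    """Forward DFT, to check the spatial pattern round-trips to the data."""
    n = len(signal)
    return [sum(signal[x] * cmath.exp(-2j * cmath.pi * k * x / n)
                for x in range(n))
            for k in range(n)]

freq_data = [0, 4, 0, 0]          # one active frequency bin as the "data"
pattern = inverse_dft(freq_data)  # energy spread evenly over all samples
recovered = dft(pattern)          # decoding recovers the frequency data
```

Every spatial sample of `pattern` has the same magnitude, which is the "spread or distributed pattern defining elements" property the reference relies on.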
Lau does not explicitly disclose: - the machine learning based imaging system comprising: a server comprising a processor and a memory, the memory storing a model; - and a software application (app) configured to execute on a mobile device comprising a mobile processor and a mobile memory, the software app communicatively coupled to the server via a computer network, wherein the server comprises computing instructions configured for execution on the processor, and that when executed by the processor causes the processor to: - wherein the machine learning classifier is trained by a training dataset, wherein the training dataset comprises: (i) extracted images, from a plurality of different camera types, of an authentic product comprising an authentic product specification comparable with the subject product specification, wherein the plurality of different camera types comprises at least three different camera types, - wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types However, Broyda discloses: - the machine learning based imaging system comprising: a server comprising a processor and a memory, the memory storing a model; In [0028]: FIG. 20 is a flowchart of an example method for using machine learning for classifying document images as authentic or unauthentic. In [0080] : FIG. 2A illustrates an example system 200 for expense report auditing. An orchestrator component 202 can orchestrate auditing of expense report items. For example, the orchestrator component 202 can request auditing for each expense included in an expense report. 
The orchestrator 202 can provide expense data and receipt information 204 (e.g., OCR text extracted from receipts, credit card receipt information, electronic receipt data) to a ML (Machine Learning) audit service 206. The ML audit service 206 can forward the expense data and receipt information 204 to a data science server 208. In [0079]: the server 102 and the client devices 104 and 105 may be any computer or processing device such as, for example, a blade server, general-purpose personal computer (PC), Mac®, workstation, UNIX-based workstation, or any other suitable device (BRI: a blade server has its own dedicated RAM memory and processors) In [0081] : The data science server 208 can extract receipt token values from the OCR text. In some implementations, the data science server 208 is configured to perform a receipt audit service 209. In other implementations, the receipt audit service 209 is performed by a different server. In [0089]: FIG. 4A is a flowchart of an example method 400 for generating an audit alert as part of a receipt audit. A machine learning engine receives receipt text 401 and performs a machine learning algorithm 402 to produce a prediction and a confidence score 404. The prediction includes predicted token values that a token extractor has extracted from the receipt. In [0176]: A fake receipt detector 1314 can perform an audit to determine whether a receipt is a fake receipt (e.g., a receipt image generated by a computer program rather than a legitimate image of a physical receipt). Various components can store data in one or more data stores 1316. 
(BRI: within the context of a fake receipt detector, the various components that store data in one or more data stores represent a model store) - and a software application (app) configured to execute on a mobile device comprising a mobile processor and a mobile memory, the software app communicatively coupled to the server via a computer network, wherein the server comprises computing instructions configured for execution on the processor, and that when executed by the processor causes the processor to: In [0076]: The end-user client device 104, the auditor client device 106, and the administrator client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, one or more processors within these devices, or any other suitable processing device. For example, a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, In [0074]: A client application is any type of application that allows a respective client device to request and view content on the respective client device. In some implementations, a client application can use parameters, metadata, and other information received at launch to access a particular set of data from the server 102. In some instances, a client application may be an agent or client-side version of an application running on the server 102 or another server. In [0075]: client device 105 respectively include processor(s) 160, 161, or 162. Each of the processor(s) 160, 161, or 162 may be a central processing unit (CPU), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or another suitable component.
Generally, each processor 160, 161 or 162 executes instructions and manipulates data to perform the operations of the respective client device. Specifically, each processor 160, 161, or 162 executes the functionality required to send requests to the server 102 and to receive and process responses from the server 102. - wherein the machine learning classifier is trained by a training dataset, wherein the training dataset comprises: (i) extracted images, from a plurality of different camera types, of an authentic product comprising an authentic product specification comparable with the subject product specification, wherein the plurality of different camera types comprises at least three different camera types, in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, (BRI: client computing devices represented as a laptop (webcam), smartphone, PDA, and tablet have cameras that teach at least three different camera types) In [0160]: At 912, in response to determining that the receipt has not been confirmed as a duplicate receipt, a reason for a false-positive duplicate receipt identification is determined. For example, one or more conditions or characteristics of a duplicate receipt, or an existing receipt that had been incorrectly matched to the receipt, can be identified. In [0161]: At 914, one or more machine learning models are adjusted to prevent (or reduce) future false-positive duplicate receipts for a same reason as why the receipt was incorrectly identified as a duplicate receipt.
For instance, a machine learning model can be adjusted to identify information in a receipt that would differentiate the receipt from existing receipts (e.g., where the information may not have been previously identified). (BRI: The characteristics are a “specification”. The machine learning (ML) models can be adjusted based on identified characteristics of false-positive duplicate receipts to prevent or reduce future occurrences. When a legitimate receipt is incorrectly flagged as a duplicate, this "false positive" can be used as training data to refine the ML model, allowing it to recognize the specific, authentic characteristics) - wherein the training data set further comprises a balance set of extracted images from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, (BRI: client computing devices represented as a laptop (webcam), smartphone, PDA, and tablet have cameras that teach at least three different camera types) In [0003]: receiving a request to authenticate an image of a document; preprocessing the image of the document to prepare the image of the document for line orientation analysis; automatically analyzing the preprocessed image to determine lines in the preprocessed image in [0177]: FIGS.
14A and 14B illustrate examples of a machine-generated receipt image 1402 and an authentic receipt image 1404, respectively. The authentic receipt image 1404 can be an image captured by a camera, in [0234]: Features associated with valid images can be features of images of printed documents that have been captured by a camera, for example. in [0029]: FIG. 21 is a flowchart of an example method for training a neural network model for image classification. In [0182]: The valid electronic documents can be excluded from machine learning training, or can be included in machine learning training In [0217]: At 2110, the network is trained. In general, machine learning algorithms can be trained on a training portion and evaluated on a testing portion. More specifically, the model can be initially fit using a training dataset that is a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The fitted model can be used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset can provide an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g., the number of hidden units in a neural network). In [0215]: A validation data set is a dataset of examples used to tune the hyperparameters of the network. A hyperparameter can be, for example, the number of hidden units in the network. For instance, an example validation data set 2108b can include 300 fake images and 9300 authentic images. In [0216]: A test dataset is a dataset that is independent of the training dataset, but that can follow a same probability distribution as the training dataset. For instance, an example test data set 2108b can include 100 fake images and 100 authentic images. A test data set can be used to evaluate and fine tune a fitted network. 
(BRI: a model fitted on a training dataset can predict responses for a separate validation dataset; this provides an unbiased evaluation of performance, helps in tuning hyperparameters and detecting overfitting, and lets the validation set act as a checkpoint for how well the model generalizes to unseen data. In this context, the training set is used to fit the model parameters, often to minimize error. While validation sets are generally used for evaluation, maintaining a balanced dataset (in terms of classes or features) during training is crucial for building a reliable, unbiased model. The data set is balanced with 100 fake images and 100 authentic images.) The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and provide authentication with the use of three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau and Broyda. Lau teaches using a neural network model to classify the product authenticity after capturing the image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. One of ordinary skill would have been motivated to combine Lau and Broyda to provide increased system accuracy and confidence for product authentication (Broyda [0051]). Claims 12 and 14-17 are rejected under 35 U.S.C. 103 as being unpatentable over Tak Wai Lau et al. (hereinafter Lau), US 2020/0410510 A1, in view of Juliy Broyda et al. (hereinafter Broyda), US 2021/0004949 A1, and further in view of Simske et al. (hereinafter Simske), US 2011/0280480 A1.
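Broyda's example dataset sizes quoted above (e.g., a test set of 100 fake and 100 authentic images) illustrate class balance. The following is a purely illustrative sketch of building a balanced training set per camera type; it is not code from any cited reference, and the camera types, image identifiers, and counts are hypothetical:

```python
from collections import Counter
import random

def build_balanced_dataset(images_by_source, n_per_class):
    """Build a class-balanced dataset: an equal number of 'authentic'
    and 'fake' samples drawn from each image source (e.g., camera type).

    `images_by_source` maps (source, label) -> list of image identifiers.
    Illustrative sketch only; not the applicant's or examiner's code.
    """
    balanced = []
    for (source, label), images in sorted(images_by_source.items()):
        if len(images) < n_per_class:
            raise ValueError(f"not enough {label} images from {source}")
        # Take the same number of samples from every (source, label) bucket.
        balanced.extend((img, source, label) for img in images[:n_per_class])
    random.Random(0).shuffle(balanced)  # deterministic shuffle for the sketch
    return balanced

# Hypothetical raw data: three camera types, two classes, uneven raw counts
# (mirroring the kind of imbalance in Broyda's 300 fake / 9300 authentic example).
raw = {
    ("smartphone", "authentic"): [f"sp_a{i}" for i in range(9300)],
    ("smartphone", "fake"): [f"sp_f{i}" for i in range(300)],
    ("webcam", "authentic"): [f"wc_a{i}" for i in range(500)],
    ("webcam", "fake"): [f"wc_f{i}" for i in range(120)],
    ("tablet", "authentic"): [f"tb_a{i}" for i in range(800)],
    ("tablet", "fake"): [f"tb_f{i}" for i in range(150)],
}
dataset = build_balanced_dataset(raw, n_per_class=100)
counts = Counter((src, lbl) for _, src, lbl in dataset)
```

Each of the six (camera type, class) buckets contributes exactly 100 samples, so the resulting 600-image set is balanced both across classes and across camera types, in the sense recited in the claim.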
In regard to claim 12: (Currently Amended) Lau discloses: - A machine learning based imaging method for imaging and classifying whether one or more physical and subject consumer goods are authentic or non-authentic, the machine learning based imaging method comprising: in [Abstract]: An authentication apparatus and a method to devise an authentication tool is provided for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of an information bearing device on the article. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device using a trained neural network, a) obtaining an image of a subject consumer good comprising a subject product specification in [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed, in [0070]: A direct image of a source means the image is obtained directly from the source without intervening copying, that is, the image is not captured from an image of the source. An example source may be an information bearing device which is covertly coded with a security data designed to function as an authentic authentication device. 
(BRI: The product specification is the source information on the information bearing device which is covertly coded) - b) inputting the obtained image into a model, wherein the model is configured to classify the obtained image as authentic or non-authentic, in [0102]: During the forward pass on progressing from an earlier layer to a next layer, each filter is convolved across the spatial dimensions of the input volume, in [0102]: The entries of the filter collectively define a weight matrix and the weight matrix was learned by the CNN during deep learning training of the CNN, in [0187]: The training images include images of authentic and non-authentic information bearing devices, - wherein the model is constructed by a machine learning classifier, in [0075]: An example CNN 30 comprises an input layer 300, an output layer 399, and a plurality of convolutional layers 301-30n interconnecting the input layer 300 and the output layer 399. CNN is a class of deep, feed-forward, artificial neural networks in machine learning, - wherein the machine learning classifier is trained by a training dataset, in [0187]: The training images include images of authentic and non-authentic information bearing devices, - c) outputting a classification output from the model indicating a likelihood that the image of the subject consumer good is authentic or non-authentic in [0079]: The FCN will perform classification operations on the processed data of the CNN, for example, to determine whether, or how likely, the processed data of a target image CNN corresponds to an authentic authentication device or a non-authentic authentication device.
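Steps (b) and (c) above, inputting the obtained image into a trained model and outputting a likelihood-style classification, can be sketched with a minimal stand-in classifier. Lau's disclosure uses a CNN with an FCN classification stage; the logistic model below only illustrates the input-to-likelihood-to-label flow, and the feature values and weights are hypothetical, not drawn from any cited reference:

```python
import math

def classify_authenticity(features, weights, bias):
    """Sketch of steps (b)-(c): a trained classifier maps extracted image
    features to a likelihood that the subject good is authentic.
    A real system would use a CNN/FCN as in Lau [0075]-[0079]; this
    stand-in logistic model only shows the likelihood-output behavior.
    """
    # Weighted sum of features, as a trained model's decision score.
    score = sum(w * f for w, f in zip(weights, features)) + bias
    p_authentic = 1.0 / (1.0 + math.exp(-score))  # likelihood in [0, 1]
    label = "authentic" if p_authentic >= 0.5 else "non-authentic"
    return label, p_authentic

# Hypothetical feature vector and trained weights.
label, p = classify_authenticity([0.8, 0.1, 0.4], [2.0, -1.0, 0.5], -0.2)
```

The returned pair corresponds to the claimed "classification output ... indicating a likelihood": a binary label plus a confidence value rather than a bare yes/no answer.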
- wherein the training dataset further comprises further extracted images, from the plurality of different camera types, of a non-authentic product comprising a non-authentic product specification, wherein the non-authentic product specification is different from the at least one steganographic feature, In [0138]: When the data-embedded image pattern of the authentic information bearing device is captured by an image capture apparatus, the gray levels of the pixels forming the captured image may be changed In [0138]: The change may be due to internal setting of the image capture apparatus (for example, exposure setting), calibration of the image capture apparatus, ambient illumination, sensitivity and/or linearity of the image sensor of the capture apparatus, angle of image capture, and/or other parameters. In [0140]: When a smart phone having a built-in image capture device is used to capture an image of the information bearing device 60, the resulting captured images have different average brightness levels ranging from 19 to 252, (BRI: modern smartphones (from early 2011) represent a plurality of different camera types integrated into one device. They typically feature multiple lenses—such as wide-angle, ultra-wide, and telephoto—along with dedicated depth or macro sensors, allowing users to capture varied perspectives and improve image quality through computational photography. The prior art was filed on 2018-03-01.) In [0073]: An authentic authentication device herein is also referred to as an authentic information bearing device or a genuine information bearing device herein, while a non-authentic authentication device is also referred to as a non-authentic information bearing device or a non-genuine information bearing device where appropriate. The target image may be captured by the apparatus or received from an outside source.
In [0132]: An image of an authentic authentication device is a primary copy of an authentic information bearing device, while an image of a non-authentic authentication device may be a secondary copy of an authentic information bearing device or a copy of a fake information bearing device. In [0071]: An example information bearing device herein comprises a data-encoded image pattern which is encoded with a set of discrete data. The set of data is human non-perceivable in its encoded state such that the data is not readily readable or readily decodable by a human reader looking at the data-encoded image pattern using naked eyes. - and wherein the training dataset further comprises the extracted images of the authentic product augmented with geometric distortion so that the extracted images of the authentic product have a different shape. In [0006]: An authentication tool, an authentication apparatus and a method to devise an authentication tool for facilitating determination of authenticity or genuineness of an article with reference to a captured image or a purported primary image of the article using a neural network is disclosed. To facilitate verification of authenticity of an article, an article is commonly incorporated with an authentication device which includes an information bearing device such as a label, a tag or an imprint. The information bearing device comprises a data-embedded image pattern and the data-embedded image pattern is covertly encoded with a set of data so that the data is not perceivable by a reasonable person reading the data-embedded image pattern. In example embodiments, each data is a discrete data having characteristic two- or three-dimensional coordinate values in data domain and the coordinate values are transformed into spatial properties of image-defining elements which cooperate to define the entirety of the data-embedded image pattern.
The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain. The data may be covertly coded by a transformation function which operate to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. (BRI: transforming data into spatial properties of image-defining elements (such as pixel positions, shapes, or sizes) and spreading coordinate values throughout those elements generally represents geometrical distortion) In [0016]: In some embodiments, the set of data embedded in the data-embedded image pattern comprises a plurality of discrete frequency data, and the discrete frequency data are transformed into spatially distributed pattern defining elements which are spread in the data-embedded image pattern and which are non-human readable or non-human perceivable using naked eyes; and the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0136]: Each pixel has characteristic physical properties including size, shape, color, brightness, etc., and the entirety of pixels collectively define a data-embedded image pattern. (BRI: the process of spatially distributing a pattern that is non-human readable represents a sophisticated steganographic feature) - and wherein the training dataset is spatially manipulated by a Spatial Transformer Network before training the machine learning classifier. In [0006]: The spatial properties include, for example, brightness or amplitude of an image-defining element at a specific set of coordinates on the data domain.
The data may be covertly coded by a transformation function which operate to spread the coordinate values of a data into spatial properties spread throughout the image-defining elements. The set of data or each individual discrete data point has characteristic signal strengths. The authenticity or genuineness of an article is determined with reference to whether a captured image is a primary image of an authentic information bearing device. In [0016]: the spatially distributed pattern defining elements and the discrete frequency data are correlated by Fourier transform. In [0177]: On defining the CNN structure, the input layer is set to have a single channel since the example information bearing device 60 has a data-embedded image pattern which is defined by pattern defining elements in gray-scale coding. (BRI: this process combines frequency-domain data with spatial manipulation, similar to techniques used in Spectral-Spatial-Frequency Transformer Networks or Fourier-based data augmentation/feature extraction. Specifically, discrete frequency data (e.g., Fourier or Discrete Cosine Transform coefficients) can be transformed back into spatially distributed patterns or maps, which are then used to augment or inform training datasets. When these generated patterns are treated as input images, a Spatial Transformer Network (STN) can be employed to actively manipulate (e.g., rotate, scale, warp) these input features to improve spatial invariance before the data is fed to a classification model.)
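The spatial manipulation attributed above to a Spatial Transformer Network can be made concrete with a small sketch. In a real STN the 2x3 affine matrix is predicted by a localization sub-network; here the rotation and scale are fixed for illustration, and the code is not from any cited reference:

```python
import math

def affine_warp(image, angle_deg, scale):
    """Sketch of the spatial manipulation an STN learns to apply:
    rotate and scale an image grid via inverse mapping with
    nearest-neighbor sampling. In a real Spatial Transformer Network
    the affine parameters are predicted by a localization network;
    here they are fixed inputs for illustration.
    """
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    cos_a = math.cos(math.radians(angle_deg))
    sin_a = math.sin(math.radians(angle_deg))
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Inverse-map each output pixel back into the source image.
            dx, dy = (x - cx) / scale, (y - cy) / scale
            sx = cos_a * dx + sin_a * dy + cx
            sy = -sin_a * dx + cos_a * dy + cy
            isx, isy = int(round(sx)), int(round(sy))
            if 0 <= isx < w and 0 <= isy < h:
                out[y][x] = image[isy][isx]
    return out

# The identity transform (0 degrees, scale 1) leaves the image unchanged.
img = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
same = affine_warp(img, 0, 1.0)
```

Applying such warps to training images before classifier training is one way the "spatially manipulated by a Spatial Transformer Network" limitation can be understood, though an STN additionally learns which warp to apply.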
Lau does not explicitly disclose: - wherein the training data set further comprises a balance set of extracted images, from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types. However, Broyda discloses: - wherein the training data set further comprises a balance set of extracted images, from both authentic and non-authentic corresponding consumer goods, wherein balance refers to an equal number, or approximately equal number of, training samples, from each of the plurality of different camera types in [0076]: client device 105 are each generally intended to encompass any client computing device such as a laptop/notebook computer, wireless data port, smart phone, personal data assistant (PDA), tablet computing device, in [0076]: a client device may comprise a computer that includes an input device, such as a keypad, touch screen, or other device that can accept user information, and an output device that conveys information associated with the operation of the server 102, (BRI: client computing devices represented as a laptop (webcam), smartphone, PDA, and tablet have cameras that teach at least three different camera types) In [0003]: receiving a request to authenticate an image of a document; preprocessing the image of the document to prepare the image of the document for line orientation analysis; automatically analyzing the preprocessed image to determine lines in the preprocessed image in [0177]: FIGS. 14A and 14B illustrate examples of a machine-generated receipt image 1402 and an authentic receipt image 1404, respectively. The authentic receipt image 1404 can be an image captured by a camera, in [0234]: Features associated with valid images can be features of images of printed documents that have been captured by a camera, for example. in [0029]: FIG.
21 is a flowchart of an example method for training a neural network model for image classification. In [0182]: The valid electronic documents can be excluded from machine learning training, or can be included in machine learning training In [0217]: At 2110, the network is trained. In general, machine learning algorithms can be trained on a training portion and evaluated on a testing portion. More specifically, the model can be initially fit using a training dataset that is a set of examples used to fit the parameters (e.g., weights of connections between neurons in artificial neural networks) of the model. The fitted model can be used to predict the responses for the observations in a second dataset called the validation dataset. The validation dataset can provide an unbiased evaluation of a model fit on the training dataset while tuning the model's hyperparameters (e.g., the number of hidden units in a neural network). In [0215]: A validation data set is a dataset of examples used to tune the hyperparameters of the network. A hyperparameter can be, for example, the number of hidden units in the network. For instance, an example validation data set 2108b can include 300 fake images and 9300 authentic images. In [0216]: A test dataset is a dataset that is independent of the training dataset, but that can follow a same probability distribution as the training dataset. For instance, an example test data set 2108b can include 100 fake images and 100 authentic images. A test data set can be used to evaluate and fine tune a fitted network. (BRI: a model fitted on a training dataset can predict responses for a separate validation dataset; this provides an unbiased evaluation of performance, helps in tuning hyperparameters and detecting overfitting, and lets the validation set act as a checkpoint for how well the model generalizes to unseen data. In this context, the training set is used to fit the model parameters, often to minimize error.
While validation sets are generally used for evaluation, maintaining a balanced dataset (in terms of classes or features) during training is crucial for building a reliable, unbiased model. The data set is balanced with 100 fake images and 100 authentic images.) The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and provide authentication with the use of three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau and Broyda. Lau teaches using a neural network model to classify the product authenticity after capturing the image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set.
One of ordinary skill would have been motivated to combine Lau and Broyda to provide increased system accuracy and confidence for product authentication (Broyda [0051]). Lau and Broyda do not explicitly disclose: - wherein the authentic product specification comprises a Manufacturing Line Variable Printing Code; - (ii) extracted images, from the plurality of different camera types, of a non-authentic product comprising a non-authentic product specification comparable with the subject product specification, - wherein the non-authentic product specification is different from the Manufacturing Line Variable Printing Code; - and (iii) an associated class definition based on the Manufacturing Line Variable Printing Code; However, Simske discloses: - wherein the authentic product specification comprises a Manufacturing Line Variable Printing Code; [0037] Again referring back to reference numeral 204, if a template is not available, the method continues with the analysis system 18 determining whether one or more prior zoning output specifications is available in the secure database 14, as shown at reference numeral 218. Prior zoning output specifications are results (i.e., list of regions, region types/classifications, and/or region characteristics) from other images that have undergone zoning analysis that are stored in the secure database 14. in [0028]: It is to be understood that to identify a region of interest that contains variable data, one needs to compare the information in two or more images of a variable data printing job (e.g., barcodes on two different labels), in [0033]: Inspection as an end use also accommodates using an image evaluation as a strategy for identifying regions of interest.
Using this strategy, regions identified as being highly variable can be reviewed as steganographic deterrents SD (BRI: a barcode printed as a variable in a designated "region of interest" (ROI) within an output specification may represent an authentic product specification that comprises a Manufacturing Line Variable Printing Code) in [0014]: It is to be understood that any of the steganographic security deterrents SD may contain information, in [0015]: Further, it is to be understood that the information may be, for example, a code; a sequence of bits, bytes, characters, colors, graphics, numbers, etc.; a watermark; symbols; interpretable information; a fingerprint(s). - wherein the non-authentic product specification is different from the Manufacturing Line Variable Printing Code; In [0012]: The indicia 24 printed on the object 22 may include, but are not limited to graphical indicia, alphanumeric indicia, or combinations thereof. In one non-limiting example, the indicia 24 are text T or images I which include brand information, product information, manufacturer or distributor information, and/or any other desirable textual and/or graphical information. In [0023]: Referring back to FIG. 2, if one or more templates is/are available, the analysis system 18 will compare each of the regions that are included in the list with each available template to determine if one or more matches are found, as shown at reference numeral 206. In [0023]: When a match between the listed regions and an existing template is found, the existing template is reviewed for previously identified and/or optimized regions of interest. In [0023]: a stored template which matches the digital image 26 may indicate that the color tile security deterrent SD, 24 is consistently rated as the best indicia 24 to enable an image-based forensic service to differentiate the image 26 from a counterfeit.
As another example, the stored template which matches the digital image 26 may indicate that purposefully misspelling the indicia "PRODUCT X" T, 24 adds another region of variability to the deployed object 22, at least in part because counterfeiters often correct the spelling. (BRI: the intentional misspelling by the manufacturer is the counterfeit specification provided by the manufacturer as a security measure) - and (iii) an associated class definition based on the Manufacturing Line Variable Printing Code; (BRI: within the context of product authentication, identifying regions particularly suitable for an end application can provide an associated class definition for manufacturing line variable printing codes that can provide customized products for specific markets) In [0023]: When a match between the listed regions and an existing template is found, the existing template is reviewed for previously identified and/or optimized regions of interest. Since the template is based upon one or more previously analyzed images, it may identify regions that are particularly suitable for a specific end application and/or it may identify how to optimize one or more regions for a specific end application. The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and provide authentication with the use of three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify the product authenticity after capturing the image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion.
Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches a non-authentic specification different from the Manufacturing Line Variable Printing Code. One of ordinary skill would have been motivated to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]). In regard to claim 14: (Previously Presented) Lau discloses: - augmenting the extracted images of the authentic product with color distortion in [0136]: Modern authentic information bearing devices contain data-embedded image patterns which are digitally formed and consist of pixels. Each pixel has characteristic physical properties including size, shape, color, brightness, etc in [0136]: Some of the characteristic physical properties suffer degradation or loss of fidelity during image capture and/or reproduction, in [0138]: When the data-embedded image pattern of the authentic information bearing device is captured by an image capture apparatus, the gray levels of the pixels forming the captured image may be changed. For example, the gray levels may be shifted linearly, non-linearly, randomly or may have an entirely different gray-scale distribution of pixels compared to those of the data-embedded image pattern. The change may be due to internal setting of the image capture apparatus (for example, exposure setting), calibration of the image capture apparatus, ambient illumination, sensitivity and/or linearity of the image sensor of the capture apparatus, angle of image capture, and/or other parameters.
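The linear gray-level shifts Lau's [0138] attributes to exposure, calibration, and sensor differences can be mimicked as a color-distortion augmentation. The sketch below is illustrative only; the gain and offset values are hypothetical and not taken from any cited reference:

```python
def augment_gray_levels(image, gain, offset):
    """Sketch of color/gray-level distortion augmentation (cf. Lau [0138]):
    shift each pixel's gray level linearly, clipping to the 0-255 range,
    to mimic variation across exposure settings and camera sensors.
    Gain/offset values here are illustrative, not from the references.
    """
    return [
        [min(255, max(0, int(round(p * gain + offset)))) for p in row]
        for row in image
    ]

# A tiny hypothetical grayscale image, brightened by 10% gain plus +20 offset.
img = [[0, 128, 255], [64, 200, 10]]
brighter = augment_gray_levels(img, gain=1.1, offset=20)
```

Training on both the original and the shifted copies is one way a training set can be made robust to the capture-apparatus variation Lau describes.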
In regard to claim 15: (Original) [See Examiner’s Note under “Claim Objections”] Lau and Broyda do not explicitly disclose: - wherein the Manufacturing Line Variable Printing Code comprises one or more of: one or more alphanumeric characters, one or more non-alphanumeric characters, one or more non-alphanumeric characters comprising a pattern box, or one or more non-alphanumeric characters comprising a dotted column. However, Simske discloses: - wherein the Manufacturing Line Variable Printing Code comprises one or more of: one or more alphanumeric characters, one or more non-alphanumeric characters, one or more non-alphanumeric characters comprising a pattern box, or one or more non-alphanumeric characters comprising a dotted column. in [0012]: The indicia 24 printed on the object 22 may include, but are not limited to graphical indicia, alphanumeric indicia, or combinations thereof. In one non-limiting example, the indicia 24 are text T or images I which include brand information, product information, manufacturer or distributor information, and/or any other desirable textual and/or graphical information. In another non-limiting example, the indicia 24 are security deterrents SD (some of which may be steganographic, i.e., capable of having information hidden therein) selected from color lines, fingerprints, color text, copy detection patterns (CDP), color tiles, letter sequences, number sequences, graphic sequences, target patterns, bar codes, and the like, and combinations thereof. (BRI: graphic sequences are non-alphanumeric characters.
The graphic indicia can represent alphanumeric characters that comprise a pattern box, often in contexts involving document verification, security marking, or specialized character encoding) The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and provide authentication with the use of three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify the product authenticity after capturing the image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the printing code. One of ordinary skill would have been motivated to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).
In regard to claim 16: (Previously Presented)

Lau and Broyda do not explicitly disclose:
- Manufacturing Line Variable Printing Code is printed or affixed to the subject consumer good by one or more of: a continuous ink-jet printer, an embossing, a laser etching, thermal transferring, or hot waxing.

However, Simske discloses:
- Manufacturing Line Variable Printing Code is printed or affixed to the subject consumer good by one or more of: a continuous ink-jet printer, an embossing, a laser etching, thermal transferring, or hot waxing, in [0013]: As non-limiting examples, the indicia 24 may be formed of inkjet ink, laserjet ink, spectrally opaque ink, spectrally transparent ink, ultraviolet ink, infrared ink, thermochromatic ink, electrochromatic ink, electroluminescent ink, conductive ink, magnetic ink, color-shifting ink, quantum dot ink, phosphorescent ink, a guilloche, a planchette, holographs, security threads, watermarks, other security deterrents, anti-tamper deterrents, and combinations thereof.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the printing code.
One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

In regard to claim 17: (Previously Presented)

Lau and Broyda do not explicitly disclose:
- training dataset comprises annotations that annotate the Manufacturing Line Variable Printing Code.

However, Simske discloses:
- training dataset comprises annotations that annotate the Manufacturing Line Variable Printing Code, in [0006]: the detection of the steganographic marks may be used by brand protection investigators to process many images simultaneously and discover counterfeit images in large data sets; the detection of variable data printing regions may be used for proofing and/or inspecting in print authentication; and the detection of low quality marks may be used for proofing, print defect detection, and auditing.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the printing code.
One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

Claims 6-7 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tak Wai Lau et al. (hereinafter Lau), US 2020/0410510 A1, in view of Juliy Broyda et al. (hereinafter Broyda), US 2021/0004949 A1, further in view of Simske et al. (hereinafter Simske), US 2011/0280480 A1.

In regard to claim 6: (Original)

Lau and Broyda do not explicitly disclose:
- the at least one steganographic feature is selected from one or more of: an isolated font style for a letter, an isolated font style for a number; an isolated location change of a text location, an isolated location change of a letter location, an isolated location change of a punctuation location.

However, Simske discloses:
- the at least one steganographic feature is selected from one or more of: an isolated font style for a letter, an isolated font style for a number; an isolated location change of a text location, an isolated location change of a letter location, an isolated location change of a punctuation location, in [0012]: In another non-limiting example, the indicia 24 are security deterrents SD (some of which may be steganographic, i.e., capable of having information hidden therein) selected from color lines, fingerprints, color text, copy detection patterns (CDP), color tiles, letter sequences, number sequences, graphic sequences, target patterns, bar codes, and the like, and combinations thereof.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the steganographic features. One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

In regard to claim 7: (Original)

Lau and Broyda do not explicitly disclose:
- the authentic product specification is selected from one or more of: a production code, a batch code, a brand name, a product line, a label, artwork, an ingredient list, or usage instructions.

However, Simske discloses:
- the authentic product specification is selected from one or more of: a production code, a batch code, a brand name, a product line, a label, artwork, an ingredient list, or usage instructions, in [0012]: The indicia 24 printed on the object 22 may include, but are not limited to, graphical indicia, alphanumeric indicia, or combinations thereof. In one non-limiting example, the indicia 24 are text T or images I which include brand information, product information, manufacturer or distributor information, and/or any other desirable textual and/or graphical information.
The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the product specification. One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

In regard to claim 18: (Original)

Lau and Broyda do not explicitly disclose:
- the training dataset comprises annotations annotating the at least one steganographic feature.

However, Simske discloses:
- the training dataset comprises annotations annotating the at least one steganographic feature, in [0006]: the detection of the steganographic marks may be used by brand protection investigators to process many images simultaneously and discover counterfeit images in large data sets; the detection of variable data printing regions may be used for proofing and/or inspecting in print authentication; and the detection of low quality marks may be used for proofing, print defect detection, and auditing.
The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the annotated steganographic features. One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).
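The annotated-training-dataset limitation discussed for claims 17 and 18 can be pictured as a collection of image records that each carry location annotations for the steganographic feature or printing code. The sketch below is hypothetical: the field names, file names, and the (x, y, width, height) bounding-box convention are assumptions for illustration and do not come from the application or the cited references.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    feature_type: str   # e.g. "pattern_box" or "dotted_column" (hypothetical labels)
    bbox: tuple         # (x, y, width, height) locating the feature in the image

@dataclass
class TrainingExample:
    image_path: str
    authentic: bool
    annotations: list = field(default_factory=list)

# A two-example toy dataset: one annotated authentic image, one counterfeit
# image where the steganographic feature is absent.
dataset = [
    TrainingExample("img_0001.png", True,
                    [Annotation("pattern_box", (120, 40, 32, 32))]),
    TrainingExample("img_0002.png", False),
]

n_annotated = sum(1 for example in dataset if example.annotations)
print(n_annotated)  # 1
```

The point of such records is that a supervised model can be trained not only on the authentic/non-authentic label but also on where the annotated feature appears.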
In regard to claim 19: (Previously Presented)

Lau and Broyda do not explicitly disclose:
- the at least one steganographic feature is generated by computing instructions configured for execution on a processor, that when executed caused the processor to automatically generate the steganographic feature based on one or more steganographic feature types.

However, Simske discloses:
- the at least one steganographic feature is generated by computing instructions configured for execution on a processor, that when executed caused the processor to automatically generate the steganographic feature based on one or more steganographic feature types, in [0007]: These components of the system 10 are part of a computer or enterprise computing system 20, which includes programs or software configured to segment an image, store and retrieve previously saved templates and/or zoning output specifications, store and retrieve previously stored region of interest information and strategies, and identify one or more regions of interest of the image; and in [0029]: As still another non-limiting example, if the end-application is determining steganographic content areas, then the strategy may involve looking for small regions, which may be noted in the registry 16 as being good candidates for the selected region of interest for such applications.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske.
Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the steganographic features. One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

In regard to claim 20: (Original)

Lau and Broyda do not explicitly disclose:
- the at least one steganographic feature is affixed on the subject consumer good during or after manufacture of the subject consumer good.

However, Simske discloses:
- the at least one steganographic feature is affixed on the subject consumer good during or after manufacture of the subject consumer good, in [0026]: The strategies may be developed and saved after one image has been deployed and analyzed, inspected, authenticated, or the like, and may be changed and/or refined over time. It is to be understood that any type of machine learning may be employed here; and in [0012]: The indicia 24 printed on the object 22 may include, but are not limited to, graphical indicia, alphanumeric indicia, or combinations thereof. In one non-limiting example, the indicia 24 are text T or images I which include brand information, product information, manufacturer or distributor information, and/or any other desirable textual and/or graphical information.
In another non-limiting example, the indicia 24 are security deterrents SD (some of which may be steganographic, i.e., capable of having information hidden therein) selected from color lines, fingerprints, color text, copy detection patterns (CDP), color tiles, letter sequences, number sequences, graphic sequences, target patterns, bar codes, and the like, and combinations thereof.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Simske. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. Simske teaches the steganographic features. One of ordinary skill would have had motivation to combine Lau, Broyda and Simske to determine the authenticity of a product using a steganographic feature in the image for a variety of applications to enhance anti-counterfeit efforts (Simske [0045]).

Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Tak Wai Lau et al. (hereinafter Lau), US 2020/0410510 A1, in view of Juliy Broyda et al. (hereinafter Broyda), US 2021/0004949 A1, further in view of Ronald Bruce Blair et al. (hereinafter Blair), US 2014/0037196 A1.
In regard to claim 23: (Previously Presented)

Lau and Broyda do not explicitly disclose:
- wherein the training data set comprises [[the]] approximately equal number of extracted images from both authentic and non-authentic corresponding consumer goods such that a number of the extracted images from the authentic consumer goods is within 5% or less of a number of the extracted images from the non-authentic consumer goods.

However, Blair discloses:
- wherein the training data set comprises the approximately equal number of extracted images from both authentic and non-authentic corresponding consumer goods such that a number of the extracted images from the authentic consumer goods is within 5% or less of a number of the extracted images from the non-authentic consumer goods, in [0034]: Any suitable image sensor 104 capable of capturing any suitable image (frame, line, or otherwise) of a document may be employed; and in [0023]: The training intensity values 128, 130 may be obtained from at least one training document 132 that is used as a benchmark or model to determine whether the document 116 is authentic. A training module 133 may capture one or more images 134, 136 of a training region 138 of the training document 132 in conjunction with the image capturing module 106, the light source 102, and the image sensor 104. In another embodiment, the training module 133 may be separate from the document authentication application 108 and be implemented by a different device, authority, or entity than that used to process and authorize the document 116. The training images 134, 136 may undergo processing to determine the training intensity values 128, 130 for each of the training images 134, 136, respectively. To provide comparisons between wavelength-dependent intensities of the training document 132 and the document 116, the images 110, 112 may be captured at the same or similar wavelengths as the training images 134, 136 of the training document 132.
In [0023]: Also, in one embodiment, training data may be collected from a plurality of training documents, whereby an average or acceptable range may be determined for comparison with the images 110, 112. In [0024]: The training intensity value 128 is indicative of an intensity of the image 134, and the training intensity value 130 is indicative of an intensity of the image 136. In one embodiment, the training intensity values 128, 130 may include a mean training intensity and a standard deviation of the mean training intensity for the images 134 and 136, respectively. However, other values, such as the median, maximum, minimum, average, etc., indicative of or associated with the intensity of the pixels may be used for the training intensity values 128, 130. (BRI: capturing images at the same wavelength generally leads to the same number of data points, as each pixel will represent the intensity of light at that specific wavelength; however, the number of data bits per pixel (radiometric resolution) can vary. Using the value variations from 128 to 130, the range is 2/128 = 1.5625% (within 5%).)

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set. It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and Blair. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set.
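The "within 5% or less" balance criterion recited in claim 23 lends itself to a compact numeric check. The sketch below is one reasonable reading of the claim language (dividing the count difference by the larger count); it is an illustration only, not a claim construction adopted by the examiner or the applicant.

```python
def is_balanced(n_authentic: int, n_counterfeit: int, tolerance: float = 0.05) -> bool:
    """True when the two image counts differ by at most `tolerance` (default 5%)."""
    larger = max(n_authentic, n_counterfeit)
    if larger == 0:
        return True  # an empty training set is trivially balanced
    return abs(n_authentic - n_counterfeit) / larger <= tolerance

print(is_balanced(1000, 970))  # True: counts differ by 3%
print(is_balanced(1000, 900))  # False: counts differ by 10%
```

A training pipeline could apply such a check before fitting, rejecting or resampling any dataset whose authentic and non-authentic image counts drift outside the tolerance.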
Blair teaches that the number of the extracted images from the authentic consumer goods is within 5% or less of the number of the extracted images from the non-authentic consumer goods. One of ordinary skill would have had motivation to combine Lau, Broyda and Blair, as doing so can help reduce the outlying data (Blair [0029]).

Claim 24 is rejected under 35 U.S.C. 103 as being unpatentable over Tak Wai Lau et al. (hereinafter Lau), US 2020/0410510 A1, in view of Juliy Broyda et al. (hereinafter Broyda), US 2021/0004949 A1, further in view of Iain McDonald et al. (hereinafter McDonald), US 2019/0213462 A1.

In regard to claim 24: (Previously Presented)

Lau and Broyda do not explicitly disclose:
- wherein the extracted images of the authentic product have a shape of a first polygon and wherein the extracted images of the authentic product augmented with the geometric distortion have a shape of a second polygon that is different from the first polygon.

However, McDonald discloses:
- wherein the extracted images of the authentic product have a shape of a first polygon and wherein the extracted images of the authentic product augmented with the geometric distortion have a shape of a second polygon that is different from the first polygon, in [0005]: Unlike traditional tags, the secure tags each contain a unique, discreet key, have dynamic and flexible areas of storage, are integrated with digital ledgers, and contain numerous other advantages. For example, the secure tags are not limited to one shape or form—they can exist in multiple design states to fit the need of the customer. In [0155]: During tag detection step 801, client device 110 can be configured to determine an orientation of a tag feature and rotate the image based on the determined orientation of the tag feature. The rotation can further be based on a target parameter value retrieved from the public portion of the stylesheet. The tag feature can be a center logo of the secure tag.
Client device 110 can be configured to identify a center of the tag using the template match system described above. Client device 110 can be configured to then determine an outer ovoid line encompassing the entire secure tag. Client device 110 can be configured to then determine a center of the secure tag. After determining the center and the ovoid line, client device 110 can be configured to construct multiple right triangles on the secure tag image. The right triangles can be placed such that the center of each right triangle overlaps the center of the secure tag, while the two vertices bounding the hypotenuse intersect the outer ovoid rim. The triangle(s) with the least and/or greatest hypotenuse can be used, in conjunction with orientation information in the public portion of the stylesheet, to correct the orientation of the tag. In [0026]: the potential secure tag can include detecting image gaps by determining tag feature options for potential image gaps and comparing the tag feature option values to target parameter values. The target parameter values can include an inner or outer tag rim thickness and diameter ratio, or an inner or outer tag rim thickness and tag rim break width ratio. In other embodiments still, generating a normalized image of the potential secure tag can include determining an orientation of a tag feature and rotating the image based on the determined orientation of the tag feature and a target parameter value retrieved from the stylesheet. The tag feature can be a center logo and the target parameter value can include a center logo orientation.

The examiner interprets that the core theme of the invention is to apply machine learning to extracted images of potentially good and bad (counterfeited) products and to provide authentication, using three different camera types for image extraction and a balanced training set.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Lau, Broyda and McDonald. Lau teaches using a neural network model to classify product authenticity after capturing an image of the product that contains a steganographic feature and outputting the result of the classification, providing data augmentation with geometric distortion. Broyda teaches using at least three different camera types for capturing the image of the product and providing a balanced training set. McDonald teaches the shapes of a polygon. One of ordinary skill would have had motivation to combine Lau, Broyda and McDonald, as doing so can reduce the authentication issues across the supply chain (McDonald [0006]).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIRUMALE KRISHNASWAMY RAMESH, whose telephone number is (571) 272-4605. The examiner can normally be reached by phone. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li B. Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format.
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TIRUMALE K RAMESH/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

- Nov 20, 2020: Application Filed
- May 04, 2023: Non-Final Rejection — §103
- Aug 31, 2023: Response Filed
- Nov 17, 2023: Final Rejection — §103
- Feb 27, 2024: Request for Continued Examination
- Feb 29, 2024: Response after Non-Final Action
- Jun 05, 2024: Non-Final Rejection — §103
- Sep 16, 2024: Response Filed
- Dec 18, 2024: Final Rejection — §103
- Mar 20, 2025: Request for Continued Examination
- Mar 27, 2025: Response after Non-Final Action
- May 09, 2025: Non-Final Rejection — §103
- Aug 11, 2025: Response Filed
- Oct 03, 2025: Final Rejection — §103
- Jan 14, 2026: Request for Continued Examination
- Jan 21, 2026: Response after Non-Final Action
- Feb 17, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

- Patent 12518153: TRAINING MACHINE LEARNING SYSTEMS (granted Jan 06, 2026; 2y 5m to grant)
- Patent 12293284: META COOPERATIVE TRAINING PARADIGMS (granted May 06, 2025; 2y 5m to grant)
- Patent 12229651: BLOCK-BASED INFERENCE METHOD FOR MEMORY-EFFICIENT CONVOLUTIONAL NEURAL NETWORK IMPLEMENTATION AND SYSTEM THEREOF (granted Feb 18, 2025; 2y 5m to grant)
- Patent 12131244: HARDWARE-OPTIMIZED NEURAL ARCHITECTURE SEARCH (granted Oct 29, 2024; 2y 5m to grant)
- Patent 11803745: TERMINAL DEVICE AND METHOD FOR ESTIMATING FIREFIGHTING DATA (granted Oct 31, 2023; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

- Expected OA Rounds: 7-8
- Grant Probability: 18%
- With Interview: 20% (+2.1%)
- Median Time to Grant: 4y 5m
- PTA Risk: High
Based on 40 resolved cases by this examiner. Grant probability derived from career allow rate.
