DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 07 Oct 2025 has been entered.
Status of Claims
This Office Action is in response to the communication filed on 07 Oct 2025.
Claims 1, 3, 8-9, 12, 14, 19-20, 23, 25, and 30-31 are being considered on the merits.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 12, and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Golan et al. (“Deep Anomaly Detection Using Geometric Transformations,” arXiv:1805.10917v2 [cs.LG], 9 Nov 2018; hereinafter “Golan”) in view of Muselli et al. (US 2022/0036137 A1; hereinafter “Muselli”).
Regarding claims 1, 12, and 23, Golan teaches:
A system for training a neural network to predict anomalous data within a specified data domain, comprising: (Golan, pg. 13, Algorithm 1: “Train F_θ on one labeled set S_T.”)
apply, to each of said plurality of data instances, a set of M affine transformations, to transform each of said plurality of data instances into a set of M transformed data instances, wherein each of said transformed data instances is labeled with a transformation label indicating a particular one of said set of M affine transformation applied thereto (Golan, pg. 3, sec. 4 and pg. 7, sec. 6: “To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T. The created dataset, denoted S_T, is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it.” “In this section we explain our intuition behind the choice of the set of transformations used in our method. Any bijection of a set (having some geometric structure) to itself is a geometric transformation. Among all geometric transformations, we only used compositions of horizontal flipping, translations, and rotations in our model”)
automatically construct a training set comprising at least some of said labeled transformed data instances, (Golan, sec. 4.1: “Thus, for any x ∈ S, j is the label of T_j(x). We use this set to straightforwardly learn a deep k-class classification model, f_θ, which we train over the self-labeled dataset S_T using the standard cross-entropy loss function.”)
use said training set to train a neural network, (Golan, sec. 4.1: “Thus, for any x ∈ S, j is the label of T_j(x). We use this set to straightforwardly learn a deep k-class classification model, f_θ, which we train over the self-labeled dataset S_T using the standard cross-entropy loss function.”) wherein said training optimizes said neural network to predict the particular one of said set of M affine transformation applied to a transformed data instance, and (Golan, sec. 6: “We speculate that the effectiveness of the chosen transformation set is affected by their ability to preserve spatial information about the given “normal” images, as well as the ability of our classifier to predict which transformation was applied on a given transformed image.”) wherein said prediction is associated with an accuracy probability (Golan, sec. 6: “In addition, for a fixed type-II error rate, the type-I error rate of our method decreases the harder it gets for the trained classifier to correctly predict the identity of the transformations that were applied on anomalies.” Examiner notes Golan teaches an accuracy probability by analyzing the numbers of type-I and type-II errors).
apply said set of M affine transformations to a target data instance to obtain a set of M transformed target data instances, and (Golan, pg. 3, sec. 4 and pg. 7, sec. 6: “To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T. The created dataset, denoted S_T, is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it.” “In this section we explain our intuition behind the choice of the set of transformations used in our method. Any bijection of a set (having some geometric structure) to itself is a geometric transformation. Among all geometric transformations, we only used compositions of horizontal flipping, translations, and rotations in our model”)
apply said trained neural network to each of said transformed target data instances, to obtain said predictions and said associated accuracy probabilities, and (Golan, pg. 3, sec. 4 and pg. 7, sec. 6: “To this end, we create a self-labeled dataset of images from our initial training set S, by using a class of geometric transformations T. The created dataset, denoted S_T, is generated by applying each geometric transformation in T on all images in S, where we label each transformed image with the index of the transformation that was applied on it.” “In addition, for a fixed type-II error rate, the type-I error rate of our method decreases the harder it gets for the trained classifier to correctly predict the identity of the transformations that were applied on anomalies.” Examiner notes Golan teaches an accuracy probability by analyzing the numbers of type-I and type-II errors)
Golan does not explicitly disclose:
at least one hardware processor; and
a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to:
receive, as input, a dataset associated with a specified non-image data type within said specified data domain and comprising a plurality of data instances, wherein said plurality of data instances represent normal data within said specified data domain,
However, Muselli teaches:
at least one hardware processor; and (Muselli, para. 0024: “It is also disclosed an apparatus comprising a processor and memory storing computer-executable instructions”)
a non-transitory computer-readable storage medium (Muselli, para. 0054: “The apparatus 100 comprises a processor 110 and a memory 120. The memory 120 may comprise random access memory (RAM), read-only memory (ROM), one or more hard drives, and/or any other type of computer-readable medium or memory.”) having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: (Muselli, para. 0024: “It is also disclosed an apparatus comprising a processor and memory storing computer-executable instructions”)
receive, as input, a dataset associated with a specified non-image data type within said specified data domain and comprising a plurality of data instances, wherein said plurality of data instances represent normal data within said specified data domain, (Muselli, para. 0044: “In an embodiment, the data records in the unlabeled data set contain information related to a business process, such as a work order. The data records may be related to at least one category selected from the group consisting of product names, minimum quantities, lead times, and shipping methods.”)
calculate an anomaly score for said target data instance based on said accuracy probabilities. (Muselli, para. 0011: “The computer-implemented method may further comprise a step of assigning a confidence score to the predicted output class by the classification model, and that the anomalous data record is detected based on a threshold of the confidence score.” Examiner notes that Muselli’s confidence score reads on the claimed anomaly score because that score is used to detect anomalies.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Muselli into Golan. Golan teaches training a deep neural model to detect anomalies using a multi-class model to discriminate between dozens of geometric transformations applied on all given images; Muselli teaches a computer-implemented method and system for detecting anomalies in an unlabeled data set of data records. One of ordinary skill would have been motivated to combine the teachings of Muselli into Golan in order to enable an automatic data check of a very large amount of data, being capable of processing unlabeled data sets comprising many thousands of data records, or even more, in a short time (Muselli, para. 0053).
Claims 3, 8-9, 14, 19-20, 25, 30, and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Golan in view of Muselli, and further in view of X. Chen, B. Li, R. Proietti, Z. Zhu and S. J. B. Yoo (“Self-Taught Anomaly Detection With Hybrid Unsupervised/Supervised Machine Learning in Optical Networks,” Journal of Lightwave Technology, vol. 37, no. 7, pp. 1742-1749, April 1, 2019, doi: 10.1109/JLT.2019.2902487; hereinafter “Chen”).
Regarding claims 3, 14, and 25, Golan as modified teaches claims 1, 12, and 23 above. Chen further teaches:
wherein said training causes said neural network to learn a feature extractor which maps said set of transformed data instances into a feature representation within a feature space. (Chen, sec. 4: “The input is processed by a few shared fully-connected hidden layers for feature extraction.”)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Chen into Golan, as modified. Chen teaches a self-taught anomaly detection framework for optical networks. One of ordinary skill would have been motivated to combine the teachings of Chen into Golan, as modified, in order to facilitate more scalable and time-efficient online anomaly detection by avoiding excessively traversing the original dataset (Chen, abstract).
Regarding claims 8, 19, and 30, Golan as modified teaches claims 1, 12, and 23 above. Muselli further teaches:
wherein said specified non-image data type is selected from the group comprising: numerical data, univariate time-series data, multivariate time-series data, attribute-based data, vectors, graph data, and tabular data. (Muselli, paras. 0043-0044: “The disclosed method places no restrictions on the type of variables. For example, they may represent names, codes, time values, address components, control parameters, or numeric values. ¶ In an embodiment, the data records in the unlabeled data set contain information related to a business process, such as a work order. The data records may be related to at least one category selected from the group consisting of product names, minimum quantities, lead times, and shipping methods.” Examiner notes Muselli teaches the data types of quantities and numeric values, i.e., numerical data)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine the teachings of Muselli into Golan, as modified, as set forth above with respect to claims 1, 12, and 23.
Regarding claims 9, 20, and 31, Golan as modified teaches claims 1, 12, and 23 above. Golan further teaches:
wherein said set of M affine transformations comprise dimensionality reduction transformations and non-distance preservation transformations. (Golan, secs. 5.1 and 6: “we use this model on raw input (i.e., a flattened array of the pixels comprising an image), as well as on a low-dimensional representation obtained by taking the bottleneck layer of a trained convolutional autoencoder.” Examiner notes that Golan’s low-dimensional representation is both a dimensionality reduction transformation and a non-distance preservation transformation)
Response to Applicant Arguments/Remarks
35 U.S.C. §101
Beginning on page 9 of the remarks, applicant traverses the rejection under 35 U.S.C. §101. Applicant points out some inconsistencies in the prior Office action, which have been corrected. Regardless, in light of applicant’s arguments and amendments, the previously asserted §101 rejections have been withdrawn.
35 U.S.C. §103
On page 16, applicant argues that neither Golan nor Muselli teaches the claims as amended, particularly the use of a dataset containing non-image data. However, Muselli teaches such a dataset as an example.
On page 17, applicant argues that Muselli does not teach “only normal (non-anomalous data)”. However, applicant does not claim a data set consisting of “only normal” data. Instead, applicant claims receiving data comprising normal data, i.e., data which may also comprise other data.
Applicant goes on to discuss the variable dependency of Muselli, which does not affect or change the claims as currently asserted. Finally, applicant argues that Chen does not cure the deficiencies; however, Chen is not asserted to teach such limitation.
Applicant’s arguments are unpersuasive. Claims 1, 12, and 23, as amended, are rejected for the reasons set forth in the rejection above.
Applicant makes no independent argument regarding the patentability of dependent claims 3, 8-9, 14, 19-20, 25, or 30-31. Therefore, such dependent claims remain rejected at least by virtue of their dependency on the independent claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Sally T. Ley whose telephone number is (571)272-3406. The examiner can normally be reached Monday - Thursday, 10:00am - 6:00pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Viker Lamardo can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/STL/Examiner, Art Unit 2147
/ERIC NILSSON/Primary Examiner, Art Unit 2151