Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-10 are rejected under 35 U.S.C. 101 because:
At step 1:
Claims 1-10 are directed to a “system, method and computer readable medium for anomaly detection” and thus fall within a statutory category.
At step 2A, Prong One:
Claims 1-10 recite the following limitations directed to an abstract idea:
“obtaining latency data of a first endpoint; obtaining latency data of a second endpoint” recites a mental process, as obtaining latency data of each endpoint is an observation that can practically be performed in the human mind or with pen and paper.
“generating, by a representation learning model, reconstruction error distribution data of the latency data of the first endpoint according to the latency data of the first endpoint and the latency data of the second endpoint” recites a mental process and/or a mathematical concept, as using a model to generate reconstruction error distribution data from the latency data of the two endpoints is a mathematical calculation.
“obtaining new latency data of the first endpoint; obtaining new latency data of the second endpoint” likewise recites a mental process of observing new latency data for each endpoint.
“generating, by the representation learning model, a reconstruction error of the new latency data of the first endpoint according to the new latency data of the first endpoint and the new latency data of the second endpoint” recites a mental process and/or a mathematical concept, as using the model to generate a reconstruction error from the new latency data is a mathematical calculation.
“generating an anomaly score for the first endpoint according to a dispersion characteristic of the reconstruction error distribution data and the reconstruction error” recites a mental process and/or a mathematical concept, as computing an anomaly score from a dispersion characteristic of a distribution is an evaluation that can practically be performed in the human mind or with pen and paper.
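For illustration only, the recited data flow may be sketched as follows in Python; the PCA-based reconstruction standing in for the claimed representation learning model, the synthetic data, and the IQR-based score are explanatory assumptions of the Examiner, not the applicant’s disclosed implementation.

```python
# Illustrative sketch of the recited data flow (hypothetical names; a PCA
# reconstruction stands in for the "representation learning model").
import numpy as np

def fit_reconstruction_model(latency_a, latency_b, n_components=1):
    """Fit a linear representation (PCA) on stacked endpoint latency data."""
    X = np.column_stack([latency_a, latency_b])        # samples x endpoints
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]                     # mean, principal directions

def reconstruction_errors(mean, basis, latency_a, latency_b):
    """Per-sample reconstruction error of the first endpoint's latency."""
    X = np.column_stack([latency_a, latency_b])
    X_hat = (X - mean) @ basis.T @ basis + mean        # project and reconstruct
    return np.abs(X[:, 0] - X_hat[:, 0])               # error on first endpoint

# "Reconstruction error distribution data" from historical latency.
rng = np.random.default_rng(0)
hist_a, hist_b = rng.normal(10, 1, 500), rng.normal(20, 2, 500)
mean, basis = fit_reconstruction_model(hist_a, hist_b)
error_dist = reconstruction_errors(mean, basis, hist_a, hist_b)

# "Anomaly score ... according to a dispersion characteristic" (IQR here).
new_err = reconstruction_errors(mean, basis, np.array([14.0]), np.array([20.0]))
q25, q75 = np.percentile(error_dist, [25, 75])
score = (new_err[0] - q75) / (q75 - q25)               # Tukey-style score
print(f"anomaly score: {score:.2f}")
```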
At step 2A, Prong Two:
The claims recite the following additional elements:
The claimed system includes a “processor,” which is a high-level recitation of a generic computer component and its functions, and which represents mere instructions to apply the abstract idea on a computer, as in MPEP 2106.05(f); this does not integrate the abstract idea into a practical application.
At step 2B:
The conclusions regarding the mere implementation on a generic computer and the mere field of use are carried over and do not provide significantly more.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Regarding claim 2, the phrase “a TCN autoencoder model” renders the claim indefinite because it is unclear what “TCN” refers to; the specification fails to define the term.
Regarding claim 7, the phrase “a TCN autoencoder model” likewise renders the claim indefinite because it is unclear what “TCN” refers to; the specification fails to define the term.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 3, 5, and 8-10 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Bajaj (U.S. Pat. 12,468,617).
With respect to claims 1, 8 and 10, Bajaj discloses a method for anomaly detection, comprising:
obtaining latency data of a first endpoint (i.e., “The operational metrics may indicate system performance information, such as usage information, resource usage information, timing information (e.g., day), system analysis information, bandwidth information, and efficiency information. Example operational metrics include latency, throughput, number of invocations of the machine learning model 118, number of errors, types of errors, availability (e.g., availability of the machine learning services and/or other resources), or a combination thereof.”(col. 1, lines 60-67));
obtaining latency data of a second endpoint (i.e., “The operational metrics 120 may comprise latency (time) for application service 106 to receive a response from the machine learning model 118, latency as measured from the machine learning service 107 (e.g., overhead latency, latency of the machine learning model from input time to output time), or a combination thereof.”(col. 4, lines 22-27), “The analysis service 106 is configured to perform analysis of one or more of the services, such as the application service 104, the machine learning service 107, the storage service 108, the computing service 102, and/or the like. The analysis service 106 is configured to determine (e.g., generate) operational metrics 120 associated with the one or more services”(col. 3, lines 52-57), and the Examiner asserts that the different services, such as the application service, the storage service, etc., are different endpoints, i.e., the claimed first endpoint and second endpoint);
generating, by a representation learning model, reconstruction error distribution data of the latency data of the first endpoint according to the latency data of the first endpoint and the latency data of the second endpoint (i.e., “the operational metrics 120 comprise latency, throughput, number of invocations of the machine learning model 118, number of errors, types of errors, availability (e.g., availability of the machine learning services and/or other resources), durability, or a combination thereof”(col. 4, lines 15-21));
obtaining new latency data of the first endpoint (i.e., “The operational metrics 120 may comprise latency (time) for application service 106 to receive a response from the machine learning model 118, latency as measured from the machine learning service 107 (e.g., overhead latency, latency of the machine learning model from input time to output time), or a combination thereof.”(col. 4, lines 22-27); latency obtained at a different or current time constitutes the new latency data of the claimed invention);
obtaining new latency data of the second endpoint (i.e., “The operational metrics 120 may comprise latency (time) for application service 106 to receive a response from the machine learning model 118, latency as measured from the machine learning service 107 (e.g., overhead latency, latency of the machine learning model from input time to output time), or a combination thereof.”(col. 4, lines 22-27), “The analysis service 106 is configured to perform analysis of one or more of the services, such as the application service 104, the machine learning service 107, the storage service 108, the computing service 102, and/or the like. The analysis service 106 is configured to determine (e.g., generate) operational metrics 120 associated with the one or more services”(col. 3, lines 52-57), and the Examiner asserts that the different services, such as the application service, the storage service, etc., are different endpoints whose latency obtained at a different or current time constitutes the claimed new latency data; see also “the machine learning model 118 is associated with a service endpoint. The service endpoint may comprise a location (e.g., uniform resource identifier) and/or a computing service (e.g., a virtual machine, computing node, and/or the like of the computing service) associated with executing requests sent to the location. In embodiments, the service endpoint causes the machine learning model 118 to execute the plurality of prior events based on the specified transaction rate. The service endpoint may comprise a serverless web service configured with application code. If the machine learning model 118 is in production mode, requests from the application service 104 to process events may be sent to the service endpoint, which may forward the requests to the machine learning service 107 (e.g., which may cause the machine learning model 118 to process the request/event).”(col. 5, lines 40-55));
generating, by the representation learning model, a reconstruction error of the new latency data of the first endpoint according to the new latency data of the first endpoint and the new latency data of the second endpoint (i.e., “the operational metrics 120 comprise latency, throughput, number of invocations of the machine learning model 118, number of errors, types of errors, availability (e.g., availability of the machine learning services and/or other resources), durability, or a combination thereof”(col. 4, lines 15-21)); and
generating an anomaly score for the first endpoint according to a dispersion characteristic of the reconstruction error distribution data and the reconstruction error (i.e., “The accuracy metric may be based on area under the curve (AUC) scores of the machine learning model 118, such as an area under a receiver operating characteristic curve. The output of the machine learning model 118, such as an anomaly score, may be stored and used to determine the AUC scores for the machine learning model.”(col. 4, lines 59-65)).
With respect to claims 3 and 9, Bajaj further discloses generating, by the representation learning model, reconstruction error distribution data of the latency data of the second endpoint according to the latency data of the first endpoint and the latency data of the second endpoint (i.e., “the machine learning model 118 is associated with a service endpoint. The service endpoint may comprise a location (e.g., uniform resource identifier) and/or a computing service (e.g., a virtual machine, computing node, and/or the like of the computing service) associated with executing requests sent to the location. In embodiments, the service endpoint causes the machine learning model 118 to execute the plurality of prior events based on the specified transaction rate. The service endpoint may comprise a serverless web service configured with application code. If the machine learning model 118 is in production mode, requests from the application service 104 to process events may be sent to the service endpoint, which may forward the requests to the machine learning service 107 (e.g., which may cause the machine learning model 118 to process the request/event).”(col. 5, lines 40-55) and “the operational metrics 120 comprise latency, throughput, number of invocations of the machine learning model 118, number of errors, types of errors, availability (e.g., availability of the machine learning services and/or other resources), durability, or a combination thereof”(col. 4, lines 15-21)).
With respect to claim 5, Bajaj discloses that the latency data of the first endpoint includes a first training set and a first validation set, and the latency data of the second endpoint includes a second training set and a second validation set (i.e., “the operational metrics 120 comprise latency, throughput, number of invocations of the machine learning model 118, number of errors, types of errors, availability (e.g., availability of the machine learning services and/or other resources), durability, or a combination thereof. The operational metrics 120 may comprise latency (time) for application service 106 to receive a response from the machine learning model 118, latency as measured from the machine learning service 107 (e.g., overhead latency, latency of the machine learning model from input time to output time), or a combination thereof…. The operational metrics may comprise an indication of durability, such as an indication of whether a transaction was processed by the machine learning model 118, an indication of the number of transactions processed by the machine learning model 118, an indication of whether a response of the machine learning model 118 is valid, an indication of a number of responses of the machine learning model 118 that were valid, an indication of a number of responses of the machine learning model 118 that were invalid, a combination thereof, and/or the like.”(col. 4, lines 15-43)); that the representation learning model is trained by the first training set and the second training set (i.e., “The one or more machine learning models may be stored in the storage service 108. The machine learning service 107 may allow users to create, upload, modify, train, and/or the like a machine learning model 118.”(col. 3, lines 20-25), or “storing data indicative of a plurality of prior events associated with an application operating on a web service; receiving, based on input via a user interface, a trained machine learning model and data configuring the machine learning model to execute as an event processing service via a service endpoint, wherein the machine learning model is initially configured to execute in a non-production mode;”(claim 1), or “The machine learning model may be trained to classify the event as low risk, high risk, and/or any other relevant category”(col. 10, lines 38-40)); and that the reconstruction error distribution data is generated by inputting the first validation set and the second validation set into the representation learning model (i.e., the operational-metrics disclosure at col. 4, lines 15-43, reproduced above).
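For illustration only, the training/validation structure recited in claim 5 may be sketched as follows; the split ratio and helper names are assumptions, not taken from Bajaj or the application.

```python
# Minimal sketch of the train/validation split recited in claim 5
# (hypothetical helper names; the 80/20 ratio is an assumption).
import numpy as np

def split(series, train_frac=0.8):
    """Split a latency series into a training set and a validation set."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

latency_a = np.random.default_rng(1).normal(10, 1, 1000)
latency_b = np.random.default_rng(2).normal(20, 2, 1000)

train_a, val_a = split(latency_a)   # first training / first validation set
train_b, val_b = split(latency_b)   # second training / second validation set
# The model would be trained on (train_a, train_b); the reconstruction
# error distribution would then be generated from (val_a, val_b).
```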
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Bajaj (U.S. Pat. 12,468,617) in view of Karlsson et al. (U.S. Pat. 11,216,838 B1).
With respect to claim 6, Bajaj discloses all limitations recited in claim 1 except for obtaining values of a system parameter; generating a first correlation value between the values of the system parameter and the new latency data of the first endpoint; and generating a second correlation value between the values of the system parameter and the new latency data of the second endpoint. However, Karlsson et al. discloses obtaining values of a system parameter and generating a first correlation value between the values of the system parameter and the new latency data of the first endpoint (i.e., “The method may further include generating and/or updating a correlation map between the conversions and the converted impressions. For at each conversion that was observed at least within a look-back window, the correlation map may indicate the associated converted impression and the observed latency.”(col. 2, lines 22-30)); and generating a second correlation value between the values of the system parameter and the new latency data of the second endpoint (i.e., the correlation-map disclosure at col. 2, lines 22-30, reproduced above, and “Conversion module 216 and/or profile module 218 is enabled to generate a correlation map between the provided impressions and the received conversions. Furthermore, conversion module 216 and/or profile module 218 is enabled to generate/update various signals based on the provided impressions, received conversions, correlation map, and the configuration parameters. Profile module 218 is enabled to iteratively determine a latency distribution and a conversion rate based on the provided impressions, the received conversions, the correlation map, and one or more of configuration parameters 210.”(col. 8, lines 30-40)). It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include Karlsson et al.’s feature in order to accurately generate online advertising based on profiles of online events, a purpose that has been well known in the art as evidenced by the teaching of Karlsson et al.
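For illustration only, the recited first and second correlation values may be computed as Pearson correlations, as sketched below; the choice of Pearson correlation and the synthetic variables are assumptions for explanatory purposes.

```python
# Hedged sketch of the correlation limitation of claim 6 (Pearson
# correlation via numpy; all variable names are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(3)
system_param = rng.normal(0.5, 0.1, 200)          # e.g., CPU utilization
new_latency_a = 10 + 5 * system_param + rng.normal(0, 0.5, 200)
new_latency_b = 20 + rng.normal(0, 2, 200)        # uncorrelated endpoint

first_corr = np.corrcoef(system_param, new_latency_a)[0, 1]
second_corr = np.corrcoef(system_param, new_latency_b)[0, 1]
print(f"first correlation value:  {first_corr:.2f}")
print(f"second correlation value: {second_corr:.2f}")
```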
Allowable Subject Matter
Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the rejections under 35 U.S.C. §§ 101 and 112(b) are overcome, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest wherein the representation learning model is a TCN autoencoder model, and the reconstruction error distribution data of the latency data of the first endpoint is generated by inputting the latency data of the first endpoint and the latency data of the second endpoint into the TCN autoencoder model.
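For illustration only, the following minimal sketch shows one possible reading of a “TCN autoencoder,” assuming TCN denotes a temporal convolutional network built from dilated 1-D convolutions; this reading, the PyTorch implementation, and the layer sizes are assumptions (indeed, the ambiguity of “TCN” is the basis of the § 112(b) rejection above).

```python
# Minimal sketch of a possible "TCN autoencoder" (assumed reading: a
# temporal convolutional network of dilated 1-D convolutions; causal
# trimming and the training loop are omitted for brevity).
import torch
import torch.nn as nn

class TCNAutoencoder(nn.Module):
    def __init__(self, n_endpoints=2, hidden=16):
        super().__init__()
        # Encoder: stacked dilated convolutions over the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_endpoints, hidden, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=4, dilation=4),
            nn.ReLU(),
        )
        # Decoder: map the hidden representation back to the inputs.
        self.decoder = nn.Conv1d(hidden, n_endpoints, kernel_size=1)

    def forward(self, x):                     # x: (batch, endpoints, time)
        return self.decoder(self.encoder(x))

model = TCNAutoencoder()
x = torch.randn(1, 2, 64)                     # latency of endpoints 1 and 2
recon = model(x)
error = (x[:, 0] - recon[:, 0]).abs().mean()  # error on the first endpoint
```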
Claim 4 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the rejection under 35 U.S.C. § 101 is overcome, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest the claimed limitation wherein the anomaly score is generated according to an interquartile range and a 75th percentile of the reconstruction error distribution data.
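For illustration only, one plausible reading of such a score is a Tukey-style normalization of the new reconstruction error by the 75th percentile and interquartile range of the error distribution; the formula below is an assumption, not necessarily the applicant’s exact computation.

```python
# One plausible reading of claim 4's score (assumption: a Tukey-style
# normalization using the 75th percentile and the interquartile range).
import numpy as np

def anomaly_score(error_dist, new_error):
    q25, q75 = np.percentile(error_dist, [25, 75])
    iqr = q75 - q25                      # interquartile range
    return (new_error - q75) / iqr       # distance above the 75th percentile

errors = np.random.default_rng(4).exponential(0.5, 1000)
print(anomaly_score(errors, new_error=3.0))
```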
Claim 7 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if the rejections under 35 U.S.C. §§ 101 and 112(b) are overcome, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest the claimed limitations of: obtaining latency data of a plurality of endpoints; generating reconstruction error distribution data of the latency data of each of the plurality of endpoints according to the latency data of the plurality of endpoints by a TCN autoencoder model; obtaining new latency data of the plurality of endpoints; generating a reconstruction error of new latency data of each of the plurality of endpoints according to the new latency data of the plurality of endpoints; generating an anomaly score for each of the plurality of endpoints according to a dispersion characteristic of the corresponding reconstruction error distribution data and the corresponding reconstruction error; generating correlation data for each of the plurality of endpoints according to new latency data of the corresponding endpoint and a system parameter; filtering the plurality of endpoints according to the anomaly score for each of the plurality of endpoints to create a first group of endpoints; and filtering the first group of endpoints according to the correlation data for each of the first group of endpoints to create a second group of endpoints.
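For illustration only, the two-stage filtering recited in claim 7 may be sketched as follows; the thresholds, the direction of the correlation filter (retaining weakly correlated endpoints), and all helper names are assumptions for explanatory purposes.

```python
# Illustrative sketch of the two-stage filtering recited in claim 7
# (all thresholds and helper names are assumptions for illustration).
import numpy as np

def filter_endpoints(anomaly_scores, correlations,
                     score_threshold=1.5, corr_threshold=0.5):
    """Return endpoint indices surviving each filtering stage."""
    # First group: endpoints whose anomaly score exceeds the threshold.
    first_group = [i for i, s in enumerate(anomaly_scores)
                   if s > score_threshold]
    # Second group: members of the first group whose latency is only
    # weakly correlated with the system parameter (assumed to indicate
    # true anomalies rather than load-driven latency).
    second_group = [i for i in first_group
                    if abs(correlations[i]) < corr_threshold]
    return first_group, second_group

scores = np.array([0.2, 2.1, 3.0, 1.8])
corrs = np.array([0.1, 0.9, 0.2, 0.4])
print(filter_endpoints(scores, corrs))   # ([1, 2, 3], [2, 3])
```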
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG T VY whose telephone number is (571)272-1954. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached at (571)272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG T VY/Primary Examiner, Art Unit 2163 January 31, 2026