DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-2, 5-9, 12-15, and 18-20 are currently pending in this application.
Claims 1, 8, 12-14, and 18-20 were amended in the response filed on 02/09/2026.
Claims 3-4, 10-11, and 16-17 were canceled in the response filed on 02/09/2026.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-9, 12-15, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shachar et al. (Pre-Grant Publication No. US 2024/0380766 A1), hereinafter Shachar, in view of Vuda et al. (Pre-Grant Publication No. US 2024/0291718 A1), hereinafter Vuda, further in view of Muddu et al. (Patent No. US 9,516,053 B1), hereinafter Muddu, and further in view of Li et al. (Pre-Grant Publication No. US 2021/0241099 A1), hereinafter Li.
2. With respect to claims 14, 1, and 8, Shachar taught a computing system comprising: a processor (0015); a network module coupled to the processor to enable communication over a network (0050, where the network module is given); a computer-readable storage device coupled to the processor (0015); an event sequence conversion module coupled to the network module (0059 & 0061; see also the conditioned data generation of 0087-0093); an event sequence generation engine coupled to the network module (0003, where the module is given such that the system is able to detect the event sequences); and program instructions stored on the computer-readable storage device for execution by the processor via a memory, wherein the execution of the program instructions by the processor configures the computing system to perform an anomaly detection method (0003) comprising: applying, via the event sequence conversion module, historical log data to generate a first plurality of structured event sequences labeled as training data (0003, where the sequence of activities is the topology); building, via the event sequence generation engine, a machine learning model using the training data, wherein the event sequence generation engine calculates a probability threshold for each of the first plurality of structured event sequences using the machine learning model (0003, where the likelihood is the probability); applying, via the event sequence conversion module, log data to generate a second plurality of structured event sequences (0093, where the ML model is updated based on the continuing second sequence data of 0005-0006); running, via the event sequence generation engine, the second plurality of structured event sequences through the machine learning model (0003, where the process is repeated for the second time-interval); and calculating, by the event sequence generation engine, a probability for each of the second plurality of structured event sequences using the machine learning model (0003, where the process is repeated for the second time-interval).
However, Shachar did not explicitly state applying a first system topology to the runtime/historical log data; wherein the historical log data is associated with a microservices topology, the first system topology is applied based on the microservices topology of the historical log data, and the applying of the first system topology maps events in the historical log data to the first system topology. On the other hand, Vuda did teach the use of a graphical user interface coupled to the processor (figure 4) and the application of a system topology to runtime/historical log data (0100); the historical log data is associated with a microservices topology, the first system topology is applied based on the microservices topology of the historical log data, and the applying of the first system topology maps events in the historical log data to the first system topology (0102, where the plurality of historical subsets teaches the first and second topology, and where the microservices can be seen in 0061 & 0105). Both of the systems of Shachar and Vuda are directed towards training machine learning models and, therefore, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Shachar to analyze log data mapped to a system topology and to present said data via a graphical user interface, as taught by Vuda, in order to provide a more accurate machine learning model.
However, Shachar did not explicitly state determining, by a discriminator, whether the probability for each of the second plurality of structured event sequences is lower than the probability threshold of classified event sequences of the first plurality of structured event sequences; and identifying, by the event sequence generation engine, the probability for each of the second plurality of structured event sequences as an anomaly based on the determination that the probability for each of the second plurality of structured event sequences is lower than the probability threshold. On the other hand, Muddu did teach determining, by a discriminator, whether the probability for each of the second plurality of structured event sequences is lower than the probability threshold of classified event sequences of the first plurality of structured event sequences; and identifying, by the event sequence generation engine, the probability for each of the second plurality of structured event sequences as an anomaly based on the determination that the probability for each of the second plurality of structured event sequences is lower than the probability threshold (87:56 to 88:12). Both of the systems of Shachar and Muddu are directed towards training machine learning models and, therefore, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Shachar to analyze structured event sequences as part of the anomaly detection algorithm, as taught by Muddu, in order to provide a more accurate machine learning model.
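For technical context only (not drawn from any cited reference), the determination recited above — a runtime sequence is identified as an anomaly when its probability under the trained model falls below the threshold derived from the first plurality of sequences — can be sketched as follows. The mean-minus-k-sigma threshold in log-space is merely one plausible choice, since the claim only requires that a threshold be calculated:

```python
import math
import statistics

def learn_threshold(training_probabilities, k=3.0):
    """Derive a probability threshold from the training sequences' scores:
    mean minus k standard deviations, computed in log-space (one
    illustrative choice among many)."""
    logs = [math.log(p) for p in training_probabilities]
    return math.exp(statistics.mean(logs) - k * statistics.pstdev(logs))

def detect_anomalies(runtime_scores, threshold):
    """The recited determination: flag each second-plurality sequence whose
    model probability is lower than the learned threshold."""
    return [seq for seq, p in runtime_scores.items() if p < threshold]
```

In this sketch `runtime_scores` maps a sequence identifier to its probability under the trained model; any sequence scoring far below the training distribution is flagged.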
However, Shachar did not explicitly state that the system was using a SeqGAN model. Shachar also did not explicitly state that the machine learning model was an adversarial reinforcement learning model using the training data, wherein the adversarial reinforcement learning model comprises a sequence generative adversarial network architecture, and that the event sequence generation engine calculates a probability threshold for each structured event sequence. On the other hand, Li did teach that the system was using a SeqGAN model (0032). Li also taught that the machine learning model was an adversarial reinforcement learning model using the training data, wherein the adversarial reinforcement learning model comprises a sequence generative adversarial network architecture, and that the event sequence generation engine calculates a probability threshold for each structured event sequence (0004, where the sequence probabilities can be seen in 0036). Both Shachar and Li are directed towards generative adversarial networks and, therefore, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to modify the teachings of Shachar to utilize SeqGAN, as taught by Li, in order to apply ML techniques that were contemporary to the time of the invention.
3. As for claims 2, 9, and 15, they are rejected on the same basis as claims 1, 8, and 14 (respectively). In addition, Shachar taught wherein the historical log data and the runtime log data are extracted from a distributed system (0044, where the log data being in real time can be seen in 0077, and where the distributed network is taught by the cloud of 0039).
4. As for claims 5, 12, and 18, they are rejected on the same basis as claims 1, 8, and 14 (respectively). In addition, Shachar taught processing, by a preprocessing module, the historical log data and the runtime log data, wherein each log entry of the historical log data and the runtime log data is processed as a log template (0059, where the template is given in order to be able to pre-process the data accurately).
5. As for claims 6, 13, and 19, they are rejected on the same basis as claims 1, 8, and 14 (respectively). In addition, Vuda taught providing, via a graphical user interface, the second plurality of structured event sequences to SMEs, wherein the SMEs perform at least one of: reviewing or amending the second plurality of structured event sequences (figure 4, where this, at least, teaches the reviewing limitation).
6. As for claims 7 and 20, they are rejected on the same basis as claims 1 and 14 (respectively). In addition, Shachar taught wherein the probability threshold is formed using an event sequence reward comprising a number of correctly ordered generated event sequences based on a system topology (0003, where this is given for the sequence of events likelihood determinations. Accordingly, the topology can be seen in Vuda: 0102).
Response to Arguments
Applicant’s arguments with respect to the claim(s) have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSEPH L GREENE whose telephone number is (571)270-3730. The examiner can normally be reached Monday - Thursday, 10:00am - 4:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Nicholas R. Taylor, can be reached at (571) 272-3889. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSEPH L GREENE/Primary Examiner, Art Unit 2443