Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This action is in response to the claim listing filed on 03/28/2024.
Claims 1-20 are pending.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
As per claims 1-9: Claim 1 recites the limitation “a program” in line 1 and again recites “a program” in lines 2-3 of the claim; it thus recites two different computer programs. The claim then recites “the program” in line 5 and in various lines after line 3. There is insufficient antecedent basis for the limitation “the program” in the claim, and the claim is therefore rejected under 35 U.S.C. 112(b).
Amendment of the claim is required to provide sufficient antecedent basis for these limitations. For purposes of expediting examination, “a program” in lines 2-3 is interpreted as “the program”.
Claims 2-9 depend from claim 1 and thus inherit the limitations addressed above; accordingly, they are rejected under 35 U.S.C. 112(b) for the same reasons.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 5-6, 10-11, 14-17, 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lo et al., “Classification of Software Behaviors for Failure Detection: A Discriminative Pattern Mining Approach”, 2009, ACM, pages 1-10.
As per Claim 1: Lo discloses,
1. A method of detecting differences in behavior for a program (p. 1, right column, last text portion: “Can data mining help? In this paper, as a new step to address the reliability of software systems, we present a novel classification approach to predict software behaviors. Based on historical data of software and known failures, we construct a classifier to generalize the failures and to further detect other unknown failures.”), comprising:
obtaining a first prediction model that predicts behavior of a first implementation of a program;
(See p. 2, left column, in the paragraph beginning “Our proposal...”: “The trained classifier can be updated several times during the life cycle of a software system with new known good and bad behaviors.”; and right column, first full paragraph: “To validate the utility of our proposed classifier, we performed controlled experiments and case studies on synthetic and real datasets. For comparison, we implemented a base-line classification model based on single events as features.” Thus, Lo builds a software behavior prediction model used as a baseline over the software life cycle.
See sec. 5.1, which denotes the baseline method as Evt (“We denote the baseline method as Evt”) and the pattern-based method as Pat (“We denote ... our iterative pattern-based method as Pat in the following tables.”). Thus, the baseline Evt reads on the ‘first prediction model’ (and Pat on the second).)
obtaining an event set representing a partially or totally ordered set of events realized from executing the program;
(See above: “we implemented a base-line classification model based on single events as features”. See sec. 2.1, p. 3, left column: “We denote a trace or sequence S as (e1, e2, ..., eend) where each ei is an event from an event set I. The set of input traces or sequence database under consideration is denoted by TDB.” Note: TDB stands for the trace database.)
determining, based on observations of a second implementation of the program, a second prediction model of a behavior of the second implementation of the program;
(See sec. 5.2, p. 6, right column: “CVS Application. The first program we analyze is a Concurrent Versions System (CVS) application built on top of FTP library of Jakarta Commons Net [26]”.
And see Tables 1-2, where the Dataset column lists the four datasets X11, CVS Omission, CVS Ordering, and CVS Mix for the CVS Application. Thus, the model built on the CVS application/protocol with Pat reads on the “second prediction model”.)
generating, by the first prediction model, a first prediction based on the event set (Referring to the baseline Evt; see sec. 2.1, the trace or sequence S as the input TDB, and Algorithm 1, right column, “Inputs: TDB: Trace database”);
generating, by the second prediction model, a second prediction based on the event set (Referring to the CVS application/protocol, Pat, and Iterative Patterns (sec. 2.2, p. 3), with Pat as an input to Algorithm 1, “Inputs: Pat: Pattern so far”);
comparing the first prediction with the second prediction to generate a first comparison result; (See Tables 1 and 2, presenting the synthetic and trace datasets with Accuracy and AUC for both Evt and Pat. See also p. 2, right column, second full paragraph: “To validate the utility of our proposed classifier, we performed controlled experiments and case studies on synthetic and real datasets. For comparison, we implemented a baseline classification model based on single events as features.”: comparison against the baseline classification model reads on comparing the ‘first prediction’; and “Experiments demonstrate the effectiveness of our proposed discriminative iterative pattern-based classification for software failure detection.”: reads on comparison with the iterative pattern-based classification, the ‘second prediction’.)
and
signaling an unexpected behavior of the second implementation of the program when a value of the first comparison result is within a predefined behavior range that has been identified as corresponding to unexpected behavior.
(See p. 2, left column, in the paragraph beginning “Our proposal...”: “The trained classifier can be updated several times during the life cycle of a software system with new known good and bad behaviors. It can be treated as a black box to predict and give warning signals in case the program is behaving in an undesirable way.” And see Algorithm 1, with its ‘Minimum support threshold’.)
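For illustration only, the claimed steps of claim 1 (two prediction models, predictions over one event set, a comparison result, and a signal when the result falls in an "unexpected" range) can be sketched in Python. All function names, event names, and the threshold below are hypothetical illustrations, not taken from Lo or from the claims; the frequency model is a simplified stand-in for a trained classifier.

```python
from collections import Counter

def train_model(traces):
    """Build a per-event frequency model from a list of event traces (illustrative)."""
    counts = Counter(e for trace in traces for e in trace)
    total = sum(counts.values())
    return {e: c / total for e, c in counts.items()}

def predict(model, event_set):
    """Score an event set as the mean modeled likelihood of its events."""
    return sum(model.get(e, 0.0) for e in event_set) / len(event_set)

def detect_difference(model_v1, model_v2, event_set, threshold=0.2):
    """Compare the two predictions; signal unexpected behavior when they diverge."""
    p1 = predict(model_v1, event_set)      # first prediction
    p2 = predict(model_v2, event_set)      # second prediction
    comparison = abs(p1 - p2)              # the "first comparison result"
    return comparison > threshold          # within the "unexpected" range

# Hypothetical traces of a first and second implementation of a program.
m1 = train_model([["login", "get", "logout"], ["login", "put", "logout"]])
m2 = train_model([["login", "get", "get", "disconnect"]])
print(detect_difference(m1, m2, ["login", "logout"]))
```

The threshold here plays the role of the claimed "predefined behavior range"; a deployed system would calibrate it from known good and bad behaviors, as Lo describes for updating the classifier.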
As per Claim 2: Lo discloses,
2. The method of claim 1, further comprising:
generating, by the first prediction model, a first prediction of a subsequent event that follows the event set (See p. 3, sec. 2.1, the sequence S and the set of input traces denoted TDB; and p. 6, left column, sec. 5.1: “The baseline method simply measures the frequency distribution of single events in a sequence,”); and
generating, by the second prediction model, a second prediction of the subsequent event that follows the event set (See p. 2, left column, in the paragraph beginning “Software behavior...”: “On a related front, iterative pattern mining, based on the semantics of several software modeling languages, has been proposed to extract frequent repetitive series of events from program execution traces as candidate software specifications [21].”).
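For illustration only, the "subsequent event" prediction recited in claim 2 can be sketched with a simple bigram counter, a simplified stand-in for Lo's iterative pattern miner. All names and traces below are hypothetical.

```python
from collections import Counter, defaultdict

def train_next_event(traces):
    """Count, for each event, which events follow it in the traces (illustrative)."""
    successors = defaultdict(Counter)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            successors[a][b] += 1
    return successors

def predict_next(model, event_set):
    """Predict the most likely event to follow the last event in the set."""
    last = event_set[-1]
    if not model[last]:
        return None  # no observed successor
    return model[last].most_common(1)[0][0]

# Hypothetical traces from executing the program.
model = train_next_event([["login", "get", "logout"], ["login", "get", "get"]])
print(predict_next(model, ["login"]))  # most frequent observed successor of "login"
```

A per-implementation model of this kind yields the claimed first and second predictions of the subsequent event, which can then be compared as in claim 1.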
As per Claim 5: Lo discloses,
5. The method of claim 1, wherein:
the first implementation of the program is a current version of the program;
(In Lo, the classifier operates over the software life cycle and is thus associated with updating/upgrading/versioning. For this limitation, as given above, the TDB/baseline/Evt corresponds to the current version. It should be noted that software set as the baseline is treated as the standard, and the standard is always considered the current version.)
the second implementation of the program is an upgrade version of the program;
(Referring to the CVS Application / iterative pattern-based Pat. Also see p. 2, left column, in the paragraph beginning “Our proposed classification...”: “The classifier can be used to classify unknown program behaviors corresponding to the current or a future version of a software system as similar errors might potentially be made.” Unknown program behaviors of a future version read on the “upgrade version”.)
the program is executed in a cloud computing environment (See p. 6, sec. 5.2, right column, the CVS Application with an FTP server; or p. 7, sec. 5.3, Siemens & MySQL); and
testing the upgrade version before committing the upgrade version to the cloud computing environment (See sec. 5.3: “we analyze different programs from Siemens Test Suite”) includes the comparing of the first prediction with the second prediction to generate the first comparison result and signaling the unexpected behavior of the second implementation of the program when the value of the first comparison result is within the predefined behavior range (See Tables 1 and 2).
As per Claim 6: Lo discloses,
6. The method of claim 1, wherein determining the second prediction model based on observations of a second implementation of the program comprises:
generating records of program traces representing respective sequences of events resulting from executing the second implementation of the program;
(See p. 6, sec. 5, Figure 1, the CVS Protocol with Error. E.g., the box presenting the events X, G, O, Y, Y is designated as a software trace.)
mapping the records of program traces to canonical trace events to generate canonicalized records of trace events; and
(See Figures 1, 2, and 3, where the trace information is mapped into events. It should be noted that events such as login, logout, and disconnect are canonical events.)
training the second prediction model using training data comprising the canonicalized records to predict likelihoods of subsequent events in event sets of the second implementation of the program (Figure 1, with dashed lines) or to predict which subsequent events are unsurprising for respective event sets of the second implementation.
(See Tables 1 and 2 and their datasets: in machine learning, a dataset is used to train a model, and the X11, CVS Omission, CVS Ordering, and CVS Mix datasets are of the CVS Application, the “second prediction model”.)
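For illustration only, the canonicalization step recited in claim 6 (mapping records of program traces onto canonical trace events before training) can be sketched as a lookup table. The mapping entries and raw record strings below are assumptions for illustration, not taken from Lo or the claims.

```python
# Hypothetical mapping from raw trace records to canonical trace events.
CANONICAL = {
    "USER anonymous": "login",
    "PASS ****": "login",
    "QUIT": "logout",
    "close()": "disconnect",
}

def canonicalize(records):
    """Map raw program-trace records onto canonical trace events."""
    return [CANONICAL.get(r, "other") for r in records]

raw_trace = ["USER anonymous", "RETR file.txt", "QUIT"]
print(canonicalize(raw_trace))
```

The resulting canonicalized records would then form the training data for the second prediction model, as in the training step mapped to Tables 1 and 2 above.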
As per claims 10-11 and 14-15: The claims are directed to an apparatus whose recitations correspond to the limitations in the method of claims 1-2 and 5-6. The claims are rejected under the same rationales addressed in claims 1-2 and 5-6.
As per claims 16-17 and 20: The claims are directed to a non-transitory computer-readable medium whose recitations correspond to the limitations in the method of claims 1-2 and 6. The claims are rejected under the same rationales addressed in claims 1-2 and 6.
Allowable Subject Matter
Claims 3-4 and 7-9 are rejected under 35 U.S.C. 112(b), but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, provided the 35 U.S.C. 112(b) rejection is resolved.
Claims 12-13, and 18-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ted T Vo whose telephone number is (571)272-3706. The examiner can normally be reached 8am-4:30pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Y Mui can be reached at (571) 272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TTV
March 4, 2026
/Ted T. Vo/
Primary Examiner, Art Unit 2191