DETAILED ACTION
This action is responsive to the application filed on April 17, 2024, which claims priority from provisional application 63/507,652 filed June 12, 2023.
Claims 1-20 are pending and are presented for examination.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Examiner Notes
Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Drawings
Figure 1 should be designated by a legend such as --Prior Art-- because only that which is old is illustrated. See MPEP § 608.02(g). Corrected drawings in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. The replacement sheet(s) should be labeled “Replacement Sheet” in the page header (as per 37 CFR 1.84(c)) so as not to obstruct any portion of the drawing figures. If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Claim Objections
Claims 1-8 and 18-20 are objected to because of the following informalities: Claim 1 recites “A method for accelerating a software development process using Generative Artificial Intelligence (AI) and Behavior Driven Development (BDD), the method comprising:” in lines 1-2.
Claim 1 recites “generating data sets for the test scenarios using Generative AI based on the natural language description of expected behaviors;” in lines 9-10.
Claim 18 recites “generating data sets for the test scenarios using Generative AI based on the natural language description of expected behaviors;” in lines 9-10. Appropriate correction is required. Please amend the claim language as suggested in bold. Dependent claims 2-8 and 19-20 do not overcome the deficiency of the base claim and, therefore, are objected to for the same reasons as the base claim.
Specification
The disclosure is objected to because of the following informalities:
The term “behaviour” appears in various places in the specification. Please replace it with “behavior”.
The term “may be” appears in various places in the specification, making the description unclear and/or indefinite. Please remove that term.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a converter module configured to convert the natural language description into test scenarios using a Domain Specific Language (DSL);”, “a Generative AI module configured to generate test code based on the test scenarios and generate data sets based on a description of expected behavior;”; “an implementation code generation module configured to generate implementation source code based on the test scenarios;” and “a verification module for testing the generated implementation source code against the test scenarios.” in claim 9.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
35 USC § 101
As set forth above, claim 9 has been interpreted as invoking 35 U.S.C. § 112(f). It should be noted that, if the claims were not interpreted as invoking 35 U.S.C. § 112(f), they would be rejected under 35 U.S.C. 101 because the disclosed “a converter module”, “a Generative AI module”, “an implementation code generation module”, and “a verification module” may be fairly interpreted as software. See MPEP § 2106.01.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 4-5 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 4 recites “wherein the natural language description of expected behaviors description is based on product information.” It is unclear what kind of product information the claim language refers to, since the specification does not provide further detail about this limitation. Clarification is required.
Claim 5 recites “wherein the natural language description of expected behaviors description is based on customer information.” It is unclear what kind of customer information the claim language refers to, since the specification does not provide further detail about this limitation. Clarification is required.
Claim limitations “a converter module configured to convert the natural language description into test scenarios using a Domain Specific Language (DSL);”, “a Generative AI module configured to generate test code based on the test scenarios and generate data sets based on a description of expected behavior;”, “an implementation code generation module configured to generate implementation source code based on the test scenarios;” and “a verification module for testing the generated implementation source code against the test scenarios.” in claim 9 invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 13-17 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 13 recites “further comprising a feedback module for providing feedback to the Generative AI module based on results output by the verification module to improve quality and accuracy of the generated implementation source code and the data sets.” The specification does not mention any details about a feedback module and its functionality. For examination purposes, the claim will be interpreted as using a tool to evaluate a result.
Claim 14 recites “further comprising a review module for review and iteration with the Generative AI for the generated implementation source code and the data sets.” The specification does not mention any details about a review module and its functionality. For examination purposes, the claim will be interpreted as using a tool to evaluate, interact with, and generate code.
Claim 15 recites “further comprising a repeat module for repeating review, iteration, and verification until the generated implementation source code passes all tests according to a predetermined metric.” The specification does not mention any details about a repeat module and its functionality. For examination purposes, the claim will be interpreted as generating scores and metrics during an evaluation iteration.
Claim 16 recites “further comprising a code inspection module for determining a need for additional inspection based on an output of the verification module.” The specification does not mention any details about a code inspection module and its functionality. For examination purposes, the claim will be interpreted as a mere evaluation using different tools.
Claim 17 recites “further comprising a completion module for indicating a completion when the generated implementation source code requires zero or nearly zero human inspection and passes all tests according to a predetermined metric.” The specification does not mention any details about a completion module and its functionality. For examination purposes, the claim will be interpreted as a mere interaction with machine learning/artificial intelligence technology.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 18-20 are rejected under 35 USC 101 because the claimed invention is directed to non-statutory subject matter.
Referring to claim 18, the broadest reasonable interpretation of a claim drawn to a computer-readable storage medium (also called a machine-readable medium, among other variations) typically covers both non-transitory tangible media and transitory propagating signals. The specification does not limit such medium to exclude signals; the claim therefore covers a transitory propagating signal per se (non-statutory subject matter). See MPEP 2111.01. (When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 USC 101 as covering non-statutory subject matter. See In re Nuijten, 500 F.3d 1346, 1356-57 (Fed. Cir. 2007) (transitory embodiments are not directed to statutory subject matter) and Interim Examination Instructions for Evaluating Subject Matter Eligibility Under 35 USC 101, Aug. 24, 2009, p. 2; OG Notice, Jan. 28, 2010.) It is respectfully suggested that the term be amended to preclude an interpretation covering a “propagating signal per se”, such as “A non-transitory computer-readable storage medium…”. Therefore, the claims are non-statutory. Per claims 19-20, these claims do not cure the deficiency in claim 18 and are rejected based on their dependency on claim 18.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Leon Chemnitz et al. (“Towards Code Generation from BDD Test Case Specifications: A Vision”; hereinafter Chemnitz).
With respect to claim 1, Chemnitz teaches a method for accelerating a software development process using Generative Artificial Intelligence (AI) and Behavior Driven Development (BDD), the method comprising:
receiving a natural language description of expected behaviors of machine-readable code functionality (see figure 1, natural language machine-readable code);
converting the natural language description into test scenarios using a Domain Specific Language (DSL) (see page 2, left-hand column, 1st paragraph: “For the test cases to be interpretable by these domain experts and stakeholders, the test case specification is usually realized with a domain-specific language (DSL) that integrates natural language and is executed by a specialized tool or framework (e.g. cucumber [15]). An example of a BDD test case for the cucumber framework can be found in Figure 1.”);
submitting the test scenarios to a Generative AI to generate test code for executing the test scenarios (see abstract: “We propose to do this using behavior-driven development test specifications as input to a transformer-based machine learning model;”. See page 1, right-hand column, 2nd-3rd paragraphs: “Initially developed for the processing of NL, recent progress in Artificial Intelligence (AI) and Machine Learning (ML) enabled the application of novel techniques to coding assistance [3]–[6]. The transformer model has marked a milestone in natural language processing (NLP) [10] and is the technique we will be focusing on here but the ideas we present are not confined to any particular ML model. In this work, we propose an approach that further leverages the NLP capabilities of transformers for code generation by using BDD test specifications as input for the generation task. We aim to provide developers with a tool to optimize their efficiency while also enforcing established software engineering practices i.e. automated software testing.”. See page 2, right-hand column, 4th paragraph: “Enforced BDD: As Angular is very opinionated, it also imposes conventions around the testing methodology on users. When creating a new Angular component using the Angular CLI (which is generally the preferred way of doing so), a test file for the new component is automatically generated. All Angular tests are based on Jasmine [22] which is a BDD framework for JavaScript and TypeScript. Because of this, we believe it is reasonable to assume that there exist many valuable samples of BDD specification and component code pairs in publicly available open-source projects that can be used for the training of our model. Also, because of this convention, we believe that the adoption of our proposed system should require minimal effort for a developer that is already accustomed to the Angular ecosystem.”);
generating data sets for the test scenarios using Generative AI based on the natural language description of expected behaviors (see page 4, left-hand column, 4th paragraph: “because we will use test case specifications as the input to our generation task, we have a high amount of information regarding the desired output. We can leverage this and have our model generate code samples which we then execute the tests on in a sandboxed environment. Generated samples that do not meet all assertion criteria of our tests will be discarded and only the ones that do are presented to the user. Since the test implementations of our input data can be used in this way and jasmine tests are comprised of strings containing NL and the test code as Figure 2 shows, we propose to explore the idea of only using the NL part of the test specifications as the input to our model and use the test implementation for the selection of generated results.”);
incorporating the generated data sets into descriptions of the test scenarios (see section III, “Method,” on pages 3-4); and
generating implementation source code based on the test scenarios and verifying the generated implementation source code (see section III, “Method,” on pages 3-4, C. Post-Processing and D. Evaluation).
With respect to claim 2, Chemnitz teaches wherein the Generative AI is trained to generate the test code and the data sets based on the expected behaviors described in the natural language description (see page 3, left-hand column, “A. Data Acquisition and Pre-Processing”: “As stated at the beginning of this section, we need data for the training, testing, and validation of our model and we plan to extract this data from open-source projects on GitHub.”. See page 4, left-hand column, “B. Model”: “At the core of our proposed approach is a generative ML model and because of the recent ubiquity and performance of transformer-based models in generation and sequence-to-sequence transformation tasks”).
With respect to claim 3, Chemnitz teaches wherein the Generative AI utilizes machine learning techniques to improve quality and accuracy of the generated implementation source code and the data sets over time (see page 2, left-hand column: “In addition to the reduction of effort required in the development of applications, we believe that our approach incentivizes the implementation of meaningful test cases and therefore the adoption of QA practices which are generally regarded as integral to the development of high-quality software.”. See page 3, right-hand column: “To ensure that the data is of high quality, we propose to filter the data further and remove samples from the data set that do not meet certain criteria. Possible exclusion criteria could be the number of test cases per test file (e.g. minimum of 3 test cases) or the amount of forks of the associated GitHub project (e.g. minimum of 5 forks). We believe these filters to be promising, since angular auto generates one test case per Component and we are of the opinion that more are needed for the Component to be considered high quality. Also we assume that GitHub users are more likely to interact with a project if they think it contains high quality code.”. Furthermore, see pages 4-5, “D. Evaluation” and “IV. Discussion on the Impact of Our Vision”).
With respect to claim 4, Chemnitz teaches wherein the natural language description of expected behaviors description is based on product information (see figure 1, product information, e.g., password data; Examiner notes: the claim is broad, and product information could be anything).
With respect to claim 5, Chemnitz teaches wherein the natural language description of expected behaviors description is based on customer information (see figure 1, customer information, e.g., sign-in information; Examiner notes: the claim is broad, and customer information could be anything).
With respect to claim 6, Chemnitz teaches further comprising reviewing the generated implementation source code to ensure requirement compliance (see page 4, right-hand column, “D. Evaluation”: “To find a satisfactory answer to RQ1 we propose a search over the pre-processing and filter parameters (Section III-A) training multiple models and evaluating code quality in one of the ways mentioned above. To find an answer to RQ2 we suggest defining a threshold score that indicates whether a piece of code can be considered high quality when measured against a sample from the evaluation set. Whether the proposed system can reliably generate such code answers the research question. Answering RQ3 is straightforward since test coverage analysis is part of the angular tooling. Such analysis could be run after the generation and the results aggregated to calculate min, max, and average branch and instruction coverage on test data.”).
With respect to claim 7, Chemnitz teaches wherein the implementation source code is generated without human intervention (see page 2, left-hand column, 1st paragraph: “We propose a method to generate application code from test case specifications to further reduce the developer time needed in application development. Specifically, we aim to extract the relevant information needed for the generation of code from the specification of test cases adhering to behavior-driven development (BDD) standards.”).
With respect to claim 8, Chemnitz teaches wherein verifying the generated implementation source code comprises utilizing one or more tests to determine an accuracy of (see pages 3-4, “III. Method,” which discloses the use of training, testing, and validation for the model).
With respect to claim 9, the claim is directed to a system that corresponds to the method recited in claim 1 (see the rejection of claim 1 above; Chemnitz also teaches such a system in figure 2 using an interface, i.e., a system).
With respect to claim 10, Chemnitz teaches wherein the Generative AI module comprises a machine learning model trained on a dataset of natural language descriptions and corresponding test code and the data sets (see pages 3-4, “III. Method,” which discloses the use of training, testing, and validation for the model, e.g., an ML model).
With respect to claim 11, Chemnitz teaches wherein the implementation code generation module utilizes the test scenarios as functional specifications to generate the implementation source code (see abstract: “We propose to do this using behavior-driven development test specifications as input to a transformer-based machine learning model… Our approach aims to drastically reduce the development time needed for web applications while potentially increasing software quality and introducing new research ideas toward automatic code generation.”).
With respect to claim 12, Chemnitz teaches wherein the verification module automates the testing of the generated implementation source code against the test scenarios (see abstract: “Our approach aims to drastically reduce the development time needed for web applications while potentially increasing software quality and introducing new research ideas toward automatic code generation.”. See page 1, right-hand column, 3rd paragraph: “In this work, we propose an approach that further leverages the NLP capabilities of transformers for code generation by using BDD test specifications as input for the generation task. We aim to provide developers with a tool to optimize their efficiency while also enforcing established software engineering practices i.e. automated software testing”. Furthermore, see “Conclusion” on page 5: “The field of ML-based code generation is just emerging and to our knowledge, there does not exist any research that tries to achieve what we propose to do. We describe a possible method for generating Angular frontend components based on BDD test specifications by utilizing ML techniques. Our approach aims to reduce the development time needed for web applications and introduce new research ideas toward automatic code generation.”).
With respect to claim 13, Chemnitz teaches further comprising a feedback module for providing feedback to the Generative AI module based on results output by the verification module to improve quality and accuracy of the generated implementation source code and the data sets (see page 4, “D. Evaluation”, using metrics and generative models for evaluation).
With respect to claim 14, Chemnitz teaches further comprising a review module for review and iteration with the Generative AI for the generated implementation source code and the data sets (see page 4, “D. Evaluation”, using metrics and generative models for evaluation).
With respect to claim 15, Chemnitz teaches further comprising a repeat module for repeating review, iteration, and verification until the generated implementation source code passes all tests according to a predetermined metric (see page 4, “D. Evaluation”, using metrics and generative models for evaluation, e.g., computing a metric and an evaluation score).
With respect to claim 16, Chemnitz teaches further comprising a code inspection module for determining a need for additional inspection based on an output of the verification module (see page 4, “D. Evaluation”, evaluating code quality using different tools).
With respect to claim 17, Chemnitz teaches further comprising a completion module for indicating a completion when the generated implementation source code requires zero or nearly zero human inspection and passes all tests according to a predetermined metric (see abstract: “We propose to do this using behavior-driven development test specifications as input to a transformer-based machine learning model”).
With respect to claims 18-20, the claims are directed to a medium that corresponds to the method recited in claims 1 and 7-8, respectively (see the rejection of claims 1 and 7-8 above; Chemnitz also teaches such a medium in figure 2 using an interface; e.g., a system must have memory).
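For the convenience of the applicant, the pipeline recited in claim 1 (natural-language behavior, DSL test scenario, generated test, verification of generated implementation code) may be illustrated by the following simplified Python sketch. The sketch is hypothetical: the Gherkin-style scenario, all function names, and the stubbed "generation" steps are illustrative only and do not appear in Chemnitz or in the claims; no Generative AI is actually invoked.

```python
# Hypothetical, simplified sketch of the claimed flow. The "Generative AI"
# steps are stubbed so that only the data flow between the claimed steps
# (conversion, test generation, implementation, verification) is shown.

GHERKIN_SCENARIO = """\
Scenario: Successful sign-in
  Given a registered user with a valid password
  When the user submits the sign-in form
  Then the user is redirected to the dashboard
"""

def convert_to_scenario(dsl_text: str) -> list[str]:
    """Extract the Given/When/Then steps from a Gherkin-style DSL scenario."""
    steps = []
    for line in dsl_text.splitlines():
        line = line.strip()
        if line.split(" ", 1)[0] in {"Given", "When", "Then"}:
            steps.append(line)
    return steps

def generate_test(steps: list[str]):
    """Stub for 'submitting the test scenarios to a Generative AI':
    returns a callable that checks an implementation against the scenario.
    (The stub ignores the step text and hard-codes the expected outcome.)"""
    def test(implementation) -> bool:
        return implementation("valid password") == "dashboard"
    return test

def generated_implementation(password: str) -> str:
    """Stub for the generated implementation source code."""
    return "dashboard" if password == "valid password" else "error"

steps = convert_to_scenario(GHERKIN_SCENARIO)
verified = generate_test(steps)(generated_implementation)
print(len(steps), verified)  # prints "3 True"
```

In this sketch the verification step corresponds to Chemnitz's proposal to execute the tests on generated code samples in a sandboxed environment and discard samples that fail.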
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Sriramdas et al. (US Pub. No. 2024/0362153) discloses techniques for test automation portals for behavior-driven development. Some embodiments are particularly directed to a user interface portal and supporting systems that allow users of various capabilities to view available Behavior-Driven Development (BDD) tests, select one or more BDD tests, execute/schedule automated BDD tests, view execution results, and view execution history from one central location without the need for programming/developer skills. (see abstract).
Raman et al. (US Pat. No. 10,073,763) discloses methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for a touchless testing platform employed to, for example, create automated testing scripts, sequence test cases, and determine defect solutions. In one aspect, a method includes the actions of receiving a log file that includes log records generated from a code base; processing the log file through a pattern mining algorithm to determine a usage pattern; generating a graphical representation based on an analysis of the usage pattern; processing the graphical representation through a machine learning algorithm to select a set of test cases from a plurality of test cases for the code base and to assign a priority value to each of the selected test cases; sequencing the set of test cases based on the priority values; and transmitting the sequenced set of test cases to a test execution engine. (see abstract).
Faezeh Khorram et al. (“Challenges & Opportunities in Low-Code Testing”) conducts an analysis of the testing components of five commercial Low-Code Development Platforms (LCDP) to present low-code testing advancements from a business point of view. (see abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANIBAL RIVERACRUZ whose telephone number is (571)270-1200. The examiner can normally be reached Monday-Friday 9:30 AM-6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Hyung S Sough, can be reached at 571-272-6799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANIBAL RIVERACRUZ/Primary Examiner, Art Unit 2192