DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-7, 9-15 and 17-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Singh et al. (US 2025/0265176 A1).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
Regarding claims 1, 9 and 17
Singh et al. teaches
an application programming interface (API) validation platform comprising at least one processor [0002] aspects of the disclosure provide effective, efficient, scalable, and convenient solutions that address and overcome the technical problems associated with the development, test, and validation of an API. In accordance with one or more aspects of the disclosure, a computing platform with at least one processor, a communication interface communicatively coupled to the at least one processor, and memory storing computer-readable instructions may train, based on historical data, an artificial intelligence (AI) engine];
memory storing computer-readable first instructions that, when executed by the at least one processor, cause the API validation platform to [0023] API development validation server system 105 may be a computer system that includes one or more computing devices (e.g., servers, server blades, or the like) and/or other computer components (e.g., processors, memories, communication interfaces) that may be used to test and/or validate a new API that was previously developed by API development platform 102. In doing so, the new API may be validated by a system in addition to API development platform 102, which may establish a tangle-based consensus of the validity of the new API. Although only one API validation server system 105 is shown, there may be more equivalent systems to further validate a new API without departing from the scope of the disclosure];
train, based on a training data set associated with a plurality of application programming interfaces (APIs), a generative artificial intelligence (AI) model [0032] at step 204, API development platform 102 may train an artificial intelligence (AI) engine based on the historical data. In some instances, the AI engine may utilize generative AI, which is a sub-type of AI models that may learn patterns related to input training data and may subsequently generate new data that may have similar characteristics. In training the AI engine, API development platform 102 may configure the AI engine to extract features of a new API that has been requested to be developed, compare the features of the new API to features of historical APIs, identify historical API features that are similar to the new API features, identify and/or apply policies that were used to remediate errors based on previous tests of the historical API features, generate a correlation matrix based on scoring the previous historical APIs, output a new API based on solving the correlation matrix, and/or perform other functions];
generate, by the trained generative AI model, a first plurality of test cases for a first API of the plurality of APIs [0002…the computing platform may send the first API to one or more validation servers, in which each of the validation servers may be configured to execute one or more tests of the first API. The computing platform may weight the validation score based on the one or more tests executed by the one or more validation servers];
initiate testing, by a plurality of test nodes, of the first plurality of test cases for the first API [0061] at step 345, the computing platform may execute a test of the new API. At step 350, the computing platform may output a validation score based on executing the test. At step 355, the computing platform may send the new API to API validation server system 105];
determine, based on output of a consensus algorithm, agreement on the first plurality of test cases among the plurality of test nodes [0017] accordingly, described herein is a tangle-powered API development system that may automatically generate a requested API. The system may consider the features of the new API and may match it to features of existing APIs and corresponding test reviews of the existing APIs. The API development system may use this correlation and a test generator to validate the new API. Accordingly, described herein is a technical procedure and apparatus for an API validation for distributed programming environment leveraging tangle technology. The method may enable development operations (DevOps) server machine-to-machine API validation and orchestration using a tangle decentralized network node of DevOps infrastructure. The method may be a generative artificial intelligence (AI) engine-based solution for development and/or test teams to validate their APIs before releasing without spending time and money on extensive API test results. Accordingly, a consensus algorithm may be leveraged to achieve agreement on one or more data values among distributed processes or systems. The consensus algorithm may be designed to achieve reliability in the API management system involving multiple sources for API testing and/or validating];
return a validation result based on the agreement on the first plurality of test cases [0062] at step 360, the computing platform may receive results from API validation server system based on executing a similar test of the new API. At step 365, the computing platform may weight the validation score based on the results from API validation server system 105];
an application computing system processing second instructions that cause the application computing system to, based on a validation of the first API, automatically initiate use of the first API by an associated first application [0027] API development module 112a may have instructions that direct and/or cause API development platform 102 to process and/or execute a request to automatically develop, test, validate, and/or deploy a new API, and/or perform other functions, as discussed in greater detail below. API development database 112b may store information used by API development module 112a and/or API development platform 102 and/or in performing other functions. AI engine 112c may be used by API development platform 102 and/or API development module 112a to automatically develop, test, and/or validate a new API, and/or perform other methods described herein].
Regarding claims 2, 10 and 18
Singh et al. teaches
the first instructions further cause the API validation platform to monitor, in real time, requests and responses via one or more API interfaces [0038] at step 208, API development platform 102 may input the request to develop the new API into an AI engine (e.g., AI engine 112c). At step 209, API development platform 102 may use the AI engine to extract features of the new API. For example, the newly requested API may include one or more features, such as an authentication protocol, a security protocol, a response-time protocol, and/or other protocols/features, and the API development platform 102 may extract these features from the newly requested API].
Regarding claims 3, 11 and 19
Singh et al. teaches
the first instructions further cause the API validation platform to predict, by a test case generation platform, data patterns and validation rules for the application programming interface [0033] in some instances, the AI engine may utilize supervised learning, in which labeled data sets may be inputted into the AI engine (e.g., historical APIs/historical API features, corresponding similarity scores, and the like), which may be used to classify information and accurately predict outcomes with respect to API testing and/or validating. Using labeled inputs and outputs, the AI engine may measure its accuracy and learn over time. For example, supervised learning techniques such as linear regression, classification, neural networking, and/or other supervised learning techniques may be used].
Regarding claims 4, 12 and 20
Singh et al. teaches
the first instructions further cause the API validation platform to predict a structure and format of an API request based on training model inputs, wherein the training model inputs comprise data corresponding to historical data processed by the first API [0032] at step 204, API development platform 102 may train an artificial intelligence (AI) engine based on the historical data. In some instances, the AI engine may utilize generative AI, which is a sub-type of AI models that may learn patterns related to input training data and may subsequently generate new data that may have similar characteristics. In training the AI engine, API development platform 102 may configure the AI engine to extract features of a new API that has been requested to be developed, compare the features of the new API to features of historical APIs, identify historical API features that are similar to the new API features, identify and/or apply policies that were used to remediate errors based on previous tests of the historical API features, generate a correlation matrix based on scoring the previous historical APIs, output a new API based on solving the correlation matrix, and/or perform other functions].
Regarding claims 6 and 14
Singh et al. teaches
a dynamic API validation module validates results of the plurality of test cases based on a configuration file [0027] API development module 112a may have instructions that direct and/or cause API development platform 102 to process and/or execute a request to automatically develop, test, validate, and/or deploy a new API, and/or perform other functions, as discussed in greater detail below. API development database 112b may store information used by API development module 112a and/or API development platform 102 and/or in performing other functions. AI engine 112c may be used by API development platform 102 and/or API development module 112a to automatically develop, test, and/or validate a new API, and/or perform other methods described herein].
Regarding claims 7 and 15
Singh et al. teaches
the instructions cause the API validation module to generate test data as a table of attributes and values corresponding to possible combinations of data received as input to an API function [0018] accordingly, the tangle consensus may be used for validating authorized API test reviews within the distributed authentication system and/or validating new blocks within the distributed review system. The method may consist of an API feature validation scoring procedure that may analyze existing API features and/or dependencies and may take many feature combinations to find dependencies, which may help to find partial correlations of each of the API features. An AI model may be used to generate multiple permutations for each context/step to augment the training dataset] and [0027] API development module 112a may have instructions that direct and/or cause API development platform 102 to process and/or execute a request to automatically develop, test, validate, and/or deploy a new API, and/or perform other functions, as discussed in greater detail below. API development database 112b may store information used by API development module 112a and/or API development platform 102 and/or in performing other functions. AI engine 112c may be used by API development platform 102 and/or API development module 112a to automatically develop, test, and/or validate a new API, and/or perform other methods described herein].
Regarding claims 8 and 16
Singh et al. teaches
the test data includes intentionally erroneous data for test cases associated with data security of API functionality [0003] in one or more examples, the computing platform may identify one or more historical test errors of the one or more corresponding features of the historical APIs. The computing platform may identify one or more corresponding policies to remediate the one or more historical test errors. The computing platform may apply the one or more corresponding policies to proactively eliminate potential future errors associated with the executing the first test].
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 5, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Singh et al. (US 2025/0265176 A1) in view of Sardesai et al. (US 2021/0329075 A1).
Regarding claims 5, 13 and 20
Singh et al. teaches
the testing of the first plurality of test cases for the first API, but does not explicitly teach a federated Byzantine agreement method. However, Sardesai et al. teaches [0021] the consensus mechanism used by the blockchain to reach consensus may include one or more of a Proof-of-Authority consensus process, a Proof-of-Stake consensus process, a Proof-of-Elapsed-Time consensus process, a Paxos consensus process, a Phase King consensus process, a Practical Byzantine Fault Tolerance consensus process, a Federated Byzantine Agreement consensus process, and/or another type of consensus mechanism]. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the federated Byzantine agreement method in automated testing for an application programming interface (API). The modification would have been obvious because one of ordinary skill in the art would have been motivated to combine the teachings to improve scalability, efficiency, and customizable trust when validating application programming interfaces (APIs) using dynamic data and rules generation through federated Byzantine agreement (FBA) analysis and generative artificial intelligence (AI) generated test cases.
Relevant Prior Art
US 10133650 B1 Park et al teaches Automated API Parameter Resolution And Validation
US 11908167 B1 Franco et al teaches Verifying That A Digital Image Is Not Generated By An Artificial Intelligence
US 11514448 B1 Liberman teaches Hierarchical Consensus Protocol Framework For Implementing Electronic Transaction Processing Systems
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Anil Khatri whose telephone number is (571)272-3725. The examiner can normally be reached M-F 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wei Zhen can be reached at 571-272-3708. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANIL KHATRI/Primary Examiner, Art Unit 2191