Prosecution Insights
Last updated: April 19, 2026
Application No. 18/618,273

REGRESSION MITIGATION USING MULTIPLE STORED PROCEDURES

Non-Final OA: §101, §102, §112
Filed: Mar 27, 2024
Examiner: CHOWDHURY, INDRANIL
Art Unit: 2114
Tech Center: 2100 — Computer Architecture & Software
Assignee: Snowflake Inc.
OA Round: 1 (Non-Final)

Grant Probability: 90% (Favorable)
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 1m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 90% (above average); 130 granted / 145 resolved; +34.7% vs TC avg
Interview Lift: +14.7% (moderate), based on resolved cases with interview
Avg Prosecution: 2y 1m (fast prosecutor); 19 currently pending
Career History: 164 total applications across all art units

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 23.1% (-16.9% vs TC avg)
§102: 23.0% (-17.0% vs TC avg)
§112: 29.3% (-10.7% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 145 resolved cases
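The figures above can be cross-checked with a small sketch. This assumes each "vs TC avg" delta is the examiner's rate minus the Tech Center average (an assumption about how the dashboard computes deltas, not something the panel states):

```python
# Hypothetical sanity check of the dashboard figures; not part of any tool.
granted, resolved = 130, 145
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")  # ~89.7%, displayed as 90%

# Statute-specific rates and their deltas vs the TC average, from the panel.
stats = {"101": (10.6, -29.4), "103": (23.1, -16.9),
         "102": (23.0, -17.0), "112": (29.3, -10.7)}
for section, (rate, delta) in stats.items():
    tc_avg = rate - delta  # implied Tech Center average
    print(f"S{section}: examiner {rate}% vs implied TC avg {tc_avg:.1f}%")
```

Notably, under this reading every statute's implied TC average works out to 40.0%, consistent with the single "Tech Center average estimate" line the chart caption describes.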

Office Action

§101 §102 §112
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-30 are pending for examination. Claims 1, 11, and 21 are independent claims. This Office Action is Non-Final.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 03/27/2024 is in compliance with the provisions of 37 CFR 1.97, 37 CFR 1.98, and MPEP § 609. The Information Disclosure Statement has been placed in the application file and the information referred to therein has been considered as to the merits.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes multiple claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are recited in claims 10, 20, and 30 (functional language italicized; generic placeholder and linking phrase in bold):

10. The system of claim 1, the operations further comprising: encoding a notification of the detected regression for communication to a developer workflow node, the developer workflow node associated with implementing a change in the database code associated with the detected regression; decoding analysis information of the detected regression received from the developer workflow node responsive to the notification of the detected regression; and detecting a rollout parameter of a plurality of rollout parameters as the root cause of the detected regression at least based on the analysis information.

20. The method of claim 11, further comprising: encoding a notification of the detected regression for communication to a developer workflow node, the developer workflow node associated with implementing a change in the database code associated with the detected regression; decoding analysis information of the detected regression received from the developer workflow node responsive to the notification of the detected regression; and detecting a rollout parameter of a plurality of rollout parameters as the root cause of the detected regression at least based on the analysis information.

30. The computer-storage medium of claim 21, the operations further comprising: encoding a notification of the detected regression for communication to a developer workflow node, the developer workflow node associated with implementing a change in the database code associated with the detected regression; decoding analysis information of the detected regression received from the developer workflow node responsive to the notification of the detected regression; and detecting a rollout parameter of a plurality of rollout parameters as the root cause of the detected regression at least based on the analysis information.

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the originally filed specification as performing the claimed function, and equivalents thereof. The portions of the specification that describe the corresponding structure that performs the claimed functions for the claims above are Fig. 13, paragraph 00365, and Fig. 5, paragraphs 0079-0082.

If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Objections

Claims 2-8, 12-18, and 22-28 are objected to because of the following informalities:

In claim 2, line 2; claim 6, line 3; claim 12, line 2; claim 16, line 3; claim 22, line 3; and claim 26, line 4, “the first store procedure” should be “the first stored procedure” to match the independent claims and specification. Appropriate correction is required.

In claim 2, line 3; claim 6, line 2; claim 12, line 3; claim 16, line 2; claim 22, line 4; and claim 26, line 3, “the second store procedure” should be “the second stored procedure” to match the independent claims and specification. Appropriate correction is required.

Claims 3-5 depend on claim 2 and inherit the deficiencies of claim 2. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claims 7-8 depend on claim 6 and inherit the deficiencies of claim 6. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claims 13-15 depend on claim 12 and inherit the deficiencies of claim 12. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claims 17-18 depend on claim 16 and inherit the deficiencies of claim 16. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claims 23-25 depend on claim 22 and inherit the deficiencies of claim 22. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claims 27-28 depend on claim 26 and inherit the deficiencies of claim 26. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the requirements.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 2-5, 12-15, and 22-25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or, for pre-AIA, the applicant, regards as the invention.

In claim 2, lines 2-3; claim 12, lines 2-3; and claim 22, lines 3-4, “the same frequency” lacks antecedent basis. Appropriate correction is required.

Claims 3-5 depend on claim 2 and inherit the deficiencies of claim 2.
Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

Claims 13-15 depend on claim 12 and inherit the deficiencies of claim 12. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

Claims 23-25 depend on claim 22 and inherit the deficiencies of claim 22. Applicant may cancel the claim, amend the claim to place the claim in proper dependent form, rewrite the claim in independent form, or present a sufficient showing that the dependent claim complies with the statutory requirements.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-30 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1, 11, and 21 recite:

1. updating a table with a detected regression associated with database code of a database;
2. performing a first stored procedure to determine a root cause of the detected regression;
3. performing a second stored procedure to determine an impact of the detected regression based at least on the root cause; and
4. determining whether to perform mitigation of the detected regression based on the impact.

Step 1: Is the claim to a process, a machine, manufacture, or composition of matter? Yes: Claim 1 is a machine. Claim 11 is a method. Claim 21 is an article of manufacture.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: an abstract idea.
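For orientation, the limitations recited above describe a two-procedure pipeline over a regression table. A minimal sketch follows, in hypothetical Python standing in for the claimed stored procedures; all names, fields, and the mitigation threshold are illustrative assumptions, not taken from the application:

```python
# Hypothetical sketch of the claimed flow; names and thresholds are made up.
regressions = []  # stands in for the claimed "table"

def update_table(regression):
    """Limitation 1: update a table with a detected regression."""
    regressions.append(regression)

def first_stored_procedure(regression):
    """Limitation 2: determine a root cause of the detected regression."""
    return regression.get("suspect_change", "unknown")

def second_stored_procedure(regression, root_cause):
    """Limitation 3: determine impact based at least on the root cause."""
    return regression["slowdown_pct"] if root_cause != "unknown" else 0

def should_mitigate(impact, threshold=10):
    """Limitation 4: decide whether to perform mitigation."""
    return impact >= threshold

update_table({"query": "Q42", "slowdown_pct": 35, "suspect_change": "feature-x"})
for r in regressions:
    cause = first_stored_procedure(r)
    impact = second_stored_procedure(r, cause)
    print(r["query"], cause, impact, should_mitigate(impact))
```

The sketch makes the examiner's framing concrete: limitations 2-4 reduce to lookups and a comparison here, which is the shape of the mental-process characterization discussed next.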
Limitations 2-4, as claimed and under broadest reasonable interpretation (BRI), are mental processes that cover performance of the limitations in the human mind. For example, limitation #2, performing a first stored procedure, encompasses a person analyzing the regression data stored in the table and the database code associated with that data to determine a root cause of the regression. Similarly, limitations 3-4 are also mental processes that cover performance of the limitations in the human mind.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The “updating” limitation in #1 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, “updating a table” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

Additionally, one or more of the claims recite the following additional elements: at least one hardware processor (Claims 1, 11, 21); at least one memory (Claim 1); computer-storage medium (Claims 21-30); and one or more processors of a machine (Claim 21). These additional elements are recited at a high level of generality (i.e. as generic computer components) such that they amount to no more than components comprising mere instructions to apply the exception. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(f).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. As discussed above with respect to integration of the abstract idea into a practical application, the aforementioned additional elements amount to no more than components comprising mere instructions to apply the exception. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
Additionally, with regards to #1 above, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Claims 2, 12, and 22 merely further describe that the first store procedure and second store procedure of claims 1, 11, and 21, respectively, execute at the same frequency. Claims 3, 13, and 23 merely further describe the time of performance as the prior processing cycle of the first stored procedure of claims 2, 12, and 22, respectively.

Claims 4, 14, and 24 recite:

5. retrieving the root cause of the detected regression from a table entry in the table, the table entry corresponding to the detected regression in the prior processing cycle.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: Claim 4 is a machine. Claim 14 is a method. Claim 24 is an article of manufacture.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: (an) abstract idea(s). There is no change to the abstract idea(s) of independent Claims 1, 11, and 21, respectively.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No.
The “retrieve” limitation in #5 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, “retrieving” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. With regards to #5 above, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Claims 5, 15, and 25 merely further describe the time of performance as the current processing cycle of the second stored procedure based on the root cause determined in the prior processing cycle of claims 4, 14, and 24, respectively. Claims 6, 16, and 26 merely further describe that the second store procedure executes at a lower frequency than the first store procedure of claims 1, 11, and 21, respectively. Claims 7, 17, and 27 merely further describe the time of performance as the prior processing cycle of the first stored procedure to determine a first root cause, and the time of performance as the current processing cycle of the first stored procedure to determine a second root cause, of claims 6, 16, and 26, respectively.
Claims 8, 18, and 28 recite:

6. retrieving the first root cause and the second root cause of the detected regression from a table entry in the table, the table entry corresponding to another table entry associated with the detected regression; and

7. performing the second stored procedure in the current processing cycle to determine the impact based at least on the second root cause determined in the current processing cycle.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: Claim 8 is a machine. Claim 18 is a method. Claim 28 is an article of manufacture.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: (an) abstract idea(s). There is no change to the abstract idea(s) of independent Claims 1, 11, and 21, respectively.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. The “retrieve” limitation in #6 above, as claimed and under BRI, is an additional element that is insignificant extra-solution activity. For example, “retrieving” in the context of this claim encompasses mere data gathering. See MPEP 2106.05(g).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. With regards to #6 above, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i. Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v.
Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

Furthermore, in claims 8, 18, and 28, limitation #7 merely further describes the time of performance as the current processing cycle of the second stored procedure based on the second root cause determined in the current processing cycle of claims 7, 17, and 27, respectively.

Claims 9, 19, and 29 recite:

8. performing blast radius analysis to determine a blast radius of the detected regression; and

9. determining whether to perform the mitigation for the detected regression based at least on the blast radius of the detected regression.

Step 1: Is the claim to a process, machine, manufacture, or composition of matter? Yes: Claim 9 is a machine. Claim 19 is a method. Claim 29 is an article of manufacture.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: (an) abstract idea(s). Limitations 8-9, as claimed and under broadest reasonable interpretation (BRI), are mental processes that cover performance of the limitations in the human mind. For example, limitation #8, performing blast radius analysis, encompasses a person analyzing the regression data stored in the table and the database code associated with that data to determine the blast radius of the regression. Similarly, limitation #9 is also a mental process that covers performance of the limitation in the human mind.

Claims 10, 20, and 30 recite:

10. encoding a notification of the detected regression for communication to a developer workflow node, the developer workflow node associated with implementing a change in the database code associated with the detected regression;

11.
decoding analysis information of the detected regression received from the developer workflow node responsive to the notification of the detected regression; and

12. detecting a rollout parameter of a plurality of rollout parameters as the root cause of the detected regression at least based on the analysis information.

Step 1: Is the claim to a process, a machine, manufacture, or composition of matter? Yes: Claim 10 is a machine. Claim 20 is a method. Claim 30 is an article of manufacture.

Step 2A, Prong I: Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes: an abstract idea. Limitation #12, as claimed and under broadest reasonable interpretation (BRI), is a mental process that covers performance of the limitation in the human mind. For example, limitation #12, “detecting a rollout parameter,” encompasses a person reviewing the analysis information of the detected regression to determine a parameter as the root cause of the regression.

Step 2A, Prong II: Does the claim recite additional elements that integrate the judicial exception into a practical application? No. Limitations 10-11 above, as claimed and under BRI, are additional elements that are insignificant extra-solution activity. For example, “encoding” and “decoding” in the context of this claim encompass mere data gathering. See MPEP 2106.05(g).

Step 2B: Does the claim recite additional elements that amount to significantly more than the judicial exception? No. With regards to limitations 10-11 above, per MPEP 2106.05(d)(II), the courts have recognized the following computer functions as well-understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity: i.
Receiving or transmitting data over a network, e.g., using the Internet to gather data, Symantec, 838 F.3d at 1321, 120 USPQ2d at 1362 (utilizing an intermediary computer to forward information); TLI Communications LLC v. AV Auto. LLC, 823 F.3d 607, 610, 118 USPQ2d 1744, 1745 (Fed. Cir. 2016) (using a telephone for image transmission); OIP Techs., Inc., v. Amazon.com, Inc., 788 F.3d 1359, 1363, 115 USPQ2d 1090, 1093 (Fed. Cir. 2015) (sending messages over a network); buySAFE, Inc. v. Google, Inc., 765 F.3d 1350, 1355, 112 USPQ2d 1093, 1096 (Fed. Cir. 2014) (computer receives and sends information over a network).

For at least the reasoning provided above, Claims 1-30 are patent ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-30 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Lee et al. (U.S. Patent Publication No. 2021/0397593 A1), hereinafter Lee.

Regarding claim 1, Lee teaches: A system (Lee, Fig. 5, Execution Platform 312, see also Fig. 8) comprising: at least one hardware processor (Lee, Fig. 5, each execution node includes a processor 516, 534, 552 …); and at least one memory (Lee, Fig.
8 memory device 804, paragraph 0102) storing instructions (Lee, paragraph 0102, paragraph 0044 “process flow 100 may be implemented by a developer 112 of a feature or program that is intended to be implemented on the database system 102.”) that cause the at least one hardware processor to perform operations (Lee, Abstract, and Fig. 1 and Fig. 7) comprising: updating a table (Lee, paragraph 0043 “Results analysis includes consuming results of multiple runs and compares those results to generate a report on differences or regressions. Useful data determined as a result of the process flow 100 may be stored as database data in one or more tables.”) with a detected regression associated with database code of a database (Lee, paragraph 0058 “data changes to the underlying tables may make performance comparisons irrelevant to the client's use case, e.g., for a staging table that is loaded with data, processed, and then truncated” (i.e. tables are used to store regressions that might be truncated in future). Paragraph 0043 teaches “a tool for running production queries and detecting regressions. …The workloads may consist of actual historical client queries [i.e. database code] that were requested by a client and executed for the client on actual database data. [i.e. database]”); performing a first stored procedure to determine a root cause of the detected regression (Lee, target run of workload is first stored procedure. Abstract teaches determine if performance regression between baseline run and target run of workload. Paragraph 0058 teaches “After the workload 118 generates multiple runs under different settings (e.g., the baseline run 124 and the target run 126), the feature testing system may perform verification at 120 and analysis on the results to generate a report 122. 
…When comparing the baseline run 124 and the target run 126, the feature testing system verifies at 120 performance regressions in the target run 126 by rerunning the queries reported as slower than the same queries in the baseline run 124 multiple times using the target run 126 settings. … Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.” The target run of workload is the first stored procedure and Abstract “target run implements a feature that is not implemented in the baseline run. In response to identifying the performance regression, the target run is [re]executed to identify whether the performance regression still exists.” The feature causing the actual performance issues and performance regression is the root cause of the detected regression.); performing a second stored procedure (Lee, paragraph 0046, Fig. 1, Verification 120) to determine an impact of the detected regression based at least on the root cause (Lee, paragraph 0037, Fig. 1 Verification 120 (a second stored procedure), paragraph 0037 teaches embodiments are computer program product, paragraph 0040 teaches “each block in the flow diagrams or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).” Paragraph 0044 teaches that process flow 100 is “a feature or program that is intended to be implemented on the database system 102.” The verification procedure 120 determines the impact of the detected regression on the baseline run 124. 
Paragraph 0008 teaches that new programs and features can cause regressions, “errors or bugs that could cause damage to the database data, cause issues with performance or runtime, perpetuate additional errors throughout other functionality in the database system …” that are impacts of the detected regression. Paragraph 0058 teaches “the feature testing system verifies at 120 performance regressions in the target run 126 by rerunning the queries reported as slower than the same queries in the baseline run 124 multiple times using the target run 126 settings. … Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.” Impact of the detected regression are the actual performance issues.); and determining whether to perform mitigation of the detected regression based on the impact (Examiner under BRI interprets as system making a decision as to whether mitigation should be performed and not actually performing the mitigation. Lee, Abstract, “ The performance regression is flagged as a false positive in response to identifying that the performance regression no longer exists when the target run is [re]executed.” Paragraph 0059 teaches “Queries that fail with the same error code in both runs (i.e., the baseline run 124 and the target run 126) are not reported as regressions.” (i.e. failure not caused by the feature that has changed so system determines not to perform mitigation). Paragraph 0058 teaches “the feature testing system verifies at 120 performance regressions in the target run 126 by rerunning the queries reported as slower than the same queries in the baseline run 124 multiple times using the target run 126 settings. 
… Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.” Lee’s system then determines that mitigation should be performed on the detected regression based on the impact (the actual performance issues)).

Regarding claim 2, the rejection of claim 1 is incorporated as given above. Lee teaches configuring the first stored procedure to execute at substantially the same frequency as the second stored procedure (Lee, Fig. 1; Paragraph 0058 teaches the target run 126 and the baseline run 124 are run together at different settings; for each run at a setting, the feature testing system performs the verification procedure 120. Paragraph 0046: “When the baseline run 124 and/or the target run 126 are finished, a verification of the results is performed at 120 to eliminate false positives.” Thus, as shown in Fig. 1, the baseline run 124 and the target run 126 execute in parallel and at substantially the same time, after which the verification procedure 120 is run. Thus, all three run the same number of times over a given period of time, i.e. at the same frequency. See Fig. 7: at 704 the baseline run is executed, at 706 the target run is executed, and then the verification procedure 120 is executed; in block 710 the baseline run is executed, the target run executes at a slower speed, and the verification procedure 120 is executed, so all three run again at the same frequency).

Regarding claim 3, the rejection of claim 2 is incorporated as given above. Lee teaches performing the first stored procedure in a prior processing cycle to determine the root cause (Lee, Fig. 7 shows 706 target run (i.e. first stored procedure) executes while implementing the feature (i.e. root cause) and then the verification procedure runs at 708 after the baseline run and target run have completed (see also Fig. 1); thus the target run occurs in a prior processing cycle followed by the verification procedure 120 (i.e.
second stored procedure)).

Regarding claim 4, the rejection of claim 3 is incorporated as given above. Lee teaches retrieving the root cause of the detected regression from a table entry in the table (Lee, paragraph 0043 teaches “Results analysis includes consuming results of multiple runs and compares those results to generate a report on differences or regressions. Useful data determined as a result of the process flow 100 may be stored as database data in one or more tables.” The Abstract teaches the feature (i.e. root cause) in the target run; paragraph 0058 teaches “Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues. Further, data changes to the underlying tables may make performance comparisons irrelevant to the client's use case, e.g., for a staging table that is loaded with data, processed, and then truncated.”), the table entry corresponding to the detected regression in the prior processing cycle (Lee, paragraph 0058 “Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.”).

Regarding claim 5, the rejection of claim 4 is incorporated as given above. Lee teaches performing the second stored procedure in a current processing cycle to determine the impact based on the root cause determined in the prior processing cycle (Lee, Fig. 7 shows 706 target run (i.e. first stored procedure) executes while implementing the feature (i.e. root cause) and then the verification procedure 120 (i.e. second stored procedure) runs at 708 after the baseline run and target run have completed (see also Fig. 1); thus the target run occurs in a prior processing cycle followed by the verification procedure 120 (i.e. the second stored procedure) that determines the impact.
Paragraph 0068 teaches “The new feature … is tested to gain insights on the impacts of the change (i.e., the implementation of the new feature…) to database clients.”).

Regarding claim 6, the rejection of claim 1 is incorporated as given above. Lee teaches configuring the second stored procedure to execute at a lower frequency than the first stored procedure (Lee, Fig. 2; in some embodiments the verification procedure 120 (i.e. second stored procedure) may be executed less often than the target run 126. Paragraph 0020 teaches “in response to identifying the performance regression, rerunning the target run under isolated resource constraints to identify whether the performance regression still exists.” Because of the isolated environment in this embodiment, the target run is executed multiple times and then the verification procedure may be executed once [i.e. it executes at a lower frequency]. Paragraph 0063, Fig. 2, see blocks 206 and 214: “The query runner 206 takes in each query and multiplexes it to run with different settings [i.e. in target runs] before performing verification and comparison to generate the report 214. The query runner 206 will run the baseline run 208 and the target run 210 according to different parameters that may be determined by the query runner 206 or input by the developer 212.” One verification procedure and report is performed for multiple target runs, so the second stored procedure executes at a lower frequency, i.e. less frequently, than the first stored procedure).

Regarding claim 7, the rejection of claim 6 is incorporated as given above. Lee teaches performing the first stored procedure in a prior processing cycle to determine a first root cause (Lee, Fig. 7, paragraphs 0098-0099 in one embodiment show 706 target run (i.e. first stored procedure) executes while implementing the feature (i.e. first root cause) and then the verification procedure runs at 708 after the baseline run and target run have completed (see also Fig.
1); thus the target run occurs in a prior processing cycle followed by the verification procedure 120 (i.e. second stored procedure).); and performing the first stored procedure in a current processing cycle to determine a second root cause (Lee, Abstract: the target run is the first stored procedure; Fig. 7, paragraphs 0098-0099 “The computing resource executes at 706 a target run of the workload while implementing the feature. … The computing resource, in response to identifying the performance regression, reruns at 710 the target run to identify whether the performance regression still exists. Rerunning at 710 may include running with more isolated resources… and/or running at a slower speed.” 710 occurs in the current or next processing cycle, as it follows 708, with different resources or a slower speed, to determine whether a second root cause is producing the performance regression).

Regarding claim 8, the rejection of claim 7 is incorporated as given above. Lee teaches retrieving the first root cause and the second root cause of the detected regression from a table entry in the table, the table entry corresponding to another table entry associated with the detected regression (Lee, paragraph 0043 teaches “Results analysis includes consuming results of multiple runs and compares those results to generate a report on differences or regressions. Useful data determined as a result of the process flow 100 may be stored as database data in one or more tables.” The Abstract teaches the feature (i.e. root cause) in the target run; paragraph 0058 teaches “Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.
Further, data changes to the underlying tables may make performance comparisons irrelevant to the client's use case, e.g., for a staging table that is loaded with data, processed, and then truncated.” The report includes multiple results for different regressions.); and performing the second stored procedure in the current processing cycle to determine the impact based at least on the second root cause determined in the current processing cycle (Lee, Fig. 7 shows 706 target run (i.e. first stored procedure) executes while implementing the feature (i.e. root cause) and then the verification procedure 120 (i.e. second stored procedure) runs at 708 after the baseline run and target run have completed (see also Fig. 1); thus the target run occurs in a prior processing cycle followed by the verification procedure 120 (i.e. the second stored procedure) that determines the impact. In Fig. 7, block 710, the target run is rerun at a slower speed and the verification procedure is executed to determine other root causes in a current or next processing cycle. The impact is based on various other (second, third, fourth) root causes; see Fig. 7, paragraphs 0098-0099 “when implementing new features in a cloud-based database system.”).

Regarding claim 9, the rejection of claim 1 is incorporated as given above. Lee teaches performing blast radius analysis to determine a blast radius of the detected regression (Lee, paragraph 0019 “improved testing of database features or programs to enable developers to fix errors or bugs before programs are released to production servers. Testing with client production queries may provide exact knowledge of the impact of a program on a client's queries…” Paragraph 0043 teaches “process flow 100 may organize queries into workloads that may be run under different settings and later compared to analyze differences between the runs [i.e. blast radius of detected regressions].
The workloads may consist of actual historical client queries that were requested by a client and executed for the client on actual database data.” Process flow 100 includes the verification procedure 120 that performs the blast radius analysis); and determining whether to perform the mitigation for the detected regression based at least on the blast radius of the detected regression (The Examiner, under BRI, interprets this as the system making a decision as to whether mitigation should be performed, not actually performing the mitigation. Lee, Abstract, “The performance regression is flagged as a false positive in response to identifying that the performance regression no longer exists when the target run is [re]executed.” Paragraph 0059 teaches “Queries that fail with the same error code in both runs (i.e., the baseline run 124 and the target run 126) are not reported as regressions.” (i.e., the failure is not caused by the feature that changed, so the system determines not to perform mitigation). Paragraph 0098 teaches “computing resource executes at 704 a baseline run of the workload that does not implement the feature. The computing resource executes at 706 a target run of the workload while implementing the feature. The computing resource compares at 708 the baseline run, and the target run to identify whether there is a performance regression in the target run. The computing resource, in response to identifying the performance regression, reruns at 710 the target run to identify whether the performance regression still exists. Rerunning at 710 may include running with more isolated resources and/or running multiple times to protect against variance within a cloud-based database environment, and/or running at a slower speed.” The blast radius over the queries in the baseline run and the multiple target runs determines whether mitigation should be performed.).

Regarding claim 10, the rejection of claim 1 is incorporated as given above.
Lee teaches encoding a notification of the detected regression for communication to a developer workflow node (Lee, Fig. 1, paragraph 0046 “relevant data may be available to be queried using Structured Query Language (SQL). The developer 112 [i.e. in developer workflow node] may drill down to investigate specific queries from the report 122 or choose to perform deeper analysis using customized SQL queries depending on the goal of a test.” See also paragraph 0060; paragraph 0068 teaches the developer using feature flags for a new feature/failure/regression), the developer workflow node associated with implementing a change in the database code associated with the detected regression (Lee, Fig. 1 and Fig. 6, paragraph 0091 “The operating environment 600 may be applied to production queries executed by users or clients and may further by applied to feature testing where a user (see 602, 604, 606) may refer to a developer 112 that is generating and testing a new feature or program using actual database data.”); decoding analysis information of the detected regression received from the developer workflow node responsive to the notification of the detected regression (Lee, paragraph 0043 “Results analysis includes consuming results of multiple runs and compares those results to generate a report on differences or regressions.” Paragraph 0046 “The developer 112 may drill down to investigate specific queries from the report 122 or choose to perform deeper analysis using customized SQL queries [i.e. decoding analysis information from the developer workflow node] depending on the goal of a test.” Paragraphs 0058 and 0060: “beneficial to ensure the report 122 is concise and easy for a developer 112 to digest.
Because all metadata of the workloads and runs are stored in the feature testing system, including detailed statistics for each query, sophisticated users may easily write their own analytical queries to perform more detailed analysis suited to their own testing requirements.”); and detecting a rollout parameter of a plurality of rollout parameters as the root cause of the detected regression at least based on the analysis information (Lee, paragraph 0079 “feature testing manager 428 may be configured to determine … parameters for a baseline run or target run of historical client queries.” Paragraph 0098 teaches “computing resource executes at 704 a baseline run of the workload that does not implement the feature. The computing resource executes at 706 a target run of the workload while implementing the feature. The computing resource compares at 708 the baseline run, and the target run to identify whether there is a performance regression in the target run. The computing resource, in response to identifying the performance regression, reruns at 710 the target run to identify whether the performance regression still exists. Rerunning at 710 may include running with more isolated resources and/or running multiple times to protect against variance within a cloud-based database environment, and/or running at a slower speed.” (i.e. rollout parameters). Paragraph 0022 “a developer of a program or feature may rerun actual client queries to test the program in a real-world scenario using actual database data.” The developer performs analysis as described above prior to each rerun).

Regarding claim 11, Lee teaches: A method (Lee, Fig. 1 and Fig. 7) comprising: updating, by at least one hardware processor (Lee, Fig. 5, each execution node includes a processor 516, 534, 552 …), a table (Lee, paragraph 0043 “Results analysis includes consuming results of multiple runs and compares those results to generate a report on differences or regressions.
Useful data determined as a result of the process flow 100 may be stored as database data in one or more tables.”) with a detected regression associated with database code of a database (Lee, paragraph 0058 “data changes to the underlying tables may make performance comparisons irrelevant to the client's use case, e.g., for a staging table that is loaded with data, processed, and then truncated” (i.e. tables are used to store regressions that might be truncated in the future). Paragraph 0043 teaches “a tool for running production queries and detecting regressions. …The workloads may consist of actual historical client queries [i.e. database code] that were requested by a client and executed for the client on actual database data. [i.e. database]”); performing a first stored procedure to determine a root cause of the detected regression (Lee, the target run of the workload is the first stored procedure. The Abstract teaches determining whether there is a performance regression between the baseline run and the target run of the workload. Paragraph 0058 teaches “After the workload 118 generates multiple runs under different settings (e.g., the baseline run 124 and the target run 126), the feature testing system may perform verification at 120 and analysis on the results to generate a report 122. …When comparing the baseline run 124 and the target run 126, the feature testing system verifies at 120 performance regressions in the target run 126 by rerunning the queries reported as slower than the same queries in the baseline run 124 multiple times using the target run 126 settings. … Only after a query consistently shows a significant percentage of performance regression after several verification runs does the feature testing system report feature or program as an actual performance issues.” The target run of the workload is the first stored procedure, and the Abstract states “target run implements a feature that is not implemented in the baseline run.
In response to identifying the performance regression, the target run is [re]executed to identify whether the performance regression still exists.” The feature causing the actual performance issues and the performance regression is the root cause of the detected regression.); performing a second stored procedure (Lee, paragraph 0046, Fig. 1, Verification 120) to determine an impact of the detected regression based at least on the root cause (Lee, paragraph 0037, Fig. 1, Verification 120 (a second stored procedure); paragraph 0037 teaches that embodiments are a computer program product; paragraph 0040 teaches “each block in the flow diagrams or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).” Paragraph 00
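The verification flow the rejection repeatedly cites from Lee (a baseline run, a target run implementing the feature, and a verification step that reruns slower queries several times and reports only consistent regressions as actual performance issues) can be sketched in a few lines of Python. This is a minimal illustrative sketch: every name, the data shapes, and the 10% threshold below are assumptions, not Lee's actual implementation.

```python
# Sketch of the cited flow: compare a baseline run and a target run, then
# verify apparent regressions by rerunning the slower queries several times
# under the target-run settings (cf. Lee, Fig. 1, verification at 120).
# All names and the 10% threshold are hypothetical, not Lee's implementation.

def detect_regressions(baseline, target, rerun_target,
                       threshold=0.10, verify_runs=3):
    """baseline/target map a query id to elapsed seconds; rerun_target(qid)
    re-times one query under the target-run settings."""
    confirmed = []
    for qid, base_t in baseline.items():
        if target[qid] <= base_t * (1 + threshold):
            continue  # not reported as slower, so nothing to verify
        # Verification step: only a query that is consistently slower across
        # several reruns is reported as an actual performance issue.
        reruns = [rerun_target(qid) for _ in range(verify_runs)]
        if all(t > base_t * (1 + threshold) for t in reruns):
            confirmed.append(qid)
        # otherwise it is treated as a false positive and not reported
    return confirmed

# Usage: q2 regresses consistently; q3 was only transiently slow.
baseline = {"q1": 1.0, "q2": 1.0, "q3": 1.0}
target = {"q1": 1.0, "q2": 1.5, "q3": 1.5}
rerun_times = {"q1": 1.0, "q2": 1.5, "q3": 1.0}
print(detect_regressions(baseline, target, lambda q: rerun_times[q]))  # ['q2']
```

Under the rejection's reading, the initial comparison plays the role of the first stored procedure (identifying the root cause) and the rerun check plays the role of the second (verifying the impact before any mitigation decision).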

Prosecution Timeline

Mar 27, 2024
Application Filed
Oct 27, 2025
Non-Final Rejection — §101, §102, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561216
SAFETY DEVICE AND SAFETY METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12554570
Method, Apparatus and System for Locating Fault of Server, and Computer-readable Storage Medium
2y 5m to grant Granted Feb 17, 2026
Patent 12487894
FAULT TOLERANT ARCHITECTURE
2y 5m to grant Granted Dec 02, 2025
Patent 12461835
SYSTEM AND METHOD FOR INTEGRITY MONITORING OF HETEROGENEOUS SYSTEM-ON-A-CHIP (SoC) BASED SYSTEMS
2y 5m to grant Granted Nov 04, 2025
Patent 12443159
SUPPORT DEVICE MONITORING FUNCTION BLOCKS OF USER PROGRAM, NON-TRANSITORY STORAGE MEDIUM STORING SUPPORT PROGRAM THEREON, AND CONTROL SYSTEM
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
90%
Grant Probability
99%
With Interview (+14.7%)
2y 1m
Median Time to Grant
Low
PTA Risk
Based on 145 resolved cases by this examiner. Grant probability derived from career allow rate.
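The headline figures are consistent with simple arithmetic on the career counts shown above. A rough reconstruction follows; the rounding and the 99% cap are assumptions, since the tool's actual model is not disclosed.

```python
# Hypothetical reconstruction of the dashboard's headline numbers from the
# raw counts it displays; the tool's real model is not disclosed.
granted, resolved = 130, 145           # "130 granted / 145 resolved"

grant_probability = round(granted / resolved * 100)
print(grant_probability)               # 90, matching "90% Grant Probability"

interview_lift = 14.7                  # percentage-point lift shown
with_interview = min(round(grant_probability + interview_lift), 99)
print(with_interview)                  # 99, matching "99% With Interview"
```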
