DETAILED ACTION
Status of Claims
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 26, 2026 has been entered.
Claims 1 and 11 have been amended.
Claims 1-20 are currently pending and have been examined.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendments
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action.
The rejection of claims 1-20 under 35 USC § 101 is maintained. Please see the Response to Arguments.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “wherein the alert system is configured for use by a developer to increase efficiency of the user experiments.” Applicant’s disclosure does not describe an alert system configured for use by a developer to increase efficiency of the user experiments, nor does it describe how the efficiency of the user experiments is increased when a developer reviews the log. The same rationale applies to claim 11. Appropriate correction is required.
Claim 1 further recites “wherein the automatic disabling of the treatment group experiment decreases network resources associated with the disabled treatment group experiment.” Applicant’s disclosure does not describe that the automatic disabling of the treatment group experiment decreases network resources associated with the disabled treatment group experiment, nor does it describe how the network resources decrease when the treatment group is disabled. The same rationale applies to claim 11. Appropriate correction is required.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites “when the notification indicates the need for immediate action based on latency.” It is unclear what degree of need for immediate action is required based on the latency. Does the immediate action happen in real time? Does the immediate action happen automatically? Does the immediate action happen when the notification is placed in the log for review? What are the immediate actions? The same rationale applies to claim 11. Appropriate correction is required.
Claim 1 further recites “wherein the alert system is configured for use by a developer to increase efficiency of the user experiments.” It is unclear how the developer’s use of the alert system increases the efficiency of the user experiments. Which user experiments, the treatment group or the control group? Are the user experiments the same as the treatment group experiment or the control group experiment? What are the metes and bounds of the efficiency of the user experiments? The same rationale applies to claim 11. The claims were examined as best understood. Appropriate correction is required.
Claims 1 and 11 recite the limitations “the need” and “the user experiment.” There is insufficient antecedent basis for these limitations in the claims. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Per MPEP 2106.03, Eligibility Step 1: The Four Categories of Statutory Subject Matter [R-07.2022], Step 1 is directed to determining whether or not the claims fall within a statutory class. Here, claims 1-10 fall within the statutory category of a machine and claims 11-20 fall within the statutory category of a process. Hence, the claims qualify as potentially eligible subject matter under 35 U.S.C. § 101. With Step 1 being directed to a statutory category, MPEP 2106.04, Eligibility Step 2A: Whether a Claim is Directed to a Judicial Exception [R-07.2022], is directed to Step 2. Step 2 is the two-part analysis from Alice Corp. (also called the Mayo test). The 2019 PEG sets forth a new procedure for Step 2A (called “revised Step 2A”) under which a claim is not “directed to” a judicial exception unless the claim satisfies a two-prong inquiry. The two-prong inquiry is as follows. Prong One: evaluate whether the claim recites a judicial exception (an abstract idea enumerated in the 2019 PEG, a law of nature, or a natural phenomenon). If the claim recites an exception, then Prong Two: evaluate whether the claim recites additional elements that integrate the exception into a practical application of the exception. The claim(s) recite(s) the following abstract idea indicated by non-boldface font and additional limitations indicated by boldface font:
Claims 1 and 11 :
[at least one processor; at least one non-transitory storage medium comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform steps comprising:]
receiving, from a user device over a network, experiment parameters related to a first experiment of a set of experiments, the set of experiments comprising at least one of a treatment group experiment and at least one of a control group experiment, the parameters associated with at least one webpage;
receiving, from a user device over a network, time till interaction (TTI) data for the first experiment, wherein the TTI data comprises a time value and a unique identifier;
calculating a value of TTI for each treatment group experiment;
calculating a value of TTI for each control group experiment;
comparing the value of TTI from the treatment group experiment with the value of TTI from the control group experiment;
determining whether the difference between the value of TTI from the treatment group experiment and the value of TTI from the control group experiment is greater than a predetermined threshold value;
maintaining the treatment group experiment and the control group experiment until the value is greater than the predetermined threshold;
sending a notification by an alert system indicating that the treatment group experiment value is greater than the predetermined threshold value, wherein the notification indicates the need for immediate action based on latency;
placing each treatment group associated with the notification in a log for review by at least one developer, wherein the alert system is configured for use by a developer to increase efficiency of the user experiment; and
upon sending the notification based on the treatment group value being greater than the predetermined threshold value, automatically disabling the treatment group experiment and maintaining the control group experiment, wherein the automatic disabling of the treatment group experiment decreases network resources associated with the disabled treatment group experiment.
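For illustration only, the comparison, notification, and automatic-disabling steps listed above can be sketched in Python; every name here (evaluate_experiment, the dictionary keys, the sample values) is hypothetical and is not taken from the claims or the record:

```python
# Illustrative sketch only: the threshold comparison recited in claims 1 and 11.
# All identifiers and values below are hypothetical.

def evaluate_experiment(treatment_tti, control_tti, threshold):
    """Compare treatment vs. control TTI and decide whether to notify/disable."""
    difference = treatment_tti - control_tti
    if difference > threshold:
        # Notification indicating the need for immediate action based on latency,
        # followed by automatic disabling of the treatment group experiment.
        return {"notify": True, "disable_treatment": True, "difference": difference}
    # Otherwise both the treatment and control group experiments are maintained.
    return {"notify": False, "disable_treatment": False, "difference": difference}

result = evaluate_experiment(treatment_tti=3.2, control_tti=1.0, threshold=1.5)
# The difference exceeds the threshold, so the sketch signals notify/disable.
```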
Per Prong One of Step 2A, the identified recitation of an abstract idea falls within at least one of the Abstract Idea Groupings consisting of: Mathematical Concepts, Mental Processes, or Certain Methods of Organizing Human Activity. Particularly, the identified recitation falls within Mental Processes (concepts performed in the human mind, including observation, evaluation, judgement and opinion) and Certain Methods of Organizing Human Activity (such as commercial or legal interactions including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations). Per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The processor, non-transitory storage medium, and user device over a network are recited at a high level of generality, i.e., as generic computing devices, a generic user device, and a generic network. These elements are no more than mere instructions to apply the exception using generic computing devices each comprising at least a processor and storage. Further, a processor configured to cause receiving/determining/transmitting data is a mere instruction to apply an exception using a generic computer component, which cannot integrate a judicial exception into a practical application. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. Thus, since the claims are directed to the determined judicial exception in view of the two prongs of Step 2A, MPEP 2106.05, Eligibility Step 2B: Whether a Claim Amounts to Significantly More [R-07.2022], is directed to Step 2B.
Therein, the additional elements and combinations therewith are examined in the claims to determine whether the claims as a whole amount to significantly more than the judicial exception. It is noted here that the additional elements are to be considered both individually and as an ordered combination. In this case, the claims each at most comprise additional elements of a processor, a non-transitory storage medium, and a user device over a network. Taken individually, the additional limitations are each generically recited and thus do not add significantly more to the respective limitations. Further, executing all the steps/functions by a user/service subsystem is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, Applicant’s specification supports the claims being directed to use of a generic processor, non-transitory storage medium, and user device over a network type structure at paragraph 0066: “Processor 346 may be a generic or specific electronic device capable of manipulating or processing information.” Paragraph 0067: “Memory 344 may be a generic or specific electronic device capable of storing codes and data accessible by the processor.” Paragraph 0069: “System 340 is connected to computer network 310. For example, computer network 310 may include any combination of any number of the Internet, an Intranet, a Local-Area Network (LAN), a Wide-Area Network (WAN), a Metropolitan Area Network (MAN), a virtual private network (VPN), a wireless network, a wired network, a leased line, a cellular data network, and a network using Bluetooth connections, infrared connections, or Near-Field Communication (NFC) connections.” And paragraph 0070: “User device 320 may be a laptop, standalone computer, tablet, mobile phone, and the like.” See also figure 3.
Taken as an ordered combination, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are directed to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting and non-exclusive examples: i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)); ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 
593, 595, 95 USPQ2d 1001, 1010 (2010) or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook. The courts have recognized the following computer functions inter alia to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., process/machine for performing the present claims); and receiving or transmitting data (e.g., the present claims). The dependent claims 2-10 and 12-20 do not cure the above stated deficiencies, and in particular, the dependent claims further narrow the abstract idea without reciting additional elements that integrate the exception into a practical application of the exception or providing significantly more than the abstract idea. Claims 2 and 12 further limit the abstract idea that the predetermined threshold value is calculated based on TTI values for the treatment group and the control group at percentile values of 50%, 90%, and 95% (a more detailed abstract idea remains an abstract idea). Claims 3 and 13 further limit the abstract idea that the determining the difference between the value of TTI from the treatment group and the value of TTI from the control group further comprises at least one of: sending a notification if the treatment group experiment value is greater than the predetermined threshold value after seven days of value comparison or disabling activation of the treatment group experiment if the value is greater than the predetermined threshold value after seven days of value comparison (a more detailed abstract idea remains an abstract idea). 
Claims 4 and 14 further limit the abstract idea by allowing a user to use the control group experiment if the system disables activation of the treatment group experiment (a more detailed abstract idea remains an abstract idea). Claims 5 and 15 further limit the abstract idea in that the user device is connected to Wi-Fi before running the set of experiments (a more detailed abstract idea remains an abstract idea). Claims 6 and 16 further limit the abstract idea in that the value of TTI for each experiment is based on parameters associated with the webpage (a more detailed abstract idea remains an abstract idea). Claims 7 and 17 further limit the abstract idea in that the log further includes each time a notification is sent indicating that the treatment group experiment value is greater than the predetermined threshold value, based on the time value and the unique identifier (a more detailed abstract idea remains an abstract idea). Claims 8 and 18 further limit the abstract idea by maintaining the log with disabled treatment group experiments (a more detailed abstract idea remains an abstract idea). Claims 9 and 19 further limit the abstract idea in that each treatment group experiment associated with a notification is placed in a priority queue for further analysis (a more detailed abstract idea remains an abstract idea). And claims 10 and 20 further limit the abstract idea by accessing the priority queue of treatment groups associated with a notification; determining that the difference between the value of TTI from the treatment group experiment and the value of TTI from the control group experiment is less than a predetermined threshold value; and activating the treatment group experiment (a more detailed abstract idea remains an abstract idea). 
The identified recitation of the dependent claims falls within Mental Processes: concepts performed in the human mind, including observation, evaluation, judgement and opinion, and Certain Methods of Organizing Human Activity such as commercial or legal interactions including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. Since there are no elements or ordered combination of elements that amount to significantly more than the judicial exception, the claims are not eligible subject matter under 35 U.S.C. § 101. Thus, viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself. Therefore, the claim(s) are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter.
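For illustration only, a percentile-derived predetermined threshold of the kind recited in claims 2 and 12 (TTI values at the 50%, 90%, and 95% percentiles) can be sketched as follows; the nearest-rank method and all names and sample values are assumptions, not taken from the disclosure:

```python
# Illustrative sketch only: percentile-based thresholds as in claims 2 and 12.
# The nearest-rank method and all identifiers below are hypothetical.

def percentile(values, pct):
    """Nearest-rank percentile of a list of TTI samples (pct in 0-100)."""
    ordered = sorted(values)
    k = max(0, min(len(ordered) - 1, round(pct / 100 * (len(ordered) - 1))))
    return ordered[k]

# Hypothetical TTI samples (seconds) for a control group experiment.
control_tti = [0.8, 0.9, 1.0, 1.1, 1.3, 1.6, 2.0, 2.4, 3.1, 4.0]

# One possible reading of the limitation: thresholds derived at the
# 50th, 90th, and 95th percentile values of the sampled TTI data.
thresholds = {p: percentile(control_tti, p) for p in (50, 90, 95)}
```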
Response to Arguments
Applicant's arguments filed on 12/15/2025 have been fully considered but they are not persuasive.
With regards to the 35 U.S.C. 101 rejection, Applicant argues that (1) “[t]he Claims do not recite an abstract idea under Prong One”; (2) “The claims integrate the alleged abstract idea into a practical application under Prong Two” and (3) “The claims, as a whole, recite significantly more” (Remarks, pages 11-18).
With regard to the 35 U.S.C. 103 rejection, Applicant argues with respect to the prior art of record that (4) “the cited art does not teach or suggest the claim element of, "sending a notification indicating that the treatment group experiment value is greater than the predetermined threshold value, wherein the notification indicates the need for immediate action based on latency”; (5) “the cited art does not teach or suggest the claim element of, placing each treatment group associated with the notification in a log for review by at least one developer, wherein the alert system is configured for use by a developer to increase efficiency of the user experiment”; and (6) “the cited art does not teach or suggest the amended claim element of, […] wherein the automatic disabling of the treatment group experiment decreases network resources associated with the disabled treatment group experiment” (Remarks, pages 18-20).
In response to Applicant’s argument (1), Examiner respectfully disagrees. Please see the 35 U.S.C. 112(a) and 112(b) rejections above. In addition, claim 1 recites a method for calculating latency metrics for user experiments by receiving experiment parameters from at least one treatment group and at least one control group associated with at least one webpage, along with time till interaction data for the first experiment; a value of TTI is calculated for each treatment group and control group experiment, and both values are compared in order to determine whether a difference between the value of TTI from the treatment group experiment and the value of TTI from the control group experiment is greater than a predetermined threshold. The experiment is maintained until the value is greater than the predetermined threshold, and a notification is sent when the treatment group experiment value is greater than the predetermined threshold value; the notification indicates the need for immediate action based on the latency, each treatment group associated with a notification is placed in a log for review by at least one developer, and then the treatment group is disabled automatically and the control group is maintained, as described in Applicant's disclosure in paragraph 0073: "calculating latency metrics for user experiments." 
Claim 1 recites a concept related to Mental Processes: concepts performed in the human mind, including observation (experiment parameters from the treatment and control group, TTI data), evaluation (TTI value calculations and TTI data comparison against a threshold, treatment group associated with a notification for review), judgement (threshold not reached, maintain the experiments), and opinion (notify that the threshold has been reached, notify immediate action based on latency for review, disable the treatment group experiment and maintain the control group based on the threshold), and Certain Methods of Organizing Human Activity such as commercial or legal interactions including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. Calculating latency metrics in a webpage is used as a measure to improve customer experience, e.g., advertising, marketing or sales activities or behaviors, business relations. As shown in figures 1B-1D and figure 3, claims 1 and 11 describe that the experiment, i.e., A/B testing, is associated with a webpage (“the parameters associated with a webpage”), and time till interaction data is used to measure when a page is fully interactive for users, as per Applicant’s disclosure Background, paragraph 0003: “amount of time it takes for a webpage to load is an important metric, known as the lag time. Therefore, the reduction of latency is critical to improving the customer experience.” The same rationale applies to claim 11.
Claims 10 and 20 further limit the abstract idea by accessing the priority queue of treatment groups associated with a notification; determining that the difference between the value of TTI from the treatment group experiment and the value of TTI from the control group experiment is less than a predetermined threshold value; and activating the treatment group experiment (a more detailed abstract idea remains an abstract idea). The identified recitation of claims 10 and 20 falls within Mental Processes: concepts performed in the human mind, including observation (priority queue of treatment groups associated with a notification), evaluation (determining that the difference between the value of TTI from the treatment group experiment and the value of TTI from the control group experiment is less than a predetermined threshold value), judgement and opinion (activating the treatment group experiment), and Certain Methods of Organizing Human Activity such as commercial or legal interactions including agreements in the form of contracts, legal obligations, advertising, marketing or sales activities or behaviors, and business relations. Calculating latency metrics in a webpage is used as a measure to improve customer experience, e.g., advertising, marketing or sales activities or behaviors, business relations. As shown in figures 1B-1D and figure 3, claims 1 and 11 describe that the experiment, i.e., A/B testing, is associated with a webpage (“the parameters associated with a webpage”), and time till interaction data is used to measure when a page is fully interactive for users, as per Applicant’s disclosure Background, paragraph 0003: “amount of time it takes for a webpage to load is an important metric, known as the lag time. Therefore, the reduction of latency is critical to improving the customer experience.”
In response to Applicant’s argument (2), Examiner respectfully disagrees. Please see the 35 U.S.C. 112(a) and 112(b) rejections above. In addition, per Prong Two of Step 2A, this judicial exception is not integrated into a practical application because the claim as a whole does not integrate the identified abstract idea into a practical application. The processor, non-transitory storage medium, and user device over a network are recited at a high level of generality, i.e., as a generic processor performing the generic computer functions of receiving/determining/transmitting data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. Considering the claims as a whole, these additional limitations merely add generic computer activities, i.e., receiving/determining/transmitting: to receive inputs (experiment parameters, TTI data); to analyze/determine TTI values and compare TTI values against a predetermined threshold in order to maintain the experiments; to notify that the value of the treatment group is greater than the predetermined threshold for immediate action based on latency; to place the treatment group in a log for review by at least one developer; and to disable the treatment group experiment and maintain the control group experiment. Claims 3, 10, 13 and 20 recite similar limitations as claims 1 and 11 of notifying based on the analysis between the treatment group experiment value and the predetermined threshold value, the analysis being executed by a generic processor to perform generic functions of a processor, to either disable or activate the treatment group experiment. The recited processor, non-transitory storage medium, and user device over a network merely link the abstract idea to a computer environment. 
In this way, the involvement of the processor, non-transitory storage medium, and user device over a network is merely a field of use that contributes only nominally and insignificantly to the recited method, which indicates an absence of integration. Claim 1 uses the processor, non-transitory storage medium, and user device over a network as tools, in their ordinary capacity, to carry out the abstract idea. As to this level of computer involvement, mere automation of manual processes using generic computers does not necessarily indicate a patent-eligible improvement in computer technology.
Considered as a whole, the claimed method does not improve the functioning of the computer itself or any other technology or technical field of A/B testing and “calculating latency metrics”. Further, a processor configured to cause receiving/determining/transmitting data to a device is mere instruction to apply an exception using a generic computer component which cannot integrate a judicial exception into a practical application. Accordingly, this/these additional element(s) does/do not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea. The same rationale applies to claim 11.
In response to Applicant’s argument (3), Examiner respectfully disagrees. Executing all the steps/functions by a user/service subsystem is a mere instruction to apply an exception using a generic computer component, which cannot provide an inventive concept in Step 2B (or, looking back to Step 2A, cannot integrate a judicial exception into a practical application). For further support, Applicant’s specification supports the claims being directed to use of a generic processor, non-transitory storage medium, and user device over a network type structure at paragraph 0066: “Processor 346 may be a generic or specific electronic device capable of manipulating or processing information.” Paragraph 0067: “Memory 344 may be a generic or specific electronic device capable of storing codes and data accessible by the processor.” Paragraph 0069: “System 340 is connected to computer network 310. For example, computer network 310 may include any combination of any number of the Internet, an Intranet, a Local-Area Network (LAN), a Wide-Area Network (WAN), a Metropolitan Area Network (MAN), a virtual private network (VPN), a wireless network, a wired network, a leased line, a cellular data network, and a network using Bluetooth connections, infrared connections, or Near-Field Communication (NFC) connections.” And paragraph 0070: “User device 320 may be a laptop, standalone computer, tablet, mobile phone, and the like.” See also figure 3.
Taken as an ordered combination, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the limitations are directed to limitations referenced in Alice Corp. that are not enough to qualify as significantly more when recited in a claim with an abstract idea, including, as non-limiting and non-exclusive examples: i. Adding the words "apply it" (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, e.g., a limitation indicating that a particular function such as creating and maintaining electronic records is performed by a computer, as discussed in Alice Corp., 134 S. Ct. at 2360, 110 USPQ2d at 1984 (see MPEP § 2106.05(f)); ii. Simply appending well-understood, routine, conventional activities previously known to the industry, specified at a high level of generality, to the judicial exception, e.g., a claim to an abstract idea requiring no more than a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry, as discussed in Alice Corp., 134 S. Ct. at 2359-60, 110 USPQ2d at 1984 (see MPEP § 2106.05(d)); iii. Adding insignificant extra-solution activity to the judicial exception, e.g., mere data gathering in conjunction with a law of nature or abstract idea such as a step of obtaining information about credit card transactions so that the information can be analyzed by an abstract mental process, as discussed in CyberSource v. Retail Decisions, Inc., 654 F.3d 1366, 1375, 99 USPQ2d 1690, 1694 (Fed. Cir. 2011) (see MPEP § 2106.05(g)); or iv. Generally linking the use of the judicial exception to a particular technological environment or field of use, e.g., a claim describing how the abstract idea of hedging could be used in the commodities and energy markets, as discussed in Bilski v. Kappos, 561 U.S. 
593, 595, 95 USPQ2d 1001, 1010 (2010), or a claim limiting the use of a mathematical formula to the petrochemical and oil-refining fields, as discussed in Parker v. Flook, 437 U.S. 584 (1978) (see MPEP § 2106.05(h)). The courts have recognized the following computer functions, inter alia, to be well-understood, routine, and conventional functions when they are claimed in a merely generic manner: performing repetitive calculations; receiving, processing, and storing data (e.g., the present claims); electronically scanning or extracting data; electronic recordkeeping; automating mental tasks (e.g., a process/machine for performing the present claims); and receiving or transmitting data (e.g., the present claims).
In response to Applicant’s argument (4), Examiner respectfully disagrees. Lyon teaches sending a notification indicating that the treatment group experiment value is greater than the predetermined threshold value in ¶ 0012: “automated notifications of such conditions as early ending of testing and/or detection of primacy/newness effects may be transmitted to personnel who oversee the conduct of A/B testing.” Lyon in view of Wang teaches wherein the notification indicates the need for immediate action based on latency in ¶ 0064: “At any monitoring point 108 along the multimedia delivery chain 100, when the QoE score is lower than a threshold value, or when the latency measure is longer than a threshold value, an alert may be generated to identify a QoE degradation or long-latency problem.” Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is, in the substitution of the alert/notification based on latency of Wang for the notifications based on early ending of testing and/or detection of primacy/newness effects of Lyon. Thus, the simple substitution of one known element for another producing a predictable result (based on the condition/rule, notify a user for immediate action) renders the claim obvious. Wang also teaches in ¶ 0063: "alerts may be generated to identify delivery problems or to improve resource allocation, e.g., to avoid certain notes in the network, or to find better paths/routes in the content delivery network for the next step of video delivery." Under the broadest reasonable interpretation and to one of ordinary skill in the art before the effective filing date of the claimed invention, by identifying or improving resource allocation, an immediate action is performed based on the alerts.
In response to Applicant’s argument (6), Examiner respectfully disagrees. Lyon teaches placing each treatment group associated with a notification in a log for review by at least one developer, wherein the alert system is configured for use by a developer to increase efficiency of the user experiment in ¶ 0040: “Such received metrics are stored by the processor circuit 550 as the statistics data 534, and are subsequently statistically analyzed by the processor circuit 550 to generate results that are stored as the results data 535”; see also ¶ 0062: “The control routine 540 also includes an analysis component 545 to parse the statistics data 534 to analyze the received metrics stored therein to determine the degree of success of one or more versions of the user interface in bringing about a desired behavior on the part of users, and storing the results of the analysis as the results data 535.” […] “the control routine 540 may further include a viewing component 548 executable by the processor circuit 550 to visually present the results data 535 on a display 580 of the monitoring device 500, possibly in graphical form”; and ¶ 0012: “automated notifications of such conditions […] may be transmitted to personnel who oversee the conduct of A/B testing.” The A/B testing is for a website as shown in Figure 2, and ¶¶ 0017-0018 describe that “an A/B test is performed in a manner that employs both a classical statistical analysis technique and an alternative statistical analysis technique in parallel.” Under the broadest reasonable interpretation and to one of ordinary skill in the art before the effective filing date of the claimed invention, the results data of the analysis is stored and displayed for the personnel conducting the A/B testing, i.e., a developer, since the A/B testing is for webpages as shown in Lyon ¶ 0062: “the metrics may represent indications of instances of any of a variety of user behaviors selected to be monitored, including and not limited to, viewing time spent at a given webpage or other visual 
presentation of content, bounces in which a user visits a webpage or other visual presentation of a portal only briefly, instances of “clicking through” a visual presentation of an advertisement, frequency of use of a service provided by the server 400, etc. The control routine 540 also includes an analysis component 545 to parse the statistics data 534 to analyze the received metrics stored therein to determine the degree of success of one or more versions of the user interface in bringing about a desired behavior on the part of users, and storing the results of the analysis as the results data 535.”
Applicant’s arguments (6) with respect to claim(s) 1 and 11 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Please see the updated rejection below as necessitated by amendments.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 11-18 are rejected under 35 U.S.C. 103 as being unpatentable over Lyon et al., (US 2014/0278198 A2) hereinafter “Lyon” in view of Nurshuhada et al., "Enhancing Performance Aspect in Usability Guidelines for Mobile Web Application," 2019 6th International Conference on Research and Innovation in Information Systems (ICRIIS), 2019, pp. 1-6, hereinafter “Nurshuhada”, Wang et al., (US 2020/0314503 A1), hereinafter “Wang” and Ivaniuk et al., (US 2020/0104383 A1) hereinafter “Ivaniuk”.
Claim 1:
Lyon as shown discloses a system, the system comprising:
at least one processor; at least one non-transitory storage medium comprising instructions that, when executed by the at least one processor, cause the at least one processor to perform steps comprising (¶ 0084: “one or more processors,” ¶ 0119: “one machine-readable storage medium”);
receiving, from a user device over a network, experiment parameters related to a first experiment of a set of experiments, (¶ 0031: “The entry environment also enables the operator to enter various testing parameters, including one or more of a specified number of samples to be collected to enable performance of a classical statistical analysis (e.g., NHST), prior data as an input to an alternative form of statistical analysis (e.g., Bayesian analysis), indications of whether an initial number of samples should be ignored in response to detection of a newness or primacy effect, etc.” see also ¶ 0025: “the network 999 may be based on any of a variety (or combination) of communications technologies by which signals may be exchanged, including without limitation, wired technologies employing electrically and/or optically conductive cabling, and wireless technologies employing infrared, radio frequency or other forms of wireless transmission.”);
the set of experiments comprising at least one of a treatment group experiment and at least one of a control group experiment, (¶ 0017: “The test is then performed with different users being randomly selected to be exposed to different ones of at least one control version of a user interface (usually the version that currently exists) and at least one proposed version of the user interface (in which there is some proposed variation or combination of variations of the manner in which content is presented),”)
the parameters associated with at least one webpage (¶ 0034: “the control device 500 employs an interaction data and parameters data received from the configuration device to control the performance of a test by the server 400, the control device 500 monitors the progress of the test, recording various metrics of behavior of users who are presented with the different versions of the user interface selected for testing. The metrics are analyzed by the monitoring device 500 to derive ultimate results of the test indicating whether any proposed new variations in content presentation bring about a desired change in the actions of users. Such desired actions may include one or more of purchasing more products, “clicking thru” more advertisements, exploring more webpages of a website, reducing “bouncing” (e.g., where a user accesses a webpage, but then leaves it relatively quickly, rather than lingering there), downloading more files, sharing more information about themselves that may be used in marketing efforts, etc.”);
unique identifier (¶ 0043: “the server 400 may store identifying information from each of the interaction devices 800 a-c along with an indication of what version of the user interface was last presented to each in order to enable the server 400 to provide those same versions to each of the interaction devices 800 a-c during a subsequent access.”);
Lyon teaches in ¶ 0070: “during a test of one or more proposed versions of the user interface in which one or more variations or combinations of variations of the manner in which content is presented are tested, the server is caused to randomly provide different versions of the user interface (one of which is the original version used as a control) to different interaction devices.” See also the Abstract: “an automated A/B testing system using a combination of classical and alternative statistical analysis to control the performance A/B tests.” Lyon teaches, as explained above, a treatment group experiment and a control group experiment. Lyon is silent with regard to the following limitations. However, Nurshuhada, in an analogous art of web performance management, teaches the following limitations as shown:
receiving, from a user device over a network, time till interaction (TTI) data for the first experiment, wherein the TTI data comprises a time value; calculating a value of TTI for each treatment group experiment; calculating a value of TTI for each control group experiment (pages 5-6, see figures 3-7 note the Time to Interactive (TTI) time value i.e., 4.3s and the overall score, and page 5: B. Performance Metrics “The evaluation process between the website version A and website version B were conducted by using Google Lighthouse tool that is available as an extension in Chrome DevTools functionality. The tool is automated and is widely used by a web developer to analyze and measure the performance of a website [19]. The tool takes into consideration of six performance metrics and produces an overall performance score based on each metric individual result. There are six performance metrics that are measured by using Lighthouse tool. Each of these metrics captures some aspect of page load speed. The metrics are: […] 3) Time to Interactive (TTI): TIT metric indicates the time taken for a page to become interactive for the user. Low TTI of a webpage contributes to better performance.”);
Both Lyon and Nurshuhada teach web performance management. Lyon teaches in ¶ 0026: “The instantiation of webpages of a website for each interaction device accessing services through the website is an example of the provision of such a portal with such a user interface.” Nurshuhada teaches in the Abstract: “We used First Contentful Paint (FCP), Speed Index (SI), Time to Interactive (TtI), First Meaningful Paint (FMP), First CPU Idle (FCI) and Estimated Input Latency (EIL) to measure the performances of two case studies and the result shows better score at 90-100 (fast-GREEN) with the proposed performance attributes compared to another website without it which averages at 50-89 (average-ORANGE) range.” Thus, they are deemed to be analogous references, as they are reasonably pertinent to each other and are directed toward solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Nurshuhada would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nurshuhada to the teaching of Lyon would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as receiving, from a user device over a network, time till interaction (TTI) data for the first experiment, wherein the TTI data comprises a time value; calculating a value of TTI for each treatment group experiment; and calculating a value of TTI for each control group experiment into similar systems. Further, as noted by Nurshuhada, “the proposed mobile usability guidelines help in improving the performance and usability of a mobile website.” (Nurshuhada, Conclusion).
In addition, Lyon teaches:
comparing the value [of TTI] from the treatment group experiment with the value [of TTI] from the control group experiment (¶ 0112: “commencing collection of a specified number of samples of user responses to the multiple versions of the user interface, analyzing the samples as the samples are collected using Bayesian analysis,”);
determining whether the difference between the value [of TTI] from the treatment group experiment and the value [of TTI] from the control group experiment is greater than a predetermined threshold value (¶ 0112: “determining whether a proposed version of the multiple versions elicits a statistically significant improvement in user response over a control version of the multiple versions,”);
maintaining the treatment group experiment and the control group experiment until the value is greater than the predetermined threshold; and (Figure 8, note the logic flow, see reference character 2130, when the answer is no, the logic flow continues until there is a statistically significant improvement see also ¶ 0041: “The parameters data 137 includes indications of one or more of what forms of statistical analysis to perform, how many samples are required for conduct of a classical statistical analysis, whether a version of the user interface may be culled to reduce the overall duration of the test, and/or whether the test may be terminated early and/or extended.”);
sending a notification by an alert system indicating that the treatment group experiment value is greater than the predetermined threshold value (¶ 0012: “automated notifications of such conditions as early ending of testing and/or detection of primacy/newness effects may be transmitted to personnel who oversee the conduct of A/B testing.”);
Lyon teaches sending a notification as explained above. Nurshuhada teaches in Figure 5 an estimated input latency for website versions A and B. Lyon in view of Nurshuhada is silent with regard to the following limitations. However, Wang, in an analogous art of performance management, teaches the following limitations as shown:
wherein the notification indicates the need for immediate action based on latency (¶ 0064: “At any monitoring point 108 along the multimedia delivery chain 100, when the QoE score is lower than a threshold value, or when the latency measure is longer than a threshold value, an alert may be generated to identify a QoE degradation or long-latency problem.”);
Since each individual element and its function are shown in the prior art, albeit shown in separate references, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself - that is, in the substitution of the alert/notification based on latency of Wang for the notifications based on early ending of testing and/or detection of primacy/newness effects of Lyon. Thus, the simple substitution of one known element for another producing a predictable result (based on the condition/rule, notify a user for immediate action) renders the claim obvious.
Further, Lyon teaches:
placing each treatment group associated with the notification in a log for review by at least one developer, wherein the alert system is configured for use by a developer to increase efficiency of the user experiment (¶ 0040: “Such received metrics are stored by the processor circuit 550 as the statistics data 534, and are subsequently statistically analyzed by the processor circuit 550 to generate results that are stored as the results data 535”; see also ¶ 0062: “The control routine 540 also includes an analysis component 545 to parse the statistics data 534 to analyze the received metrics stored therein to determine the degree of success of one or more versions of the user interface in bringing about a desired behavior on the part of users, and storing the results of the analysis as the results data 535.” […] “the control routine 540 may further include a viewing component 548 executable by the processor circuit 550 to visually present the results data 535 on a display 580 of the monitoring device 500, possibly in graphical form”; and ¶ 0012: “automated notifications of such conditions […] may be transmitted to personnel who oversee the conduct of A/B testing.” The A/B testing is for a website as shown in Figure 2; ¶¶ 0017-0018 describe that “an A/B test is performed in a manner that employs both a classical statistical analysis technique and an alternative statistical analysis technique in parallel.”);
upon sending the notification based on the treatment group value being greater than the predetermined threshold value, automatically disabling the treatment group experiment (¶ 0012: “such requirements as a specified number of samples, a desired degree of uncertainty in results, a desired confidence level, etc., may be used to automatically determine when an A/B test ends. Each A/B test is initially configured to require the specified number samples to determine when it is to end in accordance with typical practices of classical statistics analysis.” See also claim 6);
and maintaining the control group experiment (¶ 0033: “the server 400 recurringly employs the alternative statistical analysis to determine if it has become statistically clear enough that at least one of the versions that implements proposed variations will not show a statistically significant improvement over the control version of the user interface such that it may be culled from the performance of the test as further testing of it is deemed pointless.” See also claim 6);
Lyon, as explained above, teaches automatically disabling the treatment group experiment. Lyon in view of Nurshuhada and Wang is silent with regard to the following limitations. However, Ivaniuk, in an analogous art of A/B testing management, teaches the following limitations as shown:
wherein the automatic disabling of the treatment group experiment decreases network resources associated with the disabled treatment group experiment (¶ 0013: “Once an A/B test is identified as a candidate for removal from the A/B testing platform, a ramp-down of the A/B test is initiated to observe an effect of the treatment variant on a performance metric for the A/B test. If the ramp-down has no effect on the performance metric, the A/B test may be identified to be no longer used to effect user experiences, and the ramp-down is continued. Once the ramp-down is fully complete, the A/B test can be terminated to free up computational and/or storage resources in the A/B testing platform.” See also ¶ 0015.);
Both Lyon and Ivaniuk teach A/B testing management. Lyon teaches in the Abstract: “an automated A/B testing system using a combination of classical and alternative statistical analysis to control the performance A/B tests.” Ivaniuk teaches in the Abstract: “uses A/B testing to safely terminate unused experiments.” Thus, they are deemed to be analogous references, as they are reasonably pertinent to each other and are directed toward solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Ivaniuk would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ivaniuk to the teaching of Lyon in view of Nurshuhada and Wang would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as automatic disabling of the treatment group experiment that decreases network resources associated with the disabled treatment group experiment into similar systems. Further, as noted by Ivaniuk, “removal of selected experiments may decrease computational and/or storage resources consumed by the experiments without adversely impacting the use of treatment variants that were tested using the experiments. Consequently, the disclosed embodiments may improve the performance of computer systems and/or technologies for performing A/B testing and/or reclaiming computational and/or storage resources.” (Ivaniuk, ¶ 0015).
Claim 11:
The limitations of claim 11 encompass substantially the same scope as claim 1. Accordingly, those similar limitations are rejected in substantially the same manner as claim 1, as described above.
Claims 2 and 12:
Lyon teaches, as explained above, a treatment group experiment and a control group experiment. Lyon is silent with regard to the following limitations. However, Nurshuhada, in an analogous art of web performance management, teaches the following limitations as shown:
wherein the predetermined threshold value is calculated based on TTI values for the treatment group and the control group at percentile values of 50%, 90%, and 95% (page 5: B. Performance Metrics “The evaluation process between the website version A and website version B were conducted by using Google Lighthouse tool that is available as an extension in Chrome DevTools functionality. The tool is automated and is widely used by a web developer to analyze and measure the performance of a website [19]. The tool takes into consideration of six performance metrics and produces an overall performance score based on each metric individual result. There are six performance metrics that are measured by using Lighthouse tool. Each of these metrics captures some aspect of page load speed. The metrics are: […] 3) Time to Interactive (TTI): TIT metric indicates the time taken for a page to become interactive for the user. Low TTI of a webpage contributes to better performance.” And page 6, 1st col. “According to Google Lighthouse benchmarking, the performance scores are indicated as; 0-49 (slow-RED), 50-89 (average– ORANGE), and 90-100 (fast-GREEN).”);
Both Lyon and Nurshuhada teach web performance management. Lyon teaches in ¶ 0026: “The instantiation of webpages of a website for each interaction device accessing services through the website is an example of the provision of such a portal with such a user interface.” Nurshuhada teaches in the Abstract: “We used First Contentful Paint (FCP), Speed Index (SI), Time to Interactive (TtI), First Meaningful Paint (FMP), First CPU Idle (FCI) and Estimated Input Latency (EIL) to measure the performances of two case studies and the result shows better score at 90-100 (fast-GREEN) with the proposed performance attributes compared to another website without it which averages at 50-89 (average-ORANGE) range.” Thus, they are deemed to be analogous references, as they are reasonably pertinent to each other and are directed toward solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Nurshuhada would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nurshuhada to the teaching of Lyon would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as calculating the predetermined threshold value based on TTI values for the treatment group and the control group at percentile values of 50%, 90%, and 95% into similar systems. Further, as noted by Nurshuhada, “the proposed mobile usability guidelines help in improving the performance and usability of a mobile website.” (Nurshuhada, Conclusion).
Claims 3 and 13:
Lyon as shown discloses the following limitations:
wherein the determining the difference between the value of TTI from the treatment group and the value of TTI from the control group further comprises at least one of: sending a notification if the treatment group experiment value is greater than the predetermined threshold value after seven days of value comparison or disabling activation of the treatment group experiment if the value is greater than the predetermined threshold value after seven days of value comparison (¶ 0012: “automated notifications of such conditions as early ending of testing and/or detection of primacy/newness effects may be transmitted to personnel who oversee the conduct of A/B testing.” See also claim 18: “culling the proposed version from the test to shorten a duration of the test” i.e., days “in response to determining that the proposed version is statistically unlikely to elicit a statistically significant improvement in user response over the control version.”);
Claims 4 and 14:
Lyon as shown discloses the following limitations:
wherein the operations further comprise allowing a user to use the control group experiment if the system disables activation of the treatment group experiment (¶ 0033: “the server 400 recurringly employs the alternative statistical analysis to determine if it has become statistically clear enough that at least one of the versions that implements proposed variations will not show a statistically significant improvement over the control version of the user interface such that it may be culled from the performance of the test as further testing of it is deemed pointless.”);
Claims 5 and 15:
Lyon as shown discloses the following limitations:
wherein the user device is connected to Wi-Fi before running the set of experiments (¶ 0025: “the network 999 may be based on any of a variety (or combination) of communications technologies by which signals may be exchanged, including without limitation, wired technologies employing electrically and/or optically conductive cabling, and wireless technologies employing infrared, radio frequency or other forms of wireless transmission.”);
Claims 6 and 16:
Lyon teaches, as explained above, a treatment group experiment and a control group experiment. Lyon also teaches in ¶ 0034: “The metrics are analyzed by the monitoring device 500 to derive ultimate results of the test indicating whether any proposed new variations in content presentation bring about a desired change in the actions of users. Such desired actions may include one or more of purchasing more products, “clicking thru” more advertisements, exploring more webpages of a website, reducing “bouncing” (e.g., where a user accesses a webpage, but then leaves it relatively quickly, rather than lingering there), downloading more files, sharing more information about themselves that may be used in marketing efforts, etc.” Lyon is silent with regard to the following limitations. However, Nurshuhada, in an analogous art of web performance management, teaches the following limitations as shown:
wherein the value of TTI for each experiment is based on parameters associated with the webpage (page 5: B. Performance Metrics “The evaluation process between the website version A and website version B were conducted by using Google Lighthouse tool that is available as an extension in Chrome DevTools functionality. The tool is automated and is widely used by a web developer to analyze and measure the performance of a website [19]. The tool takes into consideration of six performance metrics and produces an overall performance score based on each metric individual result. There are six performance metrics that are measured by using Lighthouse tool. Each of these metrics captures some aspect of page load speed. The metrics are: […] 3) Time to Interactive (TTI): TIT metric indicates the time taken for a page to become interactive for the user. Low TTI of a webpage contributes to better performance.” See also page 4: “An existing website developed by an XYZ company is used in this project. The XYZ company is an Australian company which is based in Malaysia. The website is a stall booking website that serves as a platform for the user to book a booth to sell their goods in the marketplace located in Brisbane, Australia.”);
Both Lyon and Nurshuhada teach web performance management. Lyon teaches in ¶ 0026: “The instantiation of webpages of a website for each interaction device accessing services through the website is an example of the provision of such a portal with such a user interface.” Nurshuhada teaches in the Abstract: “We used First Contentful Paint (FCP), Speed Index (SI), Time to Interactive (TtI), First Meaningful Paint (FMP), First CPU Idle (FCI) and Estimated Input Latency (EIL) to measure the performances of two case studies and the result shows better score at 90-100 (fast-GREEN) with the proposed performance attributes compared to another website without it which averages at 50-89 (average-ORANGE) range.” Thus, they are deemed to be analogous references, as they are reasonably pertinent to each other and are directed toward solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Nurshuhada would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Nurshuhada to the teaching of Lyon would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as the value of TTI for each experiment being based on parameters associated with the webpage into similar systems. Further, as noted by Nurshuhada, “the proposed mobile usability guidelines help in improving the performance and usability of a mobile website.” (Nurshuhada, Conclusion).
Claims 7 and 17:
Lyon as shown discloses the following limitations:
wherein the log further includes each time a notification is sent indicating that the treatment group experiment value is greater than the predetermined threshold value, based on the time value and the unique identifier (¶ 0062: “The control routine 540 also includes an analysis component 545 to parse the statistics data 534 to analyze the received metrics stored therein to determine the degree of success of one or more versions of the user interface in bringing about a desired behavior on the part of users, and storing the results of the analysis as the results data 535.” See also ¶ 0012: “automated notifications of such conditions […] may be transmitted to personnel who oversee the conduct of A/B testing”);
Claims 8 and 18:
Lyon as shown discloses the following limitations:
wherein the operations further comprise maintaining the log with disabled treatment group experiments (¶ 0012: “The control routine 540 also includes an analysis component 545 to parse the statistics data 534 to analyze the received metrics stored therein to determine the degree of success of one or more versions of the user interface in bringing about a desired behavior on the part of users, and storing the results of the analysis as the results data 535.” See also ¶ 0018);
Claims 9-10 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Lyon et al., (US 2014/0278198 A2) hereinafter “Lyon”, Nurshuhada et al., "Enhancing Performance Aspect in Usability Guidelines for Mobile Web Application," 2019 6th International Conference on Research and Innovation in Information Systems (ICRIIS), 2019, pp. 1-6, hereinafter “Nurshuhada”, Wang et al., (US 2020/0314503 A1), hereinafter “Wang”, and Ivaniuk et al., (US 2020/0104383 A1), hereinafter “Ivaniuk”, as applied to claims 1 and 11, further in view of Ogallo et al., (US 2023/0325871 A1), hereinafter “Ogallo”.
Claims 9 and 19:
Lyon teaches, as explained above, a notification for each treatment group. Ivaniuk teaches in ¶ 0056: “An owner of the A/B test may optionally be notified that the A/B test has been identified as a candidate for removal from the A/B testing platform to allow the owner to respond before the A/B test is terminated.” Lyon in view of Nurshuhada, Wang and Ivaniuk is silent with regard to the following limitations. However, Ogallo, in the analogous art of A/B testing management, discloses the following limitations as shown:
wherein each treatment group experiment associated with a notification is placed in a priority queue for further analysis (¶ 0019: “to subgroup analysis for improving A/B testing”);
Both Lyon and Ogallo teach A/B testing management. Lyon teaches in the Abstract: “an automated A/B testing system using a combination of classical and alternative statistical analysis to control the performance A/B tests.” Ogallo teaches in the Abstract: “partitioning records in the A/B testing database into a plurality of population strata according to the set of feature values.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Ogallo would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ogallo to the teaching of Lyon in view of Nurshuhada, Wang and Ivaniuk would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as wherein each treatment group experiment associated with a notification is placed in a priority queue for further analysis into similar systems. Further, as noted by Ogallo, the technique “can identify groups that are consistently left out for targeted software design and personalization while also reducing the resources (time, money, etc.) needed to redesign and execute subsequent A/B tests for a specific left-out subgroup.” (Ogallo, ¶ 0022).
Claims 10 and 20:
Lyon teaches, as explained above, a notification for each treatment group. Lyon in view of Nurshuhada, Wang and Ivaniuk is silent with regard to the following limitations. However, Ogallo, in the analogous art of A/B testing management, discloses the following limitations as shown:
wherein the operations further comprise: accessing the priority queue of treatment groups associated with a notification (¶ 0042: “the A/B tests 120 are performed and the plurality of population strata 106 are overlayed with the results of the A/B tests 120 during post-test analysis to better understand which stratum 108 preferred which variants in the A/B test 120 (e.g., see operation 210).”);
Both Lyon and Ogallo teach A/B testing management. Lyon teaches in the Abstract: “an automated A/B testing system using a combination of classical and alternative statistical analysis to control the performance A/B tests.” Ogallo teaches in the Abstract: “partitioning records in the A/B testing database into a plurality of population strata according to the set of feature values.” Thus, they are deemed to be analogous references as they are reasonably pertinent to each other and are directed towards solving similar problems within the same environment. One of ordinary skill in the art would have recognized that applying the known technique of Ogallo would have yielded predictable results and resulted in an improved system. It would have been recognized that applying the technique of Ogallo to the teaching of Lyon in view of Nurshuhada, Wang and Ivaniuk would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate features such as accessing the priority queue of treatment groups associated with a notification into similar systems. Further, as noted by Ogallo, the technique “can identify groups that are consistently left out for targeted software design and personalization while also reducing the resources (time, money, etc.) needed to redesign and execute subsequent A/B tests for a specific left-out subgroup.” (Ogallo, ¶ 0022).
In addition, Lyon teaches:
determining that the difference between the value [of TTI] from the treatment group experiment and the value [of TTI] from the control group experiment is less than a predetermined threshold value; and (¶ 0112: “determining whether a proposed version of the multiple versions elicits a statistically significant improvement in user response over a control version of the multiple versions,”);
activating the treatment group experiment (¶ 0064: “the analysis component 447 or 547, in recurringly performing the alternative statistical analysis, determines that one of the proposed versions of the user interface elicits a statistically significant degree of desired change in user behavior such that it is a clear improvement over a current version. In such a situation, depending on whether it is indicated as permitted in the parameters data 137, the analysis component 447 or 547 may terminate the test early, and may cause the server 400 to immediately commence performance of another test.”);
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NADJA CHONG whose telephone number is (571) 270-3939. The examiner can normally be reached Monday-Friday, 8:00 am - 2:00 pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, RUTAO WU, can be reached at (571) 272-6045. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NADJA N CHONG CRUZ/
Primary Examiner, Art Unit 3623