Prosecution Insights
Last updated: April 19, 2026
Application No. 18/765,846

CONFIGURABLE RELEVANCE SERVICE PLATFORM INCORPORATING A RELEVANCE TEST DRIVER

Status: Final Rejection (§103)
Filed: Jul 08, 2024
Examiner: CAO, VINCENT M
Art Unit: 3622
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Bytedance Inc.
OA Round: 2 (Final)

Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 3m
With Interview: 86%

Examiner Intelligence

Career Allow Rate: 55% (246 granted / 448 resolved; +2.9% vs TC avg)
Interview Lift: +31.5% among resolved cases with an interview (strong)
Avg Prosecution: 3y 3m (typical timeline); 18 applications currently pending
Total Applications: 466 across all art units
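The headline numbers can be reproduced from the raw counts above. A minimal sketch in Python, assuming the interview lift is simply added in percentage points to the career allow rate (that assumption matches the displayed 86% figure, but the platform's exact model is not disclosed):

```python
# Career allow rate and interview-adjusted grant probability,
# reconstructed from the counts shown above. The additive-lift
# assumption is ours, not the platform's documented method.
granted = 246
resolved = 448

allow_rate = granted / resolved * 100          # 54.9 -> displayed as 55%
interview_lift = 31.5                          # percentage points
with_interview = allow_rate + interview_lift   # 86.4 -> displayed as 86%

print(f"Career allow rate: {allow_rate:.1f}%")
print(f"With interview:    {with_interview:.1f}%")
```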

Statute-Specific Performance

§101: 37.1% (-2.9% vs TC avg)
§103: 39.5% (-0.5% vs TC avg)
§102: 8.5% (-31.5% vs TC avg)
§112: 8.7% (-31.3% vs TC avg)

Deltas are measured against a Tech Center average estimate, based on career data from 448 resolved cases.
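Each statute rate plus its displayed delta comes out to exactly 40.0%, so the deltas appear to be taken against a single flat Tech Center baseline near 40%. A sketch of that comparison under that assumption (the platform may in fact use per-statute averages):

```python
# Statute-specific rates vs. an assumed flat Tech Center baseline.
# The 40.0 baseline is inferred: every displayed delta equals
# rate - 40.0 exactly. This inference is ours, not documented.
tc_avg = 40.0
rates = {"101": 37.1, "103": 39.5, "102": 8.5, "112": 8.7}

for statute, rate in rates.items():
    delta = rate - tc_avg
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```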

Office Action

§103
DETAILED ACTION

Status of Claims

The Response filed 12/08/2025 has been acknowledged. Claims 26, 32, and 38 are amended. Claims 1-25 are cancelled. Claims 26-43 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 26-43 are rejected under 35 U.S.C. 103 as being unpatentable over R et al. (US 20080282231 A1) (hereafter R) in view of Agarwal et al. (US 20120072279 A1) (hereafter Agarwal).

As per claim 26: A system comprising one or more processors and one or more non-transitory computer-readable storage media storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to:

receive, from a requesting service, test configuration data associated with a test scenario configured to exercise one or a combination of relevance service processing components; (See R ¶0089, “FIG. 4 shows an exemplary method 400 testing an independent scenario. At 410, the method starts. For example, the method can be started manually (e.g., by a user testing the independent scenario), or as part of an end-to-end scenario.” R discloses the concept of receiving test configuration data.)

execute the test scenario based at least in part on test input data by identifying one or more promotions relevant to the test input data; (See R ¶0075, “FIG. 2 shows an exemplary method 200 for testing software applications (e.g., for testing independent scenarios of software applications), and can be performed, for example, using a framework such as that shown in FIG. 1. At 210, fields on a screen of a software application are populated with test data. For example, the fields can be populated with test data from a test data source.” R discloses the concept of generating and submitting data into software based on test configuration data.)

validate output data generated based at least in part on executing the test scenario based at least in part on the test input data by comparing the output data to the test configuration data; and (See R ¶0048, “Evaluation of a business rule can indicate whether a software application should produce an error (e.g., an expected result) related to a specific field of a screen of the software application. For example, a business rule for indicating whether an error should be produced can be "loss_date>=policy_date+waiting_period." The business rule can be associated with a field named "Date of loss." When evaluating the business rule (e.g., via an independent test script) test data can be incorporated. For example, if test data assigns the value of "Feb. 15, 2007" to "loss_date," "Feb. 1, 2007" to "policy_date," and "14" to "waiting period," then the business rule will evaluate to true. If the business rule evaluates to true, then the software application should not produce an error related to the field when submitting the screen containing the field. If, however, the test data assigns the value of "Feb. 15, 2007" to "loss_date," "Feb. 1, 2007" to "policy_date," and "20" to "waiting period," then the business rule will evaluate to false. If the business rule evaluates to false, then the software application should produce an error related to the field when submitting the screen containing the field. At a time of submission of the screen, expected results based on the evaluation of business rules can be compared to actual results. For example, if evaluation of a business rule indicates that an error should not be produced (an expected result), and upon actual submission no error is produced (an actual results), then a determination can be made that the software application is operating correctly. Otherwise, a determination can be made that the software application is not operating correctly.” R discloses the concept of validating the output of the software by comparing the output to an expected result for the test configuration.)

return, to the requesting service, the output data. (See R ¶0077, “At 230, the screen is submitted and actual results are determined. Actual results can comprise errors produced by the software application upon submission of the screen. Actual results can also comprise an indication that there were no errors upon submission. For example, a screen can comprise four fields. Upon submission of the screen, the software application can indicate that there are errors related to two of the four fields, and the software application can also indicate specific error messages (e.g., two specific error messages, one for each of the two fields that had errors).” R discloses the concept of receiving the actual results outputted by the software.)

Although R discloses the above-described invention, including the concept of testing software, R fails to explicitly disclose that the software performs relevancy matching with multiple processes. However, Agarwal, which concerns advertisement experimentation, teaches relevance software that uses a plurality of filters.
(See Agarwal ¶0013, “In general, in another aspect, a plurality of user queries are received, with each query requesting a service from a server; a data file is received, with the data file defining an experiment structure having a plurality of layers each having at least one experiment, and with at least some of the layers overlapping one another such that the same query is allowed to be assigned to two or more experiments in different overlapping layers; a portion of the queries is diverted to experiments in various layers according to the experiment structure defined by the data file, in which queries are diverted to the experiments in each of the overlapping layers independent of diversion of queries to experiments in other overlapping layers; and the experiments are performed on the queries that have been assigned to the experiments, with each experiment modifying zero or more parameters associated with the queries or parameters associated with processing of the queries.” See also Agarwal ¶0036, “Referring to FIG. 1, an example experiment system 100 for performing overlapping experiments includes a web server 102 that is coupled to a search results server 104 and an ad results server 106. The web server 102 receives user queries (e.g., search queries) from users 110 and sends responses (e.g., search results and sponsored content) to the users 110. The web server 102 forwards the search queries to the search results server 104 and the ad results server 106. The search results server 104 returns the search results to the web sever 102. The ad results server 106 identifies sponsored content (e.g., ads) relevant to the search queries. In some implementations, the search results server 104 sends the search results to the ad results server 106, and the selection of ads is also based on the search results. The ad results server 106 sends the sponsored content to the web server 102. The web server 102 formats and sends the search results along with the sponsored content to the users 110. Experiments can be conducted at the web server 102, the search results server 104, and the ad results server 106 to adjust various parameters and evaluate the effects of the adjustments.” Agarwal teaches a relevancy system for determining relevant promotions for a user based on a request, wherein the relevancy system determines the promotions by applying multiple layers.)

Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the invention of R with the teachings of Agarwal. As shown, R discloses the concept of testing different software by creating test cases to check and ensure the operation and output of software. As shown, Agarwal further teaches that the determination and identification of relevant ads is performed by software, including testing the software to check for errors or undesirable/non-relevant results (See Agarwal ¶0003-¶0004). It therefore would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Agarwal with the invention of R, as Agarwal teaches the need for advertisement relevance software to be checked and tested to identify errors and undesirable results, and the invention of R discloses software testing.
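The R ¶0048 passage that anchors the "validate output data" mapping describes a concrete expected-versus-actual check. For readers who want to see the mechanics, a minimal sketch of that business-rule evaluation using the quoted example values; the function name and date handling are ours for illustration and are not drawn from R's disclosure:

```python
from datetime import date, timedelta

# Business rule from R ¶0048: loss_date >= policy_date + waiting_period.
# True  -> the application should NOT raise an error for the field;
# False -> it should. Expected results are then compared to actual ones.
def rule_passes(loss_date, policy_date, waiting_period_days):
    return loss_date >= policy_date + timedelta(days=waiting_period_days)

# Quoted example: 14-day waiting period -> rule True (no error expected).
print(rule_passes(date(2007, 2, 15), date(2007, 2, 1), 14))   # True
# Quoted example: 20-day waiting period -> rule False (error expected).
print(rule_passes(date(2007, 2, 15), date(2007, 2, 1), 20))   # False
```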
As per claim 27: The system of claim 26, wherein the test input data is derived from one or a combination of user models and promotion models based at least in part on aggregated data that has been collected from previous production runs of the relevance service. (See R ¶0057, “In any of the examples herein, test data can be data for use by independent test scripts. Test data can be stored in various locations, such as a test data source. For example, test data can be stored in a database or file (e.g., in a spreadsheet). Test data can comprise various types of data. For example, test data can include values (e.g., numbers, letters, strings, text, dates, flags, etc.).” R discloses the concept of retrieving test data from a database to determine a test case. See also Agarwal ¶0040, “In this description, performing overlapping experiments on user queries means that multiple experiments are performed on the same user query or the same set of user queries. For example, a search query may be processed by the web server 102 (which receives the search query and returns a response to the sender of the search query), the search results server 104 (which performs a search according to the search query), and the ad results server 106 (which identifies ads relevant to the search results). The search query may be diverted to a first experiment at the web server 102, a second experiment at the search results server 104, and a third experiment at the ad results server 106. The first, second, and third experiments are considered to be overlapping experiments because they are all associated with the same search query. It is also possible to divert the query to multiple overlapping experiments that are conducted in one server.” Agarwal teaches the concept of a relevance service utilizing user information and available promotional information.)

As per claim 28: The system of claim 26, wherein the test scenario is associated with a promotion search and represents a request generated by a search platform in response to the search platform receiving a search query. (See Agarwal ¶0040, quoted above. Agarwal teaches the concept of a relevance service receiving a query as part of its input.)

As per claim 29: The system of claim 26, wherein the output data comprises one or more promotions ranked based at least in part on respective relevance to a mock user associated with the test scenario or test input data. (See Agarwal ¶0012, “The user queries can include search queries each associated with one or more query keywords, map queries each associated with at least one geographical location, news queries each associated with at least one news event, queries each associated with information on finance, search queries each associated with products, queries each associated with a non-personalized home page, and queries each associated with a personalized home page. The parameters can include groups of parameters related to user interfaces, to ranking of search results, to advertisements, to matching of keywords, to maps, to news, to finance, to product search, to a personalized home page, to a non-personalized home page, or to mobile devices. An analysis of the experiments can be provided to a computing device for display.” Agarwal teaches that the output of relevance software can be ranked promotions based on query and user information.)

As per claim 30: The system of claim 26, wherein the test configuration data comprises an input parameter comprising a flag that specifies a list of filtering workflows that are to be applied during execution of the test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

As per claim 31: The system of claim 26, wherein the test scenario is one of an end-to-end test scenario or a directed test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

As per claim 32: A computer-implemented method, comprising:

receiving, by one or more processors and from a requesting service, test configuration data associated with a test scenario configured to exercise one or a combination of relevance service processing components; (See R ¶0089, quoted above. R discloses the concept of receiving test configuration data.)

executing, by the one or more processors, the test scenario based at least in part on test input data by identifying one or more promotions relevant to the test input data; (See R ¶0075, quoted above. R discloses the concept of generating and submitting data into software based on test configuration data.)
validating output data generated based at least in part on executing the test scenario based at least in part on the test input data by comparing the output data to the test configuration data; and (See R ¶0048, quoted in full above. R discloses the concept of validating the output of the software by comparing the output to an expected result for the test configuration.)

returning, by the one or more processors and to the requesting service, the output data. (See R ¶0077, quoted above. R discloses the concept of receiving the actual results outputted by the software.)

Although R discloses the above-described invention, including the concept of testing software, R fails to explicitly disclose that the software performs relevancy matching with multiple processes. However, Agarwal, which concerns advertisement experimentation, teaches relevance software that uses a plurality of filters. (See Agarwal ¶0013 and ¶0036, quoted in full above. Agarwal teaches a relevancy system for determining relevant promotions for a user based on a request, wherein the relevancy system determines the promotions by applying multiple layers.)

Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the invention of R with the teachings of Agarwal. As shown, R discloses the concept of testing different software by creating test cases to check and ensure the operation and output of software. As shown, Agarwal further teaches that the determination and identification of relevant ads is performed by software, including testing the software to check for errors or undesirable/non-relevant results (See Agarwal ¶0003-¶0004). It therefore would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Agarwal with the invention of R, as Agarwal teaches the need for advertisement relevance software to be checked and tested to identify errors and undesirable results, and the invention of R discloses software testing.
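The Agarwal passages quoted above describe an overlapping-layer experiment structure in which the same query is independently assigned to one experiment per layer. A minimal sketch of that diversion idea; the layer names and hash-based assignment scheme are illustrative assumptions on our part, not Agarwal's disclosed implementation:

```python
import hashlib

# Independent diversion of the same query into one experiment per
# overlapping layer, per Agarwal ¶0013: assignment in one layer does
# not constrain assignment in any other layer. Layer names and the
# hash-mod scheme below are illustrative only.
LAYERS = {
    "web_server":     ["control", "ui_experiment"],
    "search_results": ["control", "ranking_experiment"],
    "ad_results":     ["control", "ad_matching_experiment"],
}

def divert(query_id: str) -> dict:
    assignments = {}
    for layer, experiments in LAYERS.items():
        # Salting the hash with the layer name makes each layer's
        # assignment independent of the others for the same query.
        digest = hashlib.sha256(f"{layer}:{query_id}".encode()).hexdigest()
        assignments[layer] = experiments[int(digest, 16) % len(experiments)]
    return assignments

print(divert("query-123"))  # same query, one experiment per layer
```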
As per claim 33: The computer-implemented method of claim 32, wherein the test input data is derived from one or a combination of user models and promotion models based at least in part on aggregated data that has been collected from previous production runs of the relevance service. (See R ¶0057, quoted above. R discloses the concept of retrieving test data from a database to determine a test case. See also Agarwal ¶0040, quoted above. Agarwal teaches the concept of a relevance service utilizing user information and available promotional information.)

As per claim 34: The computer-implemented method of claim 32, wherein the test scenario is associated with a promotion search and represents a request generated by a search platform in response to the search platform receiving a search query. (See Agarwal ¶0040, quoted above. Agarwal teaches the concept of a relevance service receiving a query as part of its input.)

As per claim 35: The computer-implemented method of claim 32, wherein the output data comprises one or more promotions ranked based at least in part on respective relevance to a mock user associated with the test scenario or test input data. (See Agarwal ¶0012, quoted above. Agarwal teaches that the output of relevance software can be ranked promotions based on query and user information.)

As per claim 36: The computer-implemented method of claim 32, wherein the test configuration data comprises an input parameter comprising a flag that specifies a list of filtering workflows that are to be applied during execution of the test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

As per claim 37: The computer-implemented method of claim 32, wherein the test scenario is one of an end-to-end test scenario or a directed test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

As per claim 38: A non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the one or more processors to:

receive, from a requesting service, test configuration data associated with a test scenario configured to exercise one or a combination of relevance service processing components; (See R ¶0089, quoted above. R discloses the concept of receiving test configuration data.)

execute the test scenario based at least in part on test input data by identifying one or more promotions relevant to the test input data; (See R ¶0075, quoted above. R discloses the concept of generating and submitting data into software based on test configuration data.)
validate output data generated based at least in part on executing the test scenario based at least in part on the test input data by comparing the output data to the test configuration data; and (See R ¶0048, quoted in full above. R discloses the concept of validating the output of the software by comparing the output to an expected result for the test configuration.)

return, to the requesting service, the output data. (See R ¶0077, quoted above. R discloses the concept of receiving the actual results outputted by the software.)

Although R discloses the above-described invention, including the concept of testing software, R fails to explicitly disclose that the software performs relevancy matching with multiple processes. However, Agarwal, which concerns advertisement experimentation, teaches relevance software that uses a plurality of filters. (See Agarwal ¶0013 and ¶0036, quoted in full above. Agarwal teaches a relevancy system for determining relevant promotions for a user based on a request, wherein the relevancy system determines the promotions by applying multiple layers.)

Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to have combined the invention of R with the teachings of Agarwal. As shown, R discloses the concept of testing different software by creating test cases to check and ensure the operation and output of software. As shown, Agarwal further teaches that the determination and identification of relevant ads is performed by software, including testing the software to check for errors or undesirable/non-relevant results (See Agarwal ¶0003-¶0004). It therefore would have been obvious to one of ordinary skill in the art at the time of filing to have combined the teachings of Agarwal with the invention of R, as Agarwal teaches the need for advertisement relevance software to be checked and tested to identify errors and undesirable results, and the invention of R discloses software testing.
As per claim 39: The non-transitory computer-readable medium of claim 38, wherein the test input data is derived from one or a combination of user models and promotion models based at least in part on aggregated data that has been collected from previous production runs of the relevance service. (See R ¶0057, quoted above. R discloses the concept of retrieving test data from a database to determine a test case. See also Agarwal ¶0040, quoted above. Agarwal teaches the concept of a relevance service utilizing user information and available promotional information.)

As per claim 40: The non-transitory computer-readable medium of claim 38, wherein the test scenario is associated with a promotion search and represents a request generated by a search platform in response to the search platform receiving a search query. (See Agarwal ¶0040, quoted above. Agarwal teaches the concept of a relevance service receiving a query as part of its input.)

As per claim 41: The non-transitory computer-readable medium of claim 38, wherein the output data comprises one or more promotions ranked based at least in part on respective relevance to a mock user associated with the test scenario or test input data. (See Agarwal ¶0012, quoted above. Agarwal teaches that the output of relevance software can be ranked promotions based on query and user information.)

As per claim 42: The non-transitory computer-readable medium of claim 38, wherein the test configuration data comprises an input parameter comprising a flag that specifies a list of filtering workflows that are to be applied during execution of the test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

As per claim 43: The non-transitory computer-readable medium of claim 38, wherein the test scenario is one of an end-to-end test scenario or a directed test scenario. (See R ¶0089, quoted above. R discloses the concept of the request defining the test as directed towards a specific component or as an end-to-end test.)

Response to Arguments

Applicant's arguments filed 12/08/2025 have been fully considered, but they are not persuasive. In response to the Applicant's arguments directed towards the 35 U.S.C. 103 rejection over the combination of R and Agarwal, the Examiner respectfully disagrees. The Examiner notes that, as shown above, R further discloses the concept of performing comparison/validation of outputs against expected outputs for software testing, including providing different outputs based on the comparison to expected results. Agarwal further teaches the concept of the software being for relevance determination of promotional content. As such, the Examiner asserts that the combination of R and Agarwal teaches the amended claims, and the rejection is maintained.

All rejections made towards the dependent claims are maintained because the applicant's reply fails to distinctly and specifically point out the supposed errors in the Examiner's prior Office Action (37 CFR 1.111). The Examiner asserts that the applicant only argues that the dependent claims should be allowable because the independent claims are unobvious and patentable over R in view of Agarwal.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a).
Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT M CAO, whose telephone number is (571) 270-5598. The examiner can normally be reached Monday - Friday, 11-7. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ILANA SPAR, can be reached at (571) 270-7537. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/VINCENT M CAO/
Primary Examiner, Art Unit 3622

Prosecution Timeline

Jul 08, 2024: Application Filed
Sep 04, 2025: Non-Final Rejection (§103)
Dec 08, 2025: Response Filed
Feb 15, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602668: METHOD FOR GENERATING RECYCLING RECORD OF SOLAR PANEL AND RECYCLING SYSTEM IMPLEMENTING THE SAME (granted Apr 14, 2026; 2y 5m to grant)
Patent 12602709: DYNAMICALLY GENERATING AND SERVING CONTENT ACROSS DIFFERENT PLATFORMS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12561714: SYSTEMS AND METHODS FOR AUTOMATICALLY DETERMINING USER VETERAN ATTRIBUTES AND UPDATING A VETERAN PROFILE (granted Feb 24, 2026; 2y 5m to grant)
Patent 12524782: SYSTEM FOR PROVIDING IMPRESSIONS BASED ON CONSUMER PREFERENCES FOR FUTURE PROMOTIONS (granted Jan 13, 2026; 2y 5m to grant)
Patent 12499463: COMMODITY REGISTRATION SYSTEM AND INFORMATION PROCESSING METHOD (granted Dec 16, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 86% (+31.5%)
Median Time to Grant: 3y 3m
PTA Risk: Moderate

Based on 448 resolved cases by this examiner. Grant probability derived from career allow rate.
