Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 21 – 40 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 07 August 2025 is being considered by the examiner.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 21 – 40 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3 – 6, 8, 10, and 12 – 14 of U.S. Patent No. 12,248,524. Although the claims at issue are not identical, they are not patentably distinct from each other, as shown in the mapping below.
Current Application 19/069,486
U.S. Patent No. 12,248,524
21. (New) A system for monitoring and optimizing work assistant search engines implemented within a computing system, comprising: a memory; and one or more processing devices, operatively coupled with the memory, to: receive diagnostic data of a work assistant search engine using an initial score model, wherein the diagnostic data includes non-identifying information associated with a user and a query; determine, based on the diagnostic data generated by the work assistant search engine while using the initial score model, a search quality metric value associated with performance of the work assistant search engine; generate, in response to determining that the search quality metric value fails to satisfy a threshold value, an updated score model for the work assistant search engine; and initiate an update to the work assistant search engine to use the updated score model.
1. A system for monitoring and optimizing work assistant search engines that are implemented within secured enterprise user computing systems, comprising:
a storage device; and
one or more processing devices, communicatively connected to the storage device, to:
receive diagnostic data of a work assistant search engine from a secured enterprise user computing system, the diagnostic data comprising feature values and scores, wherein the secured enterprise user computing system is safe from transmitting non-diagnostic data to the system, wherein the feature values are an abstract layer calculated based on a query, an identifier of a user requesting the query, and data objects representing documents identified by the work assistant search engine responsive to the query, and the abstract layer is devoid of personal information of the user or content of the documents identified by the work assistant search engine, and wherein the scores are calculated by applying a score model to the feature values;
determine, by analyzing the diagnostic data, a search quality metric value associated with the feature values and the scores;
responsive to determining that the search quality metric value differs from a target search quality metric value by a predetermined threshold value, determine an updated score model; and
provide the updated score model to the secured enterprise user computing system to update the work assistant search engine.
22. (New) The system of claim 21, wherein the diagnostic data comprises at least one of features or scores associated with search queries issued to the work assistant search engine.
1. A system for monitoring and optimizing work assistant search engines that are implemented within secured enterprise user computing systems, comprising:
a storage device; and
one or more processing devices, communicatively connected to the storage device, to:
receive diagnostic data of a work assistant search engine from a secured enterprise user computing system, the diagnostic data comprising feature values and scores, wherein the secured enterprise user computing system is safe from transmitting non-diagnostic data to the system, wherein the feature values are an abstract layer calculated based on a query, an identifier of a user requesting the query, and data objects representing documents identified by the work assistant search engine responsive to the query, and the abstract layer is devoid of personal information of the user or content of the documents identified by the work assistant search engine, and wherein the scores are calculated by applying a score model to the feature values;
determine, by analyzing the diagnostic data, a search quality metric value associated with the feature values and the scores;
responsive to determining that the search quality metric value differs from a target search quality metric value by a predetermined threshold value, determine an updated score model; and
provide the updated score model to the secured enterprise user computing system to update the work assistant search engine.
23. (New) The system of claim 22, wherein the scores comprise at least one of a topicality score associated with a quality of contents of a document with respect to a search query and a popularity score associated with access features of the document.
4. The system of claim 1, wherein the diagnostic data further comprises:
a history of user interactions associated with the documents identified by the work assistant search engine responsive to the query,
at least one factor value for calculating a topicality feature value, wherein the at least one factor value comprises at least one of a word ratio factor or a temporal factor between the query and a corresponding data object representing a document, and
an affinity value and a signal for calculating a polarity feature value, wherein the affinity value representing an affinity relation between a user and a corresponding document, and the signal is at least of a deprecation signal, a staleness signal, a visits-affinity signal, or a scaled popularity signal, and
wherein the diagnostic data is devoid of personal information of the user, contents of the query, or contents of the documents, and wherein the user interactions comprise at least one of a click on a hyperlink associated with one of the documents or a type of the click
24. (New) The system of claim 23, wherein the one or more processing devices are further configured to: generate one or more potential score models by selecting varying features for each model, functions for calculating the topicality score, functions for calculating the popularity score, and functions for calculating a relevancy score; perform simulations of the one or more potential score models by applying the one or more potential score models to the features included in the diagnostic data to calculate simulated scores for each of the one or more potential score models; and select an updated score model from the one or more potential score models based on user activity.
5. The system of claim 4, wherein to determine an updated score model, the one or more processing devices are further to:
perform simulations by applying one or more updated score models to the feature values to calculate simulated scores, wherein the one or more updated score models are updated by varying functions for calculating the scores based on feature values, functions for calculating the topical feature value based on the at least one factor, or functions for calculating the popularity feature value based on the affinity value and the signal; and
determine the updated score model based on the user's activities.
25. (New) The system of claim 24, wherein the updated score model comprises a neural network, and wherein the simulations comprise training of the neural network.
6. The system of claim 5, wherein the updated score model comprises a neural network, and wherein the simulations comprise a training of the neural network.
26. (New) The system to claim 21, wherein the system is deployed in a third-party computing environment that is securely separate from a computing system executing the work assistant search engine.
8. The system of claim 1, wherein the system is deployed in a software provider computing environment that is securely separate from the enterprise computing environment providing the work assistant search engine.
27. (New) The system of claim 21, wherein the one or more processing devices are further configured to: record diagnostic data over a period of time; calculate a statistics value of the diagnostic data over time; and determine the search quality metric value associated with the diagnostic data based on the statistics value.
3. The system of claim 1, wherein to determine, by analyzing the diagnostic data, a search quality metric value associated with the feature values and the scores, the one or more processing devices are further to:
record the feature values over time;
calculate a statistics value of the feature values over time; and
determine the search quality metric value associated with the feature values and the scores based on the statistics value.
28. (New) The system of claim 21, wherein the diagnostic data comprises: user interactions associated with documents identified by the work assistant search engine responsive to user queries, factor values associated with document content for calculating a topicality feature value, and an affinity value indicating user affinity for a corresponding document and one or more signals indicating use or staleness of the document, wherein the affinity value and one or more signals are used to calculate a popularity score.
4. The system of claim 1, wherein the diagnostic data further comprises:
a history of user interactions associated with the documents identified by the work assistant search engine responsive to the query,
at least one factor value for calculating a topicality feature value, wherein the at least one factor value comprises at least one of a word ratio factor or a temporal factor between the query and a corresponding data object representing a document, and
an affinity value and a signal for calculating a polarity feature value, wherein the affinity value representing an affinity relation between a user and a corresponding document, and the signal is at least of a deprecation signal, a staleness signal, a visits-affinity signal, or a scaled popularity signal, and
wherein the diagnostic data is devoid of personal information of the user, contents of the query, or contents of the documents, and wherein the user interactions comprise at least one of a click on a hyperlink associated with one of the documents or a type of the click
Claims 29 – 36 are rejected using a mapping similar to that shown above for claims 1, 3 – 6, and 8 of the '524 Patent, as well as claims 10 and 12 – 14.
Claims 37 – 40 are rejected using a mapping similar to that shown above for claims 1, 4, and 5 of the '524 Patent.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 21 – 40 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2007/0033221 to Copperman et al. (hereinafter referred to as Copperman) in view of U.S. Patent Application Publication No. 2018/0032890 to Podgorny et al. (hereinafter referred to as Podgorny).
As to claim 21, Copperman discloses a system for monitoring and optimizing work assistant search engines implemented within a computing system, comprising:
a memory (computer components including memory, storage device, interface (display and input) devices, and CPU(s), see Copperman: Para. 0038); and
one or more processing devices, operatively coupled with the memory (computer components including storage device, interface (display and input) devices, and CPU(s), see Copperman: Para. 0038), to:
receive diagnostic data of a work assistant search engine using an initial score measure (receiving a report including diagnostics such as results and f-measure for a taxonomy in response to a query and performing taxonomy improvement process based on the results and f-measures, see Copperman: Para. 0139 – 0145 and 0155 – 0161, see also assigning scores for classifications of the classifier for improving the taxonomy using a test on train (TOT) process, see Copperman: Para. 0081 – 0082 and 0139 – 0145),
determine, based on the diagnostic data generated by the work assistant search engine while using the initial score measure, a search quality metric value associated with performance of the work assistant search engine (determining the f-measure for the taxonomy based on results in the improvement of the taxonomy process and ranking knowledge containers based on query results using the improved taxonomy, see Copperman: Para. 0139 – 0145 and Para. 0155 – 0161);
generate, in response to determining that the search quality metric value fails to satisfy a threshold value, an updated score measure for the work assistant search engine (determining whether the f-measure of the taxonomy exceeds 85%, repeating training of the text classifier until the f-measure exceeds the threshold, and ranking knowledge containers based on query results using the improved taxonomy, see Copperman: Para. 0139 – 0145 and Para. 0155 – 0161; not meeting a threshold is failing to satisfy, and training of the classifier is updating of the classifier (model)); and
initiate an update to the work assistant search engine to use the updated score measure (taxonomy improvement process including adding/removing documents to train the model with good concepts until the f-measure threshold is met and ranking knowledge containers based on query results using the improved taxonomy, see Copperman: Para. 0139 - 0145 and Para. 0155 – 0161, improving the taxonomy of the knowledge base thereby improves the functioning of the search engine).
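For illustration of the mapped logic, the taxonomy-improvement loop attributed to Copperman above (compute an f-measure for the classifier, compare it to the 85% threshold, and retrain until the threshold is satisfied) can be sketched as follows. The function names, signatures, and the retraining stub are hypothetical and are not drawn from the cited references.

```python
# Hypothetical sketch of a threshold-gated retraining loop of the kind
# described above: evaluate an f-measure and retrain until it meets an
# 85% threshold. All names here are illustrative, not from the record.

F_MEASURE_THRESHOLD = 0.85

def f_measure(precision: float, recall: float, beta: float = 1.0) -> float:
    """Weighted harmonic mean of precision and recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def improve_taxonomy(evaluate, retrain, max_rounds: int = 10) -> float:
    """Retrain the classifier until the f-measure satisfies the threshold.

    evaluate() returns a (precision, recall) pair for the current model;
    retrain() updates the model, e.g. by adding/removing training documents.
    """
    score = f_measure(*evaluate())
    rounds = 0
    while score < F_MEASURE_THRESHOLD and rounds < max_rounds:
        retrain()
        score = f_measure(*evaluate())
        rounds += 1
    return score
```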
However, Copperman does not explicitly disclose initial score model; wherein the diagnostic data includes non-identifying information associated with a user and a query; and an updated score model.
Podgorny teaches initial score model (hybrid predictive model, see Podgorny: Para. 0009, 0064, 0078 - 0079);
wherein the diagnostic data includes non-identifying information associated with a user and a query (determination of content quality scores based on the likelihood that customer support content is relevant to a user’s search query by the hybrid predictive model trained using user feedback, see Podgorny: Para. 0009, 0064, 0078 - 0079); and
an updated score model (training the hybrid predictive model based on user feedback, see Podgorny: Para. 0009, 0064, 0078 - 0079).
Podgorny and Copperman are analogous art due to their disclosure of question-and-answer knowledge bases.
Therefore, it would have been obvious to modify Copperman’s use of improving a search engine by determining f-measure for taxonomy tags of the knowledge base for the search engine with Podgorny’s use of training hybrid predictive models using quality scores based on user feedback in order to improve content searching in a question and answer customer support system by using a crowd-machine learning hybrid predictive model (see Podgorny: Para. 0007).
As to claim 22, Copperman modified by Podgorny discloses the system of claim 21, wherein the diagnostic data comprises at least one of features or scores associated with search queries issued to the work assistant search engine (diagnostics including f-measure (precision and recall measure) for document tags returned from a knowledge container taxonomy, see Copperman: Para. 0139 – 0145, and including resulting documents from the query and document tags, see Copperman: Para. 0145 – 0161).
As to claim 23, Copperman modified by Podgorny discloses the system of claim 22, wherein the scores comprise at least one of a topicality score associated with a quality of contents of a document with respect to a search query and a popularity score associated with access features of the document (probabilistic topic model generating topic model output including topic scores associated with topic terms and probabilistic topic terms that are related to one or more topics that are used to determine relevant customer support content for the search query, see Podgorny: Para. 0075, and a popularity ranking based on relevance determined by the topic, see Podgorny: Para. 0051).
As to claim 24, Copperman modified by Podgorny discloses the system of claim 23, wherein the one or more processing devices are further configured to:
generate one or more potential score models by selecting varying features for each model, functions for calculating the topicality score, functions for calculating the popularity score, and functions for calculating a relevancy score (one or more hybrid predictive models to identify search results to provide to users in response to search queries, see Podgorny: Para. 0045, and probabilistic topic model of the hybrid predictive model generating topic model output including topic scores associated with topic terms and probabilistic topic terms that are related to one or more topics that are used to determine relevant customer support content for the search query, see Podgorny: Para. 0075, and a popularity ranking based on relevance determined by the topic, see Podgorny: Para. 0051);
perform simulations of the one or more potential score models by applying the one or more potential score models to the features included in the diagnostic data to calculate simulated scores for each of the one or more potential score models (estimating relevance of existing customer support content using machine learning to estimate a likelihood and train the hybrid predictive models, see Podgorny: Para. 0072 – 0076, using machine learning to estimate likelihood/relevancy in lieu of direct user feedback is a simulation in place of user feedback); and
select an updated score model from the one or more potential score models based on user activity (training hybrid predictive models based on machine learning feedback and/or user feedback, see Podgorny: Para. 0045, 0051 and 0072 – 0076).
As to claim 25, Copperman modified by Podgorny discloses the system of claim 24, wherein the updated score model comprises a neural network, and wherein the simulations comprise training of the neural network (the hybrid predictive model is generated and trained using artificial neural networks, see Podgorny: Para. 0074).
As to claim 26, Copperman modified by Podgorny discloses the system of claim 21, wherein the system is deployed in a third-party computing environment that is securely separate from a computing system executing the work assistant search engine (system is accomplished through integration of the customer database and external repositories with a customer relationship management system, see Copperman: Para. 0065, wherein the system includes an e-service portal, see Para. 0037).
As to claim 27, Copperman modified by Podgorny discloses the system of claim 21, wherein the one or more processing devices are further configured to:
record diagnostic data over a period of time (tags including times of submission and submitter ID amongst others, see Copperman: Para. 0098, the input documents and tags thereof received over time);
calculate a statistics value of the diagnostic data over time (calculate quality values for the tags, see Copperman: Para. 0159 – 0170, the documents and tags received over time); and
determine the search quality metric value associated with the diagnostic data based on the statistics value (determining values about the quality of the source knowledge container and providing ranked listed ordered by relevance in response to the query, see Copperman: Para. 0159 – 0170).
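The sequence recited in claim 27 (record diagnostic values over a period of time, compute a statistic over the recording, and derive a search quality metric from that statistic) can be sketched as follows for illustration. The class, its methods, and the choice of a mean as the statistic are hypothetical and are not drawn from the claims or the cited references.

```python
# Hypothetical sketch of the recited record/statistic/metric sequence.
# The names and the mean statistic are illustrative choices only.
from statistics import mean

class DiagnosticRecorder:
    def __init__(self):
        self.values = []          # diagnostic values recorded over time

    def record(self, value: float) -> None:
        self.values.append(value)

    def statistic(self) -> float:
        # one possible statistic over the recorded values
        return mean(self.values)

    def quality_metric(self, target: float) -> float:
        # metric expressed as deviation of the statistic from a target value
        return abs(self.statistic() - target)
```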
As to claim 28, Copperman modified by Podgorny discloses the system of claim 21, wherein the diagnostic data comprises:
user interactions associated with documents identified by the work assistant search engine responsive to user queries (f-measure based on a test on train report for evaluation of manually entered (user) tags and automatically generated tags, see Copperman: Para. 0139 – 0145 and 0147 – 0149),
factor values associated with document content for calculating a topicality feature value (f-measure values used in the inspection of assigned topics in determining appropriate tags, see Copperman: Para. 0139 – 0145), and
an affinity value indicating user affinity for a corresponding document and one or more signals indicating use or staleness of the document, wherein the affinity value and one or more signals are used to calculate a popularity score (determining level of quality for the taxonomy tags based on taxonomic distance and level of previous user satisfaction with the knowledge container based on user feedback, see Copperman: Para. 0159).
Claims 29 – 36 are rejected using similar rationale to the rejection of claims 21 – 28 above.
Claims 37 – 40 are rejected using similar rationale to the rejection of claims 21 – 24 above.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARK E HERSHLEY whose telephone number is (571)270-7774. The examiner can normally be reached M-F: 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Amy Ng can be reached at (571) 270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MARK E HERSHLEY/Primary Examiner, Art Unit 2164