DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed 1/23/2026 has been entered. Claims 1-20 remain pending in the application. Applicant’s amendments to the Specification and Claims have overcome the rejections under 35 U.S.C. 112(b) and the objection noted in the previous Office Action.
Response to Arguments
Applicant's arguments filed 1/23/2026 have been fully considered but they are not persuasive.
Applicant argues that:
The Office relies on RHO's disclosure of fairness and performance data as corresponding to the display limitations of claim 11. The Office asserts that RHO discloses values of various metrics that characterize fairness (virtue score data) presented within a user interface based on user selections. To the contrary, the cited portions of RHO do not disclose the claimed display in accordance with customization data, nor do they disclose displaying a protected attribute as claimed.
Regarding the protected attribute, the argument is moot in view of new grounds of rejection as necessitated by amendment (see below). Regarding displaying in accordance with customization data, the Examiner notes RHO discloses customization data in multiple respects. The first respect is customization of the data set, ML processes, and input features (RHO, Figs. 2B-2C with ¶0061, ¶0066-¶0074). The second respect is customization of the graphical output (RHO, Figs. 3C and 5 with ¶0110, ¶0127, ¶0129).
The remainder of Applicant’s arguments with respect to rejections under prior art have been fully considered and are moot upon a new ground(s) of rejection, as necessitated by amendment, as outlined below.
Prior Art
Listed herein below are the prior art references relied upon in this Office Action:
RHO et al. (US Patent Application Publication 2022/0067580), referred to as RHO herein [previously cited].
Sadaghiani (US Patent Number 10,929,756), referred to as Sadaghiani herein [previously cited].
Alford et al. (US Patent Application Publication 2023/0144166), referred to as Alford herein [previously cited].
Mishraky et al. (US Patent Application Publication 2021/0209499), referred to as Mishraky herein.
Examiner’s Note
Strikethrough notation in the pending claims has been added by the Examiner.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 9-12 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani in further view of Mishraky.
Regarding claim 1, RHO discloses a method comprising: generating, via a machine that includes at least one processor and a non-transitory machine-readable storage medium and utilizing a graphical user interface, a content analysis control panel (RHO, Figs. 1, 2B-2C and 3C - ¶0020-¶0021, ¶0066-¶0074 and ¶0110, ¶0127 – control interface panel displayed on an analyst device including a processor executing instructions stored in hardware memory);
receiving, via the machine, customization data that indicates a
generating, via the machine, predicted virtue score data associated with the content data for each of the
displaying, via the content analysis control panel and in accordance with the customization data, of: (RHO, Fig. 3A-C with ¶0015, ¶0043-¶0044, ¶0091, ¶0108-¶0110 – values of selected features and various metrics that characterize fairness (virtue score data) presented within the user interface based on user selections. Analyst-specified features and metrics indicative of the contribution of those features. The impact of input features is characterized via evaluation metrics).
However, RHO appears not to expressly disclose the limitations in strikethrough above. In the same field of endeavor, Sadaghiani discloses machine learning model evaluation (Sadaghiani, Abstract, 9:22-42), including
generating an ML explainability score (Sadaghiani, 9:42-63 – value indicating a degree of explainability).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the explainability data of RHO to include an explainability score based on the teachings of Sadaghiani. The motivation for doing so would have been to enable users to more easily quantify how explainable a model is and whether that model meets an explainability threshold, additionally assisting users with comparing the explainability of different selected models.
However, RHO as modified does not appear to expressly disclose a protected attribute. In the same field of endeavor, Mishraky discloses evaluating and regulating machine-learning fairness (Mishraky, Abstract), including,
a protected attribute (Mishraky, ¶0007-¶0008, ¶0015, ¶0038, ¶0042 – fairness is assessed based on non-discrimination of protected attributes such as gender, race, age).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the input features and contribution display of RHO to include protected attributes based on the teachings of Mishraky. The motivation for doing so would have been to ensure fairness on identified attributes that should not have an influence in the outcome of the system, especially when the variable leading to potential discrimination is explicitly available in the data (Mishraky, ¶0007-¶0008).
Regarding claim 9, RHO as modified discloses the elements of claim 1 above, and further discloses wherein the content data is an Artificial Intelligence (AI) model (RHO, Fig. 2B with ¶0066 – input a selected ML or AI process. Figs. 3A-C with ¶0108, ¶0111 – fairness data describing the performance of the selected ML or AI process. Sadaghiani, 9:22-63 – value indicating a degree of explainability of a target AI model).
Regarding claim 10, RHO discloses the elements of claim 1 above, and further discloses wherein the presentation parameters include a customized selection of at least one of: at least one statistic, at least one chart, or at least one graph (RHO, Figs. 2B-2C with ¶0066-¶0074 – analytical period, data set selection, sample size, feature data selection, and data set segmentation selection. Figs. 3C and 5 with ¶0110, ¶0127, ¶0129 – analyst can select fairness data, explainability, and performance data. ¶0043 – dependency plots).
Regarding claim 11, RHO as modified discloses the elements of claim 1 above, and further discloses wherein at least one protected attribute is based on at least one of: gender, sex, race, age, religion, ethnicity, sexual preference, or disabilities (Mishraky, ¶0007-¶0008, ¶0038 – fairness is assessed based on non-discrimination of protected attributes such as gender, race, age).
Regarding claim 12, RHO discloses the elements of claim 1 above, and further discloses facilitating selection of the content data from at least one of: an AI model, or a content source (RHO, Fig. 2B with ¶0066 – input a selected ML or AI process. Figs. 2B-2C with ¶0066-¶0074 – analytical period, data set selection, sample size, feature data selection, and data set segmentation selection).
Claim(s) 2, 3, 4, 6-8, and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani in further view of Mishraky in further view of Alford.
Regarding claim 2, RHO as modified discloses the elements of claim 1 above, and further discloses wherein the plurality of virtue scoring models include a plurality of
However, RHO as modified appears not to expressly disclose the limitations in strikethrough above. In the same field of endeavor, Alford discloses assessing moral outcomes from machine AI models (Alford, ¶0007-¶0009), including
wherein the plurality of virtue scoring models include a plurality of artificial intelligence (AI) models that are each trained based on survey data to generate portions of the predicted virtue score data indicating a corresponding one of a plurality of scores (Alford, ¶0090, ¶0099 – morality scores are output from a cost/reward function. Emotional states are input into the cost/reward function. Figs. 1 and 4 with ¶0063-¶0071, ¶0089 – predicted emotional states are generated by a trained AI algorithm. Abstract with ¶0057-¶0058 – humans are surveyed regarding real-life scenarios to determine their emotional output, which is used to train the emotional response engine).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the scoring metrics of RHO to include performing the scoring by a trained AI model and scoring morality, based on the teachings of Alford. The motivation for doing so would have been to aid users in accurately evaluating a wider range of moral implications in complex real-life scenarios (Alford, ¶0004-¶0005, ¶0030).
Regarding claim 3, RHO as modified discloses the elements of claim 2 above, and further discloses wherein the plurality of AI models includes a responsibility model and the plurality of scores includes a responsibility score that is based on the content data (RHO, Figs. 3C and 5 with ¶0110, ¶0127, ¶0129 – analyst can select fairness data (responsibility model), explainability, and performance data. Alford, ¶0090, ¶0099 – morality scores are output from a cost/reward function).
Regarding claim 4, RHO as modified discloses the elements of claim 2 above, and further discloses wherein the plurality of AI models includes an equitability model and the plurality of scores includes an equitability score that is based on an amount of bias in the content data (RHO, Figs. 3C and 5 with ¶0110, ¶0127, ¶0129 – analyst can select fairness data (equitability model), explainability, and performance data).
Regarding claim 6, RHO as modified discloses the elements of claim 2 above, and further discloses wherein the plurality of AI models includes an explainability model and the plurality of scores includes an explainability score associated with the content data (Sadaghiani, 9:42-63 – value indicating a degree of explainability).
Regarding claim 7, RHO as modified discloses the elements of claim 2 above, and further discloses wherein the plurality of AI models includes a morality model and the plurality of scores includes a morality score associated with the content data (RHO, Fig. 3A-C with ¶0015, ¶0044, ¶0091, ¶0108-¶0109 – values of various metrics that characterize fairness. Alford, ¶0090, ¶0099 – morality scores).
Regarding claim 8, RHO as modified discloses the limitations of claim 2 above, and further discloses generating improvement data associated with at least one of the plurality of scores (RHO, ¶0061, ¶0091 – data characterizing trends in evaluation metrics).
Regarding claim 13, RHO as modified discloses the elements of claim 1 above. However, RHO appears not to expressly disclose generating, based on user input, survey data corresponding to a survey; collecting survey results data in response to the survey; and facilitating generation of a custom virtue scoring model of the plurality of virtue scoring models.
In the same field of endeavor, Alford discloses assessing moral outcomes from machine AI models (Alford, ¶0007-¶0009), including
generating, based on user input, survey data corresponding to a survey; collecting survey results data in response to the survey; and facilitating generation of a custom virtue scoring model of the plurality of virtue scoring models (Alford, Figs. 1 and 4 with ¶0063-¶0071, ¶0089 – predicted emotional states are generated by a trained AI algorithm. Abstract with ¶0057-¶0058 – humans are surveyed regarding real-life scenarios to determine their emotional output, which is used to train the emotional response engine. ¶0075 – customized emotional response engine based on surveys selected from specific countries or cultures).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the scoring metrics of RHO to include performing the scoring by a trained AI model and scoring morality, based on the teachings of Alford. The motivation for doing so would have been to aid users in accurately evaluating a wider range of moral implications in complex real-life scenarios (Alford, ¶0004-¶0005, ¶0030).
Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani in further view of Mishraky in further view of Alford in further view of Ghosh.
Regarding claim 5, RHO as modified discloses the elements of claim 2 above, and further discloses wherein the plurality of AI models includes a reliability model and the plurality of scores includes a reliability
However, RHO as modified above appears not to expressly disclose the limitations in strikethrough above. In the same field of endeavor, Ghosh discloses auditing ML models for ethical and moral obligations (Ghosh, ¶0031), including
wherein the plurality of AI models includes a reliability model and the plurality of scores includes a reliability score that indicates variations in others of the plurality of scores (Ghosh, ¶0208-¶0209, ¶0216-¶0217, ¶0251-¶0256, ¶0276, ¶0284-¶0285, and ¶0289-¶0290 with equation 9 – AI robustness is calculated. Robustness measures how difficult the model is to deceive or move past a decision boundary. ¶0289 – the counterfactuals are used to assess the impartiality of the model as well as the robustness. Robustness score may be displayed as part of cognitive insight output).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the scoring metrics of RHO as modified to include robustness based on the teachings of Ghosh. The motivation for doing so would have been to afford users insight into how well a model withstands or overcomes perturbations on the moral or ethical outcome (Ghosh, ¶0208).
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani.
Regarding claim 14, RHO discloses a system comprising: a network interface configured to communicate via a network; at least one processor; a non-transitory machine-readable storage medium that stores operational instructions that, when executed by the at least one processor, cause the at least one processor to perform operations that include: generating, via the at least one processor and utilizing a graphical user interface, a content analysis control panel (RHO, Figs. 1, 2B-2C and 3C - ¶0020-¶0021, ¶0066-¶0074 and ¶0110, ¶0127 – control interface panel displayed on an analyst device including a processor executing instructions stored in hardware memory);
receiving, via the at least one processor, customization data that indicates a
generating, via the at least one processor, predicted virtue score data associated with the content data for each of the (RHO, Fig. 3A-C with ¶0015, ¶0044, ¶0091, ¶0108-¶0109 – values of various metrics that characterize fairness (virtue score data) presented within the user interface based on user selections); and
displaying, via the content analysis control panel and in accordance with the customization data, of at least one of: at least one protected attribute, or at least one key performance indicator (RHO, Fig. 3A-C with ¶0015, ¶0044, ¶0091, ¶0108-¶0110 – values of various metrics that characterize fairness (virtue score data) presented within the user interface based on user selections. Note also, Selection of performance information results in a graphical representation of performance data for the analyst-specified features).
However, RHO appears not to expressly disclose the limitation in strikethrough above. In the same field of endeavor, Sadaghiani discloses machine learning model evaluation (Sadaghiani, Abstract, 9:22-42), including
generating an ML explainability score (Sadaghiani, 9:42-63 – value indicating a degree of explainability).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the explainability data of RHO to include an explainability score based on the teachings of Sadaghiani. The motivation for doing so would have been to enable users to more easily quantify how explainable a model is and whether that model meets an explainability threshold, additionally assisting users with comparing the explainability of different selected models.
Claim(s) 15-17, 19, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani in further view of Alford.
Regarding claim 15, RHO as modified discloses the elements of claim 14 above, and further discloses wherein the plurality of virtue scoring models include a plurality
However, RHO as modified appears not to expressly disclose the limitations in strikethrough above. In the same field of endeavor, Alford discloses assessing moral outcomes from machine AI models (Alford, ¶0007-¶0009), including
wherein the plurality of virtue scoring models include a plurality of artificial intelligence (AI) models that are each trained based on survey data to generate portions of the predicted virtue score data indicating a corresponding one of a plurality of scores (Alford, ¶0090, ¶0099 – morality scores are output from a cost/reward function. Emotional states are input into the cost/reward function. Figs. 1 and 4 with ¶0063-¶0071, ¶0089 – predicted emotional states are generated by a trained AI algorithm. Abstract with ¶0057-¶0058 – humans are surveyed regarding real-life scenarios to determine their emotional output, which is used to train the emotional response engine).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the scoring metrics of RHO to include performing the scoring by a trained AI model and scoring morality, based on the teachings of Alford. The motivation for doing so would have been to aid users in accurately evaluating a wider range of moral implications in complex real-life scenarios (Alford, ¶0004-¶0005, ¶0030).
Regarding claim 16, RHO as modified discloses the elements of claim 15 above, and further discloses wherein the plurality of AI models includes a responsibility model and the plurality of scores includes a responsibility score that is based on the content data (RHO, Figs. 3C and 5 with ¶0110, ¶0127, ¶0129 – analyst can select fairness data (responsibility model), explainability, and performance data. Alford, ¶0090, ¶0099 – morality scores are output from a cost/reward function).
Regarding claim 17, RHO as modified discloses the elements of claim 15 above, and further discloses wherein the plurality of AI models includes an equitability model and the plurality of scores includes an equitability score that is based on an amount of bias in the content data (RHO, Figs. 3C and 5 with ¶0110, ¶0127, ¶0129 – analyst can select fairness data (equitability model), explainability, and performance data).
Regarding claim 19, RHO as modified discloses the elements of claim 15 above, and further discloses wherein the plurality of AI models includes an explainability model and the plurality of scores includes an explainability score associated with the content data (Sadaghiani, 9:42-63 – value indicating a degree of explainability).
Regarding claim 20, RHO as modified discloses the elements of claim 15 above, and further discloses wherein the plurality of AI models includes a morality model and the plurality of scores includes a morality score associated with the content data (RHO, Fig. 3A-C with ¶0015, ¶0044, ¶0091, ¶0108-¶0109 – values of various metrics that characterize fairness. Alford, ¶0090, ¶0099 – morality scores).
Claim(s) 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over RHO in view of Sadaghiani in further view of Alford in further view of Ghosh.
Regarding claim 18, RHO as modified discloses the elements of claim 15 above, and further discloses wherein the plurality of AI models includes a reliability model and the plurality of scores includes a reliability
However, RHO as modified above appears not to expressly disclose the limitations in strikethrough above. In the same field of endeavor, Ghosh discloses auditing ML models for ethical and moral obligations (Ghosh, ¶0031), including
wherein the plurality of AI models includes a reliability model and the plurality of scores includes a reliability score that indicates variations in others of the plurality of scores (Ghosh, ¶0208-¶0209, ¶0216-¶0217, ¶0251-¶0256, ¶0276, ¶0284-¶0285, and ¶0289-¶0290 with equation 9 – AI robustness is calculated. Robustness measures how difficult the model is to deceive or move past a decision boundary. ¶0289 – the counterfactuals are used to assess the impartiality of the model as well as the robustness. Robustness score may be displayed as part of cognitive insight output).
Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified the scoring metrics of RHO as modified to include robustness based on the teachings of Ghosh. The motivation for doing so would have been to afford users insight into how well a model withstands or overcomes perturbations on the moral or ethical outcome (Ghosh, ¶0208).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL W PARCHER whose telephone number is (303)297-4281. The examiner can normally be reached Monday - Friday, 9:00am - 5:00pm, Mountain Time.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore can be reached at (571)272-4088 (Eastern Time). The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DANIEL W PARCHER/Primary Examiner, Art Unit 2174