DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/17/2025 has been entered.
Response to Arguments
Applicant's arguments filed 11/17/2025 have been fully considered but they are not persuasive.
Regarding applicant’s remarks directed to the rejection of claims under 35 USC § 101, the applicant argues that the amended claims are directed to a technical solution. Examiner respectfully agrees and withdraws the prior rejections of claims under 35 USC § 101.
Regarding applicant’s remarks directed to the rejection of claims under 35 USC § 103, the arguments are directed to newly amended limitations that were not previously examined by the examiner. Therefore, applicant's arguments are rendered moot. The examiner refers to the rejection under 35 USC § 103 in the current Office action for more details.
Claim Rejections - 35 USC § 112(b)
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-2, 4-13 and 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 and analogous claim 12 recite “the ethics datastore” and “the associated ethics data.” There is insufficient antecedent basis for these elements in the claims.
Claim 1 and analogous claim 12 recite “when the decision engine controls an automated system operation in response to the decision by enabling execution of a production process when the combined ethics score is above the threshold score and suspending or delaying the execution when the combined ethics score is equal to or below the threshold score, and initiates a control signal or automated instruction to a networked or hardware component to execute the enabled production process, thereby changing an operational state of the system in response to the ethics-based decision.” It is unclear whether the initiation step occurs when the combined ethics score is above the threshold score or equal to/below the threshold score.
For examination purposes, Examiner interprets the initiation step to be performed when the combined ethics score is above the threshold score.
Claims 2 and 4-11 are further rejected by virtue of their dependency on claim 1.
Claims 13 and 15-20 are further rejected by virtue of their dependency on claim 12.
Claim 4 and analogous claim 15 recite “a historian.” There is insufficient antecedent basis for this element in the claims.
Claim Rejections - 35 USC § 112(d)
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claim 4 and analogous claim 15 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends.
The dependent claims fail to further limit the subject matter of claim 1 and analogous claim 12 as the amended base claims recite “wherein the decision engine records the decision and the associated ethics data in a historian that is part of the ethics datastore;” which Examiner notes is substantially the same as the dependent claims’ recitation of “recording the decision made by the decision engine in a historian, wherein the historian is part of the ethics datastore.”
Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-2, 4, 6-13, 15 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US Pub No. US20210097448A1 to Gonzalez et al. (“Gonzalez”) in view of Nevanperä, Minna, "Aspects to responsible artificial intelligence: ethics of artificial intelligence and ethical guidelines in SHAPES Project" (2021) (“Nevanperä”), in further view of US Pub No. US20210224663A1 to Gil et al. (“Gil”).
In regards to claim 1,
Gonzalez teaches A method, comprising: receiving, by an ethics engine implemented on one or more processors, an output from an automated process that operated on an input asset;
(Gonzalez, “[0092] The sequence diagram starts at step S100 when a set of AI systems send their logs in a context [an output from an automated process that operated on an input asset]. In step S110, the information collector processes the logs and send the new information to the conflict auditor [an ethics engine].”)
Gonzalez teaches labeling the output with an ethics score associated with the input asset
(Gonzalez, “[0084] Confidence Score Generator 60: This component may use heuristics, majority vote and closest neighbour modules, along with historical data to calculate a confidence score for each AI system using a multi-method approach and perform confidence computation based on stored standards or guidelines, such as ethically-aligned confidence computations based on published AI ethics guidelines [labeling the output with an ethics score associated with the input asset; ie ethically-aligned confidence of the logs of the respective AI system]. These may be confidence computations based in other standards, e.g. health and safety/critical systems where standards/guidelines apply.”)
Gonzalez teaches and retrieving, from the ethics datastore, the ethics score associated with the input asset and an ethics score associated with the automated process,
(Gonzalez, [0092], “In step S120 the conflict auditor requests the historical confidence scores of each AI system based on a specific feature (for example taking into account previous de-activation) [the ethics score associated with the input asset and an ethics score associated with the automated process] and with this information calculates the global information (GI).”; see Figure 4 for retrieving the confidence scores (which were ethically-aligned) from the historical data.)
Gonzalez teaches and further based on contextual rules, machine assertions, and peer assessments stored in the ethics datastore;
(Gonzalez, [0025], “In particular, embodiments focus on preventing harm caused by errors/failures in (an) AI system(s) that are detected using the collective knowledge of other AI systems [machine assertions, and peer assessments; wherein the knowledge of other AI systems could be peer assessments] that are presented in a same context [contextual rules] (physical setting). For this, traceability of the systems and reconciliation of conflicting information is essential.”)
(Gonzalez, “[0090] Historical data 80: this local or remote storage component stores and traces all the information received, produced and sent by the different components of the system [stored in the ethics datastore].”)
Gonzalez teaches generating, by the ethics engine, a labeled output comprising metadata linking the output to the ethics scores of the input asset and the automated process;
(Gonzalez, “[0084] Confidence Score Generator 60: This component may use heuristics, majority vote and closest neighbour modules, along with historical data to calculate a confidence score for each AI system using a multi-method approach and perform confidence computation based on stored standards or guidelines, such as ethically-aligned confidence computations based on published AI ethics guidelines [generating, by the ethics engine, a labeled output comprising metadata linking the output to the ethics scores of the input asset and the automated process; ethically-aligned confidence of the logs of the respective AI system]. These may be confidence computations based in other standards, e.g. health and safety/critical systems where standards/guidelines apply.”)
However, Gonzalez does not explicitly teach each ethics score being based on quantified measurements of ethical pillars including accountability, value alignment, explainability, fairness, and user data rights,… receiving the labeled output into a decision engine configured to make a decision regarding an action based on the labeled output; and evaluating the ethics scores included in the output, by the decision engine, and making the decision to perform the action when a combined ethics score computed from the ethics scores of the input asset and the automated process is above a threshold score and then performing the action, when the decision engine controls an automated system operation in response to the decision by enabling execution of a production process when the combined ethics score is above the threshold score and suspending or delaying the execution when the combined ethics score is equal to or below the threshold score, and initiates a control signal or automated instruction to a networked or hardware component to execute the enabled production process, thereby changing an operational state of the system in response to the ethics-based decision, and input is requested when the combined ethics score is below or equal to the threshold score before the action is performed, and wherein the decision engine records the decision and the associated ethics data in a historian that is part of the ethics datastore.
Nevanperä teaches each ethics score being based on quantified measurements of ethical pillars including accountability, value alignment, explainability, fairness, and user data rights,
(Nevanperä, pg. 35, section 5.3.1; “Firstly, they give five areas of ethical focus: Accountability, value alignment, explainability, fairness and user data rights. These can be seen as a selection of European Commission’s most important guidelines that are more straightforward to take into action.”).
Gil teaches receiving the labeled output into a decision engine configured to make a decision regarding an action based on the labeled output; and evaluating the ethics scores included in the output, by the decision engine, and making the decision to perform the action when a combined ethics score computed from the ethics scores of the input asset and the automated process is above a threshold score and then performing the action, when the decision engine controls an automated system operation in response to the decision by enabling execution of a production process when the combined ethics score is above the threshold score
(Gil, “[0090] If the performance measure and/or the confidence measure is above a given threshold [evaluating the ethics scores included in the output], then the circuitry 14 [decision engine] may make the final decision regarding allowing or denying the received account opening request 27. That is, if the performance measure and/or the confidence measure are above the given threshold to grant the received account opening request 27, then the received account opening request 27 may be granted by the circuitry 14 without intervention by a human administrator [making the decision to perform the action when a combined ethics score computed from the ethics scores of the input asset and the automated process is above a threshold score and then performing the action].”)
Gil teaches and suspending or delaying the execution when the combined ethics score is equal to or below the threshold score,
(Gil, “[0119] In some embodiments, if the output is above the threshold for denying the account opening but below the threshold for approving the account opening, the account opening data is sent to a human 705 to make the final decision on the account opening [suspending or delaying the execution when the combined ethics score is equal to or below the threshold score]. Whatever the human's decision is, it is added to the database of past decisions 710 for use in tuning the machine learning engine. Then the routine exits 711.”)
Gil teaches and initiates a control signal or automated instruction to a networked or hardware component to execute the enabled production process, thereby changing an operational state of the system in response to the ethics-based decision,
(Gil, “[0090] If the performance measure and/or the confidence measure is above a given threshold, then the circuitry 14 [decision engine] may make the final decision regarding allowing or denying the received account opening request 27. That is, if the performance measure and/or the confidence measure are above the given threshold to grant the received account opening request 27, then the received account opening request 27 may be granted by the circuitry 14 without intervention by a human administrator [initiates a control signal or automated instruction to a networked or hardware component to execute the enabled production process, thereby changing an operational state of the system in response to the ethics-based decision].”)
Gil teaches and input is requested when the combined ethics score is below or equal to the threshold score before the action is performed,
(Gil, “[0119] In some embodiments, if the output is above the threshold for denying the account opening but below the threshold for approving the account opening, the account opening data is sent to a human 705 to make the final decision on the account opening [input is requested when the combined ethics score is below or equal to the threshold score before the action is performed]. Whatever the human's decision is, it is added to the database of past decisions 710 for use in tuning the machine learning engine. Then the routine exits 711.”)
Gil teaches and wherein the decision engine records the decision and the associated ethics data in a historian that is part of the ethics datastore.
(Gil, [0119], “Whatever the human's decision is, it is added to the database of past decisions 710 [decision engine records the decision and the associated ethics data in a historian that is part of the ethics datastore] for use in tuning the machine learning engine. Then the routine exits 711.”; wherein the past decision storage 710 is substantially similar to the historical data storage of Gonzalez)
Gonzalez and Nevanperä are considered to be analogous to the claimed invention because they are in the same field of ethical consideration of AI systems. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gonzalez to incorporate the teachings of Nevanperä in order to provide the ethical guidelines considered to be the most important, as doing so would allow for setting ethical guidelines for AI that take into account different social contexts (Nevanperä, pg. 20, section 5.1.3).
Gil is considered to be analogous to the claimed invention because they are in the same field of automated decision systems. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gonzalez and Nevanperä to incorporate the teachings of Gil in order to provide a decision system with a decision thresholding filter that applies the ethical considerations of Gonzalez and Nevanperä, and account for past decisions in order to determine options appropriate for customers (Gil, Abstract, “A method for using machine learning techniques to analyze past decisions made by administrators concerning account opening requests and to recommend whether an account opening request should be allowed or denied. Further, the machine learning techniques determine various other products that the customer may be interested in and prioritizes the choices of options that the machine learning algorithm determines appropriate for the customer.”)
In regards to claim 2,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gil teaches further comprising seeking input from a user when the ethics score is below or equal to the threshold score.
(Gil, “[0119] In some embodiments, if the output is above the threshold for denying the account opening but below the threshold for approving the account opening, the account opening data is sent to a human 705 to make the final decision on the account opening [seeking input from a user when the ethics score is below or equal to the threshold score]. Whatever the human's decision is, it is added to the database of past decisions 710 for use in tuning the machine learning engine. Then the routine exits 711.”)
Claim 4 is rejected on the same grounds under 35 U.S.C. 103 as claim 1.
In regards to claim 6,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gonzalez teaches further comprising: selecting a first algorithm and a second algorithm and generating, respectively, a first output and a second output; labeling each of the first output and the second output with, respectively, ethics scores of the first algorithm and the second algorithm; selecting, as the automated process, the first algorithm when the ethics score of the first algorithm is higher than the ethics score of the second algorithm or selecting, as the automated process, the second algorithm when the ethics score of the second algorithm is higher than the ethics score of the first algorithm.
(Gonzalez, “[0109] Therefore, in this example, AI systems [a first algorithm] with more confidence (high ethically-aligned confidence score [ethics scores of the first algorithm and the second algorithm] and/or fewer deactivations) will have more weight [selecting, as the automated process, the first algorithm when the ethics score of the first algorithm is higher than the ethics score of the second algorithm; wherein AI systems with more confidence is weighted more than AI systems with less] than AI systems [a second algorithm] with less confidence for a specific feature [respectively, a first output and a second output].”)
In regards to claim 7,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gonzalez teaches wherein the automated process is an artificial intelligence algorithm.
(Gonzalez, “[0092] The sequence diagram starts at step S100 when a set of AI systems [wherein the automated process is an artificial intelligence algorithm] send their logs in a context. In step S110, the information collector processes the logs and send the new information to the conflict auditor.”)
In regards to claim 8,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gonzalez teaches further comprising recording the output and associated ethics score in a historian and recording actions associated with using the output by the decision engine.
(Gonzalez, “[0090] Historical data 80: this local or remote storage component stores and traces all the information received, produced and sent by the different components of the system [recording the output and associated ethics score in a historian and recording actions associated with using the output by the decision engine; wherein the decision engine is provided by Gil].”)
In regards to claim 9,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gil teaches wherein the input includes allowing the decision to be made, accepting a risk of an ethical violation, terminating the decision, or seeking a new automated process to generate a different output for consideration by the decision engine.
(Gil, “[0090] If the performance measure and/or the confidence measure is above a given threshold, then the circuitry 14 may make the final decision regarding allowing or denying the received account opening request 27. That is, if the performance measure and/or the confidence measure are above the given threshold to grant the received account opening request 27, then the received account opening request 27 may be granted by the circuitry 14 without intervention by a human administrator [wherein the input includes allowing the decision to be made].”)
In regards to claim 10,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gonzalez teaches wherein the ethics engine is configured to track the asset and the output of the asset and to label the output.
(Gonzalez, “[0067] One main benefit is to be able to trace back incorrect or inconsistent inputs that could generate a negative impact in a context by considering AI system's outputs [configured to track the asset and the output of the asset and to label the output; wherein the labeling is through the ethically-aligned confidence scores as previously taught]. Thus, tracing the collective knowledge from different AI systems, it can be clarified why an AI system took a specific action, identify the possible errors, redress them, and eliminate or minimize future unexpected outputs for AI system's auditability purposes.”)
In regards to claim 11,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gonzalez teaches further comprising labeling the output with an ethics score of the automated process and/or an ethics score of hardware associated with generating the output.
(Gonzalez, “[0084] Confidence Score Generator 60: This component may use heuristics, majority vote and closest neighbour modules, along with historical data to calculate a confidence score for each AI system using a multi-method approach and perform confidence computation based on stored standards or guidelines, such as ethically-aligned confidence computations based on published AI ethics guidelines [labeling the output with an ethics score of the automated process; ie ethically-aligned confidence of the logs of the respective AI system]. These may be confidence computations based in other standards, e.g. health and safety/critical systems where standards/guidelines apply.”)
Claim 12 is rejected on the same grounds under 35 U.S.C. 103 as claim 1.
Claim 13 is rejected on the same grounds under 35 U.S.C. 103 as claim 2.
Claim 15 is rejected on the same grounds under 35 U.S.C. 103 as claim 4.
Claim 17 is rejected on the same grounds under 35 U.S.C. 103 as claim 6.
Claim 18 is rejected on the same grounds under 35 U.S.C. 103 as claim 11.
Claim 19 is rejected on the same grounds under 35 U.S.C. 103 as claims 7 and 8.
Claim 20 is rejected on the same grounds under 35 U.S.C. 103 as claim 9.
Claim(s) 5 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Gonzalez in view of Nevanperä and Gil, in further view of U.S. Pub. No. US20180189680A1 to Gupta et al. (“Gupta”).
In regards to claim 5,
Gonzalez in view of Nevanperä and Gil teach The method of claim 1,
Gupta teaches further comprising determining whether the decision is a test decision or a production decision,
(Gupta, fig. 17, “[0032] In one example, a software development system establishes a communication session with a client computing device. The software development system acts as a point of interface between the client computing device and one or more data sources to which decision algorithms can be applied, such as a first database in which test data is stored and a second database in which production data is stored. The software development system can implement this functionality by, for example, providing a software management interface to the client computing devices via one or more data networks. The software management interface can include one or more menus or other elements for selecting different decision algorithms (e.g., a current version and an alternative version of an algorithm). The software management interface can include one or more menus or other elements switching between a mode in which decision algorithms are applied to the segregated test data and a mode in which decision algorithms are applied to the live production data [determining whether the decision is a test decision or a production decision; wherein Gupta provides an interface to apply the decision algorithm to a test environment or a live production environment].”)
However, Gupta does not explicitly teach wherein the test decision is performed regardless of the ethics score and wherein the production decision is performed without user input only if the ethics score is above the threshold score.
Gil teaches wherein the test decision is performed regardless of the ethics score
(Gil, “[0090] If the performance measure and/or the confidence measure is above a given threshold, then the circuitry 14 may make the final decision regarding allowing or denying the received account opening request 27 [wherein the test decision is performed regardless of the ethics score; wherein the confidence measure is not utilized to make the final decision; thus a decision made in the test environment is performed regardless of the ethics score]. That is, if the performance measure and/or the confidence measure are above the given threshold to grant the received account opening request 27, then the received account opening request 27 may be granted by the circuitry 14 without intervention by a human administrator.”)
Gil teaches and wherein the production decision is performed without user input only if the ethics score is above the threshold score.
(Gil, “[0090] If the performance measure and/or the confidence measure is above a given threshold, then the circuitry 14 may make the final decision regarding allowing or denying the received account opening request 27 [wherein the production decision is performed without user input only if the ethics score is above the threshold score]. That is, if the performance measure and/or the confidence measure are above the given threshold to grant the received account opening request 27, then the received account opening request 27 may be granted by the circuitry 14 without intervention by a human administrator.”)
Gupta is considered to be analogous to the claimed invention because they are in the same field of decision-making systems and testing and deploying decision algorithms. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Gonzalez and Nevanperä and Gil to incorporate the teachings of Gupta in order to provide a separate test and production environment so a model can be analyzed completely with its various versions before deployment (Gupta, “[0004] Various embodiments involve software development platforms for performing one or more of testing, modifying, and deploying decision algorithms. For example, a computing system provides software development interface to a client device. The system sets, based on an input from the client device via the interface, a decision engine to a test mode that causes the decision engine to operate on test data stored in a first database and that prevents the decision engine from applying operations from the client device to production data stored in a second database. The system also configures the decision engine in the test mode to execute a different decision algorithms on the test data. The system also sets, based on another input via the interface, the decision engine to a deployment mode that causes the decision engine to operate on the production data. The system configures the decision engine in the deployment mode to execute one or more of the tested decision algorithms.”)
Claim 16 is rejected on the same grounds under 35 U.S.C. 103 as claim 5.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US Pub No. US20170364934A1 Tiell teaches Demographic based adjustment of data processing decision results
US Pub No. US20140188776A1 Shuster teaches Decision making using algorithmic or programmatic analysis
US Pub No. US20200219009A1 Dao et al. teaches Method for securing a machine learning based decision system
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASMINE THAI whose telephone number is (703)756-5904. The examiner can normally be reached M-F 8-4.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michael Huntley can be reached at (303) 297-4307. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/J.T.T./Examiner, Art Unit 2129
/MICHAEL J HUNTLEY/Supervisory Patent Examiner, Art Unit 2129