DETAILED ACTION
Claims 1-24 are presented for examination.
This office action is in response to the applicant's submission filed on 26-AUGUST-2025.
In the previous action, US 20230100740 A1 was referred to as Malur throughout. In this action, the examiner uses Malur Srinivasan to refer to this piece of prior art, to better reflect the surname as provided in the prior art.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 23-DECEMBER-2021 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
The information disclosure statement (IDS) submitted on 23-MAY-2022 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Amendment
The amendment filed 26-AUGUST-2025 in response to the non-final office action mailed 26-MARCH-2025 has been entered. Claims 1-24 remain pending in the application.
With regard to the non-final office action’s rejection under 35 U.S.C. 101, the amendments to the claims have overcome the original rejection; the claims are no longer directed towards non-statutory subject matter.
With regard to the non-final office action’s rejection under 35 U.S.C. 103, the amendments to the claims do not overcome the original rejection. A response to the applicant’s arguments with regard to the 103 rejection has been provided further below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4-6, 13, 16-19, 22-24 are rejected under 35 U.S.C. 103 as being unpatentable over Hansraj et al. (Pub. No. US 20230096235 A1, filed September 30th 2021, hereinafter Hansraj) in view of Malur Srinivasan et al. (Pub. No. US 20230100740 A1, filed September 29th 2021, hereinafter Malur Srinivasan).
Regarding claim 1:
Claim 1 recites:
A computing system comprising: a network controller; a processor coupled to the network controller; and memory coupled to the processor, the memory storing instructions which, when executed by the processor, cause the computing system to: generate, via a first machine learning model, a bias score and a saliency bias map based on input image data; and adjust, based on the bias score and the saliency bias map, one or more of a second machine learning model or a machine learning training set.
Regarding the limitation a computing system comprising: a network controller; a processor coupled to the network controller; and memory coupled to the processor, the memory storing instructions which, when executed by the processor, cause the computing system to:
Hansraj teaches a processor that executes instructions stored in memory (Paragraph 5) and that is coupled with a candidate engagement optimizer (Paragraph 4), which would be a network controller, as it controls receiving data sets, inputs, and processing, among other tasks (Paragraph 3).
Regarding the limitation generate, via a first machine learning model, a bias score:
Hansraj teaches reducing unconscious bias of a model down to a level that is determined to be acceptable according to a client (Paragraph 27). This would be analogous to generating a bias score, as reducing bias to a particular level means that the bias is measured.
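For illustration only, the following Python sketch shows one common way a bias measurement of this kind may be reduced to a score (demographic parity difference, i.e., the gap in positive-outcome rates between two groups). The function name and data are hypothetical and are not drawn from Hansraj; this is merely an example of bias being measured as a number:

```python
import numpy as np

def bias_score(predictions, groups):
    """Demographic parity difference: the absolute gap between the
    positive-outcome rates of group 0 and group 1. A score of 0 means
    both groups receive positive outcomes at the same rate."""
    preds = np.asarray(predictions, dtype=float)
    groups = np.asarray(groups)
    rate_a = preds[groups == 0].mean()  # positive rate for group 0
    rate_b = preds[groups == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Toy example: group 0 receives positive outcomes 75% of the time,
# group 1 only 25% of the time.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps = [0, 0, 0, 0, 1, 1, 1, 1]
score = bias_score(preds, grps)
print(score)  # 0.5
```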
Regarding the limitation adjust, based on the bias score and the saliency bias map, one or more of a second machine learning model or a machine learning training set:
Hansraj teaches removal of bias through course correction to classifiers, where course correction may be an alteration of an approach or strategy of the classifier to reduce bias (Paragraph 39). This would be an example of adjusting, based on the bias score, a second machine learning model.
However, Hansraj does not teach a saliency map based on the input image data. A saliency map is taught by Malur Srinivasan below:
Malur Srinivasan in the same field of endeavor of reinforcement learning teaches determining a saliency map that is associated with a first image from a set of images (Paragraph 4). This would be a saliency map based on the input image data.
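For context, a saliency map of the general kind referenced here can be computed in several well-known ways. The following is a minimal occlusion-based sketch, assuming a generic scoring function; the `toy_score` model and all names are hypothetical and are not drawn from Malur Srinivasan:

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Occlusion-based saliency: zero out one patch at a time and record
    how much the model's score drops. Larger drops mark image regions
    the model relies on more heavily."""
    base = score_fn(image)
    h, w = image.shape
    saliency = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # mask one patch
            saliency[i // patch, j // patch] = base - score_fn(occluded)
    return saliency

# Toy "model": the score is the mean of the top-left quadrant, so the
# saliency map should concentrate in the top-left.
def toy_score(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
smap = occlusion_saliency(img, toy_score)
print(smap[0, 0] > smap[3, 3])  # True: top-left patch matters more
```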
Hansraj, Malur Srinivasan, and the present application are analogous art because they are all in the same field of endeavor of reinforcement learning. Furthermore, Malur Srinivasan also addresses issues of human bias (Paragraph 33), so it is directed towards a similar issue as Hansraj and the present application.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Hansraj and the teachings of Malur Srinivasan. This would have granted the advantage of allowing the model to be used to understand its underlying learning process (Malur Srinivasan, Paragraph 24).
Regarding claim 4, which depends upon claim 1:
Claim 4 recites:
The computing system of claim 1, wherein to adjust the machine learning training set, the instructions, when executed, cause the computing system to perform one or more of: revise the image data to reduce a bias region; remove the image data from the machine learning training data set; or add the image data to the machine learning training data set.
Hansraj in view of Malur Srinivasan discloses the method of claim 1 upon which claim 4 depends. Furthermore, regarding the limitation of claim 4:
Hansraj teaches an expanded dataset on which to train the model to correct unconscious bias (Paragraph 38). An expanded dataset would be adding data to the training data set, which satisfies performing one or more of: revising the data to reduce a bias region; removing the data from the machine learning training data set; or adding the data to the machine learning training data set.
Hansraj does not teach the use of image data. However, Malur Srinivasan has previously taught the use of image data (Paragraph 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Hansraj and the teachings of Malur Srinivasan. This would have granted the advantage of reducing human bias within image datasets (Malur Srinivasan, Paragraph 33).
Regarding claim 5, which depends upon claim 1:
Claim 5 recites:
The computing system of claim 1, wherein to adjust the second machine learning model, the instructions, when executed, cause the computing system to perform one or more of: modify parameters of the second machine learning model; or retrain the second machine learning model using a low bias training set.
Hansraj in view of Malur Srinivasan discloses the method of claim 1 upon which claim 5 depends. Furthermore, regarding the limitation of claim 5:
Hansraj teaches retraining the machine learning model over time as a form of course correction to reduce bias, such as on a dataset that excludes historical values that may be omitted without impacting accuracy (Paragraph 39). This would be retraining the second machine learning model using a low bias training set.
Regarding claim 6, which depends upon claim 1:
Claim 6 recites:
The computing system of claim 1, wherein the instructions, when executed, further cause the computing system to train the first machine learning model using training image data, and wherein the training image data includes a plurality of training images and, for each of the plurality of training images, a respective predetermined bias score and a respective predetermined saliency bias map.
Hansraj in view of Malur Srinivasan discloses the method of claim 1 upon which claim 6 depends. However, Hansraj does not teach the limitation of claim 6:
Malur Srinivasan teaches a set of images, wherein a first image is associated with a first saliency map and a second image is associated with a second saliency map (Paragraph 4). These images would be an example of a plurality of training images with respective predetermined saliency maps, as well as training image data that includes a plurality of training images on which the machine learning model is trained.
As Hansraj has already taught the concept of a bias score, a combination of Hansraj and Malur Srinivasan that associates a bias score with an image would have been obvious.
Hansraj does not teach the use of image data. However, Malur Srinivasan has previously taught the use of image data (Paragraph 4).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Hansraj and the teachings of Malur Srinivasan. This would have granted the advantage of allowing the model to be used to understand its underlying learning process (Malur Srinivasan, Paragraph 24) and, furthermore, of reducing human bias within image datasets (Malur Srinivasan, Paragraph 33).
Claims 13, 16-18 recite a computer readable storage medium that parallels the system of claims 1, 4-6 respectively. Hansraj teaches a computer-readable storage medium that may be executable from a processor, or a computing system (Paragraph 5). Therefore, the analysis discussed above with respect to claims 1, 4-6 also applies to claims 13, 16-18 respectively. Accordingly, claims 13, 16-18 are rejected based on substantially the same rationale as set forth above with respect to claims 1, 4-6 respectively.
Claims 19, 22-24 recite a method that parallels the system of claims 1, 4-6 respectively. Therefore, the analysis discussed above with respect to claims 1, 4-6 also applies to claims 19, 22-24 respectively. Accordingly, claims 19, 22-24 are rejected based on substantially the same rationale as set forth above with respect to claims 1, 4-6 respectively.
Claims 2-3, 7-12, 14-15, 20-21 are rejected under 35 U.S.C. 103 as being unpatentable over Hansraj in view of Malur Srinivasan, further in view of Su et al. (Pub. No. US 20190340469 A1, filed March 20th 2017, hereinafter Su).
Regarding claim 2, which depends upon claim 1:
Claim 2 recites:
The computing system of claim 1, wherein to generate, via the first machine learning model, the bias score and the saliency bias map, the instructions, when executed, cause the computing system to: generate a set of complex feature vectors; and convert the complex feature vectors to a complex wave via wave expansion, the complex wave providing a feature representation in a transformed domain.
Hansraj in view of Malur Srinivasan discloses the method of claim 1 upon which claim 2 depends. However, Hansraj in view of Malur Srinivasan does not teach the limitation of claim 2:
Su in the same field of endeavor of reinforcement learning teaches the use of an inverse Fourier transform on the multiplicative product of the Fourier transform of an estimated topic and the Fourier transform of the word feature vectors (Paragraph 22). The inverse Fourier transform would here be a conversion of a feature vector into a wave, where the wave provides a feature representation in a transformed domain. Furthermore, the word feature vectors were generated by a generation circuit (Paragraph 21).
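For illustration only, the operation Su describes in Paragraph 22 — multiplying two Fourier transforms and inverting the product — is, by the convolution theorem, equivalent to circularly convolving the original vectors. The following numpy sketch merely demonstrates that identity; the function name and vectors are hypothetical and not drawn from Su:

```python
import numpy as np

def spectral_combine(a, b):
    """Multiply two spectra and invert: by the convolution theorem this
    equals the circular convolution of a and b in the original domain."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b))

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([0.0, 1.0, 0.0, 0.0])  # delta at index 1: a one-step circular shift

combined = spectral_combine(a, b)
print(np.round(combined.real, 6))  # [4. 1. 2. 3.] — a circularly shifted
```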
Hansraj, Malur Srinivasan, Su, and the present application are analogous art because they are all in the same field of endeavor of reinforcement learning.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Hansraj in view of Malur Srinivasan and the teachings of Su. This would have granted the advantage of a more complex representation of a feature than a vector can provide (Su, Paragraph 22).
Regarding claim 3, which depends upon claim 2:
Claim 3 recites:
The computing system of claim 2, wherein to generate the set of complex feature vectors, the instructions, when executed, cause the computing system to: select a sequence of data points from the input image data; and transform the sequence of data points into the frequency domain.
Hansraj in view of Malur Srinivasan further in view of Su discloses the method of claim 2 upon which claim 3 depends. However, Hansraj in view of Malur Srinivasan does not teach the limitation of claim 3:
Su teaches multiplication in the frequency domain for the feature vectors (Paragraph 22), which would be a transformation into the frequency domain. While Su does not teach a sequence of data points from the input image data, Malur Srinivasan has previously taught the use of image data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to implement a system that utilized the teachings of Hansraj in view of Malur Srinivasan and the teachings of Su. This would have granted the advantage of a more complex representation of a feature than a vector can provide (Su, Paragraph 22) and, furthermore, of reducing human bias within image datasets (Malur Srinivasan, Paragraph 33).
Claims 7-12 recite an apparatus that parallels the system of claims 1-6 respectively. However, claims 7-12 are directed towards a semiconductor apparatus. Hansraj in view of Malur Srinivasan do not teach the use of semiconductors. Su teaches the use of semiconductors, and with them, substrates which are inherent to using semiconductors (Paragraph 53). In all other respects, these claims parallel claims 1-6. Therefore, the analysis discussed above with respect to claims 1-6 also applies to claims 7-12 respectively. Accordingly, claims 7-12 are rejected based on substantially the same rationale as set forth above with respect to claims 1-6 respectively.
Claims 14-15 recite a computer readable storage medium that parallels the system of claims 2-3 respectively. Hansraj teaches a computer-readable storage medium that may be executable from a processor, or a computing system (Paragraph 5). Therefore, the analysis discussed above with respect to claims 2-3 also applies to claims 14-15 respectively. Accordingly, claims 14-15 are rejected based on substantially the same rationale as set forth above with respect to claims 2-3 respectively.
Claims 20-21 recite a method that parallels the system of claims 2-3 respectively. Therefore, the analysis discussed above with respect to claims 2-3 also applies to claims 20-21 respectively. Accordingly, claims 20-21 are rejected based on substantially the same rationale as set forth above with respect to claims 2-3 respectively.
Response to Arguments
Applicant’s arguments filed 26-AUGUST-2025 have been fully considered, but the examiner does not find all of them persuasive.
Regarding the applicant’s remarks on the non-final office action’s 103 rejection of the claims, the applicant argues that Hansraj, Malur Srinivasan, and Su do not teach the amended limitations of these claims. As such, the applicant argues that all claims dependent on the above would additionally not be obvious under 103. However, the examiner believes that Hansraj, Malur Srinivasan, and Su do teach the amended limitations and respectfully requests the applicant’s consideration of the following:
With regards to claim 1, applicant argues that Hansraj in view of Malur Srinivasan does not teach “generate, via a first machine learning model, a bias score and a saliency bias map based on input image data”. Specifically, the applicant argues that neither reference teaches generating a bias score based on input image data.
The examiner agrees that neither reference alone teaches this limitation. However, Hansraj does generate a bias score, as Hansraj teaches reducing unconscious bias of a model down to a level that is determined to be acceptable according to a client (Paragraph 27). This would be analogous to generating a bias score, as reducing bias to a particular level means that the bias is measured.
Malur Srinivasan in the same field of endeavor of reinforcement learning teaches determining a saliency map that is associated with a first image from a set of images (Paragraph 4). This would be a saliency map based on the input image data.
The generation of data from input image data could likewise be applied to Hansraj’s generation of a bias score, as Malur Srinivasan provides, among other advantages, the ability to use machine learning models, through the use of input images, in a way that allows for better understanding of the functioning of AI models as well as the development of human-interpretable AI systems (Malur Srinivasan, Paragraph 24).
Hansraj, Malur Srinivasan, and the present application are analogous art because they are all in the same field of endeavor of reinforcement learning. Furthermore, Malur Srinivasan also addresses issues of human bias (Paragraph 33), so it is directed towards a similar issue as Hansraj and the present application.
Furthermore, the applicant argues that Hansraj does not teach the adjusting of a model or a training set, only adjusting the result of the model. The applicant cites paragraph 39 of Hansraj, which was used to teach this limitation in the previous action, and claims that this paragraph and the associated Figure 4A only describes adjusting the result of a model. However, paragraph 39 explicitly states:
“The course correction may mainly lead to training machine learning algorithm/classifiers over time to improve the accuracy of the algorithms and to reduce the bias” (Paragraph 39).
The examiner believes that this sentence refers to changing the training of the machine learning algorithm itself in order to improve its accuracy and reduce bias, which would not merely be a change to the output, but an “alteration to the approach of conventional algorithms” (Hansraj, Paragraph 39). For this reason, the examiner does not believe that the applicant’s arguments overcome the original rejection.
These arguments are repeated for claim 7, which is a counterpart to claim 1. The arguments above apply to claim 7 as they do to claim 1, and as such are not viewed by the examiner as overcoming the rejection at this time.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRIA JOSEPHINE MILLER whose telephone number is (703)756-5684. The examiner can normally be reached Monday-Thursday: 7:30 - 5:00 pm, every other Friday 7:30 - 4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mariela Reyes can be reached on (571) 270-1006. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/A.J.M./Examiner, Art Unit 2142
/Mariela Reyes/Supervisory Patent Examiner, Art Unit 2142