Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-11 and 13-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Horesh (U.S. Patent No. 11,972,333).
With respect to claim 1, Horesh teaches that, over a long period of time (weeks, months, or years), the data used to train the original AI model will generate stale and incorrect output information regarding certain standards. Such stale or misleading information may skew the accuracy of the results. To some users, stale information is not desired and should be purged from the old AI model; to other users, older information may still provide a useful result. In the case that the stale or outdated information is to be purged, the old AI model generates first output data. The old AI model is then trained with a second training data set to produce second output data. The first and second outputs are sent to a classifier to determine the similarity of the results. Based on the similarity results, the computer system may elect to adopt different policies to optimize the performance of the AI model.
Horesh teaches using a second generative AI model trained on a second input training data set (see col. 4, lines 22-30).
The output from the first generative model is supplied as an input to the second generative AI model.
The respective outputs of the first generative AI model 170 and the second generative AI model 140 are sent to a classification model 150 (see col. 4, lines 30-35; see also col. 8, lines 40-45). The classification model compares the first output (initial training dataset) and the second output (emulated training set). See col. 4, lines 32-34.
Horesh teaches a computer system (col. 4, lines 35-36) for measuring the distance of similarity (similarity indication); see also col. 9, line 65 - col. 10, line 6, where the reference teaches determining whether the similarity metric is less than a defined threshold, thereby indicating that the result of the first output is not relevant or otherwise desired as the output.
Horesh teaches training the second generative AI model 140; see col. 7, line 65 - col. 8, line 6. At col. 10, Horesh teaches a policy engine 160 that examines the threshold distance and determines that the output from the second generative model 140 provides the output that the user indicates yields acceptable results (see col. 10, lines 25-30 and 35-40). See also col. 5, lines 5-11 regarding retraining or updating a training set based on the results of the similarity check.
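Purely for illustration of the flow mapped above (two model outputs, a similarity check against a defined threshold, and a policy decision), the logic might be sketched as follows. This code is not part of the Horesh disclosure; all function names are hypothetical, and the token-overlap measure is a simplified stand-in for the reference's classification model.

```python
# Hypothetical sketch: compare the outputs of an "old" model and a retrained
# model, then let a simple policy decide which output to adopt.

def similarity(first_output: str, second_output: str) -> float:
    """Stand-in for the classification model: token-overlap (Jaccard) similarity."""
    a, b = set(first_output.split()), set(second_output.split())
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def policy_decision(first_output: str, second_output: str, threshold: float = 0.5) -> str:
    """Stand-in for the policy engine: if the outputs are too dissimilar,
    treat the first (stale) output as not desired and adopt the second."""
    if similarity(first_output, second_output) < threshold:
        return second_output  # first output deemed stale / not relevant
    return first_output       # outputs agree closely enough; keep the original

print(policy_decision("standard v1 applies", "standard v2 applies now"))
```

In this sketch, a similarity below the defined threshold indicates the first model's output is no longer desired, so the retrained model's output is adopted, mirroring the cited policy-engine determination.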
With respect to claim 2, Horesh teaches that the outputs from the first and second AI models, 170 and 140 respectively, may be text in a natural language description (format); see col. 8, lines 55-60. Therefore, the output of the first model, which represents the initial training dataset, may be converted by a natural language processor. The BERT transformer model is identified as one possible means for generating the natural language description. Moreover, Horesh teaches that the output of the emulated training set by the second generative AI model may likewise be converted into a natural language format by a natural language processor; see col. 8, lines 55-60.
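As a purely illustrative sketch of comparing two natural-language outputs of the kind discussed above: the reference is cited for BERT as one possible means, but the word-count cosine measure below is a simplified stand-in, not the method of the reference, and all names are hypothetical.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity over word-count vectors -- a simplified stand-in
    for comparing two natural-language descriptions (e.g., BERT embeddings
    would be used in a real system)."""
    ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("the model output", "the model output"), 2))
```

Identical descriptions score 1.0 and descriptions with no shared words score 0.0, giving a bounded similarity indication that could be compared against a defined threshold.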
With respect to claim 3, Horesh teaches that the system 100 provides a prompt to the user to approve the results generated by the policy engine 160 from the first and second AI engines. See col. 10, lines 27-40.
With respect to claim 4, Horesh teaches that part of the first data could be older data that, in some regards, could be considered outdated. However, the output result of the first AI model 170 is sent as an input to the second AI model 140. The output results of both models 140 and 170 are fed to a classifier, where a determination is made as to how similar the outputs are.
Horesh teaches a computer system (col. 4, lines 35-36) for measuring the distance of similarity (similarity indication); see also col. 9, line 65 - col. 10, line 6. If the similarity result is greater than a threshold value, the results are said to be very similar, and data from the first AI model has been input to the second AI model 140 and forms part of the data set for the second model, which generates the second output information.
With respect to claim 5, Horesh teaches a method comprising receiving an initial training dataset by a first machine learning model (170); see col. 4, lines 20-25.
Horesh teaches using a second generative AI model trained on a second input training data set (see col. 4, lines 22-30).
The output from the first generative model is supplied as an input to the second generative AI model.
Horesh teaches training the second generative AI model 140; see col. 7, line 65 - col. 8, line 6. At col. 10, lines 1-6 and 15-17, Horesh teaches a policy engine 160 that examines the threshold distance and determines that the output from the second generative model 140 provides the output that the user indicates yields acceptable results (see col. 10, lines 25-30 and 35-40). See also col. 5, lines 5-11 regarding retraining or updating a training set based on the results of the similarity check.
With respect to claim 6, Horesh teaches wherein the first and second machine models are generative machine models, see col. 8, lines 33-36.
With respect to claim 7, Horesh teaches the outputs from both the first and second AI models, 170 and 140 respectively, may be text in a natural language description (format) – (See col. 8, lines 55-60). Therefore, the output of the first model, which represents the initial training dataset, may be converted by a natural language processor. The BERT transformer model has been identified as one possible means for generating the natural language description. Moreover, Horesh teaches that output of the emulated training set by means of the second generative AI model may also convert text into a natural language format by a natural language processor, see col. 8, lines 55-60.
With respect to claim 8, Horesh teaches a computer system (col. 4, lines 35-36) for measuring the distance of similarity (similarity indication). At col. 9, line 65 - col. 10, line 6, Horesh teaches determining whether the similarity metric is less than a defined threshold, thereby indicating that the result of the first output is not relevant or otherwise desired for output.
At col. 5, lines 5-11, Horesh teaches making a system determination of retraining, or training with a different dataset, using the emulated training dataset that is fed to generative AI model 140.
With respect to claim 9, Horesh teaches determining whether the similarity satisfies a predetermined distance between generative AI models. At col. 9, line 65 - col. 10, line 6, Horesh teaches determining whether the similarity metric is less than a defined threshold. The similarity metric being less than a "defined threshold" corresponds with the claimed threshold value.
With respect to claim 10, Horesh teaches that the system 100 provides a prompt to the user to approve the results generated by the policy engine 160 from the first and second AI engines. See col. 10, lines 27-40.
With respect to claim 11, Horesh teaches that part of the first data could be older data that, in some regards, could be considered outdated. However, the output result of the first AI model 170 is sent as an input to the second AI model 140. The output results of both models 140 and 170 are fed to a classifier, where a determination is made as to how similar the outputs are.
Horesh teaches a computer system (col. 4, lines 35-36) for measuring the distance of similarity (similarity indication); see also col. 9, line 65 - col. 10, line 6. If the similarity result is greater than a threshold value, the results are said to be very similar, and data from the first AI model has been input to the second AI model 140 and forms part of the data set for the second model, which generates the second output information.
With respect to claim 13, Horesh teaches a system having a memory (a computer readable medium, which may include RAM, ROM, EEPROM, or other disk or optical storage media); see col. 25, lines 12-21. Horesh teaches that the computer readable medium stores instructions (col. 25, lines 14-15 and lines 20-26). Horesh teaches one or more processors (see col. 24, lines 55-64) for executing the instructions stored in the memory, as set forth above.
Horesh teaches receiving an initial training dataset by a first machine learning model (170); see col. 4, lines 20-25.
Horesh teaches using a second generative AI model trained on a second input training data set (see col. 4, lines 22-30). The output from the first generative model is supplied as an input to the second generative AI model.
Horesh teaches training the second generative AI model 140; see col. 7, line 65 - col. 8, line 6. At col. 10, lines 1-6 and 15-17, Horesh teaches a policy engine 160 that examines the threshold distance and determines that the output from the second generative model 140 provides the output that the user indicates yields acceptable results (see col. 10, lines 25-30 and 35-40). See also col. 5, lines 5-11 regarding retraining or updating a training set based on the results of the similarity check.
With respect to claim 14, Horesh teaches the first and second machine models are generative machine models, see col. 8, lines 33-36.
With respect to claim 15, Horesh teaches one or more processors for executing the instructions stored in the memory; see col. 24, lines 55-64. Horesh teaches that the instructions cause the outputs from the first and second AI models, 170 and 140 respectively, to be text in a natural language description (format); see col. 8, lines 55-60. Therefore, the output of the first model, which represents the initial training dataset, may be converted by a natural language processor. The BERT transformer model is identified as one possible means for generating the natural language description. Moreover, Horesh teaches that the output of the emulated training set by the second generative AI model may likewise be converted into a natural language format by a natural language processor; see col. 8, lines 55-60.
With respect to claim 16, Horesh teaches a computer system (see col. 4, lines 35-36) for measuring the distance of similarity (similarity indication). At col. 9, line 65 - col. 10, line 6, Horesh teaches determining whether the similarity metric is less than a defined threshold, thereby indicating that the result of the first output is not relevant or otherwise desired for output.
At col. 5, lines 5-11, Horesh teaches making a system determination of retraining, or training with a different dataset, using the emulated training dataset that is fed to generative AI model 140.
With respect to claim 17, Horesh teaches determining whether the similarity satisfies a predetermined distance between generative AI models. At col. 9, line 65 - col. 10, line 6, Horesh teaches determining whether the similarity metric is less than a defined threshold. The similarity metric being less than a "defined threshold" corresponds with the claimed threshold value.
With respect to claim 18, Horesh teaches that the system 100 provides a prompt to the user to approve the results generated by the policy engine 160 from the first and second AI engines. See col. 10, lines 27-40.
With respect to claim 19, Horesh teaches that part of the first data could be older data that, in some regards, could be considered outdated. However, the output result of the first AI model 170 is sent as an input to the second AI model 140. The output results of both models 140 and 170 are fed to a classifier, where a determination is made as to how similar the outputs are.
Horesh teaches a computer system (col. 4, lines 35-36) for measuring the distance of similarity (similarity indication); see also col. 9, line 65 - col. 10, line 6. If the similarity result is greater than a threshold value, the results are said to be very similar, and data from the first AI model has been input to the second AI model 140 and forms part of the data set for the second model, which generates the second output information.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 12 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Horesh (U.S. Patent No. 11,972,333) in view of Sargent (U.S. Patent Application Publication No. 2021/0082092).
With respect to claim 12, Horesh teaches all of the subject matter upon which the claim depends except for the first data including biometric information and the second image not including biometric information. Horesh does not teach biometric information; therefore, biometric information is not included in the second image.
Sargent teaches a computer system that uses a processor for removing medical artifacts in an image. Sargent teaches biometric information 114, such as prior images of the patient, the area of imaging, and possibly the voiceprints of the patient; see col. 26, lines 1-4, col. 27, lines 5-9, and para. 28, last four lines.
Since Horesh and Sargent are both directed to removing artifacts that are either stale or personal, the purpose of removing biometric information, as set forth by Sargent, would have been recognized in Horesh.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teaching of Horesh and to use any combination of voice or text as the data subject to generative models 140 and 170, for the purpose of purging from a first generative model biometric information or any other information that is stale or private.
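Purely as an illustration of the purging rationale discussed above (removing biometric or otherwise private fields before building a second training set), the idea might be sketched as follows. Neither reference discloses this code; the field names and function are hypothetical.

```python
# Hypothetical sketch: drop fields tagged as biometric from each record
# before building the second training set, so the retrained model
# excludes that data.

BIOMETRIC_FIELDS = {"voiceprint", "patient_image"}  # hypothetical tags

def purge_biometric(record: dict) -> dict:
    """Return a copy of the record with biometric fields removed."""
    return {k: v for k, v in record.items() if k not in BIOMETRIC_FIELDS}

record = {"text": "scan report", "voiceprint": b"\x00", "patient_image": b"\x00"}
print(purge_biometric(record))
```

Applying such a filter to every record of the first training set would yield a second set from which biometric information, like other stale or private data, has been purged.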
With respect to claim 20, Horesh teaches one or more processors for executing the instructions stored in the memory (see col. 24, lines 55-64) to perform the limitations of the claim. Horesh teaches that the instructions control the first and second AI models, 170 and 140 respectively.
Horesh does not teach biometric information. Therefore, biometric information is not included in the second image.
Sargent teaches a computer system that uses a processor for removing medical artifacts in an image. Sargent teaches biometric information 114, such as prior images of the patient, the area of imaging, and possibly the voiceprints of the patient; see col. 26, lines 1-4, col. 27, lines 5-9, and para. 28, last four lines.
Since Horesh and Sargent are both directed to removing artifacts that are either stale or personal, the purpose of removing biometric information, as set forth by Sargent, would have been recognized in Horesh.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to use the teaching of Horesh and to use any combination of voice or text as the data subject to generative models 140 and 170, for the purpose of purging from a first generative model biometric information or any other information that is stale or private.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JEROME GRANT II, whose telephone number is (571) 272-7463. The examiner can normally be reached M-F, 9:00 a.m. - 5:00 p.m.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JEROME GRANT II/Primary Examiner, Art Unit 2664