Application No. 18/458,130

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Claims 1-9 are pending in the application and have been examined.

The specification has not been checked to the extent necessary to determine the presence of all possible minor errors. The specification should be amended to reflect the status of all related applications, whether patented or abandoned. Accordingly, applications noted by their serial number and/or attorney docket number should be updated with the correct serial number and, if patented, the patent number. The first instance of each acronym or abbreviation should be spelled out for clarity, whether or not it is considered well known in the art.

In the response to this Office action, the Examiner respectfully requests that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the Examiner in prosecuting this application.

The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider each reference in its entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

37 C.F.R. § 1.83(a) requires the drawings to illustrate or show all claimed features.
Applicant must clearly point out the patentable novelty that they think the claims present, in view of the state of the art disclosed by the references cited or the objections made, and must also explain how the amendments avoid the references or objections. See 37 C.F.R. § 1.111(c).

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

In claims 1, 6, and 8, the limitation "extract a first portion and a second portion having a lower rate of match with the information data for learning than the first portion from the document data for learning based on a rate of match between the information data for learning and each portion of the document data for learning" is indefinite. It is unclear whether "extract a first portion and a second portion" is related to "extract from document data". The claims are interpreted as reciting "extract a first portion and a second portion from the document data" (similar to "a first portion extracted from document data for learning" and "a second portion, which is extracted from the document data for learning" of exemplary claim 5).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-9 are rejected under 35 U.S.C. 103 as being unpatentable over US 20210398676 A1 (EVANS et al.) in view of US 20230109426 A1 (HASHIMOTO).
With respect to claims 1, 6, and 8, EVANS teaches a model generation apparatus comprising: at least one processor that is configured to: acquire information data for learning, and acquire document data for learning (processing unit programmed to receive image input from an imaging device, receive patient health data, encode the patient health data to convert the patient health data to encoded patient health data, and transmit the encoded patient health data into the machine learning algorithm; i.e., receive an input from a device, wherein the input comprises data obtained by the device; receive patient health data as input; encode the patient health data to make a medical condition state determination based on the image input and the encoded patient health data via the machine learning algorithm) [Abstract; Par. 0005-0007; Par. 0030].

EVANS fails to specifically teach, however, HASHIMOTO teaches: extract a first portion and a second portion having a lower rate of match with the information data for learning than the first portion from the document data for learning based on a rate of match between the information data for learning and each portion of the document data for learning (computer configured to execute machine learning of a learning model; the computer acquires a plurality of learning data sets, accepts an input from a first portion of the feature amounts and executes the first estimation task on the input data based on the input first portion, and accepts an input from a second portion of the feature amounts and executes the second estimation task on the input data based on the input second portion, wherein the machine learning model trained on the first estimation task with respect to the training data has relatively high explainability for computation content, the trained models having respective matching expectancy or explainability for computation content) [Fig. 7; Par. 0156-0159; Par. 0166-0167; Par. 0030-0031; Par. 0037-0038]; generate a first machine learning model by using first learning data in which first data for learning included in the information data for learning is used as input data and the first portion is used as correct answer data, and generate a second machine learning model by using second learning data in which second data for learning included in the information data for learning is used as input data and the second portion is used as correct answer data (acquiring a plurality of learning data sets, each of the learning data sets being constituted by a combination of training data, first correct answer data that indicates a correct answer of a first estimation task with respect to the training data, and second correct answer data that indicates a correct answer of a second estimation task with respect to the training data, wherein a result of the first estimator executing the first estimation task matches the first correct answer data, and a result of the second estimator executing the second estimation task matches the second correct answer data) [Par. 0015-0018; Par. 0030-0032; Par. 0156-0159; Par. 0166-0167].

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the instant application to modify the machine learning model based on manually input health data (information data) with patient health data from medical records (document data), as taught by EVANS, with the two-step machine learning, as above and as taught by HASHIMOTO, in order to improve the confidence level of the conclusion reached by the machine learning algorithm, with information as to localization and classification of the correct condition or result, as taught by EVANS [Par. 0040].
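For illustration only, the extraction and two-model generation discussed above may be sketched as follows. This is a minimal Python sketch; the function names, the similarity measure, and the dictionary stand-in for "training" are hypothetical and form no part of the record or of the cited references.

```python
from difflib import SequenceMatcher


def match_rate(info: str, portion: str) -> float:
    # Hypothetical "rate of match" between the information data for
    # learning and one portion of the document data for learning.
    return SequenceMatcher(None, info, portion).ratio()


def extract_portions(info: str, portions: list[str]) -> tuple[str, str]:
    # First portion: the portion with the highest rate of match with the
    # information data; second portion: a portion with a lower rate of match.
    ranked = sorted(portions, key=lambda p: match_rate(info, p), reverse=True)
    return ranked[0], ranked[-1]


def generate_models(first_data, second_data, first_portion, second_portion):
    # Stand-in for model generation: each "model" maps its data for
    # learning (input data) to the extracted portion (correct answer data).
    first_model = {first_data: first_portion}
    second_model = {second_data: second_portion}
    return first_model, second_model


info = "patient blood pressure 140/90"
portions = ["blood pressure 140/90 recorded today", "patient reports no pain"]
first, second = extract_portions(info, portions)
first_model, second_model = generate_models("first data", "second data",
                                            first, second)
```

Here the first portion is the one most similar to the information data, and each of the two models is trained with its respective portion as the correct answer, mirroring the first/second correct answer data arrangement described above.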
The combination is proper because HASHIMOTO teaches a trained machine learning model that has relatively high explainability for computation content, with a data acquisition unit configured to acquire a plurality of learning data sets, each of the learning data sets being constituted by a combination of training data, first correct answer data that indicates a correct answer of a first estimation task with respect to the training data, and second correct answer data that indicates a correct answer of a second estimation task with respect to the training data [Par. 0015-0017].

With respect to claims 5, 7, and 9, EVANS teaches at least one processor that is configured to: acquire information data, acquire a first document by inputting first data included in the information data to the first machine learning model, acquire a second document by inputting second data included in the information data to the second machine learning model, and generate a third document from the first document and the second document (processing unit is programmed to receive image input from an imaging device, receive patient health data, encode the patient health data to convert the patient health data to encoded patient health data, and transmit the encoded patient health data into the machine learning algorithm) [Abstract; Par. 0005-0007; Par. 0030].

EVANS fails to specifically teach, however, HASHIMOTO teaches: a document generation apparatus (model generation apparatus and model generation method generating a machine learning model in two training steps) [Par. 0030-0031; Par. 0037-0038] comprising: a first machine learning model generated by using first learning data in which first data for learning included in information data for learning is used as input data and a first portion extracted from document data for learning based on a rate of match between the information data for learning and each portion of the document data for learning is used as correct answer data (training data constituting a first learning data set, wherein a first estimator is configured to accept input data and a first portion of feature data matching the first correct answer data, the feature data being converted or extracted input data (corresponding to document data), the learning data sets being constituted by a combination of input data and first correct answer data that indicates a correct answer of a first estimation task with respect to the training data, the trained machine learning model having relatively high explainability for computation content) [Par. 0015-0018; Par. 0030-0032; Par. 0156-0159; Par. 0166-0167]; and a second machine learning model generated by using second learning data in which second data for learning included in the information data for learning is used as input data and a second portion, which is extracted from the document data for learning and has a lower rate of match with the information data for learning than the first portion, is used as correct answer data (training data constituting a second learning data set, wherein a second estimator is configured to accept input data and a second portion of feature data matching the second correct answer data, the feature data being converted or extracted input data, the learning data sets being constituted by a combination of training data and second correct answer data that indicates a correct answer of a second estimation task with respect to the training data, the trained machine learning model having relatively lower explainability for computation content) [Par. 0015-0018; Par. 0030-0032; Par. 0156-0159; Par. 0166-0167].

Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the instant application to modify the machine learning model based on manually input health data (information data) with patient health data from medical records (document data), as taught by EVANS, with the two-step machine learning, as above and as taught by HASHIMOTO, in order to improve the confidence level of the conclusion reached by the machine learning algorithm, with information as to localization and classification of the correct condition or result, as taught by EVANS [Par. 0040].

The combination is proper because HASHIMOTO teaches a trained machine learning model that has relatively high explainability for computation content, with a data acquisition unit configured to acquire a plurality of learning data sets, each of the learning data sets being constituted by a combination of training data, first correct answer data that indicates a correct answer of a first estimation task with respect to the training data, and second correct answer data that indicates a correct answer of a second estimation task with respect to the training data [Par. 0015-0017]. HASHIMOTO further teaches a computer configured to execute machine learning of a learning model, wherein the computer acquires a plurality of learning data sets, accepts an input from a first portion of the feature amounts and executes the first estimation task on the input data based on the input first portion, and accepts an input from a second portion of the feature amounts and executes the second estimation task on the input data based on the input second portion, wherein the machine learning model trained on the first estimation task with respect to the training data has relatively high explainability for computation content, the trained models having respective matching expectancy or explainability for computation content [Fig. 7; Par. 0156-0159; Par. 0166-0167; Par. 0030-0031; Par. 0037-0038].

With respect to claim 2, the combination of EVANS and HASHIMOTO teaches the model generation apparatus, wherein the document data for learning is patient data related to a specific patient with which first date information is associated, and the information data for learning includes a plurality of pieces of document data which are patient data related to the specific patient with which the first date information, or second date information indicating a date earlier than the date indicated by the first date information, is associated (receive an image input from an imaging device, wherein the image input comprises one or more images obtained by the imaging device; receive patient health data as input; encode the patient health data to convert the patient health data to encoded patient health data; the encoded patient health data is embedded into at least one image of the image input at or before a time that the machine learning algorithm analyzes the image input, such that the machine learning algorithm analyzes the image input together with the encoded patient health data embedded in the at least one image of the image input) [EVANS Par. 0030-0031; Par. 0037].
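For illustration only, the date relationship recited in claim 2 (information data for learning drawn from patient data associated with the first date or with an earlier second date) may be sketched as follows; the function name and the data layout are hypothetical and form no part of the record.

```python
from datetime import date


def select_learning_inputs(patient_docs: list[tuple[date, str]],
                           first_date: date) -> list[str]:
    # Keep patient data whose associated date is the first date or an
    # earlier date, i.e., the claim 2 relationship between the first
    # date information and the second date information.
    return [text for d, text in patient_docs if d <= first_date]


docs = [
    (date(2023, 1, 5), "exam note"),      # first date information
    (date(2023, 3, 1), "follow-up"),      # later date: excluded
    (date(2022, 12, 1), "prior history"), # earlier second date: included
]
selected = select_learning_inputs(docs, date(2023, 1, 5))
```

Under this reading, the follow-up entry is excluded because its associated date is later than the first date.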
With respect to claim 3, the combination of EVANS and HASHIMOTO teaches the model generation apparatus, wherein the at least one processor is configured to: generate a third machine learning model that uses the document data for learning as input and outputs at least one of the first portion or the second portion through reinforcement learning in which performance of the first machine learning model and performance of the second machine learning model are used as rewards, and extract the first portion and the second portion from the document data for learning by using the third machine learning model (EVANS teaches machine learning algorithms that exhibit improved accuracy and/or availability in medical condition state determinations using previous artificial intelligence diagnoses, i.e., patient health data pertaining to previous tests or procedures can indicate areas of increased signal intensity, with objective decision-making processes based on previous training of the machine learning model to determine an updated medical condition state determination in real time) [Par. 0030-0031]; HASHIMOTO teaches second training by alternately and repeatedly executing a first step of training the first estimator and the second adversarial estimator so that a result of the estimator executing the second estimation task matches the correct answer data, in order to improve the accuracy of the estimation [Par. 0020-0023].

With respect to claim 4, the combination of EVANS and HASHIMOTO teaches the model generation apparatus, wherein the second machine learning model is a machine learning model that includes a machine learning model outputting a prediction result based on the information data for learning, and outputs a combination of the prediction result and a template (EVANS teaches a machine learning model outputting prediction models pertaining to measurements, indications, and/or recommendations associated with that particular learning [Par. 0049-0050]; HASHIMOTO teaches machine learning to execute estimation tasks, including prediction such as classification, given training data, to make an inference task [Par. 0007-0008]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

US 20220358361 A1 (OTSUKA et al.) teaches a generation apparatus that includes a generation unit configured to use a machine learning model learned in advance, with a document as an input, to extract one or more ranges that are likely to be answers in the document and generate a question representation whose answer is each of the extracted ranges.

US 8442926 teaches learning document data which belongs to a previously specified document, and generating first learning result information of the plurality of pieces of learning document data showing whether the first classified information matches the correct answer information.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PIERRE MICHEL BATAILLE, whose telephone number is (571) 272-4178. The examiner can normally be reached Monday - Thursday, 7-6 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, TIM VO, can be reached at (571) 272-3642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users.
To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PIERRE MICHEL BATAILLE/
Primary Examiner, Art Unit 2138