DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
The following is a Final Office Action in response to applicant’s filing on October 30, 2025. Claims 1-3, 5-8, 10-13, 15, 17-18, and 20 have been amended. Claims 1-20 are pending, of which claims 1, 11, and 20 are in independent form.
Response to Amendment
The amendment filed 10/30/2025 has been entered. Amendments to the claims have overcome the previous rejection under 35 U.S.C. § 112(a) of the independent claims. Thus, that rejection has been withdrawn.
However, claims 5 and 15 are rejected under 35 U.S.C. § 112(a) as failing to comply with the written description requirement.
Response to Arguments
Applicant’s arguments in the remarks submitted on October 30, 2025 have been carefully and respectfully considered, but they are not persuasive.
Claim Rejections - 35 USC § 112(a)
The examiner acknowledges that removal of the limitation “retrain the classification model based on the medical imaging file” from the independent claims overcomes the rejection of those claims under 35 U.S.C. § 112(a), and that rejection is withdrawn. However, claims 5 and 15 as amended continue to recite “retraining the classification model based on the medical imaging file”. The examiner notes that the specification does not clearly distinguish initial model training from post-deployment retraining. Moreover, the disclosure treats the classification model as fixed with respect to new medical imaging files, not as an adaptive learning model that is retrained on them. Therefore, claims 5 and 15 are rejected under 35 U.S.C. § 112(a).
Claim Rejections - 35 USC § 103
On pages 9-12 of the remarks, Applicant argues that “Neither Goswami nor Briliauskas, alone or in combination, teaches … "evaluating the medical imaging file using a classification model to...identify suspected anomalous or malicious data within the medical imaging file" and "based at least in part on determining that the first score meets or exceeds a first threshold, modifying the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the one or more images in the data set" as recited in the amended independent claim 1”. The examiner respectfully disagrees and has a different view of the prior art teachings and claim interpretation. The examiner relies on Goswami to teach “evaluating the medical imaging file using a classification model to: i) generate a first score representative of a likelihood that the medical imaging file contains anomalous or malicious data”. Goswami discloses that, in one possible implementation, the input data set may represent a medical image, such as an x-ray image, CT scan image, MRI image, or the like, that is to have portions of the image, or the image as a whole, classified into one or more predefined classifications (medical images as inputs); see paragraphs [0093]-[0094]. Goswami discloses that the CAID system 100 provides a mechanism for detecting adversarial inputs that are part of an attack on the target ML model 104 and then initiating mitigation operations to minimize the effects of such attacks; see paragraphs [0066]-[0068]. In addition, Goswami discloses that rather than the input image based classification 106 comprising a vector output, the input image based classification 106 may be the final classification along with the corresponding probability value for that classification, also sometimes referred to as the confidence score for the classification.
Thus, the term “classification” or “class” in the context of the output generated by the machine learning model may refer to either a vector output with probability values or scores associated with different predefined categories (or classifications), or a one-hot or binary output indicating a classification of the input; see paragraphs [0071]-[0072]. The classifier therefore outputs probability/confidence values for classifications, which function as a score representative of likelihood. Under the broadest reasonable interpretation, whether the likelihood is framed as adversarial, anomalous, or malicious does not narrow the functional role of the score.
Furthermore, Goswami discloses that the misclassification that the adversarial input intends to cause is often referred to as the “target” label (t) generated by the computing model based on the input data, whereas the correct or “true” label (t0) is the label that the computing model should output for the original (non-perturbed) input data; see paragraph [0017]. Goswami further discloses evaluating input data sent to the request processing pipeline, detecting adversarial inputs in the input data, and initiating mitigation operations to mitigate the effects of adversarial inputs; see paragraph [0092]. Therefore, Goswami treats adversarial perturbations as data embedded within the image that cause misclassification, and identifies when such perturbations are present.
Further, Applicant argues that “Briliauskas cannot reasonably be mapped to "evaluating the medical imaging file using a classification model to...identify suspected anomalous or malicious data within the medical imaging file" and "based at least in part on determining that the first score meets or exceeds a first threshold, modifying the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the one or more images in the data set”.
The examiner respectfully disagrees and has a different view of the prior art teachings and claim interpretation. Briliauskas teaches extracting and removing malicious portions of a file and reconstructing a remaining file that preserves non-malicious information. The claims do not recite any technical limitation restricting the manner of removal beyond removing suspected anomalous data while retaining other data. Both references therefore address the detection and mitigation of anomalous data in files using machine-learning-based analysis, and incorporating Briliauskas’s selective removal and reconstruction techniques into Goswami’s medical-imaging pipeline would have been obvious to one of ordinary skill in the art. Therefore, Applicant’s amendments and arguments do not overcome the rejections under 35 U.S.C. § 103.
The same reasons apply to independent claims 11 and 20, and to the dependent claims at least by virtue of their dependencies.
Therefore, the examiner maintains the rejection under 35 USC § 103.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL. — The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 recites “the valid metadata corresponding to a portion of the metadata that is not included in the suspected anomalous or malicious data”; there is no disclosure as to how “a portion of the metadata” is partitioned (i.e., “if the score meets or exceeds a first threshold, generate a modified version of the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the images in the data set”; see paragraph [0003]). The disclosure does not describe identifying and partitioning “the valid metadata corresponding to a portion of the metadata”. There is no disclosure of any mechanism for identifying, classifying, or partitioning metadata within a medical imaging file. In particular, the disclosure does not define how metadata is evaluated to determine validity.
Claim 5 recites “retraining the classification model based on the medical imaging file”; there is no disclosure of techniques or algorithms that would be suitable for performing such retraining of the classification model (i.e., “the classification model of anomaly detector 212 is periodically or continuously updated/retrained as new medical imaging files are received and predicted not to contain anomalies”; see paragraph [0053]). The disclosure treats the classification model as fixed with respect to new medical imaging files, not as an adaptive learning model that is retrained on them. The same reasons apply to dependent claim 15.
The level of detail required to satisfy the written description requirement varies depending on the nature and scope of the claims and on the complexity and predictability of the relevant technology. Ariad, 598 F.3d at 1351, 94 USPQ2d at 1172; Capon v. Eshhar, 418 F.3d 1349, 1357-58, 76 USPQ2d 1078, 1083-84 (Fed. Cir. 2005). Computer-implemented inventions are often disclosed and claimed in terms of their functionality. For computer-implemented inventions, the determination of the sufficiency of disclosure will require an inquiry into the sufficiency of both the disclosed hardware and the disclosed software due to the interrelationship and interdependence of computer hardware and software. The critical inquiry is whether the disclosure of the application relied upon reasonably conveys to those skilled in the art that the inventor had possession of the claimed subject matter as of the filing date. Vasudevan Software, Inc. v. MicroStrategy, Inc., 782 F.3d 671, 682, 114 USPQ2d 1349, 1356 (Fed. Cir. 2015) (citing Ariad Pharm., Inc. v. Eli Lilly & Co., 598 F.3d 1336, 1351, 94 USPQ2d 1161, 1172 (Fed. Cir. 2010) in the context of determining possession of a claimed means of accessing disparate databases).
The same reasons apply to independent claims 11 and 20, and to the dependent claims at least by virtue of their dependencies.
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION. — The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation “the valid metadata corresponding to a portion of the metadata that is not included in the suspected anomalous or malicious data”. This limitation renders the claim indefinite because the specification does not provide any objective criteria for determining how and when metadata is “valid”. In particular, the disclosure does not define how metadata is evaluated to determine validity. As a result, a person of ordinary skill in the art would not be able to ascertain the scope of the claim with reasonable certainty.
The same reasons apply to independent claims 11 and 20, and to the dependent claims at least by virtue of their dependencies.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9-15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Goswami et al. (US 2021/0056404 A1), hereinafter Goswami, in view of Briliauskas et al. (US 11,693,965 B1), hereinafter Briliauskas.
In regards to claim 1, Goswami discloses a system comprising: one or more processors (Goswami, Para. 0148, a data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a communication bus, such as a system bus); and
one or more memories storing processor-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform operations (Goswami, Para. 0148, the memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution):
obtaining a medical imaging file comprising a header and a data set, wherein the header includes metadata associated with the medical imaging file (Goswami, Para. 0083, a training dataset 210 is provided that comprises training image data 212 and image metadata 214. The image metadata 214 comprises metadata indicating a correct classification for the corresponding image data 212. Thus, for each image in the training dataset 210, there is a set of image data 212 and a corresponding image metadata 214) and the data set includes one or more images captured by a medical imaging device (Goswami, Para. 0093, the input data set may represent a medical image, such as an x-ray image, CT scan image, MRI image, or the like, that is to have portions of the image, or the image as a whole, classified into one or more predefined classifications);
evaluating the medical imaging file using a classification model to: i) generate a first score representative of a likelihood that the medical imaging file contains anomalous or malicious data (Goswami, Paras. 0071-0072, thus, the term “classification” or “class” in the context of the output generated by the machine learning model may refer to either a vector output with probability values or scores associated with different predefined categories (or classifications), or a one-hot or binary output indicating a classification of the input. For purposes of the present description of an example embodiment, the classification or class will be considered to be a vector output comprising probability values or scores indicating the likelihood that a corresponding class is a correct classification for the input to the machine learning model; see also Paras. 0093-0094), and the valid metadata corresponding to a portion of the metadata that is not included in the suspected anomalous or malicious data (Goswami, Para. 0068, the similar images retrieved from the image repository 114 are guaranteed to be clean (non-attacked or non-adversarial) images because they are chosen to be part of the image repository 114 and hence adversarial images would not be selected for inclusion in the image repository 114 or an image repository 114 that has adversarial images would not be used. Thus, the cohort based classification 122 output by the cohort based ML classifier 120 should be significantly similar, i.e.
within a given tolerance or threshold difference, of the input image based classification 106 generated by the target ML classifier 104 in the absence of an adversarial input) and (Goswami, para.0083, the image metadata 214 can be considered a machine learning ground truth data structure in that the image metadata 214 provides the actual true classification for the corresponding image, against which the output of the target ML classifier 104 may be compared in order to perform the machine learning training operation).
Goswami does not explicitly disclose ii) identify suspected anomalous or malicious data within the medical imaging file; based at least in part on determining that the first score meets or exceeds a first threshold, modifying the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the one or more images in the data set.
However, Briliauskas teaches ii) identify suspected anomalous or malicious data within the medical imaging file (Briliauskas, Col. 5, Lines 10-15 and Lines 64-67, the remote device labels a file as malicious if a match is identified between the file characterization information of the file and the file characterization information of one of the plurality of known malicious files);
based at least in part on determining that the first score meets or exceeds a first threshold (Briliauskas, Col. 8, Lines 38-42, predicting whether the file is malicious includes generating, by the first malware detection model, a maliciousness score for the file, where the file is labeled as malicious if the maliciousness score meets or exceeds a threshold), modify the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the one or more images in the data set (Briliauskas, Col. 8, Lines 55-58, responsive to predicting that a file is malicious using the first malware detection model, extracting features of the file, where the extracted features do not include any sensitive information related to the file); and
Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include ii) identify suspected anomalous or malicious data within the medical imaging file (Briliauskas, Col. 5, Lines 64-67); based at least in part on determining that the first score meets or exceeds a first threshold (Briliauskas, Col. 8, Lines 38-42), modify the medical imaging file by removing the suspected anomalous or malicious data from the medical imaging file while retaining valid metadata in the header and the one or more images in the data set (Briliauskas, Col. 8, Lines 55-58). Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 2, the combination of Goswami and Briliauskas teaches the system of claim 1, the operations further comprising: evaluating the modified medical imaging file using the classification model to generate a second score representative of a likelihood that the modified medical imaging file contains anomalous or malicious data; and based at least in part on determining that the second score meets or exceeds the first threshold, quarantining the medical imaging file or flagging the medical imaging file for additional review (Briliauskas, Col. 12, Lines 15-26, once a file is labeled (e.g., “clean” or “malicious”), the file is stored with its label in the training data set on the client device 300. Additionally, or alternatively, malicious files may be quarantined, deleted, etc., and/or an alert may be presented to a user recommending that the malicious file be removed. If, however, a match in the malware properties database is not identified for a local file and the maliciousness of the file cannot be accurately predicted, the file may be flagged for additional evaluation, quarantined, discarded, and/or removed from the training data). Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include the instructions further causing the system to: evaluate the modified medical imaging file using the classification model to generate a second score representative of a likelihood that the modified medical imaging file contains anomalous or malicious data; and if the second score meets or exceeds the first threshold, quarantine the medical imaging file or flag the medical imaging file for additional review (Briliauskas, Col. 12, Lines 15-26).
Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 3, the combination of Goswami and Briliauskas teaches the system of claim 1, the operations further comprising: determining the first score is less than a second threshold (Briliauskas, Col. 12, Lines 13-18), wherein the medical imaging file is stored without modification if the first score is less than the second threshold, the second threshold being lower than the first threshold (Briliauskas, Col. 12, Lines 13-18, a file is only labeled as clean if the maliciousness score, generated by the model, is below a first threshold (e.g., 0.5) or if a confidence score of the prediction is above a second threshold (e.g., above 0.8). Once a file is labeled (e.g., “clean” or “malicious”), the file is stored with its label in the training data set on the client device 300). Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include wherein the medical imaging file is stored without modification if the first score is less than a second threshold, wherein the second threshold is lower than the first threshold (Briliauskas, Col. 12, Lines 13-18). Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 4, the combination of Goswami and Briliauskas teaches the system of claim 1, wherein the medical imaging file is used to retrain the classification model if the first score is between the first threshold and a second threshold, wherein the second threshold is lower than the first threshold (Briliauskas, Col. 10, Lines 65-68 and Col. 11, Lines 1-8, the output of the model is a malicious “score” (e.g., a fraction from 0-1) which indicates a predicted likelihood that the file is malicious. For example, a file with a maliciousness score of 0.86 or 86% is highly likely to be malicious. In some embodiments, the model outputs both a classification for the file (e.g., malicious or not malicious) and a confidence score, which indicates a confidence level of the prediction. For example, an output with a low confidence score (e.g., less than 0.5 or 50%) indicates that the classification for the file may be inaccurate).
Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include wherein the medical imaging file is used to retrain the classification model if the first score is between the first threshold and a second threshold, wherein the second threshold is lower than the first threshold (Briliauskas, Col. 10, Lines 65-68 and Col. 11, Lines 1-8). Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 5, the combination of Goswami and Briliauskas teaches the system of claim 1, wherein the first score is less than the first threshold (Briliauskas, Col. 26, Lines 9-14, for example, a file may only be labeled as malicious if the confidence score exceeds 0.7 or 70%. If the confidence score is below 0.7, then the file may be labeled as clean. In some embodiments, a second threshold may be set for labeling a file as “clean”), and wherein the operations further comprise at least one of (i) storing the medical imaging file without said modification (Briliauskas, Col. 12, Lines 17-19, once a file is labeled (e.g., “clean” or “malicious”), the file is stored with its label in the training data set on the client device 300), or (ii) retraining the classification model based on the medical imaging file. Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include wherein the first score is less than the first threshold (Briliauskas, Col. 26, Lines 9-14), and wherein the operations further comprise at least one of (i) storing the medical imaging file without said modification (Briliauskas, Col. 12, Lines 17-19). Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 9, the combination of Goswami and Briliauskas teaches the system of claim 1, wherein the classification model is one of a multi-layer perceptron (MLP) model, a support vector machine (SVM) model, random forest model, or a convolutional neural network (CNN) (Goswami, Para. 0085, for purposes of the present description, it is again assumed that the computer model 104 is a CNN that is trained to perform an image classification operation on input data that represents one or more images, and thus the computer model is identified as a target ML classifier 104).
In regards to claim 10, the combination of Goswami and Briliauskas teaches the system of claim 1, wherein the classification model is a first classification model and the first score is representative of a likelihood that the medical imaging file contains an anomaly (Briliauskas, Col. 8, Lines 38-42, predicting whether the file is malicious includes generating, by the first malware detection model, a maliciousness score for the file, where the file is labeled as malicious if the maliciousness score meets or exceeds a threshold), the operations further comprising evaluating the medical imaging file using a second classification model that generates a second score representative of a likelihood that the medical imaging file contains malware (Briliauskas, Col. 6, Lines 18-20, the operations further include predicting a maliciousness of at least one additional local file using the second malware detection model). Goswami and Briliauskas are both considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami to incorporate the teachings of Briliauskas to include evaluating the medical imaging file using a second classification model that generates a second score representative of a likelihood that the medical imaging file contains malware (Briliauskas, Col. 6, Lines 18-20). Doing so would allow the federated learning methods described herein to leverage the unique files stored on each client device, which can result in a more robust and accurate model that reflects client preferences (Briliauskas, Col. 10, Lines 1-3).
In regards to claim 11, the method of claim 11 is analyzed and rejected for the same reasons as system claim 1.
In regards to claim 12, the method of claim 12 is analyzed and rejected for the same reasons as system claim 2.
In regards to claim 13, the method of claim 13 is analyzed and rejected for the same reasons as system claim 3.
In regards to claim 14, the method of claim 14 is analyzed and rejected for the same reasons as system claim 4.
In regards to claim 15, the method of claim 15 is analyzed and rejected for the same reasons as system claim 5.
In regards to claim 19, the method of claim 19 is analyzed and rejected for the same reasons as system claim 9.
In regards to claim 20, the non-transitory, computer-readable medium of claim 20 is analyzed and rejected for the same reasons as system claim 1 and method claim 11.
Claims 6-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Goswami et al. (US 2021/0056404 A1), hereinafter Goswami, in view of Briliauskas et al. (US 11,693,965 B1), hereinafter Briliauskas, and further in view of Prosky et al. (US 2020/0160978 A1), hereinafter Prosky.
In regards to claim 6, the combination of Goswami and Briliauskas does not teach the system of claim 1, wherein the one or more processors and the one or more memories are components of an edge server of a picture archiving and communication system (PACS), and wherein the medical imaging file is received by the edge server from the medical imaging device.
However, Prosky teaches wherein the processor and the memory are components of an edge server of a picture archiving and communication system (PACS), and wherein the medical imaging file is received by the edge server from the medical imaging device (Prosky, Para. 0259, in various embodiments, the medical picture archive system is a Picture Archive and Communication System (PACS) server, and the first DICOM image is received in response to a query sent to the medical picture archive system by the transmitter in accordance with a DICOM communication protocol).
Goswami, Briliauskas, and Prosky are all considered to be analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to someone of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami and Briliauskas to incorporate the teachings of Prosky to include wherein the processor and the memory are components of an edge server of a picture archiving and communication system (PACS), and wherein the medical imaging file is received by the edge server from the medical imaging device (Prosky, Para. 0259). Doing so would allow the model parameters to update over time to improve existing inference functions and/or to add new inference functions, for example corresponding to new scan categories. In particular, some or all of the de-identified medical scans generated by the de-identification system 2608 can be transmitted back to the central server system, and the central server system 2640 can train on this data to improve existing models by producing updated model parameters of an existing inference function and/or to generate new models, for example, corresponding to new scan categories, by producing new model parameters for new inference functions (Prosky, Para. 0167).
Regarding claim 16, the method of claim 16 is analyzed and rejected in the same manner as system claim 6.
Regarding claim 7, the combination of Goswami and Briliauskas does not teach the system of claim 1, the operations further comprising converting the images in the data set of the medical imaging file to greyscale prior to evaluating the medical imaging file using the classification model.
However, Prosky teaches the instructions further causing the system to convert the images in the data set of the medical imaging file to greyscale prior to evaluating the medical imaging file using the classification model (Prosky, Para. 0259, contrasting parameters and/or density windowing may have already been applied and/or the image data may have undergone other pre-processing to convert density values to greyscale values) and (Para. 0260, the input can correspond to density values of raw sensor data, and the output can correspond to greyscale values of a JPEG).
Goswami, Briliauskas, and Prosky are all considered analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami and Briliauskas to incorporate the teachings of Prosky to include the instructions further causing the system to convert the images in the data set of the medical imaging file to greyscale prior to evaluating the medical imaging file using the classification model (Prosky, Para. 0259). Doing so would allow the model parameters to be updated over time to improve existing inference functions and/or to add new inference functions, for example corresponding to new scan categories. In particular, some or all of the de-identified medical scans generated by the de-identification system 2608 can be transmitted back to the central server system, and the central server system 2640 can train on this data to improve existing models by producing updated model parameters of an existing inference function and/or to generate new models, for example corresponding to new scan categories, by producing new model parameters for new inference functions (Prosky, Para. 0167).
Regarding claim 17, the method of claim 17 is analyzed and rejected in the same manner as system claim 7.
Regarding claim 8, the combination of Goswami and Briliauskas does not teach the system of claim 1, wherein the medical imaging file is a digital imaging and communications in medicine (DICOM) file.
However, Prosky teaches the system of claim 1, wherein the medical imaging file is a DICOM file (Prosky, Para. 0150, the receiver can receive DICOM images from the medical picture archive system 2620. The transmitter 2604 can send annotated DICOM files to the medical picture archive system 2620).
Goswami, Briliauskas, and Prosky are all considered analogous to the claimed invention because they are in the same field of detecting anomalies and/or malware in imaging files. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Goswami and Briliauskas to incorporate the teachings of Prosky to include wherein the medical imaging file is a DICOM file (Prosky, Para. 0150). Doing so would allow the model parameters to be updated over time to improve existing inference functions and/or to add new inference functions, for example corresponding to new scan categories. In particular, some or all of the de-identified medical scans generated by the de-identification system 2608 can be transmitted back to the central server system, and the central server system 2640 can train on this data to improve existing models by producing updated model parameters of an existing inference function and/or to generate new models, for example corresponding to new scan categories, by producing new model parameters for new inference functions (Prosky, Para. 0167).
Regarding claim 18, the method of claim 18 is analyzed and rejected in the same manner as system claim 8.
Conclusion
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GITA FARAMARZI whose telephone number is (571)272-0248. The examiner can normally be reached Monday- Friday 9:00 am- 6:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jorge L. Ortiz-Criado can be reached at (571) 272-7624. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GITA FARAMARZI/Examiner, Art Unit 2496
/JORGE L ORTIZ CRIADO/Supervisory Patent Examiner, Art Unit 2496