Prosecution Insights
Last updated: April 19, 2026
Application No. 18/027,931

ANONYMOUS FINGERPRINTING OF MEDICAL IMAGES

Final Rejection §103

Filed: Mar 23, 2023
Examiner: BEATTY, TY MITCHELL
Art Unit: 2663
Tech Center: 2600 (Communications)
Assignee: Koninklijke Philips N.V.
OA Round: 2 (Final)

Grant Probability: 70% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 70% (above average): 19 granted / 27 resolved, +8.4% vs TC avg
Interview Lift: +42.3% (strong), measured across resolved cases with interview
Typical Timeline: 3y 1m avg prosecution, 15 currently pending
Career History: 42 total applications across all art units

Statute-Specific Performance

§101: 7.1% (-32.9% vs TC avg)
§103: 42.8% (+2.8% vs TC avg)
§102: 27.1% (-12.9% vs TC avg)
§112: 23.1% (-16.9% vs TC avg)

Tech Center averages are estimates. Based on career data from 27 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The Amendment filed 5 September 2025 (hereinafter “the Amendment”) has been entered and considered. Claims 1, 14, and 15 have been amended. Claims 1-15, all the claims pending in the application, are rejected. All modifications to the grounds of rejection set forth in the present action were necessitated by Applicant’s claim amendments; accordingly, this action is made final.

Response to Amendment

Prior Art Rejections

1. On page 2 of the Amendment, the Applicant contends that the prior art of record (Bae, Toyoda, Bouslimi, and Buras), alone or in combination, does not teach or suggest the feature newly added to independent claims 1, 14, and 15, which recites, “wherein the image assessment is provided by comparison of the anonymized image fingerprint to image fingerprints in the historical image database.”

Claim Rejections - 35 USC § 103

2. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

3. Claims 1-7, 9-11, and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over “AnomiGAN: Generative Adversarial Networks for Anonymizing Private Medical Data” by Ho Bae et al. (hereinafter “Bae”), in view of US 20200066394 A1: Tetsuya Toyoda et al. (hereinafter “Toyoda”), and in further view of “A Review of Medical Image Watermarking Requirements for Teleradiology” by Hussain Nyeem et al. (hereinafter “Nyeem”).

Regarding claim 1: A medical system comprising:

- a memory configured to store machine executable instructions and at least one trained neural network (Bae, §Abstract: “The code is available at https://github.com/hobae/AnomiGAN/”), wherein each of the at least one neural network is configured for receiving a medical image as input (contemplated by Bae, §2.4: “GANs have achieved astonishing results in synthetic image generation”, and furthermore in §5: “when a patient consents to the use of medical diagnostic techniques, the propagation of that information to a third party cannot guarantee that the same privacy policies”, where medical imaging is a medical diagnostic technique. 
Bae discloses further in §3.3: “we have constructed as input entries of the k × m medical record x ∈ X^(k×m)”, where the medical record data is left undefined by Bae.) Even though Bae does not explicitly disclose that the input to their system is a medical image, this feature is explicitly disclosed by Toyoda in the §Abstract: “A medical image management system creating, sharing and managing a medical image data file-”

Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bae to include medical images within their medical record data, as taught by Toyoda, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of increasing security in transmission of medical image data between parties.

- wherein each of the at least one trained neural network comprises multiple hidden layers (Bae, Fig. 3 discloses a plurality of hidden layers, where ReLU is used as the activation function for the hidden layers of the neural network), wherein each of the at least one trained neural network has been modified to provide hidden layer output in response to receiving the medical image (where ReLU of Bae provides the hidden layers, and Toyoda provides medical images as input, as discussed above), wherein the hidden layer output is outputted directly from one or more of the multiple hidden layers (Bae, Figs. 2-3 disclose that the output of the encoder is used as the input for the discriminator, which provides the hidden layer as the output);

- a computational system (Bae, Fig. 
2-3 discloses the computational system and architecture), wherein execution of the machine executable instructions causes the computational system to:

- receive the medical image (where Toyoda provides the medical image in Bae’s medical record data; furthermore, Bae discloses in the description of Fig. 3: “The encoder accepts x and r as input”);

- receive the hidden layer output in response to inputting the medical image into each of the at least one trained neural network (Bae, Fig. 3: “The discriminator takes an original input and output of the encoder to output probabilities from the last fully connected layer. The target classifier takes an input ˆx and outputs the prediction score.”);

- provide an anonymized image fingerprint comprising the hidden layer output from each of the at least one trained neural network (Bae, §3: “The encoder generates synthetic data with the aim of mimicking the input data”, and furthermore in §3.1: “x̂ is the anonymized output corresponding to x and r”); and

- receive an image assessment of the medical image in response to querying a historical image database using the anonymized image fingerprint is disclosed by Bae in Fig. 2, where it discloses that the anonymized output is linked to the service provider, which is the historical image database where the image is assessed in §4.2: “Many services are incorporate disease classifiers using machine learning techniques. For our experiments, we selected breast cancer, chronic kidney disease, heart disease and prostate cancer models from the kaggle competitions as the target classifiers.” Furthermore, Bae discloses online medical services, medical research, and other third parties in Fig. 1.

- wherein the image assessment is provided by comparison of the anonymized image fingerprint (Bae, Fig. 2, x̂; §3.1: “x̂ is the anonymized output corresponding to x and r”) to image fingerprints in the historical image database, where the historical image database is provided by Bae in Fig. 2, Service provider, where x̂ is sent to the database “D”, which contains information relating to diagnosing breast cancer, chronic kidney disease, heart disease, and prostate cancer. The dataset (the breast cancer dataset, for example) contains image data; the provided anonymized image x̂ is then compared to the image data from the breast cancer dataset in order to provide diagnosis results.

The combination of Bae and Toyoda does not explicitly disclose that the images/image data used for comparison are also fingerprinted/watermarked. 
However, Nyeem discloses in §Digital Watermarking in Teleradiology, §§Choice of Design and Evaluation Parameters, P[001]: “Fidelity requirements guarantee that the watermarked medical images are useable for diagnosis and other clinical uses.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bae and Toyoda to rely on fingerprinted/watermarked images for image comparison and diagnosis, as taught by Nyeem, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of increased security/anonymity for medical image data.

Regarding claim 2: wherein the historical image database is queried via a network connection is disclosed by Bae in Fig. 1, where it shows connected networks. Furthermore, Bae discloses in the description of Fig. 
1: “Patient’s medical data are transferred to the online medical service”

Regarding claim 3: The medical system of claim 1 or 2, wherein the image assessment comprises at least one of the following:
- an identification of one or more image artifacts;
- an assignment of an image quality value;
- a retrieved diagnostic guideline;
- instructions to repeat the measurement of the medical image;
- suggestion of follow up acquisition of additional medical images;
- an identification of image acquisition problems;
- an identification of an incorrect field of view;
- an identification of an improper subject positioning;
- an identification of irregular subject inspiration;
- an identification of metal artifacts;
- an identification of motion artifacts;
- an identification of foreign objects in the image;
- medical image scan planning instructions; or
- a set of workflow recommendations
is disclosed by Bae in §I, P[004]: “We evaluated the proposed method using target classifiers for four diseases-”, where the detected presence of the diseases identifies foreign objects such as tumors.

Regarding claim 4: wherein the medical system comprises the historical image database, wherein the historical image database is configured to provide the image assessment by: identifying a set of similar images by comparing the anonymized image fingerprint to image fingerprints in the historical image database, wherein the set of similar images each comprises historical data is disclosed by Bae, where Fig. 2 discloses the service provider containing the historical images and data. Furthermore, Bae discloses in Fig. 1: “Patient’s medical data are transferred to the online medical service that, in turn, provides diagnostic results-” Furthermore, Bae discloses in §I, P[004]: “We evaluated the proposed method using target classifiers for four diseases-”, where the anonymized fingerprint is compared to other images in the class for detecting diseases. 
providing at least a portion of the historical data as the image assessment is disclosed by Bae in §I, P[004]: “We evaluated the proposed method using target classifiers for four diseases-”, where the class is the portion provided from the historical label data.

Regarding claim 5: wherein the comparison between the anonymized image fingerprint to image fingerprints in the historical image database is performed using at least one of the following:
- applying a similarity measure to the anonymized image fingerprint and each of the image fingerprints;
- applying a learned similarity measure to the anonymized image fingerprint and each of the image fingerprints;
- applying a metric to the anonymized image fingerprint and each of the image fingerprints;
- calculating a Minkowski distance between the anonymized image fingerprint and each of the image fingerprints;
- calculating a Mahalanobis distance between the anonymized image fingerprint and each of the image fingerprints;
- applying a cosine similarity measure to a difference between the anonymized image fingerprint and each of the image fingerprints; or
- using a trained vector comparison neural network
is disclosed by Bae, where the similarity measurement is the confidence score attached to the classifier for the diseases disclosed in §I, P[004]: “We evaluated the proposed method using target classifiers for four diseases-”, and the description of Fig. 3 discloses “The target classifier takes an input ˆx and outputs the prediction score.”, where the score is based on a similarity determination to the ground truth. 
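For readers unfamiliar with the claim 5 alternatives, comparing an anonymized fingerprint against stored fingerprints with one of the listed measures (cosine similarity, here) can be sketched in a few lines. All fingerprints, dimensions, and names below are hypothetical stand-ins for illustration, not data from the record or the cited art:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query: np.ndarray, database: dict) -> str:
    """Return the key of the stored fingerprint closest to the query."""
    return max(database, key=lambda k: cosine_similarity(query, database[k]))

# Hypothetical 128-dim fingerprints standing in for hidden-layer outputs.
rng = np.random.default_rng(0)
db = {f"case_{i}": rng.standard_normal(128) for i in range(100)}

# A query fingerprint that is a slightly perturbed copy of one database entry.
query = db["case_42"] + 0.05 * rng.standard_normal(128)

print(most_similar(query, db))  # the near-duplicate: "case_42"
```

A Minkowski or Mahalanobis distance would slot into `most_similar` the same way, with `min` in place of `max` since smaller distances mean closer matches.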
Regarding claim 6: wherein the neural network is at least one of the following:
- a pretrained image classification neural network;
- a pretrained image segmentation neural network;
- a U-Net neural network;
- a ResNet neural network;
- a DenseNet neural network;
- an EfficientNet neural network;
- an Xception neural network;
- an Inception neural network;
- a VGG neural network;
- an auto-encoder neural network;
- a recurrent neural network;
- a LSTM neural network;
- a feedforward neural network;
- a multi-layer perceptron; or
- a network resulting from a neural network architecture search
is disclosed by Bae in §3.2, P[003]: “The target classifier is a fixed pre-trained model and exploited in the GANs training model.”

Regarding claim 7: wherein the provided hidden layer output is provided from at least one of the following:
- a convolutional layer;
- a dense layer;
- an activation layer;
- a pooling layer;
- an unpooling layer;
- a normalization layer;
- a padding layer;
- a dropout layer;
- a recurrent layer;
- a transformer layer;
- a linear layer;
- a resampling layer; or
- an embedded representation from an autoencoder
is disclosed by Bae in Fig. 3, where the ReLU layer is an activation layer.

Regarding claim 9: wherein the medical system further comprises a medical imaging system, wherein execution of the machine executable instructions further causes the computational system to:
- control the medical imaging system to acquire medical image data (Toyoda, P[0002]: “Of course, an invention related to a program medium configured to control the apparatuses is also included.”, where the program controls a camera to acquire medical image data); and
- reconstruct the medical image from the medical imaging data (Toyoda, P[0002]: “Of course, an invention related to a program medium configured to control the apparatuses is also included.”, where the program controls a camera to acquire and reconstruct medical image data to provide a medical image). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bae to include automatic medical image data acquisition and reconstruction, as taught by Toyoda, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of reducing the need for manual image acquisition and reconstruction.

Regarding claim 10: wherein the medical image system is at least one of the following: a magnetic resonance imaging system, a computed tomography system, an ultrasonic imaging system, an X-ray system, a fluoroscope, a positron emission tomography system, and a single photon emission computed tomography system is disclosed by Toyoda in P[0011]: “more specifically, the medical image includes, for example, a radiograph (X-ray image), a computed tomography (CT) image, an MRI (magnetic resonance imaging) image, an ultrasonic tomographic image, an angiographic image, an endoscopic image, a thermographic image and a microscope image. 
In addition, general photographing images and the like acquired using a common communication-function-equipped image pickup apparatus (such as a digital camera) or an image acquisition apparatus such as an image-pickup/communication-function-equipped information terminal apparatus (a smartphone, a tablet PC or the like) may be included.”

Regarding claim 11: wherein the anonymized image fingerprint further comprises metadata (Toyoda, P[0021]: “one image data file is constructed by an image data body and metadata, which is accompanying information accompanying the image data body.”) descriptive of a configuration of the medical imaging system during acquisition of the medical image data (Toyoda, P[0064]: “Photographing conditions at the time of acquiring image data may be included in “medical-related particular-use information”. Especially, in the case of a medical image, clarity of peripheral information such as information about who has requested photographing and information about who (or what) has been photographed is required. Therefore, it is preferable that such peripheral information is also included in the same image data file as metadata.”)

Furthermore, Toyoda discloses in P[0065] the configuration of the medical imaging system: “In addition to such control, the control circuit performs exposure control and focusing control performed in a general camera, and may further have a function of confirming and identifying a target object.” Furthermore, the image itself contains the orientation at which the image was taken, as shown in Fig. 1, where the orientation of the camera is horizontal. Furthermore, Toyoda discloses in Fig. 2, Element 10a, that “Photographing Conditions” data information is shared and stored. 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Bae to transmit medical-related particular-use information about a particular individual as metadata, as taught by Toyoda, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of being able to match photographing conditions to reduce variability between images.

Claims 14 and 15 recite features nearly identical to those recited in claim 1. Claims 14 and 15 are rejected for reasons analogous to those discussed above in conjunction with claim 1.

4. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bae, Toyoda, and Nyeem in view of “Using A Bag of Words for Automatic Medical Image Annotation With A Latent Semantic” by Raidh Bouslimi et al. (hereinafter “Bouslimi”).

Regarding claim 8: Bae contemplates in Fig. 1 that descriptors relating to medical data records are acquired and presented, and when the data is anonymized, the anonymized fingerprint data contains the set of descriptors. Bae does not explicitly disclose that they use a bag of words model along with images to provide the description. That is, Bae does not explicitly disclose “wherein the memory further stores a bag-of-words model configured to output a set of image descriptors in response to receiving the medical image, wherein execution of the machine executable instructions further comprises receiving the set of image descriptors in response to inputting the medical image into the bag-of-words model, wherein the anonymized image fingerprint further comprises the set of image descriptors.” However, Bouslimi discloses in Fig. 
1-2 the annotation of medical images using a vocabulary of visual words to annotate the images in response to inputting the image into the bag of words model.

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bae and Toyoda to include processing the anonymized medical image with a bag of words model to provide descriptors/annotations of the medical image, as taught by Bouslimi, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of reducing the need for manual annotation of the image.

5. Claims 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bae, Toyoda, and Nyeem in view of US 11017695 B2: William Buras et al. (hereinafter “Buras”).

Regarding claim 12: Bae contemplates classification of various diseases and treatment strategies, but does not explicitly disclose additional scanning. 
That is, the combination of Bae and Toyoda does not explicitly disclose “wherein the image assessment comprises scan planning instructions.” However, Buras discloses providing scan planning instructions based on image assessment in P[0027]: “The library 500 includes detailed information on the medical equipment system 200, which may include instructions (written, auditory, and/or visually) for performing one or more medical procedures using the medical equipment system, and reference information or data in the use of the system to enable a novice user to achieve optimal outcomes (i.e., similar to those of an expert user) for those procedures.”, and Buras further discloses ultrasound scanning in P[0072]: “depending on how the ultrasound machine is configured for an ultrasound scan.”

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Bae and Toyoda to include scan planning instructions, as taught by Buras, to arrive at the claimed invention discussed above. Such a modification is the result of combining prior art elements according to known methods to yield predictable results. It is predictable that the proposed modification would have provided the benefit of enabling a novice user to achieve optimal outcomes similar to those of an expert user.

Regarding claim 13: wherein the medical system further comprises a display, wherein execution of the machine executable instructions further causes the processor to render at least the scan planning instructions on the display is disclosed by Buras in P[0010]: “display (HMD) for presenting information pertaining to both real and virtual objects to the user during the performance of the medical procedure”. Furthermore, Buras discloses in P[0017]: “includes a screen upon which virtual objects or information can be displayed to aid a medical equipment user in real-time”.

Conclusion

6. 
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TY M BEATTY, whose telephone number is (703) 756-5370. The examiner can normally be reached Mon-Fri, 8 AM-4 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Gregory Morse, can be reached at (571) 272-3838. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TY MITCHELL BEATTY/
Examiner, Art Unit 2663

/GREGORY A MORSE/
Supervisory Patent Examiner, Art Unit 2698
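The mechanism at the center of the claim 1 dispute, a trained network modified to emit a hidden layer's output directly and use it as an anonymized fingerprint, can be sketched in a few lines of numpy. Everything here is a hypothetical stand-in (random weights, a flattened 256-pixel "image", a 32-unit hidden layer); it is not code from the application or from Bae:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "trained" weights (random stand-ins): 256-pixel image
# -> 32-unit ReLU hidden layer -> 2-class output.
W1, b1 = rng.standard_normal((256, 32)), rng.standard_normal(32)
W2, b2 = rng.standard_normal((32, 2)), rng.standard_normal(2)

def relu(x):
    return np.maximum(x, 0.0)

def classify(image: np.ndarray) -> np.ndarray:
    """Normal forward pass: only the final prediction leaves the network."""
    return relu(image @ W1 + b1) @ W2 + b2

def fingerprint(image: np.ndarray) -> np.ndarray:
    """Modified forward pass: emit the hidden-layer output directly.

    This vector, not the image itself, is what would be sent to the
    historical image database for comparison.
    """
    return relu(image @ W1 + b1)

image = rng.standard_normal(256)  # stand-in for a flattened medical image
fp = fingerprint(image)
print(fp.shape)  # (32,) -- the 32-dim anonymized fingerprint
```

The point of contention in the rejection is the last step: whether the prior art also compares such a fingerprint against fingerprints already stored in a historical database, rather than against raw image data.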

Prosecution Timeline

Mar 23, 2023: Application Filed
Jun 02, 2025: Non-Final Rejection (§103)
Sep 05, 2025: Response Filed
Sep 29, 2025: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597275: VEHICLE INTERIOR MONITORING SYSTEM
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12579653: AUTOMATED METHOD FOR TOOTH SEGMENTATION OF THREE DIMENSIONAL SCAN DATA USING TOOTH BOUNDARY CURVE AND COMPUTER READABLE MEDIUM HAVING PROGRAM FOR PERFORMING THE METHOD
Granted Mar 17, 2026 (2y 5m to grant)

Patent 12555212: OBJECT DETECTION DEVICE AND METHOD FOR DETECTING MALFUNCTION OF OBJECT DETECTION DEVICE
Granted Feb 17, 2026 (2y 5m to grant)

Patent 12511787: METHOD, DEVICE AND SYSTEM OF POINT CLOUD COMPRESSION FOR INTELLIGENT COOPERATIVE PERCEPTION SYSTEM
Granted Dec 30, 2025 (2y 5m to grant)

Patent 12511750: IMAGE PROCESSING METHOD AND APPARATUS BASED ON IMAGE PROCESSING MODEL, ELECTRONIC DEVICE, STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
Granted Dec 30, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 70%
With Interview: 99% (+42.3%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 27 resolved cases by this examiner. Grant probability derived from career allow rate.
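One plausible reading of how these figures fit together (an assumption, since the page does not state its methodology): the grant probability is simply the examiner's career allow rate (19 granted of 27 resolved), and the interview figure applies the +42.3% relative lift to it, capped at 99%. The arithmetic checks out:

```python
# Assumed methodology, reconstructed from the page's own numbers:
# grant probability = career allow rate; interview figure = allow rate
# scaled by the +42.3% relative lift, capped at 99%.
granted, resolved = 19, 27
lift = 0.423

allow_rate = granted / resolved            # ~0.704
with_interview = min(allow_rate * (1 + lift), 0.99)

print(round(allow_rate * 100))       # 70, matching "Grant Probability: 70%"
print(round(with_interview * 100))   # 99, matching "With Interview: 99%"
```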
