Prosecution Insights
Last updated: April 19, 2026
Application No. 18/696,891

SYSTEMS AND METHODS FOR ANONYMIZATION OF IMAGE DATA

Status: Non-Final Office Action (§103), Round 1
Filed: Mar 28, 2024
Examiner: LANTZ, KARSTEN FOSTER
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Sloan-Kettering Institute For Cancer Research
Grant Probability: Favorable
Estimated OA Rounds: 1-2
Estimated Time to Grant: 2y 9m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs Tech Center average)
Interview Lift: +0.0% (minimal lift across resolved cases with interview)
Typical Timeline: 2y 9m average prosecution
Career History: 19 total applications across all art units; 19 currently pending

Statute-Specific Performance

§103: 73.8% (+33.8% vs TC avg)
§102: 14.3% (-25.7% vs TC avg)
§112: 11.9% (-28.1% vs TC avg)

Tech Center averages are estimates; figures are based on career data from 0 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that the application is a National Stage application of PCT/US22/45122. Priority to 63/249,896 with a priority date of 09/29/2021 is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 3/28/2024 has been considered and placed in the application file.

1st Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 5, 6, 7, 9, 12, 14, 17, and 18 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0037019 A1 (Covington et al.).

Claim 1

Regarding Claim 1, Covington et al. teach a method comprising:

obtaining, by a computing system, a medical image of a subject, the medical image comprising a set of slices ("The medical scan image analysis system 112 can be operable to receive a plurality of medical scans that represent a three-dimensional anatomical region and include a plurality of cross-sectional image slices," par. 47) and being associated with a set of metadata regarding the medical image and the subject; ("the medical picture archive system 2620 can receive image data from a plurality of modality machines 2622, such as CT machines, MRI machines, x-ray machines, and/or other medical imaging machines that produce medical scans 3120. The medical scans 3120 can include imaging data corresponding to a CT scan, x-ray, MRI, PET scan, Ultrasound, EEG, mammogram, or other type of radiological scan or medical scan taken of an anatomical region of a human body, animal, or other organism and further can include metadata corresponding to the imaging data," par.
282)

identifying, by the computing system, based on the set of metadata, one or more regions of interest (ROIs) of the subject in the medical image, the ROIs corresponding with a condition to be evaluated by a clinician using the medical image; ("The abnormality annotation data 442 for each abnormality can include abnormality location data 443, which can include an anatomical location and/or a location specific to pixels, image slices, coordinates or other location information identifying regions of the medical scan itself," par. 67, wherein "abnormality" suggests something that differs from expected, healthy structure, prompting closer inspection)

selecting, by the computing system, based on the ROIs, a modification technique to apply to the medical image, wherein selecting the modification technique comprises determining an image segment that is situated outside of the identified ROIs, the image segment comprising a distinguishing feature of the subject; ("At least one region of the image data that includes identifying facial structure can be determined by utilizing a medical image analysis function … In some embodiments, the facial structure obfuscation function can perform a one-way function on the region that preserves abnormalities of the corresponding portions of the image, such as nose fractures or facial skin legions, while still obfuscating the identifying facial structure such that the patient is not identifiable," par. 231-232, wherein the regions of interest are the regions that consist of the abnormalities as taught above)

generating, by the computing system, a modified image by applying the selected modification technique to the medical image to modify the set of slices or a subset thereof to thereby render the distinguishing feature in the image segment indistinguishable; ("For example, the facial structure obfuscation function can mask, scramble, replace with a fiducial, or otherwise obfuscate the pixels of the region identified by the facial detection function," par. 232)

and performing, by the computing system, an operation using the modified image, wherein the operation comprises at least one of (1) transmitting the modified image to another computing system, ("Each client device 120 can receive the application data from the corresponding subsystem via network 150 by utilizing network interface 260, for storage in the one or more memory devices 240," par. 50) (2) displaying the modified image on a display screen, ("At least one cross-sectional image can be selected from each medical scan of the subset of medical scans for display on a display," par. 49) or (3) storing the modified image in a non-volatile computer-readable storage medium of the computing system ("The memory device may be in a form a solid state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information," par. 327).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Covington et al.
explicitly motivates doing so at least in paragraph [0323], including “One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof,” and otherwise motivating experimentation and optimization.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of computing system claim 14, while noting that the rejection above cites to both device and method disclosures. Claim 14 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.

Claim 5

Regarding claim 5, Covington et al. teach the method of claim 1, wherein obtaining the medical image comprises using a set of imaging detectors to scan the subject and thereby generate the set of slices ("The medical scan image data 410 can include one or more image slices 412, for example, corresponding to a single x-ray image, a plurality of cross-sectional, tomographic images of a scan such as a CT scan, or any plurality of images taken from the same or different point at the same or different angles. The medical scan image data 410 can also indicate an ordering of the one or more image slices," par. 59). Covington et al. are motivated as per claim 1.

Claim 6

Regarding claim 6, Covington et al.
teach the method of claim 1, wherein determining the image segment comprises detecting the distinguishing feature in the medical image, and delineating the distinguishing feature to encapsulate, in the medical image segment, the distinguishing feature or a portion thereof ("For example, the medical image analysis function can include a facial detection function that determines the regions of the image data that include identifying facial structure based on searching the image data for pixels with a density value that corresponds to facial skin, facial bone structure, or other density of an anatomical mass type that corresponds to identifying facial structure, and the facial obfuscation function can be performed on the identified pixels," par. 231). Covington et al. are motivated as per claim 1.

Claim 7

Regarding claim 7, Covington et al. teach the method of claim 1, wherein determining the image segment comprises determining an intensity threshold that will identify a contour of the image segment ("abnormality classifier categories 444, which can include size, volume, pre-post contrast, doubling time, calcification, components, smoothness, spiculation, lobulation, sphericity, internal structure, texture, or other categories that can classify and/or otherwise characterize an abnormality. Abnormality classifier categories 444 can be assigned a binary value, indicating whether or not such a category is present. For example, this binary value can be determined by comparing some or all of confidence score data 460 to a threshold," par. 67, wherein the pre-post contrast category involves measuring changes in pixel intensity which could indicate a gradient in the image). Covington et al. are motivated as per claim 1.

Claim 9

Regarding claim 9, Covington et al.
teach the method of claim 1, wherein determining the image segment comprises receiving, via a user input, a selection of a boundary of the distinguishing feature ("In some embodiments, a solid or semi-transparent outline and/or shading of the pixels determined to include the lesion in an image slice of medical scan entry 3005 can be overlaid upon the corresponding pixel coordinates in the display of the corresponding image slice of medical scan entry 3006 by the interface," par. 266). Covington et al. are motivated as per claim 1.

Claim 12

Regarding claim 12, Covington et al. teach the method of claim 1, wherein the image segment comprises one or more facial features of the subject ("In some embodiments where the image data of a medical scan includes an anatomical region corresponding to a patient's head, the image data may include an identifying facial structure and/or facial features that could be utilized to determine the patient's identity," par. 230). Covington et al. are motivated as per claim 1.

Claim 14

Regarding claim 14, Covington et al. teach a computing system comprising one or more processors configured to: ("The medical scan image analysis system can include a processing system that includes a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations," par. 46)

obtain a medical image of a subject, the medical image comprising a set of slices ("The medical scan image analysis system 112 can be operable to receive a plurality of medical scans that represent a three-dimensional anatomical region and include a plurality of cross-sectional image slices," par.
47) and being associated with a set of metadata regarding the medical image and the subject; ("the medical picture archive system 2620 can receive image data from a plurality of modality machines 2622, such as CT machines, MRI machines, x-ray machines, and/or other medical imaging machines that produce medical scans 3120. The medical scans 3120 can include imaging data corresponding to a CT scan, x-ray, MRI, PET scan, Ultrasound, EEG, mammogram, or other type of radiological scan or medical scan taken of an anatomical region of a human body, animal, or other organism and further can include metadata corresponding to the imaging data," par. 282)

identify, based on the set of metadata, one or more regions of interest (ROIs) of the subject in the medical image, the ROIs corresponding with a condition to be evaluated by a clinician using the medical image; ("The abnormality annotation data 442 for each abnormality can include abnormality location data 443, which can include an anatomical location and/or a location specific to pixels, image slices, coordinates or other location information identifying regions of the medical scan itself," par. 67, wherein "abnormality" suggests something that differs from expected, healthy structure, prompting closer inspection)

select, based on the ROIs, a modification technique to apply to the medical image, wherein selecting the modification technique comprises determining an image segment that is situated outside of the identified ROIs, the image segment comprising a distinguishing feature of the subject; ("At least one region of the image data that includes identifying facial structure can be determined by utilizing a medical image analysis function … In some embodiments, the facial structure obfuscation function can perform a one-way function on the region that preserves abnormalities of the corresponding portions of the image, such as nose fractures or facial skin legions, while still obfuscating the identifying facial structure such that the patient is not identifiable," par. 231-232, wherein the regions of interest are the regions that consist of the abnormalities as taught above)

generate a modified image by applying the selected modification technique to the medical image to modify the set of slices or a subset thereof to thereby render the distinguishing feature in the image segment indistinguishable; ("For example, the facial structure obfuscation function can mask, scramble, replace with a fiducial, or otherwise obfuscate the pixels of the region identified by the facial detection function," par. 232)

and perform an operation using the modified image, wherein the operation comprises at least one of (1) transmitting the modified image to another computing system, ("Each client device 120 can receive the application data from the corresponding subsystem via network 150 by utilizing network interface 260, for storage in the one or more memory devices 240," par. 50) (2) displaying the modified image on a display screen, ("At least one cross-sectional image can be selected from each medical scan of the subset of medical scans for display on a display," par.
49) or (3) storing the modified image in a non-volatile computer-readable storage medium of the computing system ("The memory device may be in a form a solid state memory, a hard drive memory, cloud memory, thumb drive, server memory, computing device memory, and/or other physical medium for storing digital information," par. 327). Covington et al. are motivated as per claim 1.

Claim 17

Regarding claim 17, Covington et al. teach the computing system of claim 14, wherein determining the image segment comprises detecting the distinguishing feature in the medical image, and delineating the distinguishing feature to encapsulate, in the medical image segment, the distinguishing feature or a portion thereof ("For example, the medical image analysis function can include a facial detection function that determines the regions of the image data that include identifying facial structure based on searching the image data for pixels with a density value that corresponds to facial skin, facial bone structure, or other density of an anatomical mass type that corresponds to identifying facial structure, and the facial obfuscation function can be performed on the identified pixels," par. 231). Covington et al. are motivated as per claim 14.

Claim 18

Regarding claim 18, Covington et al. teach the computing system of claim 14, wherein determining the image segment comprises determining an intensity threshold that will identify a contour of the image segment ("abnormality classifier categories 444, which can include size, volume, pre-post contrast, doubling time, calcification, components, smoothness, spiculation, lobulation, sphericity, internal structure, texture, or other categories that can classify and/or otherwise characterize an abnormality. Abnormality classifier categories 444 can be assigned a binary value, indicating whether or not such a category is present.
For example, this binary value can be determined by comparing some or all of confidence score data 460 to a threshold," par. 67, wherein the pre-post contrast category involves measuring changes in pixel intensity which could indicate a gradient in the image). Covington et al. are motivated as per claim 14.

2nd Claim Rejections - 35 USC § 103

Claims 2, 3, 4, 13, 15, and 16 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0037019 A1 (Covington et al.) in view of US Patent Publication 2019/0378607 A1 (Chen et al.).

Claim 2

Regarding Claim 2, Covington et al. teach the method of claim 1 as noted above. Covington et al. do not explicitly teach all of wherein the medical image is a volume rendering of the set of slices.

[Annotated image (media_image1.png): Figure 1B shows the 3D rendering of the subject.]

However, Chen et al. teach wherein the medical image is a volume rendering of the set of slices ("FIG. 1B shows coronal (120), sagittal (130), and axial (140) views of a 3D rendering of a patient's head as reconstructed from a 3D MRI image," par. 35). Therefore, taking the teachings of Covington et al. and Chen et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the abnormality detecting system as taught by Covington et al. to use the volume rendering techniques as taught by Chen et al. The suggestion/motivation for doing so would have been that, “FIGS. 1A and 1B illustrate the potential to reconstruct patient-identifying surface anatomical features from a medical image. Shown in FIG. 1A is a two-dimensional image 100 of an axial slice of a patient's brain. Images such as image 100 are routinely obtained from patients using MRI scanners or the like” as noted by the Chen et al.
disclosure in paragraph [0034], which also motivates combination because the combination would improve the visualization of anatomical features, as there is a reasonable expectation that the application of volume rendering to the 2D MRI data would provide enhanced 3D context; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

The rejection of method claim 2 above applies mutatis mutandis to the corresponding limitations of computing system claim 15, while noting that the rejection above cites to both device and method disclosures. Claim 15 is mapped below for clarity of the record and to specify any new limitations not included in claim 2.

Claim 3

Regarding claim 3, Covington et al. and Chen et al. teach the method of claim 2 as noted above. Covington et al. do not explicitly teach all of wherein the subject is identifiable based on the distinguishing feature as a result of the volume rendering. However, Chen et al. teach wherein the subject is identifiable based on the distinguishing feature as a result of the volume rendering ("FIG. 1B shows coronal (120), sagittal (130), and axial (140) views of a 3D rendering of a patient's head as reconstructed from a 3D MRI image. Distinctive facial features, such as eyes, nose, mouth, ears, and chin are visible. In principle, it would be possible for someone (either a person viewing the rendering or a computer-based image-analysis system) to identify the patient whose head was imaged," par. 35). Covington et al. and Chen et al. are combined as per claim 2.

Claim 4

Regarding claim 4, Covington et al. and Chen et al. teach the method of claim 2 as noted above. Covington et al. do not explicitly teach all of wherein obtaining the medical image comprises applying a volume rendering technique to the set of slices to generate the medical image.
[Annotated image (media_image2.png): Figure 1A shows the 2D axial slice of the subject.]

However, Chen et al. teach wherein obtaining the medical image comprises applying a volume rendering technique to the set of slices to generate the medical image ("FIGS. 1A and 1B illustrate the potential to reconstruct patient-identifying surface anatomical features from a medical image. Shown in FIG. 1A is a two-dimensional image 100 of an axial slice of a patient's brain … A number of medical imaging processes involve obtaining a set of 2D images similar to image 100, from which a three-dimensional model of an anatomical structure of interest (e.g., the patient's brain or other organs) can be constructed," par. 34). Covington et al. and Chen et al. are combined as per claim 2.

Claim 13

Regarding claim 13, Covington et al. teach the method of claim 1 as noted above. While Covington et al. teach the abnormality of the subject ("Abnormalities can include nodules, for example malignant nodules identified in a chest CT scan. Abnormalities can also include and/or be characterized by one or more abnormality pattern categories such as such as cardiomegaly, consolidation, effusion, emphysema, and/or fracture, for example identified in a chest x-ray. Abnormalities can also include any other unknown, malignant or benign feature of a medical scan identified as not normal," par. 56, wherein some abnormalities are surface features), Covington et al. do not explicitly teach all of wherein the distinguishing feature is an anatomical and/or physiological abnormality. However, Chen et al. teach wherein the distinguishing feature is an anatomical and/or physiological abnormality ("Some surface anatomical features may be usable to determine the patient's identity," par. 33). Therefore, Covington et al. and Chen et al. are combined as per claim 2.

Claim 15

Regarding claim 15, Covington et al. teach the computing system of claim 14 as noted above.
Covington et al. do not explicitly teach all of wherein the medical image is a volume rendering of the set of slices, and wherein the subject is identifiable based on the distinguishing feature as a result of the volume rendering. However, Chen et al. teach wherein the medical image is a volume rendering of the set of slices, and wherein the subject is identifiable based on the distinguishing feature as a result of the volume rendering ("FIG. 1B shows coronal (120), sagittal (130), and axial (140) views of a 3D rendering of a patient's head as reconstructed from a 3D MRI image. Distinctive facial features, such as eyes, nose, mouth, ears, and chin are visible. In principle, it would be possible for someone (either a person viewing the rendering or a computer-based image-analysis system) to identify the patient whose head was imaged," par. 35). Covington et al. and Chen et al. are combined as per claim 2.

Claim 16

Regarding claim 16, Covington et al. teach the computing system of claim 14 as noted above, and wherein obtaining the medical image comprises: using a set of imaging detectors to scan the subject and thereby generate the set of slices. Covington et al. do not explicitly teach all of wherein obtaining the medical image comprises: applying a volume rendering technique to a set of slices to generate the medical image. However, Chen et al. teach wherein obtaining the medical image comprises: applying a volume rendering technique to a set of slices to generate the medical image ("FIGS. 1A and 1B illustrate the potential to reconstruct patient-identifying surface anatomical features from a medical image. Shown in FIG. 1A is a two-dimensional image 100 of an axial slice of a patient's brain," par. 34). Covington et al. and Chen et al. are combined as per claim 15.

3rd Claim Rejections - 35 USC § 103

Claims 8, 10, 19, and 20 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0037019 A1 (Covington et al.)
in view of US Patent Publication 2022/0138933 A1 (Wang et al.).

Claim 8

Regarding Claim 8, Covington et al. teach the method of claim 7 as noted above. Covington et al. do not explicitly teach all of applying a filter to the contour of the medical image segment. However, Wang et al. teach applying a filter to the contour of the medical image segment ("The more one moves towards the interior of the lesion mask, the further one will be away from its contour (boundary). Thus, the distance transform identifies the lesion mask's center points, i.e., those points with a larger distance than others. In one embodiment, the mechanism optionally performs Gaussian smoothing on the distance map," par. 168). Therefore, taking the teachings of Covington et al. and Wang et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the abnormality detecting system as taught by Covington et al. to use the application of a filter to the pixel gradient/contour as taught by Wang et al. The suggestion/motivation for doing so would have been that the modification is based on the use of known techniques to improve similar devices in the same way. More specifically, it is within the capabilities of one of ordinary skill in the art to modify the abnormality detecting system to include a Gaussian smoothing filter with the predictable result of reducing high-frequency noise, smoothing out pixel fluctuations, and preventing false-positive edge detections, thereby enhancing the precision of the edge-detection algorithm and improving the overall robustness of the abnormality detection.

The rejection of method claim 8 above applies mutatis mutandis to the corresponding limitations of computing system claim 19, while noting that the rejection above cites to both device and method disclosures.
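As illustrative technical context (not part of the office action record): the Wang passage cited for claim 8 describes a distance transform of a binary lesion mask, where each interior pixel carries its distance to the contour, optionally followed by Gaussian smoothing of the distance map. The sketch below is a minimal NumPy illustration of that idea; the toy mask, the brute-force distance computation, and the separable Gaussian kernel are assumptions chosen for clarity, not Wang's actual implementation.

```python
import numpy as np

# Toy binary lesion mask: a 5x5 block inside a 9x9 grid.
mask = np.zeros((9, 9), dtype=bool)
mask[2:7, 2:7] = True

# Brute-force Euclidean distance transform: for each in-mask pixel,
# the distance to the nearest background pixel (fine for a toy grid).
ys, xs = np.nonzero(~mask)
background = np.stack([ys, xs], axis=1)
distance = np.zeros(mask.shape)
for y, x in zip(*np.nonzero(mask)):
    distance[y, x] = np.sqrt(((background - [y, x]) ** 2).sum(axis=1)).min()

# Optional separable Gaussian smoothing of the distance map (sigma = 1).
r = np.arange(-3, 4)
kernel = np.exp(-r**2 / 2.0)
kernel /= kernel.sum()
smoothed = np.apply_along_axis(
    lambda row: np.convolve(row, kernel, mode="same"), 1, distance)
smoothed = np.apply_along_axis(
    lambda col: np.convolve(col, kernel, mode="same"), 0, smoothed)

# The mask's center point carries the largest distance value, matching
# Wang's observation that the distance transform identifies center points.
assert np.unravel_index(np.argmax(distance), distance.shape) == (4, 4)
```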
Claim 19 is mapped below for clarity of the record and to specify any new limitations not included in claim 8.

Claim 10

Regarding claim 10, Covington et al. teach the method of claim 1 as noted above. Covington et al. do not explicitly teach all of wherein the selected modification technique comprises adjusting one or more intensities of pixels in the image segment. However, Wang et al. teach wherein the selected modification technique comprises adjusting one or more intensities of pixels in the image segment ("After calculating this contrast and variance prior to in-painting, the in-painting may be performed with respect to the selected lesion 1611 such that pixels associated with other lesion contours, e.g., 1612, and areas of the anatomical structure representing healthy tissues in the image, are in-painted with an average pixel intensity value of the healthy tissue," par. 204). Covington et al. and Wang et al. are combined as per claim 8.

Claim 19

Regarding claim 19, Covington et al. teach the computing system of claim 18 as noted above and the one or more processors ("The medical scan image analysis system can include a processing system that includes a processor and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations," par. 46). Covington et al. do not explicitly teach all of the one or more processors further configured to apply a filter to the contour of the medical image segment. However, Wang et al. teach the one or more processors further configured to apply a filter to the contour of the medical image segment ("The more one moves towards the interior of the lesion mask, the further one will be away from its contour (boundary). Thus, the distance transform identifies the lesion mask's center points, i.e., those points with a larger distance than others. In one embodiment, the mechanism optionally performs Gaussian smoothing on the distance map," par. 168). Covington et al.
and Wang et al. are combined as per claim 8.

Claim 20

Regarding claim 20, Covington et al. teach the computing system of claim 14 as noted above. Covington et al. do not explicitly teach all of wherein the selected modification technique comprises adjusting one or more intensities of pixels in the image segment. However, Wang et al. teach wherein the selected modification technique comprises adjusting one or more intensities of pixels in the image segment ("After calculating this contrast and variance prior to in-painting, the in-painting may be performed with respect to the selected lesion 1611 such that pixels associated with other lesion contours, e.g., 1612, and areas of the anatomical structure representing healthy tissues in the image, are in-painted with an average pixel intensity value of the healthy tissue," par. 204). Covington et al. and Wang et al. are combined as per claim 8.

4th Claim Rejections - 35 USC § 103

Claim 11 is rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2022/0037019 A1 (Covington et al.) in view of US Patent Publication 2021/0150269 A1 (Choudhury et al.).

Claim 11

Regarding Claim 11, Covington et al. teach the method of claim 1 as noted above and applying the first modification technique or a second modification technique to the medical image ("In some embodiments, the facial structure obfuscation function can perform a one-way function on the region that preserves abnormalities of the corresponding portions of the image, such as nose fractures or facial skin legions, while still obfuscating the identifying facial structure such that the patient is not identifiable," par. 232). Covington et al. do not explicitly teach all of generating an anonymization metric based on application of the modification technique to the medical image, and in response to determining the anonymization metric is below a threshold. However, Choudhury et al. teach generating an anonymization metric based on application of the modification technique to the medical image, and in response to determining the anonymization metric is below a threshold ("The parameter “k” can be referred to as an anonymity parameter or threshold, and have a predetermined minimum value in one or more embodiments of the present invention. Its value is selected by the data owner at each local site based on a number of factors, including the sensitivity of the data. The probability of re-identification … is bounded by 1/k, where k is set to at least the predetermined minimum value, (or a predetermined value) which is an acceptable value for privacy," par. 36, wherein the anonymization metric is the probability of re-identification). Therefore, taking the teachings of Covington et al. and Choudhury et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify the abnormality detecting system as taught by Covington et al. to use the re-identification probability value and threshold as taught by Choudhury et al. The suggestion/motivation for doing so would have been that, “The probability of re-identification … is bounded by 1/k, where k is set to at least the predetermined minimum value, (or a predetermined value) which is an acceptable value for privacy” as noted by the Choudhury et al. disclosure in paragraph [0034], which also motivates combination because the combination would provide a predictable privacy risk, as there is a reasonable expectation that the resulting data system would be more secure/private; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Reference Cited

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US Patent Publication 2022/0254022 A1 to Karki et al.
discloses generating a delineation of the abnormality using a difference between the first and second images, and tagging the segmented lesions.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KARSTEN F. LANTZ, whose telephone number is (571) 272-4564. The examiner can normally be reached Monday-Friday, 8:00-4:00.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached at 571-272-2976. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/Karsten F. Lantz/
Examiner, Art Unit 2664
Date: 2/19/2026

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
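The intensity-adjustment technique the examiner maps from Wang (par. 204), in-painting every lesion except the selected one with the average pixel intensity of healthy tissue, can be illustrated with a short NumPy sketch. The function name, mask-based interface, and toy image are hypothetical illustrations of the quoted passage, not Wang's actual implementation:

```python
import numpy as np

def inpaint_with_healthy_mean(image, keep_mask, lesion_mask):
    """Replace pixels of non-selected lesion regions with the mean
    intensity of healthy (non-lesion) tissue."""
    healthy = ~lesion_mask                  # pixels outside every lesion contour
    mean_intensity = image[healthy].mean()  # average healthy-tissue intensity
    out = image.copy()
    to_fill = lesion_mask & ~keep_mask      # lesions other than the selected one
    out[to_fill] = mean_intensity
    return out

# Toy 4x4 "scan": healthy background 10.0, selected lesion at (1, 1),
# second lesion at (2, 2) that should be anonymized away.
image = np.full((4, 4), 10.0)
image[1, 1], image[2, 2] = 50.0, 90.0
lesion_mask = np.zeros((4, 4), dtype=bool)
lesion_mask[1, 1] = lesion_mask[2, 2] = True
keep_mask = np.zeros((4, 4), dtype=bool)
keep_mask[1, 1] = True                      # the selected lesion is preserved

inpainted = inpaint_with_healthy_mean(image, keep_mask, lesion_mask)
```

After the call, the non-selected lesion at (2, 2) takes the healthy-tissue mean (10.0) while the selected lesion at (1, 1) keeps its original intensity, which is the behavior the quoted paragraph describes.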
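The 1/k bound the examiner cites from Choudhury is the standard k-anonymity guarantee: if every record shares its quasi-identifier values with at least k-1 other records, an attacker's chance of re-identifying any individual is at most 1/k. A minimal sketch of computing that metric and checking it against a threshold, with hypothetical field names not drawn from either reference:

```python
from collections import Counter

def reidentification_bound(records, quasi_identifiers, k_min):
    """Group records into equivalence classes over the quasi-identifier
    columns; the worst-case re-identification probability is 1/k for the
    smallest class, and the dataset meets the threshold when k >= k_min."""
    classes = Counter(
        tuple(rec[q] for q in quasi_identifiers) for rec in records
    )
    k = min(classes.values())           # size of the smallest equivalence class
    return {
        "k": k,
        "reid_probability": 1.0 / k,    # probability of re-identification <= 1/k
        "meets_threshold": k >= k_min,  # anonymization metric vs. threshold
    }

# Hypothetical de-identified records: each quasi-identifier combination
# appears twice, so k = 2 and the re-identification bound is 1/2.
records = [
    {"age_band": "40-49", "zip3": "100", "sex": "F"},
    {"age_band": "40-49", "zip3": "100", "sex": "F"},
    {"age_band": "50-59", "zip3": "112", "sex": "M"},
    {"age_band": "50-59", "zip3": "112", "sex": "M"},
]
anon = reidentification_bound(records, ["age_band", "zip3", "sex"], k_min=2)
```

In claim 11's terms, `reid_probability` plays the role of the anonymization metric, and falling below the `k_min` threshold would trigger application of a second modification technique.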

Prosecution Timeline

Mar 28, 2024
Application Filed
Feb 20, 2026
Non-Final Rejection — §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 9m
PTA Risk: Low
Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
