Prosecution Insights
Last updated: April 19, 2026
Application No. 18/171,744

METHOD FOR GENERATING TRAINING DATA AND FOR TRAINING A DEEP LEARNING ALGORITHM FOR DETECTION OF A DISEASE INFORMATION, METHOD, SYSTEM AND COMPUTER PROGRAM FOR DETECTION OF A DISEASE INFORMATION

Non-Final OA §103
Filed: Feb 21, 2023
Examiner: CHOI, TIMOTHY WING HO
Art Unit: 2671
Tech Center: 2600 (Communications)
Assignee: Siemens Healthcare GmbH
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 1-2
To Grant: 3y 2m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 60% (199 granted / 331 resolved; -1.9% vs TC avg)
Interview Lift: strong, +35.1% for resolved cases with an interview vs. without
Avg Prosecution: 3y 2m (typical timeline)
Total Applications: 352 across all art units (21 currently pending)

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 56.5% (+16.5% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)

Tech Center average is an estimate. Based on career data from 331 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Drawings

Figures 1 and 4 are objected to as depicting a block diagram without “readily identifiable” descriptors of each block, as required by 37 CFR 1.84(n). Rule 84(n) requires “labeled representations” of graphical symbols, such as blocks; and any that are “not universally recognized may be used, subject to approval by the Office, if they are not likely to be confused with existing conventional symbols, and if they are readily identifiable.” In the case of Figures 1 and 4, the blocks are not readily identifiable per se and therefore require the insertion of text that identifies the function of that block. That is, each vacant block should be provided with a corresponding label readily identifying its function or purpose.

CLAIM INTERPRETATION

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a first interface”, “an analyzing unit”, “a determining unit”, and “a second interface” in claim 14. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4-12, and 14-20 are rejected under 35 U.S.C. 103 as being unpatentable over Poole et al. (US 2018/0025255), herein Poole.
Regarding claim 1, Poole discloses a method for generating training data for training a deep learning algorithm, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ (see Poole [0019]-[0022], where volumetric imaging data representative of an anatomical structure of a patient is obtained, where the anatomical structure is the brain or may be any anatomical structure that is substantially symmetrical; see Poole [0031], where training data sets are obtained comprising CT scans of the brains of a plurality of training subjects); splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ (see Poole [0072]-[0073] and [0077], where a line of symmetry in transformed training data set is determined and taken to be a midline of the transformed training data set and used to extract a midline centered data set from the transformed training data set; see Poole [0086]); mirroring the second dataset along the symmetry plane or the symmetry axis (see Poole [0091], where in the folding stage, one of the left and the right part is inverted with respect to the midline, where the voxels of the inverted part overlay the voxels of the non-inverted part; see Poole [0097]-[0098], where the right part voxels and inverted left part voxels occupy exactly the same positions in space and may consider that each pair of mirror-image voxels, each having a respective intensity value, is replaced by a single right part voxel with two intensity values); generating the training data by stacking the first dataset and the mirrored second dataset (see Poole [0115], where the folded datasets 
are concatenated to provide a single data set); and providing the training data (see Poole [0120], where patches of the combined folded data sets are selected on which to train a detector).

While Poole does not explicitly teach splitting the midline centered data set into two separate data sets, Poole does teach that the determined midline may be considered to divide the midline centered data set into a first and second part (see Poole [0091]), and provides an implicit teaching for the broadest reasonable interpretation of “splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset”. See also MPEP 2144.01. At the time of filing, one of ordinary skill in the art would have found it obvious from Poole’s teachings that the determined midline, in dividing the midline centered data set into a first and second part, provides an implicit teaching of a first data set comprising the first part of the midline centered data set and a second data set comprising the second part of the midline centered data set. This modification is supported by a teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. In this instance, Poole teaches that the determined midline may be considered to divide the midline centered data set into a first and second part. One of ordinary skill in the art would understand that the determined midline of the midline centered data set allows for a divide between a first and second part of the midline centered data set, allowing for the first and second part of the midline centered data set to be separately considered.
Thus, it would have been within the general knowledge of one of ordinary skill in the art to reasonably expect that the midline dividing the first and second part of the midline centered data set provides an implicit teaching of a first data set comprising the first part of the midline centered data set and a second data set comprising the second part of the midline centered data set, and satisfies the broadest reasonable interpretation of “splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset”.

Regarding claim 2, please see the above rejection of claim 1. Poole discloses the method according to claim 1, further comprising: receiving organ atlas data (see Poole [0048]-[0049], where a set of reference landmarks is received for use in atlas alignment, where the atlas data set comprises a set of reference data); generating registered imaging data based on the medical imaging data and the organ atlas data (see Poole [0052]-[0054], where the training data and ground truth data sets are aligned to the atlas data set; see Poole [0065]-[0076], where the alignment process performs a rigid registration of the training data and the atlas data set using the relationship between corresponding landmark locations; and see Poole [0077]-[0081], where the midline centered data set is extracted from the transformed training data set); and using the registered imaging data as the medical imaging data for the splitting of the medical imaging data into the first dataset and the second dataset (see Poole [0090]-[0092] and [0096]-[0100], where the midline centered data set is divided by the midline into a first part and second part and two folded data sets are obtained), wherein the first dataset includes the registered imaging data of the first part of the symmetric organ and the second dataset includes the registered imaging data of the second part of the symmetric organ (see Poole [0098]-[0100], where the first folded data set
includes intensity data from the left part of the midline centered data and the second folded data set includes intensity data from the right part of the midline centered data).

Regarding claim 4, please see the above rejection of claim 1. Poole discloses the method according to claim 1, further comprising: receiving at least one channel or calculating the at least one channel based on the medical imaging data, wherein the at least one channel includes channel data (see Poole [0102], where semi-atlas channel data is added to the first and second folded data sets, and the first folded data set and second folded data set comprise intensity channels representing intensities of the left and right parts); generating registered channel data based on the channel data and organ atlas data (see Poole [0102]-[0107], where the coordinate space of the folded data sets, including the added semi-atlas channel data, is the same as the coordinate space of the atlas data set); splitting the registered channel data along the symmetry plane or the symmetry axis into at least a first channel dataset and at least a second channel dataset, wherein the first channel dataset includes the registered channel data of the first part of the symmetric organ and the second channel dataset includes the registered channel data of the second part of the symmetric organ (see Poole [0102], where the semi-atlas channel data comprises channels which are representative of the coordinates for each voxel in the coordinate space of the folded datasets, thus suggesting that the semi-atlas channel data are similarly divided and folded to the corresponding folded datasets); mirroring the second channel dataset along the symmetry plane or the symmetry axis (see Poole [0102], where the semi-atlas channel data comprises channels which are representative of the coordinates for each voxel in the coordinate space of the folded datasets, thus suggesting that the semi-atlas channel data are similarly divided and folded to
the corresponding folded datasets); and generating the training data by stacking the first dataset, the first channel dataset, the mirrored second dataset and the mirrored second channel dataset (see Poole [0109], where the folded data sets are concatenated to form the combined folded data set).

Regarding claim 5, please see the above rejection of claim 4. Poole discloses the method according to claim 4, wherein at least one of (i) the at least one channel is a vessel channel including vessel data or (ii) the at least one channel is a bone channel including bone data (see Poole [0050], where anatomical landmarks may be defined anatomically, in relation to anatomical structures such as bones, vessels or organs; see Poole [0106]-[0107], where semi-atlas channel data added to the folded data comprise a label for each voxel corresponding to segmented atlas data indicating the anatomical structure to which they belong), the vessel data are based on a segmentation of a cerebrovascular vessel tree in the medical imaging data, and the bone data are based on at least one of a bone mask or a segmentation of bones in the medical imaging data (see Poole [0106]-[0107], where atlas data is segmented and voxels are labelled to indicate the anatomical structure to which they belong, where each segmented region of the atlas data set is used as an individual binary mask and the semi-atlas channel data comprises data from those binary masks).

Regarding claim 6, please see the above rejection of claim 4. Poole discloses the method according to claim 4, wherein the training data are configured as an ordered set (see Poole [0099], where the two folded data sets have ordered channels representative of the left part and of the right part; and see Poole Fig. 8 and [0115], where the folded datasets are concatenated to provide a single data set), the first dataset is prior to the first channel dataset (see Poole Fig.
8 and [0113]-[0115], where the intensity channel comprising intensities of the left part is prior to the semi-atlas channel data of the left part in the first folded dataset of the combined folded data set), the first channel dataset is prior to the mirrored second dataset (see Poole Fig. 8 and [0113]-[0115], where the semi-atlas channel data of the left part in the first folded dataset of the combined folded data set is prior to the intensity channel comprising intensities of the inverted right part in the second folded dataset of the combined folded data set), and the mirrored second dataset is prior to the mirrored second channel dataset (see Poole Fig. 8 and [0113]-[0115], where the intensity channel comprising intensities of the inverted right part in the second folded dataset is prior to the semi-atlas channel data of the second folded dataset of the combined folded data set).

Regarding claim 7, please see the above rejection of claim 1. Poole discloses the method according to claim 1, further comprising: generating the training data by stacking a first dataset and a mirrored second dataset based on medical imaging data of different patients (see Poole [0031], where the training data sets are comprised of imaging data sets obtained from CT scans of the brains of a plurality of training subjects).

Regarding claim 8, please see the above rejection of claim 7. Poole discloses the method according to claim 7, further comprising: choosing the first dataset and the mirrored second dataset for generating the training data based on boundary conditions, wherein the boundary conditions concern patient information data (see Poole [0031], where the training data sets have been identified as exhibiting signs of a pathology of interest or may be representative of anatomical structure, a different abnormality and/or a different modality).

Regarding claim 9, please see the above rejection of claim 1.
Poole discloses a method for training a deep learning algorithm to detect disease information in medical imaging data of an examination area of a patient, the method comprising: receiving training data, wherein the training data are generated based on the method according to claim 1 (see above claim 1 rejection; and see Poole [0120], where patches of the combined folded data sets are selected on which to train a detector); receiving output training data, wherein the output training data include disease information of the training data (see Poole [0031]-[0035], where the training data sets have been identified as exhibiting signs of a pathology of interest; see Poole [0125]-[0134], where a detector comprising a classifier is trained to detect abnormal voxels, suggesting that a detection output is provided from the detector being trained); training the deep learning algorithm based on the training data and the output training data (see Poole [0122]-[0134], where the patches and corresponding ground truth data are used to train the detector to detect abnormal voxels, where the classifier of the detector to be trained comprises a convolutional neural network); and providing the trained deep learning algorithm (see Poole [0133] and [0136], where a trained detector is output).

Regarding claim 10, please see the above rejection of claim 9.
Poole discloses the method according to claim 9, wherein the training of the deep learning algorithm exploits a symmetry of the training data and the output training data, wherein the symmetry describes an influence of switching the first dataset and the mirrored second dataset in the training data on disease information in the output training data (see Poole [0125]-[0126], where the detector is trained on patches extracted from the combined folded data sets and trained such that it is left-right agnostic, where the first and second intensity data is used without taking into account whether the first intensity data came from the left or from the right side).

Regarding claim 11, Poole discloses a method for detecting disease information, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ (see Poole [0137], where a novel data set is received by the detection circuitry; and see Poole [0139]-[0141], where a novel midline centered data set is extracted from transformed novel data sets and corresponds to a symmetrical portion of the atlas data set and is used to obtain a first part and second part); splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ (see Poole [0138]-[0141], where the novel dataset is aligned with the atlas data set and a midline of the novel data set is determined and refined to extract a novel midline centered data set that corresponds to the same symmetrical portion of the atlas data set and is used to obtain a first part and second part); mirroring the second dataset along the symmetry plane or the symmetry axis (see Poole [0141]-[0142],
where the midline centered data set is folded at its midline to obtain a first part and a second part, producing a two-channel volume whose channels are mirror imaged); generating subject patient image data by stacking the first dataset and the mirrored second dataset (see Poole [0141]-[0142], where the first folded data set comprises a first channel representative of intensity of the left part of the midline centered data and a second channel representative of the intensity of the right part of the midline data set); analyzing the subject patient image data by applying a trained deep learning algorithm to the subject patient image data (see Poole [0144]-[0147], where the trained detector is applied to the first folded data set); and detecting the disease information based on the analyzing (see Poole [0148]-[0150], where a probability that a voxel is abnormal is assigned to each voxel, and a probability volume is output that comprises, for each voxel of the novel midline centered data set, a probability that the voxel is abnormal), wherein the trained deep learning algorithm has been trained with training data (see Poole [0122]-[0134], where the patches and corresponding ground truth data are used to train the detector to detect abnormal voxels, where the classifier of the detector to be trained comprises a convolutional neural network).

At the time of filing, one of ordinary skill in the art would have found it obvious from Poole’s teachings that the determined midline, in dividing the novel midline centered data set into a first and second part, provides an implicit teaching of a first data set comprising the first part of the novel midline centered data set and a second data set comprising the second part of the novel midline centered data set.
This modification is supported by a teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. In this instance, Poole teaches that the determined midline may be considered to divide the novel midline centered data set into a first and second part. One of ordinary skill in the art would understand that the determined midline of the novel midline centered data set allows for a divide between a first and second part of the novel midline centered data set, allowing for the first and second part of the novel midline centered data set to be separately considered. Thus, it would have been within the general knowledge of one of ordinary skill in the art to reasonably expect that the midline dividing the first and second part of the novel midline centered data set provides an implicit teaching of a first data set comprising the first part of the novel midline centered data set and a second data set comprising the second part of the novel midline centered data set, and satisfies the broadest reasonable interpretation of “splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset”.

Regarding claim 12, please see the above rejection of claim 11.
Poole discloses the method according to claim 11, further comprising: receiving a radiological finding based on the medical imaging data or the subject patient image data (see Poole [0151]-[0153], where for a given voxel, if the probability of being abnormal is above a threshold value, the voxel is classified as abnormal); and determining view information by applying a view determining algorithm to the medical imaging data or to the subject patient image data, wherein the view information includes at least one of a projection or a projection angle to show a region of the radiological finding based on the medical imaging data or based on the subject patient image data (see Poole [0154]-[0162], where the voxels that have been classified as abnormal are grouped into abnormal regions, the abnormal regions are ranked and presented to a clinician or user, where an image from the novel data set is rendered with the abnormal region or regions displayed on the rendered image, a list of detections is displayed that is ranked by probability, and clicking on a listing of an abnormal region navigates to an MPR (multi-planar reformatting) view of the abnormal region, where each detected abnormal region is highlighted by colour in the MPR view).

Regarding claim 14, it recites a system performing the method of claim 11. Poole teaches a system performing the method of claim 11. Please see above for detailed claim analysis, with the exception of the following further limitations: a first interface (see Poole Fig. 1 and [0024], where volumetric imaging data sets are provided to the computing apparatus from the memory), a processing unit, an analyzing unit, a determining unit (see Poole Fig.
1 and [0025]-[0028], where the computer apparatus includes a central processing unit, hard drive and other components of a PC including RAM and ROM, and the disclosed circuitries are each implemented in the computing apparatus by means of a computer program having computer-readable instructions that are executable to perform the disclosed methods), and a second interface (see Poole Fig. 1 and [0019], where the computing apparatus is connected to one or more display screens). Please see the above rejection for claim 11, as the rationale to modify the teachings of Poole is similar, mutatis mutandis.

Regarding claim 15, please see the above rejection of claim 1. Poole discloses a non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by a computer, cause the computer to carry out the method according to claim 1 (see Poole [0025]-[0028], where the computer apparatus includes a central processing unit, hard drive and other components of a PC including RAM and ROM, and the disclosed circuitries are each implemented in the computing apparatus by means of a computer program having computer-readable instructions that are executable to perform the disclosed methods).

Regarding claim 16, see the above rejection for claim 2. It is a method claim reciting subject matter similar to that of claim 4. Please see the above claim 4 analysis, as the limitations of claim 16 are similarly rejected.

Regarding claim 17, see the above rejection for claim 3. It is a method claim reciting subject matter similar to that of claim 4. Please see the above claim 4 analysis, as the limitations of claim 17 are similarly rejected.

Regarding claim 18, see the above rejection for claim 17. It is a method claim reciting subject matter similar to that of claim 5. Please see the above claim 5 analysis, as the limitations of claim 18 are similarly rejected.

Regarding claim 19, see the above rejection for claim 5.
It is a method claim reciting subject matter similar to that of claim 6. Please see the above claim 6 analysis, as the limitations of claim 19 are similarly rejected.

Regarding claim 20, please see the above rejection of claim 1. Poole discloses a method for detecting disease information, the method comprising: receiving medical imaging data of an examination area of a patient, the examination area of the patient including a first part of a symmetric organ and a second part of the symmetric organ (see Poole [0137], where a novel data set is received by the detection circuitry; and see Poole [0139]-[0141], where a novel midline centered data set is extracted from transformed novel data sets and corresponds to a symmetrical portion of the atlas data set and is used to obtain a first part and second part); splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset, wherein the first dataset includes the medical imaging data of the first part of the symmetric organ and the second dataset includes the medical imaging data of the second part of the symmetric organ (see Poole [0138]-[0141], where the novel dataset is aligned with the atlas data set and a midline of the novel data set is determined and refined to extract a novel midline centered data set that corresponds to the same symmetrical portion of the atlas data set and is used to obtain a first part and second part); mirroring the second dataset along the symmetry plane or the symmetry axis (see Poole [0141]-[0142], where the midline centered data set is folded at its midline to obtain a first part and a second part, producing a two-channel volume whose channels are mirror imaged); generating subject patient image data by stacking the first dataset and the mirrored second dataset (see Poole [0141]-[0142], where the first folded data set comprises a first channel representative of intensity of the left part of the midline centered data and a second channel representative
of the intensity of the right part of the midline data set);

analyzing the subject patient image data by applying a trained deep learning algorithm to the subject patient image data (see Poole [0144]-[0147], where the trained detector is applied to the first folded data set); and

detecting the disease information based on the analyzing (see Poole [0148]-[0150], where a probability that a voxel is abnormal is assigned to each voxel, and a probability volume is output that comprises, for each voxel of the novel midline centered data set, a probability that the voxel is abnormal),

wherein the trained deep learning algorithm has been trained with training data according to the method of claim 1 (see Poole [0122]-[0134], where the patches and corresponding ground truth data are used to train the detector to detect abnormal voxels, and where the classifier of the detector to be trained comprises a convolutional neural network; see above rejection of claim 1).

At the time of filing, one of ordinary skill in the art would have found it obvious from Poole’s teachings that, because the determined midline divides the novel midline centered data set into a first and second part, Poole implicitly teaches a first data set comprising the first part of the novel midline centered data set and a second data set comprising the second part of the novel midline centered data set. This modification is rationalized as some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. In this instance, Poole teaches that the determined midline may be considered to divide a first and second part of the novel midline centered data set.
One of ordinary skill in the art would understand that the determined midline of the novel midline centered data allows for a divide between a first and second part of the novel midline centered data set, allowing the first and second parts of the novel midline centered data set to be separately considered. Thus, it would have been within the general knowledge of one of ordinary skill in the art to reasonably expect that the midline dividing the first and second part of the novel midline centered data set provides an implicit teaching for a first data set comprising the first part of the novel midline centered data set and a second data set comprising the second part of the novel midline centered data set, meeting the broadest reasonable interpretation of “splitting the medical imaging data along a symmetry plane or a symmetry axis into a first dataset and a second dataset”.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Poole as applied to claim 2 above, and further in view of Hemingway (US 2023/0139458, effectively filed 28 October 2021).

Regarding claim 3, please see the above rejection of claim 2. Poole discloses the method according to claim 2, wherein the organ atlas data are brain atlas data and the symmetric organ is a brain (see Poole [0031] and [0033], where the training imaging data sets are imaging data of brains of a plurality of training subjects; see Poole [0047]-[0051], where the set of reference landmarks comprises landmarks in or near the brain).
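For concreteness, the splitting, mirroring, and channel-stacking operations mapped onto Poole above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration (the `split_mirror_stack` helper, the toy shapes, and the assumption that the symmetry plane is already aligned to the array centre are the sketch's own; it is neither Poole's nor the applicant's implementation):

```python
import numpy as np

def split_mirror_stack(volume, axis=0):
    """Split a volume at its midline along `axis`, mirror the second
    half, and stack the two halves as channels.

    Assumes the symmetry plane is already aligned to the array centre
    and the size along `axis` is even; a real pipeline would first
    register the data to an atlas, as Poole describes.
    """
    mid = volume.shape[axis] // 2
    first, second = np.split(volume, [mid], axis=axis)
    # Mirror the second half about the symmetry plane so that
    # anatomically corresponding voxels share the same index.
    mirrored = np.flip(second, axis=axis)
    # Two-channel result: channel 0 = first part, channel 1 = mirror.
    return np.stack([first, mirrored], axis=0)

# Toy 4x2x2 "volume": left half zeros, right half ones.
vol = np.concatenate([np.zeros((2, 2, 2)), np.ones((2, 2, 2))], axis=0)
stacked = split_mirror_stack(vol, axis=0)
print(stacked.shape)  # (2, 2, 2, 2): two channels of shape (2, 2, 2)
```

Stacking the mirrored half channel-wise places anatomically corresponding left/right voxels at the same spatial index, which is what lets a downstream detector compare the two sides directly.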
While Poole teaches that a midline is determined for dividing the midline centered data of an anatomical structure into left and right parts (see Poole [0072]-[0073] and [0091]), and that the anatomical structure is the brain (see Poole [0031] and [0033]), Poole does not explicitly disclose wherein the first part of the symmetric organ is a first cerebral hemisphere and the second part of the symmetric organ is a second cerebral hemisphere.

Hemingway teaches, in a related and pertinent medical image processing apparatus and method for generating symmetrized volume data based on mirrored volume data (see Hemingway Abstract), image data that is representative of an anatomical structure of a patient, where the anatomical structure is the brain and is substantially symmetrical between the left and right sides of the brain (see Hemingway [0030]-[0032]), and a symmetry detection that is performed to detect a symmetry plane that divides the volume into first and second sections, where the symmetry plane divides the head into a left half comprising a left hemisphere of the brain and a right half comprising a right hemisphere of the brain (see Hemingway [0043]).

At the time of filing, one of ordinary skill in the art would have found it obvious to combine the teachings of Poole with the teachings of Hemingway such that, when the midline is determined to divide a midline centered data set corresponding to a brain anatomical structure into symmetrical left and right parts, the left part includes the left hemisphere of the brain and the right part includes the right hemisphere of the brain. This modification is rationalized as combining prior art elements according to known methods to yield predictable results. Poole discloses determining a midline for dividing the midline centered data corresponding to a brain anatomical structure into symmetrical left and right parts.
Hemingway teaches detecting a symmetry plane that divides an imaging volume of a brain that is substantially symmetrical between the left and right sides into first and second sections, where the symmetry plane divides the head into a left half comprising a left hemisphere of the brain and a right half comprising a right hemisphere of the brain. One of ordinary skill in the art could have combined the disclosed teachings of Poole and Hemingway, where dividing the midline centered data set corresponding to the brain anatomical structure into symmetrical left and right parts with a determined midline would indicate that the left part includes the left hemisphere of the brain and the right part includes the right hemisphere of the brain, predictably resulting in separating the midline centered data set corresponding to the brain into symmetrical left and right parts.

Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Poole as applied to claim 12 above, and further in view of Yu et al. (US 2020/0334869), herein Yu.

Regarding claim 13, please see the above rejection of claim 12. Poole does not explicitly disclose the method according to claim 12, wherein the medical imaging data includes 2D-projections from different views onto the region of the radiological finding, and the method further comprises: determining a coarse location of the region of the radiological finding by applying a back-projection on the 2D-projections.
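The back-projection limitation of claim 13 can be illustrated with a toy example: smear each 2D projection back along the axis it collapsed and combine the results; the combination peaks only where the views agree, yielding a coarse 3D location. The sketch below is a hypothetical, unfiltered back-projection (function and variable names are illustrative only; Yu's cited reconstruction uses a filtered back projection):

```python
import numpy as np

def coarse_location(proj_z, proj_y, shape):
    """Coarse 3D localization from two orthogonal 2D projections.

    proj_z: projection along z, shape (nx, ny)
    proj_y: projection along y, shape (nx, nz)
    Each projection is smeared back along the axis it collapsed;
    their product is large only where both views agree.
    """
    nx, ny, nz = shape
    back_z = np.repeat(proj_z[:, :, None], nz, axis=2)  # (nx, ny, nz)
    back_y = np.repeat(proj_y[:, None, :], ny, axis=1)  # (nx, ny, nz)
    combined = back_z * back_y
    idx = np.unravel_index(np.argmax(combined), combined.shape)
    return tuple(int(i) for i in idx)

# Toy volume with a single bright voxel at (1, 2, 3).
vol = np.zeros((4, 4, 5))
vol[1, 2, 3] = 1.0
loc = coarse_location(vol.sum(axis=2), vol.sum(axis=1), vol.shape)
print(loc)  # (1, 2, 3)
```

Two views suffice here only because the toy volume has a single bright region; real data with multiple findings would need more views or a filtered reconstruction.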
Yu teaches, in related and pertinent systems and methods for reconstructing an image in an imaging system (see Yu Abstract), that the imaging device may generate or provide imaging data via scanning a subject and can be a CT scanner (see Yu [0036]); that scan data representing an intensity distribution of energy detected at a plurality of detector elements is obtained from an imaging system performing a scan on a subject, where the subject may include a specific portion of a body, such as a head, and the scan may be generated based on multiple projections at different angles around the subject using radiation beams (see Yu [0059]); that an image estimate is determined according to an image reconstruction algorithm, such as a filtered back projection algorithm (see Yu [0060]), and is further iteratively updated according to an objective function to output a final image (see Yu [0066]-[0068]); and that final images may be used to reconstruct a 3D image based on a 3D reconstruction method, which may include a multi-planar reconstruction (MPR) algorithm (see Yu [0056]).

At the time of filing, one of ordinary skill in the art would have found it obvious to apply the teachings of Yu to the teachings of Poole such that the volumetric imaging data obtained from the CT scanner is generated based on multiple projections at different angles around the patient using radiation beams, and such that the detected and highlighted abnormal regions on the rendered MPR view are generated based on a filtered back projection image reconstruction algorithm. This modification is rationalized as an application of a known technique to a known method ready for improvement to yield predictable results.
In this instance, Poole discloses a base method for generating training data and training a detector to detect abnormal voxels in medical volumetric imaging data obtained from a CT scanner, where the imaging data is representative of a symmetrical brain anatomical structure, and where detected abnormal regions are displayed to a clinician: a list of detections ranked by probability is displayed, clicking on a listing of an abnormal region navigates to an MPR (multi-planar reformatting) view of the abnormal region, and each detected abnormal region is highlighted by colour in the MPR view (see Poole [00156]).

Yu teaches known techniques for reconstructing an image in an imaging system, where the imaging device may generate or provide imaging data via scanning a subject with a CT scanner; where scan data representing an intensity distribution of energy detected at a plurality of detector elements is obtained from an imaging system performing a scan on a subject, the subject may include a specific portion of a body, such as a head, and the scan may be generated based on multiple projections at different angles around the subject using radiation beams; where an image estimate is determined according to an image reconstruction algorithm, such as a filtered back projection algorithm, and is further iteratively updated according to an objective function to output a final image; and where final images may be used to reconstruct a 3D image based on a 3D reconstruction method, which may include a multi-planar reconstruction (MPR) algorithm.
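Yu's pattern of an initial back-projection estimate that is then iteratively updated according to an objective function can be sketched on a toy linear system. This is an illustrative least-squares refinement by gradient descent; the matrix, step size, and iteration count are the sketch's own assumptions, not Yu's algorithm:

```python
import numpy as np

# Toy "CT" system: A maps a 2-pixel image to 3 ray sums (projections).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true                       # measured projection data

# Initial image estimate from a crude normalized back-projection.
x = (A.T @ b) / A.sum(axis=0)

# Iteratively update the estimate against the objective 0.5*||Ax - b||^2.
step = 0.1
for _ in range(500):
    x = x - step * (A.T @ (A @ x - b))

print(np.round(x, 3))  # converges to the true image, [2. 3.]
```

The same structure (analytic initialization, then objective-driven refinement) is what Yu describes at reconstruction scale, with a filtered back projection supplying the initial estimate.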
One of ordinary skill in the art would have recognized that applying Yu’s techniques would allow the method of Poole to generate the volumetric imaging data obtained from the CT scanner based on multiple projections at different angles around the patient using radiation beams, and to generate the detected and highlighted abnormal regions on the rendered MPR view based on a filtered back projection image reconstruction algorithm, predictably leading to an improved method for generating volumetric imaging data and reconstructing images of the abnormal regions from obtained CT scans as taught by Yu.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TIMOTHY WING HO CHOI whose telephone number is (571) 270-3814. The examiner can normally be reached 9:00 AM to 5:00 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, VINCENT RUDOLPH, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TIMOTHY CHOI/
Examiner, Art Unit 2671

/VINCENT RUDOLPH/
Supervisory Patent Examiner, Art Unit 2671

Prosecution Timeline

Feb 21, 2023
Application Filed
Jan 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12497051
APPARATUSES, SYSTEMS, AND METHODS FOR DETERMINING VEHICLE OPERATOR DISTRACTIONS AT PARTICULAR GEOGRAPHIC LOCATIONS
2y 5m to grant Granted Dec 16, 2025
Patent 12488569
UNPAIRED IMAGE-TO-IMAGE TRANSLATION USING A GENERATIVE ADVERSARIAL NETWORK (GAN)
2y 5m to grant Granted Dec 02, 2025
Patent 12475992
SYSTEM AND METHOD FOR NAVIGATING A TOMOSYNTHESIS STACK INCLUDING AUTOMATIC FOCUSING
2y 5m to grant Granted Nov 18, 2025
Patent 12469300
SYSTEMS, DEVICES, AND METHODS FOR VEHICLE CAMERA CALIBRATION
2y 5m to grant Granted Nov 11, 2025
Patent 12469190
X-RAY TOMOGRAPHIC RECONSTRUCTION METHOD AND ASSOCIATED DEVICE
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
60%
Grant Probability
95%
With Interview (+35.1%)
3y 2m
Median Time to Grant
Low
PTA Risk
Based on 331 resolved cases by this examiner. Grant probability derived from career allow rate.
