Prosecution Insights
Last updated: April 19, 2026
Application No. 18/354,031

SYSTEM FOR MEDICAL DATA ANALYSIS

Final Rejection: §101, §102, §103, §112
Filed: Jul 18, 2023
Examiner: TRAN, DUY ANH
Art Unit: 2674
Tech Center: 2600 (Communications)
Assignee: Siemens Healthineers AG
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 104 granted / 128 resolved; +19.3% vs TC avg)
Interview Lift: +17.5% on resolved cases with interview (strong)
Avg Prosecution: 3y 1m typical timeline; 29 currently pending
Total Applications: 157 across all art units (career history)

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 42.0% (+2.0% vs TC avg)
§102: 26.7% (-13.3% vs TC avg)
§112: 11.3% (-28.7% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 128 resolved cases

Office Action

DETAILED ACTION

This Action is in response to Applicant's response filed on 11/24/2025. Claims 1-20 remain pending in the present application. This Action is made FINAL.

Response to Amendment

Drawing Objection: The amended claims filed on 11/24/2025 overcome the Drawing Objection in the previous Office action.

Claim Interpretation: The amended claims filed on 11/24/2025 overcome the Claim Interpretation in the previous Office action.

Claim Rejections - 35 USC § 112: The amended claims filed on 11/24/2025 overcome the 112(b) Claim Rejections in the previous Office action.

Applicant Arguments

Claim Rejections - 35 USC § 101: Applicant respectfully submits that the recited features, under their broadest reasonable interpretation, "cannot practically be performed in the mind." For example, the features recited in claim 1, "determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof" and "training a first number of data analysis tools based on the first medical image data and the first analysis data related to the first medical image data," cannot reasonably be performed by a human in the mind or with pen and paper. The human mind is simply not equipped to perform clustering and training of data analysis tools as recited in these claims. Accordingly, Applicant respectfully submits that the Examiner erred in characterizing the claims as being directed to a mental process. … In the present Application, according to the background section (e.g., Specification at paragraph [0005]), improving a single universal detector with ongoing clinical use may be challenging. Rare types of findings may have minimal impact on the model, which could cause false negatives. Id.
The claims reflect the technical improvements discussed in the disclosure by reciting details of how multiple different tools, each adapted to a specific purpose, may be created. See Specification at, e.g., paragraph [0015]. During the application phase of the system, the first number of tools may analyze the image data faster and/or with fewer false negatives than a single tool. (Remarks, Pages 2-4)

Claim Rejections - 35 USC § 102:

In regard to Argument 1, with respect to independent claims 1, 19, and 20: Applicant argues that "Sorenson fails to disclose at least determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof, and automatically training a first number of data analysis tools based on the first medical image data and the first analysis data related to the first medical image data, wherein at least one of the first number of data analysis tools is trained for each determined cluster." "At most, Sorenson describes a variety of image processing tools (e.g., vessel analysis tools) that can be accessed by a user or implemented as image processing engines. See Sorenson at, e.g., paragraph [0100]. However, nowhere does Sorenson describe training at least one data analysis tool for each determined cluster. Sorenson merely describes that when any one of image processing engines 113-115 is invoked, it may further invoke one or more image processing tools 107 of the image processing system, not training a first number of data analysis tools based on the first medical image data and the first analysis data. See Sorenson at, e.g., Paragraph [0078]. Consequently, Sorenson fails to disclose 'each and every element' as set forth in these claims." (Remarks, Pages 5-6)

In regard to Argument 2, with respect to claim 10: Applicant states that Sorenson fails to disclose at least wherein the clusters correspond to different pathologies. Sorenson at most describes applying various tools to analyze a volume of interest, but fails to disclose, teach, or suggest determining a number of clusters corresponding to different pathologies. (Remarks, Page 7)

In regard to Argument 3, with respect to claim 11: Applicant states that Sorenson fails to disclose at least determining the number of clusters by performing a K-means algorithm. Sorenson at most describes applying various tools to analyze a volume of interest, but fails to disclose, teach, or suggest determining the number of clusters by performing a K-means algorithm. (Remarks, Page 7)

In regard to Argument 4, with respect to claim 12: Applicant states that Sorenson fails to disclose at least determining descriptors for images in the first or third medical image data and grouping the descriptors into the number of clusters. Sorenson at most describes applying various tools to analyze a volume of interest, but fails to disclose, teach, or suggest determining descriptors for images in the first or third medical image data and grouping the descriptors into the number of clusters. (Remarks, Page 7)

In regard to Argument 5, with respect to claim 13: Applicant states that Sorenson fails to disclose at least automatically training the first or second number of data analysis tools when the number of descriptors in any one cluster exceeds a threshold value. Sorenson merely describes allowing the user to adjust thresholds for making the findings calculations or producing derived images, not for determining whether the first or second number of data analysis tools should be trained.
(Remarks, Page 8)

Examiner's Response

Claim Rejections - 35 USC § 101: With respect to claims 1, 19, and 20, Applicant argues that the amended claims recite patent-eligible subject matter, that the human mind is simply not equipped to perform clustering and training of data analysis tools as recited in these claims, and that the claims reflect the technical improvements discussed in the disclosure by reciting details of how multiple different tools, each adapted to a specific purpose, may be created. After reviewing the amendments and arguments filed on 11/24/2025, the Examiner has withdrawn the previous § 101 rejection for the following reason: the claims recite steps and features in which an additional element (or combination of elements) integrates the alleged abstract idea into a practical application because the claims improve the functioning of a computer or a technical field. See MPEP 2106.04(d)(1) and 2106.05(a). The claims reflect this improvement in the technical field of medical data analysis. Thus, the claims as a whole integrate the alleged judicial exception into a practical application.

Claim Rejections - 35 USC § 102: In response to Argument 1, with respect to independent claims 1, 19, and 20, the Examiner respectfully disagrees. First, Sorenson discloses that the engines and/or the e-suites can machine learn or be trained using machine learning algorithms based on prior findings periodically, such that as the engines/e-suites process more studies, the engines/e-suites can detect findings more accurately; this is interpreted as "training at least one data analysis tool for each determined cluster." (Paragraph 67). Furthermore, Sorenson discloses that image processing server 110 can have a developer platform for one or more engine developers to update, change, train, machine-learn, or any combination thereof, any of the engines on image processing server 110. The engines can be improved to detect the findings on a developer platform, for example, via training using machine-learning algorithms or via modifying a containerized version of a predict method for a given engine. … A COPD engine can machine learn based on the same COPD engine data (interpreted as a cluster); this is read as "training at least one data analysis tool for each determined cluster." (Paragraph 76). Also, Sorenson teaches that image processing tools can be implemented as image processing engines 113-115 which are then invoked in other third-party systems, such as a PACS or EMR, or other clinical or information system. The examples of medical image processing tools present in a current leading semi-automated image viewing and advanced visualization system, which may be included and/or further automated, or converted to engines, as part of the image processing system, show that image processing engines 113-115 are read as "data analysis tools." (Paragraph 100).

Second, under the broadest reasonable interpretation, the Examiner has interpreted the claim limitation "determining a number of clusters" as based on only "a first image data" or "a third image data." Sorenson discloses that the machine learning module can correlate image data from the medical image data source (read as first image data or third image data) to a workflow based on in-image analysis and metadata. … The correlation of the medical data (read as a "number of clusters") to a machine learning module or collection of machine learning modules can be done based on pattern extraction, feature extraction, or image processing that results in a medical image classification (clusterization); this is interpreted as "determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof." (Paragraph 183). Also, Sorenson discloses that the medical image data can be sent via a network to the Processing Server, where the auto-categorization module can categorize each of the images sent to the processing server. … The auto-categorization module can categorize the images (read as "determine the number of clusters") based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, analysis of pixel attributes, landmarks within the images, characterization methods, statistical methods, or any combination thereof. The tracking module can track the images based on categories, for example, modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies (organs, vessels, bones, or any combination thereof), body section (e.g., head, neck, chest, abdomen, pelvis, extremities, or any combination thereof), sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof; this is interpreted as "determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof." (Paragraphs 214-215).

The Examiner states that, in light of MPEP 2111, the claims have been interpreted properly. Specifically, during patent prosecution, pending claims must be "given their broadest reasonable interpretation consistent with the specification." The Examiner has interpreted the claim language in reference to the specification. Because Applicant has the opportunity to amend the claims during prosecution, giving a claim its broadest reasonable interpretation reduces the possibility that the claim, once issued, will be interpreted more broadly than is justified.
Although the cited reference differs from the invention disclosed, the language of Applicant's claims is sufficiently broad to reasonably read on the cited reference. A broad reading does not constitute "teaching away." Further, it has been held that nonpreferred embodiments failing to assert discovery beyond that known in the art do not constitute a "teaching away" unless such disclosure criticizes, discredits, or otherwise discourages the solution claimed. In re Susi, 440 F.2d 442, 169 USPQ 423 (CCPA 1971); In re Gurley, 27 F.3d 551, 554, 31 USPQ2d 1130, 1132 (Fed. Cir. 1994); In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004); see MPEP § 2123.

Disclosed examples and preferred embodiments do not constitute a teaching away from a broader disclosure or nonpreferred embodiments. In re Susi, 440 F.2d 442, 169 USPQ 423 (CCPA 1971). "A known or obvious composition does not become patentable simply because it has been described as somewhat inferior to some other product for the same use." In re Gurley, 27 F.3d 551, 554, 31 USPQ2d 1130, 1132 (Fed. Cir. 1994) (The invention was directed to an epoxy-impregnated fiber-reinforced printed circuit material. The applied prior art reference taught a printed circuit material similar to that of the claims but impregnated with polyester-imide resin instead of epoxy. The reference disclosed that epoxy was known for this use and that epoxy-impregnated circuit boards have "relatively acceptable dimensional stability" and "some degree of flexibility," but are inferior to circuit boards impregnated with polyester-imide resins. The court upheld the rejection, concluding that applicant's argument that the reference teaches away from using epoxy was insufficient to overcome the rejection since "Gurley asserted no discovery beyond what was known in the art." 27 F.3d at 554, 31 USPQ2d at 1132.). Furthermore, "[t]he prior art's mere disclosure of more than one alternative does not constitute a teaching away from any of these alternatives because such disclosure does not criticize, discredit, or otherwise discourage the solution claimed…." In re Fulton, 391 F.3d 1195, 1201, 73 USPQ2d 1141, 1146 (Fed. Cir. 2004). (MPEP § 2123).

In response to Argument 2, with respect to dependent claim 10, the Examiner respectfully disagrees. Sorenson discloses a machine learning module that is part of an artificial intelligence findings system and includes an image identification engine that can extract features from a new medical image being analyzed to match this data to data present in the archive with the same characteristic (disease). … The similar data can pertain to anatomical structures such as body parts, anatomic anomalies, and anatomical features; this is interpreted as "determining a number of clusters corresponding to different pathologies." (Paragraph 184). Also, Sorenson discloses that an engine developer can train a lung nodule detection engine to detect lung nodules (interpreted as "pathologies") in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules. … A COPD engine can machine learn based on the same COPD engine data (read as a cluster), based on another COPD engine data, or any combination thereof (read as "determining a number of clusters corresponding to different pathologies"). (Paragraph 76). Therefore, Sorenson teaches/discloses determining a number of clusters corresponding to different pathologies.

In response to Argument 3, with respect to dependent claim 11, Applicant's arguments have been fully considered but are moot in view of the new ground(s) of rejection in view of Yerebakan Halid et al. (EP 3869453; Yerebakan).

In response to Argument 4, with respect to dependent claim 12, the Examiner respectfully disagrees.
Sorenson discloses that the machine learning module can categorize the image data based on in-image analysis and/or metadata (e.g., DICOM headers or tags) (interpreted as "grouping the descriptors into the number of clusters"). The machine learning module can identify any image information from the image data, such as the modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies, … study/series description, scanning protocol; this is read as "determining descriptors for images in the first or third medical image data." (Paragraph 197). Furthermore, Sorenson discloses that the medical image data can be sent via a network to the Processing Server, where the auto-categorization module can categorize each of the images sent to the processing server. … The auto-categorization module can categorize the images (read as "determine the number of clusters") based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, analysis of pixel attributes, landmarks within the images, characterization methods, statistical methods, or any combination thereof. The tracking module can track the images based on categories, for example, modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies (organs, vessels, bones, or any combination thereof), body section (e.g., head, neck, chest, abdomen, pelvis, extremities, or any combination thereof), sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof. (Paragraphs 214-215). Therefore, Sorenson teaches/discloses determining descriptors for images in the first or third medical image data and grouping the descriptors into the number of clusters.

In response to Argument 5, with respect to dependent claim 13, Applicant's arguments have been fully considered but are moot in view of the new ground(s) of rejection in view of Yerebakan Halid et al. (EP 3869453; Yerebakan).

Claim Status

Claim(s) 1-10, 12, and 14-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sorenson et al. (U.S. 20180137244 A1; Sorenson). Claim(s) 11 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson et al. (U.S. 20180137244 A1; Sorenson) in view of Yerebakan Halid et al. (EP 3869453 A1; Yerebakan). Examiner's Note: See the PDF of EP 3869453 A1 provided by the Examiner.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim(s) 1-10, 12, and 14-20 is/are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sorenson et al. (U.S. 20180137244 A1; Sorenson).
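For orientation only, and not part of the prosecution record: the claim 1 limitation at the center of Argument 1, training at least one data analysis tool for each determined cluster, can be sketched in a few lines. Everything below is hypothetical (the DetectorTool stand-in, the mean-intensity "training," and the cluster keys); it illustrates only the one-tool-per-cluster structure the claim recites, not any party's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class DetectorTool:
    """Hypothetical per-cluster 'data analysis tool': a trivial mean-intensity
    threshold detector standing in for a real trained model."""
    threshold: float = 0.0

    def train(self, images, labels):
        # Place the threshold midway between the mean intensity of images
        # with the finding (label True) and without it (label False).
        pos = [sum(img) / len(img) for img, y in zip(images, labels) if y]
        neg = [sum(img) / len(img) for img, y in zip(images, labels) if not y]
        self.threshold = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, image):
        # Report a finding when mean intensity exceeds the learned threshold.
        return sum(image) / len(image) > self.threshold

def train_tool_per_cluster(clustered_data):
    """Claim-1-style step: train at least one tool for each determined cluster.

    clustered_data: {cluster_id: (images, labels)}, where each image is a flat
    list of pixel intensities and each label flags a finding present/absent.
    """
    tools = {}
    for cluster_id, (images, labels) in clustered_data.items():
        tool = DetectorTool()
        tool.train(images, labels)   # one independently trained tool per cluster
        tools[cluster_id] = tool
    return tools
```

With two hypothetical clusters, say "lung_nodule" and "copd", the function returns one independently trained tool per cluster, mirroring the per-cluster training structure recited in claim 1.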
Regarding claim 1, Sorenson discloses a system for medical data analysis (Fig. 1: a medical data review system; Fig. 2), comprising: a non-transitory memory for storing machine-readable instructions; and a processing circuit in communication with the non-transitory memory, the processing circuit being operative with the machine-readable instructions (Paragraph 77: "Referring to FIG. 2, image processing server 110 includes memory 201 (e.g., dynamic random access memory or DRAM) hosting one or more image processing engines 113-115, which may be installed in and loaded from persistent storage device 202 (e.g., hard disks), and executed by one or more processors (not shown).") to perform steps including:

determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof (Paragraphs 214-215: "The medical image data can be sent via a network to the Processing Server where the auto-categorization module can categorize each of the images sent to the processing server … the auto-categorization module can categorize the images based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, … sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof."; showing that each of the categories is interpreted as a "cluster"; Paragraphs 83-85; Paragraphs 100-118: "a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, such as a PACS or EMR, or other clinical or information system. … Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms."; showing that the set of diagnostic image processing features input to each image processing tool, such as the vessel analysis tools or calcium scoring tools, is read as a "cluster"; Paragraph 183: "the machine learning module can correlate image data from the medical image data source to a workflow based on in-image analysis and metadata. … The correlation of the medical data to a machine learning module or collection of machine learning can be done based on pattern extraction, feature extraction or image processing which result of a medical image classification (clusterization)");

and automatically training a first number of data analysis tools (Figs. 1-2: image processing engines 113-115) based on the first medical image data and the first analysis data related to the first medical image data (Paragraphs 65-66: "The engine or e-suites can detect findings (e.g., a disease, an indication, a feature, an object, a shape, a texture, a measurement, insurance fraud, or any combination thereof). The one or more engines and/or one or more e-suites can detect findings from studies (e.g., clinical reports, images, patient data, image data, metadata, or any combination thereof) based on metadata, known methods of in-image analysis, or any combination thereof. … the engines and/or the e-suites can machine learn or be trained using machine learning algorithms based on prior findings periodically such that as the engines/e-suites process more studies, the engines/e-suites can detect findings more accurately."; Paragraph 70: "Tools, engines, e-suites, training tools, coding tools, or any combination thereof can be displayed and used via image processing server 110 or in a 2D and/or 3D medical imaging software application, or medical data review system, … The second user or group can use the machine learning/training tools and the feedback from this usage can be applied to train the first engine to detect findings with higher accuracy. The first engine can be updated by image processing server 110 and stored in the application store 109. The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof."; Paragraph 76: "an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules (e.g., geometric shapes, textures, other combination of features resulting in detection of lung nodules, or any combination thereof)"),

wherein at least one of the first number of data analysis tools (Figs. 1-2: image processing engines 113-115) is trained for each determined cluster (Paragraph 76: "an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules … a COPD engine can machine learn based on the same COPD engine data, based on another COPD engine data, or any combination thereof."; showing that "the same COPD engine data" is interpreted as a "determined cluster"; Paragraph 208: "Other machine learning approaches for in-image analysis and metadata can be implemented such as decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering,"; Paragraphs 100-118: "a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, … medical image processing tools present in a current leading semi-automated image viewing and advanced visualization system that may be included and/or further automated, or converted to engines, … Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms."; showing that the set of diagnostic image processing features input to each image processing tool, such as the vessel analysis tools or calcium scoring tools, is read as a "determined cluster").

Regarding claim 2, Sorenson discloses that the processing circuit is operative with the machine-readable instructions to perform: selecting at least one of the first number of data analysis tools; and executing the selected at least one data analysis tool
to, based on second medical image data, output second analysis data. (Figs. 1-3: medical data source 105, image processing engine as “113. lung nodule -114. bone fracture – 115. vessel detect; Paragraph 69: “The engines or e-suites of image processing server 110 can process studies depending on which engines are selected by the user via a graphical user interface (GUI) or website (local or on the internet) of image processing server 110.” ; Paragraph 92: “the user can select engines that detect specific features of lung nodules, for example, an engine for texture, and engine for nodule shape, an engine for intensity, or any combination thereof. Such engines can be run in parallel, in series, or any combination thereof.”) Regarding claim 3, Sorenson discloses the processing circuit is operative with the machine-readable instructions to enable selection of the at least one data analysis tool via a user interface. (Figs. 1-3: medical data source 105, image processing engine as “113. lung nodule -114. bone fracture – 115. vessel detect; Paragraph 69: “The engines or e-suites of image processing server 110 can process studies depending on which engines are selected by the user via a graphical user interface (GUI) or website (local or on the internet) of image processing server 110.”) Regarding claim 4, Sorenson discloses the user interface displays the first medical image data, the second medical image data, third medical image data, or a combination thereof. (Figs. 1-3: medical data source 105; Paragraphs 165, 168, Paragraph 202: “The image processing server can receive image data from the medical image data source. The image processing server (e.g., an engine (not shown) that is part of the image processing server) can analyze and send the image data to the client device to be displayed over a network (e.g., WAN or LAN). 
The display of the image data and application settings/preferences on the client can be a default preference based on the user's default preferences or the image processing server/client application's default preference.”) Regarding claim 5, Sorenson discloses the user interface applies a user operated data analysis tool to the first or third medical image data to generate the first analysis data or third analysis data. (Figs. 1-3: medical data source 105; image processing tools 107; Paragraph 70: “A medical imaging software application is a client application that accesses the output of the image processing tools 107 of image processing system 106. … The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof.”; Paragraph 202: “The image processing server can receive image data from the medical image data source. The image processing server (e.g., an engine (not shown) that is part of the image processing server) can analyze and send the image data to the client device to be displayed over a network (e.g., WAN or LAN).”) Regarding claim 6, Sorenson discloses the user interface displays the first analysis data, the second analysis data, the third analysis data, or a combination thereof. (Figs. 1-3: medical data source 105; image processing tools 107; Paragraph 70: “A medical imaging software application is a client application that accesses the output of the image processing tools 107 of image processing system 106. … The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof.”; Paragraph 202: “The image processing server can receive image data from the medical image data source. 
The image processing server (e.g., an engine (not shown) that is part of the image processing server) can analyze and send the image data to the client device to be displayed over a network (e.g., WAN or LAN).”) Regarding claim 7, Sorenson discloses the processing circuit is operative with the machine-readable instructions to perform: automatically generating a second number of data analysis tools based on third medical image data and third analysis data related to the third medical image data. (Figs. 1-3: medical data source 105, image processing engine as “113. lung nodule -114. bone fracture – 115. vessel detect; Paragraph 60: “each of image processing engines or modules 113-115 may be configured to perform a specific image processing operation on medical images, such as, for example, lung nodule detection, bone fracture detection, organ identification and segmentation, blood clot detection, image body part categorization, chronic obstructive pulmonary disease (COPD) detection, or soft tissue characterization. An image processing engine can perform such a detection based on the shape, texture, sphericity measurement, color, or other features obtained from the medical images or which are derived or implied by the clinical content.”) Regarding claim 8, Sorenson discloses the processing circuit is operative with the machine-readable instructions to perform: updating the first number of data analysis tools based on third medical image data and third analysis data related to the third medical image data. (Figs. 1-3: medical data source 105; image processing tools 107; image processing engine as “113. lung nodule – 114. bone fracture- 115. vessel detect; Paragraphs 69-70: “A medical imaging software application is a client application that accesses the output of the image processing tools 107 of image processing system 106. 
… The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof.”; Paragraph 72: “Image processing server 110 can have a GUI for one or more users or groups to train, code, develop, upload, delete, track, purchase, update, or process data on engines or e-suites.”; Paragraphs 95-96) Regarding claim 9, Sorenson discloses the processing circuit is operative with the machine-readable instructions to select at least one of the first and second number of data analysis tools. (Figs. 1-3: medical data source 105; image processing engines 113 (lung nodule), 114 (bone fracture), and 115 (vessel detect); Paragraph 69: “The engines or e-suites of image processing server 110 can process studies depending on which engines are selected by the user via a graphical user interface (GUI) or website (local or on the internet) of image processing server 110.”) Regarding claim 10, Sorenson discloses the clusters correspond to different pathologies.
(Paragraph 76: “an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules … a COPD engine can machine learn based on the same COPD engine data, based on another COPD engine data, or any combination thereof.”; Paragraphs 183-184: “a machine learning module that is part of an artificial intelligence findings system includes an image identification engine that can extract features from a new medical image being analyzed to match this data to data present in the archive with the same characteristic (disease) … the similar data can pertain to anatomical structures such as body parts, anatomic anomalies and anatomical features”) Regarding claim 12, Sorenson discloses the processing circuit is operative with the machine-readable instructions to determine the number of clusters by: determining descriptors for images in the first or third medical image data; and grouping the descriptors into the number of clusters.
(Paragraph 197: “the machine learning module can categorize the image data based on in-image analysis and/or metadata (e.g., DICOM headers or tags). The machine learning module can identify any image information from the image data such as the modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies … study/series description, scanning protocol”); Paragraphs 214-215: “the medical image data can be sent via a network to the Processing Server where the auto-categorization module can categorize each of the images sent to the processing server … the auto-categorization module can categorize the images based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, analysis of pixel attributes, landmarks within the images, characterization methods, statistical methods, or any combination thereof. The tracking module can track the images based on categories, for example, modality, orientation (e.g., axial, coronal, sagittal, off axis, short axis, 3 chamber view, or any combination thereof), anatomies (organs, vessels, bones, or any combination thereof), body section (e.g., head, neck, chest, abdomen, pelvis, extremities, or any combination thereof), sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof.”) Regarding claim 14, Sorenson discloses the processing circuit is operative with the machine-readable instructions to generate a corresponding data analysis tool by using the descriptors, corresponding images, analysis data corresponding to the images, or a combination thereof, within one cluster. (Paragraph 83; Paragraphs 100-118: “a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system.
Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, such as a PACS or EMR, or other clinical or information system. … Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms. … Lobular decomposition tools identify tree-like structures within a volume of interest, e.g. a scan region containing a vascular bed … Segmentation, analysis & tracking tools support analysis and characterization of masses and structures, such as solitary pulmonary nodules or other potential lesions.”; this shows that each image processing tool, such as the vessel analysis tools or calcium scoring tools, is read as the claimed “cluster”; Paragraph 197: “The machine learning module can identify any image information from the image data such as the modality, orientation … sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description”) Regarding claim 15, Sorenson discloses at least one of the first or second number of data analysis tools comprises a neural network, wherein the processing circuit is operative with the machine-readable instructions to generate the corresponding data analysis tool by training the neural network with the first medical image data, the third medical image data or corresponding descriptors as input data and the first medical image data or the third analysis data as desired output data.
(Paragraphs 46-47: “a supervised or unsupervised machine learned engine can be used to monitor and learn the effectiveness of various engines in various situations when a first set of medical images associated with a particular clinical study is received from a medical data source, one or more image processing engines are invoked to process (e.g., recognizing shapes, features, trends in the images or other data or measurements) the medical images (or data, used synonymously in this application) according to a predetermined or machine learned suggested order for performing the engine operations that is configured for the particular type of imaging study.”; Paragraph 208: “Other machine learning approaches for in-image analysis and metadata can be implemented such as decision tree learning, association rule learning, artificial neural networks, deep learning … convolutional neural network based on deep learning framework and naïve Bayes classifier, or any combination thereof.”) Regarding claim 16, Sorenson discloses at least a first and a second client device each connected to the processing circuit via a network, the first client device comprising a first user interface and the second client device comprising a second user interface. (Paragraph 54: “Referring to FIG. 1, medical data review system 100 includes one or more client devices 101-102 communicatively coupled to medical image processing server 110 over network 103. Client devices 101-102 can be a desktop, laptop, mobile device, workstation, etc.”) Regarding claim 17, Sorenson discloses the first user interface applies the at least one data analysis tool to the first medical image data to generate the first analysis data.
(Paragraphs 59-60: “The image processing engines 113-115 can be uploaded and listed in a Web server 109, in this example, an application store, to allow a user of clients 101-102 to purchase, select, and download one or more image processing engines as part of client applications 111-112 respectively. The selected image processing engines can be configured to a variety of configurations (e.g., in series, in parallel, or both) to perform a sequence of one or more image processing operations. … each of image processing engines or modules 113-115 may be configured to perform a specific image processing operation on medical images, such as, for example, lung nodule detection, bone fracture detection,”) Regarding claim 18, Sorenson discloses wherein the second user interface controls the selection of the at least one data analysis tool after the first number of data analysis tools are generated, the first medical image data and the first analysis data are received by the processing circuit via the network. (Paragraphs 59-60: “The image processing engines 113-115 can be uploaded and listed in a Web server 109, in this example, an application store, to allow a user of clients 101-102 to purchase, select, and download one or more image processing engines as part of client applications 111-112 respectively. The selected image processing engines can be configured to a variety of configurations (e.g., in series, in parallel, or both) to perform a sequence of one or more image processing operations. … each of image processing engines or modules 113-115 may be configured to perform a specific image processing operation on medical images, such as, for example, lung nodule detection, bone fracture detection,”) Regarding claim 19, Sorenson discloses a computer-implemented method of medical data analysis (Fig. 1: a medical data review system; Paragraph 77: “Referring to FIG.
2, image processing server 110 includes memory 201 (e.g., dynamic random access memory or DRAM) hosting one or more image processing engines 113-115, which may be installed in and loaded from persistent storage device 202 (e.g., hard disks), and executed by one or more processors (not shown).”) comprising: determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof, (Paragraphs 214-215: “The medical image data can be sent via a network to the Processing Server where the auto-categorization module can categorize each of the images sent to the processing server … the auto-categorization module can categorize the images based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, … sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof.”; this shows that each of the categories is interpreted as the claimed “clusters”; Paragraphs 83-85; Paragraphs 100-118: “a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, such as a PACS or EMR, or other clinical or information system.
… Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms.”; this shows that each image processing tool to which the set of diagnostic image processing features is input, such as the vessel analysis tools or calcium scoring tools, is read as the claimed “cluster”; Paragraph 183: “the machine learning module can correlate image data from the medical image data source to a workflow based on in-image analysis and metadata. … The correlation of the medical data to a machine learning module or collection of machine learning can be done based on pattern extraction, feature extraction or image processing which result of a medical image classification (clusterization)”) and automatically training a first number of data analysis tools (Figs. 1-2: image processing engines 113, 114, and 115) based on the first medical image data and the first analysis data related to the first medical image data, (Paragraphs 65-66: “The engine or e-suites can detect findings (e.g., a disease, an indication, a feature, an object, a shape, a texture, a measurement, insurance fraud, or any combination thereof). The one or more engines and/or one or more e-suites can detect findings from studies (e.g., clinical reports, images, patient data, image data, metadata, or any combination thereof) based on metadata, known methods of in-image analysis, or any combination thereof.
… the engines and/or the e-suites can machine learn or be trained using machine learning algorithms based on prior findings periodically such that as the engines/e-suites process more studies, the engines/e-suites can detect findings more accurately.”; Paragraph 70: “Tools, engines, e-suites, training tools, coding tools, or any combination thereof can be displayed and used via image processing server 110 or in a 2D and/or 3D medical imaging software application, or medical data review system, … The second user or group can use the machine learning/training tools and the feedback from this usage can be applied to train the first engine to detect findings with higher accuracy. The first engine can be updated by image processing server 110 and stored in the application store 109. The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof.”; Paragraph 76: “an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules (e.g., geometric shapes, textures, other combination of features resulting in detection of lung nodules, or any combination thereof)”) wherein at least one of the first number of data analysis tools (Figs. 1-2: image processing engines 113, 114, and 115) is trained for each determined cluster.
(Paragraph 76: “an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules … a COPD engine can machine learn based on the same COPD engine data, based on another COPD engine data, or any combination thereof.”; this shows that “the same COPD engine data” is interpreted as the claimed “determined cluster”; Paragraph 208: “Other machine learning approaches for in-image analysis and metadata can be implemented such as decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering,”; Paragraphs 100-118: “a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, … medical image processing tools present in a current leading semi-automated image viewing and advanced visualization system that may be included and/or further automated, or converted to engines, … Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms.”; this shows that each image processing tool to which the set of diagnostic image processing features is input, such as the vessel analysis tools or calcium scoring tools, is read as the claimed “determined cluster”) Regarding claim 20, Sorenson discloses one or more non-transitory computer-readable media comprising computer-readable instructions that, when executed by one or more processing units, (Fig. 1: a medical data review system; Paragraph 77: “Referring to FIG.
2, image processing server 110 includes memory 201 (e.g., dynamic random access memory or DRAM) hosting one or more image processing engines 113-115, which may be installed in and loaded from persistent storage device 202 (e.g., hard disks), and executed by one or more processors (not shown).”) cause the one or more processing units to perform steps comprising: determining a number of clusters based on first medical image data, third medical image data, first analysis data related to the first medical image data, third analysis data, or a combination thereof, (Paragraphs 214-215: “The medical image data can be sent via a network to the Processing Server where the auto-categorization module can categorize each of the images sent to the processing server … the auto-categorization module can categorize the images based on rules, training based on user, machine learning, DICOM Headers, in-image analysis, … sorting information (e.g., 2D, 2.5D, 3D, 4D), study/series description, scanning protocol, sequences, options, flow data, or any combination thereof.”; this shows that each of the categories is interpreted as the claimed “clusters”; Paragraphs 83-85; Paragraphs 100-118: “a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, such as a PACS or EMR, or other clinical or information system.
… Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms.”; this shows that each image processing tool to which the set of diagnostic image processing features is input, such as the vessel analysis tools or calcium scoring tools, is read as the claimed “cluster”; Paragraph 183: “the machine learning module can correlate image data from the medical image data source to a workflow based on in-image analysis and metadata. … The correlation of the medical data to a machine learning module or collection of machine learning can be done based on pattern extraction, feature extraction or image processing which result of a medical image classification (clusterization)”) and automatically training a first number of data analysis tools (Figs. 1-2: image processing engines 113, 114, and 115) based on the first medical image data and the first analysis data related to the first medical image data, (Paragraphs 65-66: “The engine or e-suites can detect findings (e.g., a disease, an indication, a feature, an object, a shape, a texture, a measurement, insurance fraud, or any combination thereof). The one or more engines and/or one or more e-suites can detect findings from studies (e.g., clinical reports, images, patient data, image data, metadata, or any combination thereof) based on metadata, known methods of in-image analysis, or any combination thereof.
… the engines and/or the e-suites can machine learn or be trained using machine learning algorithms based on prior findings periodically such that as the engines/e-suites process more studies, the engines/e-suites can detect findings more accurately.”; Paragraph 70: “Tools, engines, e-suites, training tools, coding tools, or any combination thereof can be displayed and used via image processing server 110 or in a 2D and/or 3D medical imaging software application, or medical data review system, … The second user or group can use the machine learning/training tools and the feedback from this usage can be applied to train the first engine to detect findings with higher accuracy. The first engine can be updated by image processing server 110 and stored in the application store 109. The processing of image data by engines and updating of the engines can occur at image processing server 110, the image processing application store 109, or any combination thereof.”; Paragraph 76: “an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules (e.g., geometric shapes, textures, other combination of features resulting in detection of lung nodules, or any combination thereof)”) wherein at least one of the first number of data analysis tools (Figs. 1-2: image processing engines 113, 114, and 115) is trained for each determined cluster.
(Paragraph 76: “an engine developer can train a lung nodule detection engine to detect lung nodules in studies on the developer platform of the application store 109 or a data analytic system (not shown) by training the engine based on various features of detecting lung nodules … a COPD engine can machine learn based on the same COPD engine data, based on another COPD engine data, or any combination thereof.”; this shows that “the same COPD engine data” is interpreted as the claimed “determined cluster”; Paragraph 208: “Other machine learning approaches for in-image analysis and metadata can be implemented such as decision tree learning, association rule learning, artificial neural networks, deep learning, inductive logic programming, support vector machines, clustering,”; Paragraphs 100-118: “a variety of image processing tools can be accessed by a user using the diagnostic image processing features of the medical data review system. Alternatively, such image processing tools can be implemented as image processing engines 113-115 which are then evoked in other third party systems, … medical image processing tools present in a current leading semi-automated image viewing and advanced visualization system that may be included and/or further automated, or converted to engines, … Vessel Analysis tools may include a comprehensive vascular analysis package for CT and MR angiography capable of a broad range of vascular analysis tasks, … Calcium scoring tools may include identification of coronary calcium with Agatston, volume and mineral mass algorithms.”; this shows that each image processing tool to which the set of diagnostic image processing features is input, such as the vessel analysis tools or calcium scoring tools, is read as the claimed “determined cluster”) Claim Rejections - 35 USC § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4. Considering objective evidence present in the application indicating obviousness or nonobviousness. Claims 11 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Sorenson et al. (U.S. 20180137244 A1; Sorenson), in view of Yerebakan et al. (EP 3869453; Yerebakan). Regarding claim 11, Sorenson discloses all of the claimed invention except wherein the processing circuit is operative with the machine-readable instructions to determine the number of clusters by performing a K-means algorithm. Yerebakan discloses the processing circuit is operative with the machine-readable instructions to determine the number of clusters by performing a K-means algorithm.
(Paragraphs 44-45: “At step 204 of the medical image processing method 200, a clustering algorithm is applied to the plurality of sets of data generated in step 202 to generate a plurality of groups, based on a similarity between image descriptors of the plurality of sets of data. Each of the plurality of groups includes one or more of the plurality of sets of data. … the clustering algorithm uses an iterative refinement process, such as a "k-means" algorithm”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sorenson with the clustering processing taught by Yerebakan, to arrive at an invention that processes medical image data by grouping together findings using clustering techniques; thus, one of ordinary skill in the art would have been motivated to combine the references, since this would improve recognition of similar findings that may be distributed throughout the medical image data and reduce the time-consuming task of sorting and grouping findings in the medical image data. (Yerebakan: Paragraphs 15-16) Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Regarding claim 13, Sorenson discloses the processing circuit is operative with the machine-readable instructions to automatically train the first or second number of data analysis tools (Paragraphs 65-66: “The engine or e-suites can detect findings (e.g., a disease, an indication, a feature, an object, a shape, a texture, a measurement, insurance fraud, or any combination thereof). The one or more engines and/or one or more e-suites can detect findings from studies (e.g., clinical reports, images, patient data, image data, metadata, or any combination thereof) based on metadata, known methods of in-image analysis, or any combination thereof.
… the engines and/or the e-suites can machine learn or be trained using machine learning algorithms based on prior findings periodically such that as the engines/e-suites process more studies, the engines/e-suites can detect findings more accurately.”) However, Sorenson does not disclose automatically training the first or second number of data analysis tools when the number of descriptors in any one cluster exceeds a threshold value. Yerebakan discloses the processing circuit is operative with the machine-readable instructions to automatically train the first or second number of data analysis tools (Paragraphs 85-86: “By generating and visually indicating groups of image patches with similar features, identification of these different conditions may be facilitated … mitigate the need to cap the number of medical abnormality candidates that, for example, an automated algorithm may generate. Because image patches having similar features are grouped together, potentially irrelevant image patches (e.g. image patches that do not indicate an illness or injury) have a chance of being grouped together, and thus easily identified.”) when the number of descriptors in any one cluster exceeds a threshold value. (Paragraph 52: “the number of clusters can be set by the clustering algorithm in dependence of the characteristics of the data set. Factors that may influence the number of clusters include but are not limited to: the distance threshold between two descriptors for the descriptors to be treated as similar for the purposes of clustering, the degree of similarity between two descriptors, and the total number of findings within the image.”; Paragraph 16: “complex and time-consuming task of sorting and grouping findings and additionally mitigates the need to impose a threshold on the number of medical abnormality candidates.
Grouping findings according to similarity between image descriptors enables different types of medical abnormality candidates to be grouped together”) Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Sorenson with the clustering processing taught by Yerebakan, to arrive at an invention that processes medical image data by grouping together findings using clustering techniques; thus, one of ordinary skill in the art would have been motivated to combine the references, since this would improve recognition of similar findings that may be distributed throughout the medical image data and reduce the time-consuming task of sorting and grouping findings in the medical image data. (Yerebakan: Paragraphs 15-16) Thus, the claimed subject matter would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention. Relevant Prior Art Directed to State of Art Namer Yelin et al. (U.S. 20120197619 A1), “System and Method for Generating a Patient-Specific Digital Image-Based Model of An Anatomical Structure”, teaches a method that may comprise receiving medical image data and metadata of a specific patient. A patient-specific digital image-based model of an anatomical structure may be generated based on the medical image data and the metadata. A computerized simulation of an image-guided procedure may be performed using the digital image-based model and the metadata. Campanatti, Jr. et al. (U.S. 20140087342 A1), “Training and Testing System for Advanced Image Processing”, teaches medical image processing training in which at least one medical image associated with a medical image processing training course (MIPTC) is displayed in a first display area.
An instruction is displayed in a second display area, where the instruction requests a user to perform a quantitative determination on at least a portion of a body part within the medical image displayed in the first display area. In response to a user action from the user, the requested determination is performed on the displayed medical image. At least one quantitative value representing a result of the user action is determined automatically, without user intervention. The quantitative value is compared to a predefined model answer. Krishna et al. (U.S. 20150227702 A1), “Multi-Factor Brain Analysis Via Medical Imaging Decision Support System and Methods”, teaches a medical imaging decision support system that can conduct, and help medical professionals conduct, multi-factor brain analysis. Data for disparate processing modes (for example, EEG, MRI, etc.) can be input to the system, processed in parallel in a cloud environment, and the results can be rendered in a thin client (for example, a browser) for a user's rapid multi-modal evaluation of a brain. Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Duy A Tran whose telephone number is (571)272-4887. The examiner can normally be reached Monday-Friday 8:00 am - 5:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ONEAL R MISTRY can be reached at (313)-446-4912. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /DUY TRAN/ Examiner, Art Unit 2674 /ONEAL R MISTRY/ Supervisory Patent Examiner, Art Unit 2674
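The technical scheme disputed in claims 11-13 combines k-means clustering of image descriptors (Yerebakan, Paragraphs 44-45) with a trigger that (re)trains a per-cluster analysis tool when a cluster's descriptor count exceeds a threshold (claim 13; Yerebakan, Paragraph 52). The sketch below is purely illustrative and comes from neither reference: the descriptor values are invented, `tools_to_train` is a placeholder name, and the deterministic centroid initialization is a simplification for reproducibility.

```python
def kmeans(descriptors, k, iters=20):
    """Minimal k-means (Lloyd's algorithm): assign each descriptor to its
    nearest centroid, then recompute each centroid as the mean of its
    members.  Initialization is deterministic (first and last descriptor)
    so the toy run below is reproducible; real k-means uses random restarts."""
    centroids = [descriptors[0], descriptors[-1]] if k == 2 else list(descriptors[:k])
    assignment = [0] * len(descriptors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        for i, d in enumerate(descriptors):
            assignment[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(d, centroids[c])),
            )
        # Update step: centroid = mean of its members (kept if a cluster empties).
        for c in range(k):
            members = [d for i, d in enumerate(descriptors) if assignment[i] == c]
            if members:
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return assignment

def tools_to_train(assignment, threshold):
    """Claim-13-style trigger: flag every cluster whose descriptor count
    exceeds the threshold as needing its analysis tool (re)trained."""
    counts = {}
    for c in assignment:
        counts[c] = counts.get(c, 0) + 1
    return sorted(c for c, n in counts.items() if n > threshold)

# Two well-separated groups of 2-D image descriptors, e.g. two pathologies.
descriptors = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
               (5.0, 5.1), (5.2, 4.9), (4.8, 5.0), (5.1, 5.2)]
assignment = kmeans(descriptors, k=2)
sizes = sorted(assignment.count(c) for c in set(assignment))
print(sizes)                                    # cluster sizes: [3, 4]
print(tools_to_train(assignment, threshold=3))  # only the 4-member cluster: [1]
```

With this toy data the clusters settle at sizes 3 and 4, and only the 4-member cluster crosses the threshold, so exactly one tool would be (re)trained; per-cluster training of this kind is what distinguishes the claimed scheme from a single universal detector.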

Prosecution Timeline

Jul 18, 2023
Application Filed
Sep 25, 2025
Non-Final Rejection — §101, §102, §103
Nov 24, 2025
Response Filed
Mar 02, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12573024
IMAGE AUGMENTATION FOR MACHINE LEARNING BASED DEFECT EXAMINATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561934
AUTOMATIC ORIENTATION CORRECTION FOR CAPTURED IMAGES
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12548284
METHOD FOR ANALYZING ONE OR MORE ELEMENT(S) OF ONE OR MORE PHOTOGRAPHED OBJECT(S) IN ORDER TO DETECT ONE OR MORE MODIFICATION(S), AND ASSOCIATED ANALYSIS DEVICE
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12530798
LEARNED FORENSIC SOURCE SYSTEM FOR IDENTIFICATION OF IMAGE CAPTURE DEVICE MODELS AND FORENSIC SIMILARITY OF DIGITAL IMAGES
Granted Jan 20, 2026 (2y 5m to grant)
Patent 12505539
CELL BODY SEGMENTATION USING MACHINE LEARNING
Granted Dec 23, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+17.5%)
3y 1m
Median Time to Grant
Moderate
PTA Risk
Based on 128 resolved cases by this examiner. Grant probability derived from career allow rate.
