DETAILED ACTION
Notices to Applicant
This communication is a final rejection. Claims 1-6, 9, and 11, as filed 01/26/2026, are currently pending and have been considered below.
Priority is acknowledged to PCT/EP2023/063975, filed 05/24/2023, and foreign priority is acknowledged to European application 22176636.3, filed 06/01/2022.
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon and the rationale supporting the rejection would be the same under either status.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 11 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claim does not fall within at least one of the four categories of patent eligible subject matter because the claim encompasses signals per se. See MPEP 2106.03(I): “Non-limiting examples of claims that are not directed to any of the statutory categories include:… Transitory forms of signal transmission (often referred to as “signals per se”), such as a propagating electrical or electromagnetic signal or carrier wave.” Claim 11 is directed to a computer software product or a computer-readable medium. The specification does not describe any structure for the computer software product that would prevent it from being a signal. This rejection could be overcome by removing the computer software product or by clarifying that the medium is non-transitory.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 9, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Oberoi (“Content Based Image Retrieval System for Medical Databases (CBIR-MD) – Lucratively tested on Endoscopy, Dental and Skull Images” cited in IDS dated 11/25/2024) in view of Kearney (US20220012815A1) and Choe (Choe J, Hwang HJ, Seo JB, Lee SM, Yun J, Kim MJ, Jeong J, Lee Y, Jin K, Park R, Kim J, Jeon H, Kim N, Yi J, Yu D, Kim B. Content-based Image Retrieval by Using Deep Learning for Interstitial Lung Disease Diagnosis with Chest CT. Radiology. 2022 Jan;302(1):187-197).
Regarding claim 1, Oberoi discloses: A computer-implemented method for improving usability of dental imaging data (the title and figure 1: "Data Flow Diagram of Proposed Retrieval System" on page 301: the method of figure 1 corresponds to the method of claim 1; page 302, section "4.1 Dataset for the Experiment" discloses the computer implementation of the method for dental images), which method comprises:
--processing imaging data of a plurality of different images of at least one dental imaging modality of a patient by a data-driven logic to produce (“In proposed system like other CBIR system, images are represented by appropriate feature vector in feature space. Such feature vector gives meaningful information of image properties,” page 301; dental and skull images in 4.1 on page 302);
--presenting imaging data comprising the plurality of different images (page 301, figure 1 in combination with section "3.3 Retrieval process" on pages 301-302: step 1 "Input query medical image" in combination with step 3 "Format/ collect the medical images from the medical databases at a point" correspond to providing imaging data comprising a plurality of different images; figure 8 on page 303 discloses the retrieved images being different images of at least one dental imaging modality of a patient);
--presenting semantic information for each of the plurality of different images, wherein the (page 301, figure 1 in combination with section "3.3 Retrieval process" on pages 301-302: step 2 of feature extraction corresponds to providing metadata for each of the images of claim 1; page 301, first paragraph of section "3. Proposed System": the features of the feature vector give meaningful information of image properties; because these features represent anatomical features recorded in the image, the features comprising meaningful information about the images correspond to the claimed metadata related to a specific image comprising information related to one or more anatomical features recorded in that image),
--receiving a selection from a user, the selection being from the imaging data of one or more of the plurality of different images (page 301, section "3.3 Retrieval process": step 1 of inputting query medical image is a selection for the query);
--retrieving in response to the selection from the imaging data, at least one related image (section "3.2 Metrics for Similarity Comparison" and page 301, section "3.3 Retrieval process" on pages 301-302, steps 6-9, features of images from the databases are compared to features of the query image and the most similar images are retrieved);
--wherein the retrieval is performed by comparing (section "3.2 Metrics for Similarity Comparison" and page 301, section "3.3 Retrieval process" on pages 301-302: step 6 "Compare features of the query medical image with medical images from the database by Euclidean Distance (ED) /Canberra Distance (CD) technique."); and
--presenting the retrieved at least one related image to the user (step 9: "Display the corresponding medical images" on page 302).
Oberoi does not expressly disclose a data-driven logic trained with a training dataset to produce semantic information identifying specific dental anatomical structures. Kearney teaches training a machine learning model to process dental images to produce semantic information identifying specific dental anatomical structures (“It is often useful to extract semantically meaningful text-based descriptions from dental images. Dentists create verbose textual diagnostic and treatment descriptions during patient examination that aid in anatomical and physiological information ingestion, summary, and transfer. Usually dentists manually input this information into a computer interface. This process is time consuming and prone to human error,” [0397]; tooth anatomy throughout such as tooth number in [0447]; “The generator translates an input image 3102 into diagnostic predictions, e.g., “healthy with attachment loss on an individual site,” or “carious lesion detected invasive into the pulp on the mesial side of tooth number 11,” or orthodontic information regarding a patient,” [0398]; [0109]; [0114]; building the model in [0746] and [0765]). Kearney further teaches:
--wherein the semantic information comprises data automatically annotated by the data-driven logic (identifying anatomy like tooth numbers or other dental features in [0112]-[0115]; this identification is performed by a model, “for example, the system 800 may be used to label anatomical features such as the cementum enamel junction (CEJ), bony points on the maxilla or mandible that are relevant to the diagnosis of periodontal disease, gingival margin, junctional epithelium, or other anatomical feature,” [0194]);
One of ordinary skill in the art would have been motivated before the effective filing date of the claimed invention to expand the image retrieval of Oberoi with the semantic information and dental images of Kearney because this would “aid in anatomical and physiological information ingestion, summary, and transfer” (Kearney [0397]).
Oberoi does not expressly disclose but Choe teaches training with a training dataset comprising images and their corresponding semantic information and being configured to perform semantic processing on the imaging data to produce semantic classifications used in retrieval (“We developed an automated CBIR that incorporates the deep learning–based pattern classification,” page 187; “[t]he CBIR calculated similarity from the Euclidean distance among the feature vectors of high-resolution CT images to be compared, considering the extent of six different regional disease patterns and their locations and distributions,” page 188). Choe further teaches:
--wherein the retrieval is performed by comparing semantic information of at least one of the selected images to semantic information of the at least one related image (retrieval by “comparing the extent and distribution of different regional disease patterns quantified by a deep learning algorithm,” Abstract; these disease patterns, such as honeycombing, are semantic vocabulary rather than low level features and they were derived from confirmed diagnoses, page 188).
One of ordinary skill in the art would have been motivated before the effective filing date of the claimed invention to expand the image retrieval of Oberoi and Kearney with the deep learning-based image retrieval of Choe because replacing low-level feature comparison with trained semantic classifications would produce clinically superior results (Choe Abstract).
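For illustration of the retrieval technique at issue: both Oberoi's step 6 comparison and Choe's semantic-vector comparison amount to nearest-neighbor ranking of feature vectors by Euclidean distance. The sketch below is purely illustrative and is not taken from any cited reference or from the claims; all names and data are hypothetical.

```python
import math

def euclidean_distance(a, b):
    # Oberoi, section 3.3, step 6: compare the query vector to a
    # database vector by Euclidean distance.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(query_vector, database, k=3):
    # database: list of (image_id, feature_vector) pairs.
    # Steps 7-9: rank database images by distance to the query and
    # return the k most similar (smallest distance first).
    ranked = sorted(database,
                    key=lambda item: euclidean_distance(query_vector, item[1]))
    return [image_id for image_id, _ in ranked[:k]]

# Illustrative data: in Oberoi the vectors hold low-level transform
# coefficients; in Choe they hold the extents of six regional disease
# patterns produced by a trained classifier (semantic information).
db = [("img_a", [0.9, 0.1, 0.0]),
      ("img_b", [0.1, 0.8, 0.1]),
      ("img_c", [0.85, 0.1, 0.05])]
print(retrieve([0.9, 0.1, 0.0], db, k=2))  # → ['img_a', 'img_c']
```

The only difference between the two approaches in this sketch is what the feature vectors represent; the distance-based ranking itself is unchanged, consistent with the substitution rationale above.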
Regarding claim 2, Oberoi does not expressly disclose that the selection in the Retrieval Process of section 3.3 involves a user selection, but Oberoi teaches elsewhere a software system where: wherein the selection of the images is performed in response to a user input (“the user manipulates GUI tools to create a query”, page 300).
It can be seen that each element is taught by one of the two portions of Oberoi, or by Kearney or Choe. The user interface input of Oberoi’s WebMIRS does not affect the normal functioning of the elements of the claim that are taught by Oberoi’s Retrieval Process of section 3.3, Kearney, or Choe. Because the elements do not affect the normal functioning of each other, the results of their combination would have been predictable. Therefore, before the effective filing date of the claimed invention, it would have been obvious to combine the teachings of the two portions of Oberoi with Kearney and Choe, since the result is merely a combination of old elements yielding predictable results.
Regarding claim 3, Oberoi does not expressly disclose that the selection in the Retrieval Process of section 3.3 involves diagnostic parameters or the claimed matching, but Oberoi teaches elsewhere a software system where: wherein the user input provides one or more diagnostic parameters, and the selection is performed by matching any one or more of the diagnostic parameters with the semantic information (“the user manipulates GUI tools to create a query such as, “Search for all records for people over the age of 65 who reported chronic back pain. Return the age, race, sex and age at pain onset for these people.” In response, the system return values for these four fields of all matching records along with a display of the associated x-ray images,” page 300; this teaching is viewed in light of the teachings regarding semantic information described with respect to claim 1).
The motivation to combine is the same as in claim 2.
Regarding claim 4, Oberoi discloses: wherein at least one of the selected images and at least one of the retrieved images are provided at an interface, such as a human machine interface in the form of a plurality of image views (outputting retrieved images in Fig. 1).
Regarding claim 5, Oberoi discloses: wherein the image views displayed at the human machine interface are arranged in a display arrangement dependent upon any one or more of the diagnostic parameters and/or the semantic information of retrieved images (sorting images in Step 8 on page 302; sorting images by FD/ED in Figure 8 and FC/CD in Figure 9).
Regarding claim 9, Oberoi does not expressly disclose but Kearney teaches: wherein the semantic information comprises a hierarchical structure and/or parallel structure (hierarchical structure with decision blocks and “if statements” [0667]-[0690]; parallel structures: “The machine learning model 6300 may include a plurality of models 6302, 6304, 6306. In the illustrated embodiment, these models may include a CNN 6302, a plurality of LSTM 6304, one or more transformer neural networks 6306, and a tree-based algorithm 6308 (random forest, XGBoost, gradient boost, or the like). More or fewer models of the same or different types may also be used,” [0764]; “The outputs of each of the models 6302-6308 may be input to a fully connected layer 6318. The outputs of each of the models 6302-6308 may be in the form of feature vectors that are concatenated with one another and input the fully connected layer 6318. The feature vectors may include intermediate results that are not human intelligible. The fully connected layer 6318 may produce an output 6210 that includes one or more values representing each of a dental readiness score, dental readiness error, dental readiness durability, dental emergency likelihood, prognosis, and alternative treatments,” [0769]).
The motivation to combine is the same as in claim 1.
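For illustration of the parallel structure quoted from Kearney [0764]-[0769]: several models run in parallel, their output feature vectors are concatenated, and the concatenated vector feeds a fully connected layer that produces the final score. The sketch below is purely illustrative and is not taken from Kearney; all names, weights, and data are hypothetical.

```python
def concat_features(*vectors):
    # Kearney [0769]: the outputs of the parallel models are feature
    # vectors that are concatenated with one another.
    out = []
    for v in vectors:
        out.extend(v)
    return out

def fully_connected(features, weights, bias):
    # A single fully connected unit producing one value (e.g., a
    # dental readiness score) from the concatenated features.
    return sum(f * w for f, w in zip(features, weights)) + bias

# Hypothetical outputs of the parallel models (e.g., CNN, LSTM,
# tree-based algorithm); in Kearney these are intermediate feature
# vectors that are not human intelligible.
cnn_out = [0.2, 0.7]
lstm_out = [0.5]
tree_out = [0.9, 0.1]
features = concat_features(cnn_out, lstm_out, tree_out)
score = fully_connected(features, [0.1, 0.2, 0.3, 0.4, 0.0], 0.05)
```

Because each model contributes an independent slice of the concatenated vector, the structure is parallel in the sense relied on in the rejection of claim 9.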
Regarding claim 11, the claim is substantially similar to claim 1 and is rejected with the same reasoning.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Oberoi (“Content Based Image Retrieval System for Medical Databases (CBIR-MD) – Lucratively tested on Endoscopy, Dental and Skull Images” cited in IDS dated 11/25/2024) in view of Kearney (US20220012815A1), Choe (Choe J, Hwang HJ, Seo JB, Lee SM, Yun J, Kim MJ, Jeong J, Lee Y, Jin K, Park R, Kim J, Jeon H, Kim N, Yi J, Yu D, Kim B. Content-based Image Retrieval by Using Deep Learning for Interstitial Lung Disease Diagnosis with Chest CT. Radiology. 2022 Jan;302(1):187-197), and Long (“Content-Based Image Retrieval in Medicine: Retrospective Assessment, State of the Art, and Future Directions” cited in IDS dated 11/25/2024).
Regarding claim 6, Oberoi does not expressly disclose but Long teaches: updating a part of the display arrangement, or an image view in the display arrangement; removing, in response to the update, at least one of the image views; and/or retrieving, in response to the update, yet another image from the imaging data to provide at least one new image view in the display arrangement (“In addition, SPIRS provides (1) “basic” user feedback on each returned image, namely, a figure of dissimilarity to the query image; and (2) a “data exploration” capability, which takes query results as a beginning point to initiate new and related queries; using a given query result, that is, a vertebral shape returned by a query, the entire spine containing that shape may be displayed; then the user may select a vertebra in that same spine and use its shape as a new query. It should be noted that SPIRS, like CervigramFinder, operates on local, region-of-interest data in the image. The system characteristics of SPIRS indicate that it is for research, teaching, and learning on 2D data; it accepts as input, and creates output “hybrid” data (both text and image). In this regard, SPIRS allows the user to specify as a query a vertebral shape and some text (such as age, race, gender, presence/absence of back or neck pain, and vertebra tags such as “C5”, to indicate the class of vertebrae being searched for). It then returns such text, along with the associated image data,” page 6).
It can be seen that each element is taught by Oberoi, Kearney, Choe, or Long. The user query modifications of Long do not affect the normal functioning of the elements of the claim that are taught by Oberoi, Kearney, or Choe. Because the elements do not affect the normal functioning of each other, the results of their combination would have been predictable. Therefore, before the effective filing date of the claimed invention, it would have been obvious to combine the teachings of Oberoi, Kearney, Choe, and Long, since the result is merely a combination of old elements yielding predictable results.
Additionally, one of ordinary skill in the art would have been motivated before the effective filing date of the claimed invention to expand the image retrieval of Oberoi, Kearney, and Choe with the query modification of Long because this would help the user fine-tune the search results and thus make the results more useful (see Long page 5, “improvement of usability of the system”).
Response to Arguments
Applicant's arguments filed 01/26/2026 have been fully considered and are discussed below.
The 101 subject matter eligibility rejections of claims 1-6 and 9 are withdrawn in view of the claim amendments and arguments. The claimed invention arguably still recites a mental process, namely, comparing images based on shared anatomical features to find related images. However, under Step 2A Prong Two, the claimed invention improves the functioning of a computer or another technology by including a specific implementation of content-based image retrieval (CBIR) using semantic information from dental images. This stands in contrast to other types of CBIR that use low-level features, and it demonstrates that the invention goes beyond merely automating the mental process of comparing and retrieving images. The rejection of claim 11 is maintained because the broadest reasonable interpretation (BRI) of the claim encompasses signals per se, and thus the claim fails under Step 1 of the 101 analysis.
Regarding the prior art rejections, Applicant argues, “Oberoi never teaches nor suggests replacing Oberoi's feature-vector similarity engine with Kearney's semantic-label comparison. There is no articulated motivation to modify the type of metadata in Oberoi (numeric transforms) into semantic anatomical metadata nor to use such semantics as the retrieval key. Applicant respectfully submits that the examiner must clearly articulate a rationale for substituting semantics for feature vectors. Such a substitution is not a simple "design choice"-it changes the nature of what is compared and how retrieval works. In sum, semantic anatomical information processing represents a fundamentally different technical approach from Oberoi’s low-level mathematical feature extraction.” Remarks page 7. This argument is moot in view of the new grounds of rejection relying on Choe. In brief, Choe shows a clinical CBIR system that used a trained semantic disease-classification model, rather than low-level feature vectors, to retrieve images, and thereby achieved higher diagnostic accuracy.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action (See MPEP 706.07(a)). Accordingly, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOSHUA BLANCHETTE whose telephone number is (571)272-2299. The examiner can normally be reached on Monday - Thursday 7:30AM - 6:00PM, EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Shahid Merchant, can be reached on (571) 270-1360. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JOSHUA B BLANCHETTE/ Primary Examiner, Art Unit 3624