Prosecution Insights
Last updated: April 19, 2026
Application No. 18/267,800

CONTINUAL-LEARNING AND TRANSFER-LEARNING BASED ON-SITE ADAPTATION OF IMAGE CLASSIFICATION AND OBJECT LOCALIZATION MODULES

Final Rejection §103
Filed: Jun 16, 2023
Examiner: SCHWARTZ, RAPHAEL M
Art Unit: 2671
Tech Center: 2600 — Communications
Assignee: Koninklijke Philips N.V.
OA Round: 2 (Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 67% (227 granted / 338 resolved; +5.2% vs TC avg, above average)
Interview Lift: +31.3% among resolved cases with an interview
Avg Prosecution: 2y 11m (24 applications currently pending)
Total Applications: 362, across all art units

Statute-Specific Performance

§101: 7.8% (-32.2% vs TC avg)
§103: 48.9% (+8.9% vs TC avg)
§102: 7.5% (-32.5% vs TC avg)
§112: 19.3% (-20.7% vs TC avg)

Tech Center averages are estimates • Based on career data from 338 resolved cases

Office Action

§103
DETAILED ACTION

Response to Amendment

Applicant's response to the last Office Action, filed on 02/02/2026, has been entered and made of record. Examiner maintains the current prior art of record; accordingly, this action is made Final.

Response to Arguments

Applicant's arguments filed on 02/02/2026 have been fully considered but they are not persuasive. Applicant remarked, "The bounding box described in Hillen is not for indicating a spatial location corresponding to a predicted class label. As disclosed in paragraph [0028] of Hillen above, the user can 'add detected features to the radiograph in case the users suggests that the algorithm missed a detected feature, or he can delete the detected features of the algorithm, e.g. 208'. In contrast, the claim recites receiving the user input to indicate a spatial location that corresponds to the predicted class label. The addition in Hillen is not for a predicted class label, and the deletion is not indicating a spatial location."

Examiner disagrees with Applicant's characterization of the Hillen reference and notes that ¶¶ 0028-0031 clearly teach an automated machine learning system generating a prediction that includes a spatial location, with the user adding or deleting features for use in training the prediction system. Features in the dental imaging are detected and marked with a colored bounding box overlaid on the feature's location in the image based on the automated prediction confidence. These are detected spatial locations corresponding to the predicted class labels (¶ 0028). The user is then able to interact with these spatial location predictions by adding or deleting the individual predictions (each of which is clearly described as a spatial location prediction). ¶ 0030 further describes this interaction: the system can generate annotations in the images from the collected image database to be presented to users (e.g. dentists, radiologists, other experts or non-experts) to annotate or mark the region where a feature of interest (e.g. a carious lesion which the identifier should be capable of identifying) is to be found. The annotator can mark these regions by drawing a bounding box closely around the feature of interest, by setting a point at the center of the feature of interest, or by drawing an outline around the feature of interest. All these inputs are saved in the image information database 406 and can serve the trainer as training material.

Applicant argues that the remaining independent claims are allowable for similar reasons and that the dependent claims are allowable by virtue of their dependence on allowable claims, but directs no independent arguments to these claims. Please see the detailed response to arguments above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Hillen (USPGPub 2019/0313963; provided by Applicant).

Regarding claim 1, Hillen discloses a computer-implemented method of training a machine learning module to provide classification and localization information for an image study, comprising: (Hillen teaches a method of acquiring dental x-ray imaging and providing classification and localization via machine learning techniques.)

receiving a current image study; (¶ 0023 teaches acquiring an x-ray scan.)

applying the machine learning module to the current image study to generate a classification result including a prediction for one or more class labels for the current image study using a classification module of the machine learning module; (See ¶ 0028 and 0029, which teach the machine learning dental analysis system which performs pathology classification.)

receiving, via a user interface, a user input indicating a spatial location corresponding to a predicted class label; and (¶ 0028 teaches a user interacting with the graphical user interface system to add or delete location-based features after the machine-learning system performs its analysis.)

training a localization module of the machine learning module using the user input indicating the spatial location corresponding to the predicted class label. (¶ 0028 and 0029 teach using the user feedback of adding or deleting location-based features to further train the machine-learning system. Also see ¶ 0030.)

Hillen does not expressly disclose that all of the above-cited teachings on x-ray machine learning and user feedback occur in the same embodiment. That is, although the reference is clear that these functions are disclosed, there is no express disclosure that the details are all found in the same embodiment. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the various teachings to provide a single system capable of the variety of tasks which are disclosed. In view of these teachings, this cannot be considered a non-obvious improvement over the prior art. Using known engineering design, no "fundamental" operating principle of the teachings is changed; they continue to perform the same functions as originally taught prior to being combined.

Regarding claim 2, Hillen discloses the method of claim 1, further comprising determining whether one of the classification module and the localization module for the predicted class label meets predetermined performance requirements. (¶ 0028 and 0041 teach determining whether the classification for a class label meets a certainty score.)

Regarding claim 3, Hillen discloses the method of claim 2, wherein, when the localization module for the predicted class label meets predetermined performance requirements, applying the machine learning module to the current image study includes providing a visual representation of a spatial location of the predicted class label. (See the colored bounding boxes presented to the user in ¶ 0028 when certainty scores are met.)

Regarding claim 4, Hillen discloses the method of claim 1, wherein the classification module identifies class labels indicating a presence of one of a particular anatomy, pathology, organ and object in the current image study. (See ¶ 0028-0029, as in the rejection of claim 1.)

Regarding claim 5, Hillen discloses the method of claim 1, wherein the user input indicating the spatial location corresponding to the predicted class label includes a bounding box drawn over a relevant portion of the current image study. (See ¶ 0028 and 0030.)

Regarding claim 6, Hillen discloses the method of claim 3, wherein the user input includes a user edit to one of the classification result and the visual representation of the spatial location of the predicted class label. (¶ 0028 teaches a user interacting with the graphical user interface system to add or delete location-based features after the machine-learning system performs its analysis.)

Regarding claim 7, Hillen discloses the method of claim 6, further comprising training the classification module of the machine learning module using the user edit. (As above, ¶ 0028 and 0029 teach using the user feedback of adding or deleting location-based features to further train the machine-learning system.)

Regarding claim 8, Hillen discloses the method of claim 6, wherein the user edit includes one of an addition of a class label and a removal of the predicted class label from the classification result. (As above, ¶ 0028 teaches a user interacting with the graphical user interface system to add or delete location-based features after the machine-learning system performs its analysis.)

Regarding claim 9, Hillen discloses the method of claim 1, wherein training the localization module of the machine learning module includes transfer learning to share module components including one or more convolutional layers. (¶ 0029 and 0033 teach providing an already-trained deep learning neural network to detect features and classify. The network includes transfer learning to share convolutional layers with future neural networks, by exposing the multilayer neural networks to feedback training.)

Regarding claim 10, Hillen discloses the method of claim 1, wherein the current image study is an X-ray image study. (See the rejection of claim 1.)

Claims 11-19 are the system claims corresponding to method claims 1-9. Hillen discloses a system at ¶ 0023. The remaining limitations are rejected similarly. See the detailed analysis above.

Claim 20 is the computer readable medium claim corresponding to the method of claim 1. Hillen discloses a computer readable medium at ¶ 0044. The remaining limitations are rejected similarly. See the detailed analysis above.

Conclusion

Based on these facts, THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Raphael Schwartz, whose telephone number is (571) 270-3822. The examiner can normally be reached Monday to Friday, 9am-5pm CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Vincent Rudolph, can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAPHAEL SCHWARTZ/
Examiner, Art Unit 2671

Prosecution Timeline

Jun 16, 2023
Application Filed
Sep 27, 2025
Non-Final Rejection — §103
Feb 02, 2026
Response Filed
Feb 21, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597128: ASSESSMENT OF SKIN TOXICITY IN AN IN VITRO TISSUE SAMPLES USING DEEP LEARNING
Granted Apr 07, 2026 • 2y 5m to grant

Patent 12592063: MACHINE LEARNING OF SPATIO-TEMPORAL MANIFOLDS FOR SOURCE-FREE VIDEO DOMAIN ADAPTATION
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12579642: Methods, Systems, and Apparatuses for Quantitative Analysis of Heterogeneous Biomarker Distribution
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12548289: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER READABLE MEDIUM
Granted Feb 10, 2026 • 2y 5m to grant

Patent 12548179: FUNCTIONAL EVALUATION SYSTEM OF HIPPOCAMPUS AND DATA CREATION METHOD
Granted Feb 10, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 67%
With Interview: 98% (+31.3%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate

Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
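The projection figures are arithmetically consistent with the career statistics: 227/338 ≈ 67%, and 67% + 31.3% ≈ 98%. A minimal sketch of that derivation, assuming the tool simply adds the interview lift to the career allow rate (the actual methodology is not disclosed, and the function names are illustrative):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved applications that granted."""
    return granted / resolved

def with_interview(base: float, interview_lift: float) -> float:
    """Apply the observed interview lift, capped at 100%."""
    return min(base + interview_lift, 1.0)

base = grant_probability(227, 338)      # 227 granted / 338 resolved
boosted = with_interview(base, 0.313)   # +31.3% interview lift

print(f"Grant probability: {base:.0%}")   # ~67%
print(f"With interview: {boosted:.0%}")   # ~98%
```

This reproduces the dashboard's 67% and 98% figures, but treat it only as a plausibility check: a lift computed from self-selected interview cases is correlational, not a guaranteed boost.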
