Prosecution Insights
Last updated: April 19, 2026
Application No. 18/180,719

METHOD AND SYSTEM FOR RELATIONAL GENERAL CONTINUAL LEARNING WITH MULTIPLE MEMORIES IN ARTIFICIAL NEURAL NETWORKS

Non-Final OA — §103, §112
Filed: Mar 08, 2023
Examiner: THOMPSON, KYLE ALLMAN
Art Unit: 2125
Tech Center: 2100 — Computer Architecture & Software
Assignee: NavInfo Europe B.V.
OA Round: 1 (Non-Final)
Grant Probability: 83% — Favorable
Expected OA Rounds: 1-2
Time to Grant: 4y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved) — above average, +28.3% vs TC avg
Interview Lift: +33.3% grant rate among resolved cases with an interview
Typical Timeline: 4y 3m average prosecution; 22 applications currently pending
Career History: 28 total applications across all art units

Statute-Specific Performance

§101: 40.5% (+0.5% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 7.2% (-32.8% vs TC avg)
§112: 8.5% (-31.5% vs TC avg)
Comparisons are against the Tech Center average estimate • Based on career data from 6 resolved cases

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

The present application claims foreign priority based on Netherlands Patent Application No. NL2033155, filed 9/27/2022. A certified copy of Netherlands Patent Application No. NL2033155 in English was received on 5/5/2023, as required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/08/2023 and 04/23/2024 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Drawings

New corrected drawings in compliance with 37 CFR 1.121(d) are required in this application because claims 5 & 8 are not depicted in the drawings, specifically: "multiplying the elemental knowledge distillation loss by a first pre-defined weight to calculate a weighted elemental knowledge distillation loss; and calculating a combination of the task loss and the weighted elemental knowledge distillation loss." The corrected drawings are required in reply to the Office action to avoid abandonment of the application. The requirement for corrected drawings will not be held in abeyance.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 2, and 7 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite: the phrase "such as" renders the claims indefinite because it is unclear whether the limitations following the phrase are part of the claimed invention. See MPEP § 2173.05(d). Claims 3 – 6 and 8 – 10 are rejected as being dependent on a rejected independent claim.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 – 3, 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (NPL: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks) in view of O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) further in view of Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Regarding claim 1, Wang teaches providing at least one plastic model configured to learn on samples from a current stream of tasks and/or on samples stored [in the memory buffer]; (See e.g.
[P3055:S5.1:C2], One major limitation of most KD methods, e.g., [1], [29] is that they assume the training samples [learn on samples] of the original networks (teachers) or of target networks (students) [plastic model] to be available) (Examiner's note: hereinafter the examiner has made the mapping of plastic models to student models based upon applicant's specification provided on 03/08/2023, see [0006, 0025 – 0026])

providing at least one stable model configured to maintain an exponentially moving average of the at least one plastic model; (See e.g. [P3060:S8.3:C2], the teacher's [stable model] weights are updated using the exponential moving average (EMA) of the student's weights. [exponentially moving average of the at least one plastic model])

distilling knowledge of individual representations from the at least one stable model into the at least one plastic model by transferring elemental similarities from the at least one stable model into the at least one plastic model, using an elemental knowledge distillation loss such as a mean squared error loss; (See e.g. [P3049:S2:C2], the knowledge is transferred from the teacher model [stable model] to the student [into the at least one plastic model] by minimizing the difference between the logits (the inputs to the final softmax) produced by the teacher model and those produced by the student model.) (See e.g. [P3056:S5.2:C2], the teacher network is first compressed to create a student via network pruning [130], and layer-wise distillation losses [using an elemental knowledge distillation loss] are then applied to reduce the estimation error on given limited samples) (See e.g. [P3063:S10:C1], Existing KD losses are mostly dependent on euclidean loss (e.g., l1), and have their own limitations. [216] has shown that algorithms that regularize with euclidean distance, (e.g., MSE loss [a mean squared error loss]) are easily confused by random features)

Wang does not teach and transferring relations between the individual representations from the at least one stable model into the at least one plastic model by enforcing relational similarities between the at least one stable model and the at least one plastic model, using a relational similarity loss such as a cross-correlation-based relational similarity loss.

O'Neill teaches and transferring relations between the individual representations from the at least one stable model into the at least one plastic model by enforcing relational similarities between the at least one stable model and the at least one plastic model, using a relational similarity loss such as a cross-correlation-based relational similarity loss. (See e.g. [P2:S2], Knowledge Distillation (KD) transfers the logits of an already trained network [transferring relations between the individual representations from the at least one stable model] (Hinton et al., 2015) and uses them as soft targets to optimize a student network. [the at least one plastic model] The student network is typically smaller than the teacher network and benefits from the additional information soft targets provide.) (See e.g. [P4:S3.1 & Figure 1], To bring it all together, in Figure 1 we show the proposed framework of Self-Distilled Pruning with cross-correlation loss (SDP-CC) [using a relational similarity loss such as a cross-correlation-based relational similarity loss], where I is the identity matrix.) (See e.g. [P5:S3.3], One explanation for faster convergence is that optimizing for soft targets translates to maximizing the margin of class boundaries given the implicit class similarities provided by teacher logits. [enforcing relational similarities between the at least one stable model])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang and O'Neill before them, to include O'Neill's relational similarity loss such as a cross-correlation-based relational similarity loss, which would allow Wang's model to find objective similarities. One would have been motivated to make such a combination in order to improve test accuracy, as suggested by O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) (P3:S2).

Wang and O'Neill do not teach providing a memory buffer for storing data samples. Nikovski teaches providing a memory buffer for storing data samples. (See e.g. [P548:S3:C2], considering all possible partitions of a memory buffer of size N of past data readings)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Regarding claim 2, Wang, O'Neill and Nikovski teach the method of claim 1. Wang further teaches comprising the step of training the at least one plastic model by calculating a task loss, such as a cross-entropy loss, on samples [selected from a current stream of tasks and a stream from samples stored in the memory buffer.] (See e.g.
[P3053:S4.2:C2], In such a setting, the student is encouraged to learn [training the at least one plastic model] the softened output of the assembled teachers' logits via the cross-entropy loss as done in representative works [calculating a task loss, such as a cross-entropy loss, on samples])

Wang and O'Neill do not teach selected from a current stream of tasks and a stream from samples stored in the memory buffer. Nikovski teaches samples selected from a current stream of tasks and a stream from samples stored in the memory buffer. (See e.g. [P547:S2:C2], If we denote with xt the d-dimensional data vector from the sensor data stream at time [samples selected from a current stream of tasks] instant t, the problem of abrupt change detection is to determine whether such a change has occurred at or before the current time t.) (See e.g. [P548:S3:C2], considering all possible partitions of a memory buffer of size N of past data readings [a stream from samples stored in the memory buffer])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Regarding claim 3, Wang, O'Neill and Nikovski teach the method of claim 1. Wang further teaches comprising the step of calculating the elemental knowledge distillation loss on samples [selected from the memory buffer.] (See e.g. [P3056:S5.2:C2], the teacher network is first compressed to create a student via network pruning [130], and layer-wise distillation losses are then applied to reduce the estimation error on given limited samples.)

Wang and O'Neill do not teach samples selected from the memory buffer. Nikovski teaches samples selected from the memory buffer. (See e.g. [P548 - 549:S3:C2], considering all possible partitions of a memory buffer of size N of past data readings…such that xN is the latest recorded reading. [samples selected])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Regarding claim 7, Wang, O'Neill and Nikovski teach the method of claim 1. Wang does not teach transferring relational similarities in both the memory samples and the current samples from the at least one stable model to the at least one plastic model, using a relational similarity loss such as a cross-correlation-based relational similarity loss. O'Neill teaches transferring relational similarities in [both the memory samples and the current] samples from the at least one stable model to the at least one plastic model, using a relational similarity loss such as a cross-correlation-based relational similarity loss. (See e.g. [P2:S2], Knowledge Distillation (KD) transfers the logits of an already trained network [transferring relations between the individual representations from the at least one stable model] (Hinton et al., 2015) and uses them as soft targets to optimize a student network. [the at least one plastic model] The student network is typically smaller than the teacher network and benefits from the additional information soft targets provide.) (See e.g. [P4:S3.1 & Figure 1], To bring it all together, in Figure 1 we show the proposed framework of Self-Distilled Pruning with cross-correlation loss (SDP-CC) [using a relational similarity loss such as a cross-correlation-based relational similarity loss], where I is the identity matrix.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang and O'Neill before them, to include O'Neill's relational similarity loss such as a cross-correlation-based relational similarity loss, which would allow Wang's model to find objective similarities. One would have been motivated to make such a combination in order to improve test accuracy, as suggested by O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) (P3:S2).

Wang and O'Neill do not teach both the memory samples and the current samples. Nikovski teaches both the memory samples and the current samples. (See e.g. [P550:S4:C2], All change detection algorithms process the data points [samples] on-line, one by one as they arrive, and their objective is to decide whether a change has occurred at or before [memory] the current point in time.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Regarding claim 9, Wang, O'Neill and Nikovski teach the method of claim 1.
Wang further teaches wherein when the computer program is loaded and executed by a computer, the computer program causes the computer to carry out the steps of the computer-implemented method according to claim 1. (See e.g. [P3058:C2], two-stage KD requires high computation and storage costs.)

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (NPL: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks) in view of Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams) further in view of O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) further in view of SEO (US 20210200477 A1).

Regarding claim 4, Wang, O'Neill and Nikovski teach the method of claim 1. Wang does not teach calculating the relational similarity loss on samples selected from a current stream of tasks and a stream from samples stored in the memory buffer. SEO teaches calculating the relational similarity [loss] on samples selected from a current stream of tasks and a stream from samples stored in the memory [buffer.] (See e.g. [0009], calculating distance information including a first similarity between the data stored in the data buffer [samples stored in the memory] and the first physical stream [current stream of tasks])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang and SEO before them, to include SEO's calculating similarity from the current data stream and stored data, which would allow Wang's model to improve model generalization. One would have been motivated to make such a combination in order to improve the performance and lifetime of the storage, as suggested by SEO (US 20210200477 A1) (0104).

Wang and SEO do not teach calculating the relational similarity loss. O'Neill teaches calculating the relational similarity loss. (See e.g.
[P4:S3.1 & Figure 1], To bring it all together, in Figure 1 we show the proposed framework of Self-Distilled Pruning with cross-correlation loss (SDP-CC) [using a relational similarity loss such as a cross-correlation-based relational similarity loss], where I is the identity matrix.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, SEO and O'Neill before them, to include O'Neill's relational similarity loss such as a cross-correlation-based relational similarity loss, which would allow Wang and SEO's model to find objective similarities. One would have been motivated to make such a combination in order to improve test accuracy, as suggested by O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) (P3:S2).

Wang, O'Neill and SEO do not teach samples stored in the memory buffer. Nikovski teaches samples stored in the memory buffer. (See e.g. [P548:S3:C2], considering all possible partitions of a memory buffer of size N of past data readings)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill, SEO and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang, SEO and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Claims 5 and 8 are rejected under 35 U.S.C. 103 as being unpatentable over Wang (NPL: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks) in view of Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams) further in view of O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) further in view of Lin (US 20210027470 A1).

Regarding claim 5, Wang, O'Neill and Nikovski teach the method of claim 1. Wang further teaches the elemental knowledge distillation loss (See e.g. [P3056:S5.2:C2], the teacher network is first compressed to create a student via network pruning [130], and layer-wise distillation losses [elemental knowledge distillation loss] are then applied to reduce the estimation error on given limited samples.) and the task loss (See e.g. [P3053:S4.2:C2], In such a setting, the student is encouraged to learn the softened output of the assembled teachers' logits via the cross-entropy loss as done in representative works [the task loss]).

Wang, O'Neill and Nikovski do not teach multiplying by a first pre-defined weight to calculate a weight and calculating a combination of weights. Lin teaches multiplying the [elemental knowledge distillation] loss by a first pre-defined weight to calculate a weighted [elemental knowledge distillation] loss (See e.g. [0119], determining the combined loss includes applying a weight [first pre-defined weight] to the perceptual loss to generate a weighted perceptual loss) and calculating a combination of the [task] loss and the weighted [elemental knowledge distillation] loss. (See e.g. [0119], and combining the L1 loss and the weighted perceptual loss [weighted loss] to generate the combined loss. [calculating a combination])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill, Nikovski and Lin before them, to include Lin's calculation of loss and weight assignment, which would allow Wang, O'Neill and Nikovski's model to be improved in accuracy and speed. One would have been motivated to make such a combination in order to improve the system's accuracy, as suggested by Lin (US 20210027470 A1) (0027).

Regarding claim 8, Wang, O'Neill and Nikovski teach the method of claim 1. Wang further teaches a first total loss (See e.g. [P3053:S4.2:C2], where m is the total number of teachers, H is the cross entropy loss (Examiner's note: by using the total number of teachers and calculating the entropy loss from the total, that would be a total loss of the total teachers)).

Wang and Nikovski do not teach a relational similarity loss and a relational knowledge distillation loss. O'Neill teaches a relational similarity loss and a relational knowledge distillation loss. (See e.g. [P4:S3.1 & Figure 1], To bring it all together, in Figure 1 we show the proposed framework of Self-Distilled Pruning with cross-correlation loss (SDP-CC) [using a relational similarity loss such as a cross-correlation-based relational similarity loss], where I is the identity matrix.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang and O'Neill before them, to include O'Neill's relational similarity loss such as a cross-correlation-based relational similarity loss, which would allow Wang's model to find objective similarities. One would have been motivated to make such a combination in order to improve test accuracy, as suggested by O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) (P3:S2).

Wang, O'Neill and Nikovski do not teach multiplying the relational similarity loss by a second pre-defined weight to calculate a weighted relational knowledge distillation loss; and calculating a combination of the first total loss and the weighted relational knowledge distillation loss. Lin teaches multiplying the [relational similarity] loss by a second pre-defined weight to calculate a weighted [relational knowledge distillation] loss; (See e.g. [0083], the image composition system 106 applies a relatively higher or lower weight to the perceptual loss.) (See e.g. [0119], determining the combined loss includes applying a weight [second pre-defined weight] to the perceptual loss to generate a weighted perceptual loss) and calculating a combination of the [first total] loss and the [weighted relational knowledge distillation] loss. (See e.g. [0119], and combining the L1 loss and the weighted perceptual loss [weighted loss] to generate the combined loss. [calculating a combination])

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill, Nikovski and Lin before them, to include Lin's calculation of loss and weight assignment, which would allow Wang, O'Neill and Nikovski's model to be improved in accuracy and speed. One would have been motivated to make such a combination in order to improve the system's accuracy, as suggested by Lin (US 20210027470 A1) (0027).

Claim 6 is rejected under 35 U.S.C.
103 as being unpatentable over Wang (NPL: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks) in view of Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams) further in view of O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) further in view of Kobayashi (NPL: Improvements of Dark Experience Replay and Reservoir Sampling towards Better Balance between Consolidation and Plasticity).

Regarding claim 6, Wang, O'Neill and Nikovski teach the method of claim 1. Wang and O'Neill do not teach comprising the steps of: providing the memory buffer as a bounded memory buffer. Nikovski teaches comprising the steps of: providing the memory buffer as a bounded memory buffer; (See e.g. [P548:S3:C2], considering all possible partitions of a memory buffer of size N [providing the memory buffer as a bounded memory buffer] of past data readings) and updating the [bounded] memory buffer using reservoir sampling. (See e.g. [P20:S5.3], which is an advantage of the RS buffer [using reservoir sampling], an appropriate balance should be achieved by moderately updating the buffer [updating the [bounded] memory buffer] with new data and actively excluding unnecessary data)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill and Nikovski before them, to include Nikovski's memory buffer, which would allow Wang and O'Neill's model to improve responsiveness and system speed. One would have been motivated to make such a combination in order to reduce computational complexity, as suggested by Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams).

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Wang (NPL: Knowledge Distillation and Student-Teacher Learning for Visual Intelligence: A Review and New Outlooks) in view of Nikovski (NPL: Memory-Based Algorithms for Abrupt Change Detection in Sensor Data Streams) further in view of O'Neill (NPL: DEEP NEURAL COMPRESSION VIA CONCURRENT PRUNING AND SELF-DISTILLATION) further in view of RADOVIC (US 20220383075 A1).

Regarding claim 10, Wang, O'Neill and Nikovski teach the method of claim 1. Wang, O'Neill and Nikovski do not teach enabling the autonomous vehicle to continually adapt and acquire knowledge from an environment surrounding the autonomous vehicle. RADOVIC teaches enabling the autonomous vehicle to continually adapt and acquire knowledge from an environment surrounding the autonomous vehicle. (See e.g. [0004], Agents can include other data processes that control operation of other computing devices or data objects operating in a same space. In a practical non-limiting example relating to autonomous vehicles, it is desirable that the vehicle is able to detect the current position of agents in its environment [acquire knowledge from an environment], such as pedestrians and other vehicles.) (See e.g. [0185], The approach is adapted to control the model to generate realistic position estimates for the agent at the desired time, based on an assumption of modelling a continuous [enabling the autonomous vehicle to continually adapt] temporal process.)

Accordingly, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention, having the teaching of Wang, O'Neill, Nikovski and RADOVIC before them, to include RADOVIC's autonomous vehicle, which would allow Wang, O'Neill and Nikovski's model to be implemented into vehicles and real world use. One would have been motivated to make such a combination in order to improve accuracy in the real world environment and to improve prediction accuracy, as suggested by RADOVIC (US 20220383075 A1) (0193).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE ALLMAN THOMPSON whose telephone number is (571)272-3671. The examiner can normally be reached Monday - Thursday, 6 a.m. - 3 p.m. ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kamran Afshar, can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/K.A.T./Examiner, Art Unit 2125
/KAMRAN AFSHAR/Supervisory Patent Examiner, Art Unit 2125
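The rejected independent claim recites a plastic (student) model, a stable (teacher) model maintained as an exponential moving average of the plastic model's weights, an elemental knowledge distillation loss such as mean squared error, a cross-correlation-based relational similarity loss, and (claims 5 and 8) weighted loss combinations. A minimal NumPy sketch of that arrangement, reconstructed from the claim language alone — every function and parameter name here (`ema_update`, `alpha`, `w1`, `w2`) is illustrative, not from the application or the cited art:

```python
import numpy as np

def ema_update(stable_w, plastic_w, alpha=0.99):
    """Stable model maintains an exponential moving average of the plastic model's weights."""
    return alpha * stable_w + (1 - alpha) * plastic_w

def elemental_kd_loss(z_stable, z_plastic):
    """Elemental knowledge distillation: MSE between individual representations."""
    return float(np.mean((z_stable - z_plastic) ** 2))

def relational_similarity_loss(z_stable, z_plastic):
    """Relational similarity: push the cross-correlation matrix of the two
    (feature-normalized) batch representations toward the identity matrix."""
    zs = (z_stable - z_stable.mean(0)) / (z_stable.std(0) + 1e-8)
    zp = (z_plastic - z_plastic.mean(0)) / (z_plastic.std(0) + 1e-8)
    c = zs.T @ zp / z_stable.shape[0]            # cross-correlation matrix
    return float(np.sum((c - np.eye(c.shape[0])) ** 2))

def total_loss(task_loss, l_kd, l_rel, w1=0.5, w2=0.5):
    """Weighted combination of task, elemental KD, and relational losses
    (the structure recited in claims 5 and 8)."""
    return task_loss + w1 * l_kd + w2 * l_rel
```

The cross-correlation formulation follows the general shape of O'Neill's SDP-CC loss (correlation matrix driven toward the identity I), but the normalization and weighting details are assumptions, not a reproduction of either the application or the reference.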
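Claim 6's bounded memory buffer updated by reservoir sampling (the "RS buffer" in the cited Kobayashi passage) is a standard technique: each of the first t samples seen survives in the buffer with equal probability capacity/t. A minimal sketch under that usual formulation — the class and method names are hypothetical, not from the application:

```python
import random

class ReservoirBuffer:
    """Bounded memory buffer updated by classic reservoir sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)          # buffer not yet full: always store
        else:
            j = random.randrange(self.seen)   # uniform index in [0, seen)
            if j < self.capacity:
                self.data[j] = sample         # replace a stored sample

    def sample(self, k):
        """Draw k stored samples without replacement for replay."""
        return random.sample(self.data, min(k, len(self.data)))
```

After streaming n > capacity samples, every sample has the same capacity/n chance of being retained, which is why the buffer stays representative of the whole stream rather than only its most recent tasks.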

Prosecution Timeline

Mar 08, 2023
Application Filed
Dec 23, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547932
MACHINE LEARNING-ASSISTED MULTI-DOMAIN PLANNING
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83% (99% with interview, +33.3%)
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
