Prosecution Insights
Last updated: April 19, 2026
Application No. 18/628,702

METHOD FOR PREDICTING TRAJECTORIES OF ROAD USERS

Non-Final OA: §101, §103, §112
Filed: Apr 06, 2024
Examiner: WELLS, HEATH E
Art Unit: 2664
Tech Center: 2600 — Communications
Assignee: Aptiv Technologies AG
OA Round: 1 (Non-Final)
Grant Probability: 75% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 5m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 75% (58 granted / 77 resolved), +13.3% vs TC average (above average)
Interview Lift: +18.1% for resolved cases with an interview vs. without (strong)
Typical Timeline: 3y 5m average prosecution; 46 applications currently pending
Career History: 123 total applications across all art units

Statute-Specific Performance

§101: 17.8% (-22.2% vs TC avg)
§103: 62.8% (+22.8% vs TC avg)
§102: 2.4% (-37.6% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Comparisons are against Tech Center average estimates • Based on career data from 77 resolved cases

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged that this application claims priority to foreign application EP 23170748.0, dated 28 April 2023. Copies of certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 13 May 2024 has been considered and placed in the application file.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claim(s) 4 and 12 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.

According to MPEP 2143.03(I), "If a claim is subject to more than one interpretation, at least one of which would render the claim unpatentable over the prior art, the examiner should reject the claim as indefinite under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph (see MPEP § 2175) and should reject the claim over the prior art based on the interpretation of the claim that renders the prior art applicable. (Ex parte Ionescu, 222 USPQ 537 (Bd. Pat. App. & Inter. 1984))."

Claim(s) 4 and 12 recite "and/or." It is unclear if the limitations are to be disjunctive or conjunctive. However, for searching the limitations, the disjunctive interpretation has been used. Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

The USPTO "Interim Guidelines for Examination of Patent Applications for Patent Subject Matter Eligibility" (Official Gazette notice of 23 February 2010), Annex IV, reads as follows:

The USPTO recognizes that applicants may have claims directed to computer readable media that cover signals per se, which the USPTO must reject under 35 U.S.C. § 101 as covering both non-statutory subject matter and statutory subject matter. In an effort to assist the patent community in overcoming a rejection or potential rejection under 35 U.S.C. § 101 in this situation, the USPTO suggests the following approach. A claim drawn to such a computer readable medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Cf. Animals - Patentability, 1077 Off. Gaz. Pat. Office 24 (April 21, 1987) (suggesting that applicants add the limitation "non-human" to a claim covering a multi-cellular organism to avoid a rejection under 35 U.S.C. § 101). Such an amendment would typically not raise the issue of new matter, even when the specification is silent, because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se.
The limited situations in which such an amendment could raise issues of new matter occur, for example, when the specification does not support a non-transitory embodiment because a signal per se is the only viable embodiment, such that the amended claim is impermissibly broadened beyond the supporting disclosure. See, e.g., Gentry Gallery, Inc. v. Berkline Corp., 134 F.3d 1473 (Fed. Cir. 1998).

Claim(s) 12-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter, as follows. Claim 12 defines a "computer system" embodying functional descriptive material. However, the claim does not define a non-transitory computer-readable medium or memory and is thus non-statutory for that reason (i.e., during examination the pending claims must be interpreted as broadly as their terms reasonably allow). The broadest reasonable interpretation of a claim drawn to a computer readable medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable media, particularly when the specification is silent. See MPEP 2111.01. Dependent claims 13-14 are also rejected as depending on claim 1, also reciting a "computer system" embodying functional descriptive material.

When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. § 101 as covering non-statutory subject matter. See Official Gazette Notice, 1351 OG 212 (February 23, 2010). That is, the scope of the presently claimed "computer program product" typically covers forms of non-transitory tangible media and transitory propagating signals per se. The examiner suggests amending the claim to embody the program on a "computer readable medium" and adding the limitation "non-transitory" to the claim, or equivalent, in order to make the claim statutory. Any amendment to the claim should be commensurate with its corresponding disclosure.

1st Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4 and 9-15 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2023/0085296 A1 (Liu et al.).

[Liu et al. Fig. 4, showing a system for computing trajectories]

Claim 1

Regarding Claim 1, Liu et al. teach a computer implemented method for predicting respective trajectories of a plurality of road users ("predicting trajectories, and, more particularly, to predicting trajectories of multiple vehicles within an area using graphs and multiple decoding models," paragraph [0002]), the method comprising: determining trajectory characteristics of the road users with respect to a host vehicle via a perception system of the host vehicle ("FIG. 3 illustrates one embodiment of the prediction system 170 using various inputs 310 to predict trajectories of vehicles simultaneously through modeling," paragraph [0033], where the perception system is called a prediction system), wherein the trajectory characteristics are provided as a joint vector describing respective dynamics of each of the road users for a predefined number of time steps ("the prediction system 170 also aggregates map information to capture road geometries and generate a single map feature for the scene that further improves trajectory prediction. As explained subsequently, updated geographic map features are concatenated (450) to form an input vector for each vehicle that is processed by decoder 470," paragraph [0038]); encoding the joint vector of the trajectory characteristics via a machine learning algorithm including an attention algorithm which models interactions of the road users ("one or more of the modules described herein can include artificial intelligence elements, e.g., neural network, fuzzy logic or other machine learning algorithms," paragraph [0081], and "the node-attention block 510 and the neighbor attention block 520 in FIG. 5 are used for graphing and extracting features of vehicles in an area through transformer encoding. In various implementations, the prediction system 170 learns and aggregates spatiotemporal interactions, relevant information (e.g., position, speed, etc.), or long-term dependencies across vehicles through self-attention transformation," paragraph [0039]); fusing, via the machine learning algorithm, the encoded trajectory characteristics and encoded static environment data obtained for the host vehicle, wherein the fusing provides fused encoded features ("The prediction system 170 concatenates these outputs for inputting to the decoder 470," paragraph [0045], where concatenating is fusing); and decoding the fused encoded features via the machine learning algorithm in order to predict the respective trajectory of each of the road users for a predetermined number of future time steps ("FIG. 6 illustrates one embodiment of the prediction system 170 using decoding of extracted features to predict trajectories of multiple vehicles simultaneously," paragraph [0046]).

It is recognized that the citations and evidence provided above are derived from potentially different embodiments of a single reference. Nevertheless, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to employ combinations and sub-combinations of these complementary embodiments, because Liu et al. explicitly motivates doing so at least in paragraphs [0020], [0023] and [0093], including "Aspects herein can be embodied in other forms without departing from the spirit or essential attributes thereof." and otherwise motivating experimentation and optimization.

The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of system claim 12, while noting that the rejection above cites to both device and method disclosures. Claim 12 is mapped below for clarity of the record and to specify any new limitations not included in claim 1.
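To keep the claim 1 mapping easy to follow, here is a minimal sketch of the encode-fuse-decode pipeline that the claim recites, written as PyTorch-style code. It only illustrates the claim language as paraphrased in this Office Action; the module choices, dimensions, and names are assumptions and do not represent the applicant's or Liu's actual implementation.

```python
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Illustrative encode -> fuse -> decode flow following the wording of claim 1.
    Shapes and module choices are assumptions, not the applicant's design."""

    def __init__(self, dyn_dim=4, env_dim=16, hidden=64, heads=4, horizon=30):
        super().__init__()
        self.dyn_embed = nn.Linear(dyn_dim, hidden)               # joint vector of per-user dynamics
        self.attention = nn.MultiheadAttention(hidden, heads,
                                               batch_first=True)  # models road-user interactions
        self.env_encoder = nn.Linear(env_dim, hidden)             # encoded static environment data
        self.decoder = nn.Linear(2 * hidden, horizon * 2)         # future (x, y) per time step

    def forward(self, dynamics, environment):
        # dynamics: (batch, num_road_users, dyn_dim); environment: (batch, env_dim)
        tokens = self.dyn_embed(dynamics)
        encoded, _ = self.attention(tokens, tokens, tokens)       # attention over road users
        env = self.env_encoder(environment).unsqueeze(1).expand_as(encoded)
        fused = torch.cat([encoded, env], dim=-1)                 # fusing by concatenation
        out = self.decoder(fused)                                 # predicted trajectories
        return out.view(dynamics.shape[0], dynamics.shape[1], -1, 2)
```

A call such as `TrajectoryPredictor()(torch.randn(2, 5, 4), torch.randn(2, 16))` would return a (2, 5, 30, 2) tensor of predicted positions, one trajectory per road user.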
Claim 2

Regarding claim 2, Liu et al. teach the method according to claim 1, wherein modelling interactions of the road users by the attention algorithm includes: for each of the road users, modelling respective interactions with other road users ("the prediction system 170 acquires characteristics, prior trajectories, and spatiotemporal (e.g., motion, path, etc.) interactions of multiple vehicles," paragraph [0052]), fusing the modelled interactions for all road users ("the prediction system 170 separately processes geographic map and vehicle data from multiple vehicles to extract related features, thereby improving trajectory estimates for complex road geometries. As previously explained, trajectory predictions are also improved by using the encoded features of neighboring vehicles that capture spatiotemporal interactions," paragraph [0053], where extracting related features teaches fusing), and concatenating the modelled interactions for each of the road users with the result of fusing the modelled interactions for all road users ("The prediction system 170 concatenates these outputs for inputting to the decoder 470," paragraph [0045], where concatenating is fusing).

Claim 3

Regarding claim 3, Liu et al. teach the method according to claim 2, wherein modelling the respective interactions includes: providing the trajectory characteristics of the road users to a stacked plurality of attention blocks ("Self-attention transformation also involves computing a representation of the sequence and extracting the node feature of the vehicle 100. In one approach, the transformation relates input vectors and input pairs of the learning model to an output that is a weighted sum of the input vectors and the input pairs," paragraph [0054], where input and output pairs teach a stacked plurality), wherein each attention block includes a multi-head attention algorithm and at least one feedforward layer ("Similarly, a cross-attention block can be used by the prediction system 170 to efficiently learn and extract relevant information from node features and neighbor features based on self-attention transformation," paragraph [0055], where a cross-attention block is within the interpretation of a feedforward layer), and the multi-head attention algorithm includes determining a similarity of queries derived from the trajectory characteristics and predetermined key values ("In addition, vehicle nodes and geographic map nodes go through separate graph computations with similar operators for feature updates to capture spatiotemporal interactions," paragraph [0056], where spatiotemporal interactions include trajectory characteristics and key values).
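Because the claim 3 rejection turns on whether Liu's self-attention blocks disclose "determining a similarity of queries derived from the trajectory characteristics and predetermined key values," the following bare-bones sketch shows what that query/key similarity computation conventionally looks like. It is textbook scaled dot-product attention offered only as a frame of reference, not code from the application or the reference.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Textbook attention: similarity of queries and keys weights the values.
    queries/keys/values: (num_road_users, d) arrays; purely illustrative."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the other road users
    return weights @ values                         # interaction-aware features

# An attention block as recited in claim 3 would wrap this in multiple "heads"
# and follow it with at least one feedforward layer.
```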
Claim 4

Regarding claim 4, Liu et al. teach the method according to claim 1, wherein static environment data are determined via the perception system of the host vehicle and/or a predetermined map ("HD map information describes detailed geometries for the driving environment (e.g., drivable area, intersection layout, lane attributes, etc.). Operator preferences may include driving style (e.g., aggressive), safety attributes, degrees of control, an operator behavior model, and so on," paragraph [0033]), and the static environment data is encoded via the machine learning algorithm in order to obtain the encoded static environment data ("the prediction system 170 also aggregates map information to capture road geometries and generate a single map feature for the scene that further improves trajectory prediction. As explained subsequently, updated geographic map features are concatenated (450) to form an input vector for each vehicle that is processed by decoder 470," paragraph [0038]).

Claim 9

Regarding claim 9, Liu et al. teach the method according to claim 4, wherein the static environment data is provided by a static grid map which includes a rasterization of a region of interest in the environment of the host vehicle ("using an encoding model, a graph having geographic map and vehicle features associated with a plurality of vehicles in an area according to characteristics, prior trajectories, and spatiotemporal interactions," paragraph [0007], where a rasterization is within the interpretation of a graph having a geographic map), and allocating the output of the at least one attention algorithm to the respective dynamic grid maps includes a respective rasterization which is related to the rasterization of the static grid map ("the prediction system learns and aggregates spatiotemporal interactions, information (e.g., position, speed, etc.), or long-term dependencies across vehicles through self-attention transformation that relates different positions of a node feature (e.g., intersection length, vehicle speed, etc.) within a sequence," paragraph [0021]).

Claim 10

Regarding claim 10, Liu et al. teach the method according to claim 9, wherein the result of decoding the fused features is provided with respect to the rasterization of the static grid map for a plurality of time steps ("the historical data 250 is estimated trajectories from the previous time-step for multiple vehicles in the driving environment. In one approach, the prediction system 170 fuses the environmental information 240 and the historical data 250 for graphing and trajectory predictions to improve accuracy and performance," paragraph [0029]).

Claim 11

Regarding claim 11, Liu et al. teach the method according to claim 1, wherein the trajectory characteristics include a current position, a current velocity and an object class of each road user ("the automated driving module(s) 160 can use such data to generate one or more driving scene models. The automated driving module(s) 160 can determine position and velocity of the vehicle 100," paragraph [0082]).

Claim 12

Regarding claim 12, Liu et al. teach a computer system ("predicting trajectories, and, more particularly, to predicting trajectories of multiple vehicles within an area using graphs and multiple decoding models," paragraph [0002]), the computer system being configured: to receive trajectory characteristics of road users provided by a perception system of a host vehicle ("FIG. 3 illustrates one embodiment of the prediction system 170 using various inputs 310 to predict trajectories of vehicles simultaneously through modeling," paragraph [0033], where the perception system is called a prediction system); to receive static environment data provided by the perception system of the host vehicle and/or by a predetermined map; to determine trajectory characteristics of the road users with respect to the host vehicle via the perception system of the host vehicle ("the prediction system 170 also aggregates map information to capture road geometries and generate a single map feature for the scene that further improves trajectory prediction. As explained subsequently, updated geographic map features are concatenated (450) to form an input vector for each vehicle that is processed by decoder 470," paragraph [0038]), wherein the trajectory characteristics are provided as a joint vector describing respective dynamics of each of the road users for a predefined number of time steps ("the historical data 250 is estimated trajectories from the previous time-step for multiple vehicles in the driving environment. In one approach, the prediction system 170 fuses the environmental information 240 and the historical data 250 for graphing and trajectory predictions to improve accuracy and performance," paragraph [0029]); to encode the joint vector of the trajectory characteristics via a machine learning algorithm including an attention algorithm which models interactions of the road users ("one or more of the modules described herein can include artificial intelligence elements, e.g., neural network, fuzzy logic or other machine learning algorithms," paragraph [0081], and "the node-attention block 510 and the neighbor attention block 520 in FIG. 5 are used for graphing and extracting features of vehicles in an area through transformer encoding. In various implementations, the prediction system 170 learns and aggregates spatiotemporal interactions, relevant information (e.g., position, speed, etc.), or long-term dependencies across vehicles through self-attention transformation," paragraph [0039]); to fuse, via the machine learning algorithm, the encoded trajectory characteristics and encoded static environment data obtained for the host vehicle, wherein the fusing provides fused encoded features ("The prediction system 170 concatenates these outputs for inputting to the decoder 470," paragraph [0045], where concatenating is fusing); and to decode the fused encoded features via the machine learning algorithm in order to predict the respective trajectory of each of the road users for a predetermined number of future time steps ("FIG. 6 illustrates one embodiment of the prediction system 170 using decoding of extracted features to predict trajectories of multiple vehicles simultaneously," paragraph [0046]).

Claim 13

Regarding claim 13, Liu et al. teach the computer system according to claim 12, wherein: the machine learning algorithm includes a respective encoder for encoding the joint vector of the trajectory characteristics and for encoding the static environment data, a concatenation of the encoded trajectory characteristics and the encoded static environment data in order to obtain fused encoded features ("The prediction system uses the prior trajectories with operator preferences (e.g., aggressiveness) in an encoder that computes a graph having a geographic map and vehicle features extracted separately for the multiple vehicles," paragraph [0005]) and a decoder for decoding the fused encoded features in order to predict the respective trajectory of each of the road users for a predetermined number of future time steps ("decoder models estimate trajectories for the multiple vehicles while accounting for the spatiotemporal interactions," paragraph [0006]).

Claim 14

Regarding claim 14, Liu et al. teach a vehicle including the perception system and the computer system of claim 12 ("FIG. 1, an example of a vehicle 100 is illustrated. As used herein, a "vehicle" is any form of motorized transport. In one or more implementations, the vehicle 100 is an automobile," paragraph [0023]).

Claim 15

Regarding claim 15, Liu et al. teach a non-transitory computer readable medium comprising instructions for carrying out the computer implemented method of claim 1 ("the prediction system may be used by road-side units (RSU), a cloud computer, consumer electronics (CE), mobile devices, robots, drones, and so on that benefit from the functionality discussed herein associated with predicting trajectories of multiple vehicles in an area simultaneously using graphs," paragraph [0023]).
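Claims 9 and 10 depend on a static grid map defined as a rasterization of a region of interest, which the rejection reads onto Liu's graph-based map features. As a neutral illustration of what such a rasterization typically involves (the helper name, extent, and resolution below are assumptions for illustration, not the application's disclosure):

```python
import numpy as np

def rasterize_region_of_interest(obstacle_points, extent=40.0, resolution=0.5):
    """Rasterize static environment points around the host vehicle into an
    occupancy grid (a 'static grid map'). Illustrative only; parameters assumed."""
    cells = int(2 * extent / resolution)
    grid = np.zeros((cells, cells), dtype=np.uint8)
    for x, y in obstacle_points:                       # points in host-vehicle coordinates (meters)
        if -extent <= x < extent and -extent <= y < extent:
            i = int((x + extent) / resolution)
            j = int((y + extent) / resolution)
            grid[i, j] = 1                             # mark cell as occupied
    return grid                                        # e.g. 160 x 160 cells at 0.5 m resolution

grid = rasterize_region_of_interest([(3.2, -1.0), (10.5, 4.4)])
```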
2nd Claim Rejections - 35 USC § 103

Claims 5-8 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 2023/0085296 A1 (Liu et al.) in view of US Patent Publication 2023/0166764 A1 (Johnson et al.).

Claim 5

Regarding Claim 5, Liu et al. teach the method according to claim 4, as noted above.

[Johnson et al. Fig. 8, showing stacked levels of simulations]

Liu et al. do not explicitly teach stacked levels each corresponding to a predetermined scaling. However, Johnson et al. teach wherein encoding the static environment data via the machine learning algorithm includes encoding the static environment data at a plurality of stacked levels, each level corresponding to a predetermined scaling ("predicting an impact of the ego agent with a metric which is particularly well suited for easy use, such as a metric which is configured for any or all of: comparison among multiple different types of agents, the linear scaling of another metric in the determination of a reward function, representing a common goal among agents, and/or any other benefits," paragraph [0020]), the attention algorithm includes a plurality of stacked levels, each level corresponding to a respective level for encoding the static environment data ("predicting an impact of the ego agent with a metric which is particularly well suited for easy use, such as a metric which is configured for any or all of: comparison among multiple different types of agents, the linear scaling of another metric in the determination of a reward function, representing a common goal among agents, and/or any other benefits," paragraph [0020], where a comparison among multiple teaches stacked levels), encoding the trajectory characteristics of the road users includes embedding the trajectory characteristics for each level differently in relation to the scaling of the corresponding level for encoding the static environment data ("predicting an impact of the ego agent with a metric which is particularly well suited for easy use, such as a metric which is configured for any or all of: comparison among multiple different types of agents, the linear scaling of another metric in the determination of a reward function, representing a common goal among agents, and/or any other benefits," paragraph [0020], where a comparison among multiple teaches embedding characteristics differently).

Therefore, taking the teachings of Liu et al. and Johnson et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify "Systems and Methods for Predicting Trajectories of Multiple Vehicles" as taught by Liu et al. to use stacked levels of scaled information as taught by Johnson et al. The suggestion/motivation for doing so would have been that "the technology confers the benefit of enabling a prediction of how other agents on the road are driving and how they might react to future actions of the ego agent," as noted by the Johnson et al. disclosure in paragraph [0018], which also motivates combination because the combination would predictably have a higher efficiency, as there is a reasonable expectation that the stacked levels will better predict actions as other vehicles approach the autonomous vehicle; and/or because doing so merely combines prior art elements according to known methods to yield predictable results.

Claim 6

Regarding claim 6, Liu et al. teach the method according to claim 5, wherein the output of the at least one attention algorithm is allocated to respective dynamic grid maps having different resolutions for each level ("The outputs of the graph learning may be two updated feature sets, one for vehicles and one for maps, that the prediction system 170 concatenates and inputs to a decoder," paragraph [0056]).

Claim 7

Regarding claim 7, Liu et al. teach the method according to claim 5, wherein the allocated output of the at least one attention algorithm is concatenated with the encoded static environment data on each level ("The outputs of the graph learning may be two updated feature sets, one for vehicles and one for maps, that the prediction system 170 concatenates and inputs to a decoder," paragraph [0056]).

Claim 8

Regarding claim 8, Liu et al. teach the method according to claim 7, wherein the static environment data is encoded iteratively at each of the stacked levels ("the one or more environment sensors 122 can be configured to sense obstacles in at least a portion of the external environment of the vehicle 100 and/or data about such obstacles. Such obstacles may be stationary objects and/or dynamic objects," paragraph [0070]), and an output of a respective encoding of the static environment data on each level is concatenated with the allocated output of the at least one attention algorithm on the respective level ("The outputs of the graph learning may be two updated feature sets, one for vehicles and one for maps, that the prediction system 170 concatenates and inputs to a decoder," paragraph [0056]).
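The claim 5 dispute is whether Johnson's "linear scaling" of a reward metric teaches encoding the static environment at stacked levels, each at a predetermined scaling. The sketch below illustrates only what that claim language itself describes, a multi-scale pyramid over the static grid map; the pooling-based downscaling and level count are assumptions, not taken from either reference.

```python
import numpy as np

def encode_static_environment_multiscale(grid, num_levels=3):
    """Encode a static grid map at stacked levels, each a predetermined 2x downscaling.
    Max-pooling stands in for a learned per-level encoder (assumption)."""
    levels = []
    current = grid.astype(np.float32)
    for _ in range(num_levels):
        levels.append(current)                 # one encoded level per predetermined scaling
        h, w = current.shape
        current = current[: h - h % 2, : w - w % 2]
        current = current.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))  # 2x coarser level
    return levels                              # attention / embeddings would be applied per level
```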
References Cited

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.

Non-Patent Publication "Context-Aware Scene Prediction Network (CASPNet)," by Schafer et al., discloses jointly learning and predicting the motion of all road users in a scene, using a novel convolutional neural network (CNN) and recurrent neural network (RNN) based architecture. Moreover, by exploiting grid-based input and output data structures, the computational cost is independent of the number of road users and multi-modal predictions become inherent properties of the proposed method.

International Patent Publication WO 2022/214414 A1 to Schnieder et al. discloses a method for predicting and planning trajectories, said method comprising the steps of: processing a first machine learning model (IntCNN) (V3) which receives as input the hybrid scene representation (HSRV) and has been trained or is trained by means of reference predictions to determine interactions between the static (stat) and dynamic (dyn) environment features, wherein a function of the first machine learning model (IntCNN) is applied to the first layer (A, B, C), the second layer (D, E) and the third layer (F, G, H), and an integration (M) of the rigid static environment features (stat_1), the state-changing static environment features (stat_2) and the dynamic environment features (dyn) is generated, and the integration (M) is output from the machine learning model (IntCNN).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS, whose telephone number is (703) 756-4696. The examiner can normally be reached Monday-Friday, 8:00-4:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ms. Jennifer Mehmood, can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Heath E. Wells/
Examiner, Art Unit 2664
Date: 21 January 2026

Prosecution Timeline

Apr 06, 2024
Application Filed
Jan 21, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602755
DEEP LEARNING-BASED HIGH RESOLUTION IMAGE INPAINTING
2y 5m to grant • Granted Apr 14, 2026
Patent 12597226
METHOD AND SYSTEM FOR AUTOMATED PLANT IMAGE LABELING
2y 5m to grant • Granted Apr 07, 2026
Patent 12591979
IMAGE GENERATION METHOD AND DEVICE
2y 5m to grant • Granted Mar 31, 2026
Patent 12588876
TARGET AREA DETERMINATION METHOD AND MEDICAL IMAGING SYSTEM
2y 5m to grant • Granted Mar 31, 2026
Patent 12586363
GENERATION OF PLURAL IMAGES HAVING M-BIT DEPTH PER PIXEL BY CLIPPING M-BIT SEGMENTS FROM MUTUALLY DIFFERENT POSITIONS IN IMAGE HAVING N-BIT DEPTH PER PIXEL
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 75%
With Interview: 93% (+18.1%)
Median Time to Grant: 3y 5m
PTA Risk: Low
Based on 77 resolved cases by this examiner. Grant probability derived from career allow rate.
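As a quick sanity check on how these headline figures relate to the raw counts shown in the Examiner Intelligence section, a minimal sketch follows. It assumes the simplest possible derivation (allow rate as granted/resolved, and the interview figure as that rate plus the observed lift); the tool's actual methodology is not disclosed.

```python
# Illustrative reconstruction of the dashboard figures (assumed methodology,
# not the tool's actual model).
granted, resolved = 58, 77          # career counts from "Examiner Intelligence"
interview_lift = 0.181              # +18.1 percentage points for cases with an interview

allow_rate = granted / resolved     # 0.753 -> displayed as 75%
with_interview = min(allow_rate + interview_lift, 1.0)  # 0.934 -> displayed as 93%

print(f"Career allow rate: {allow_rate:.1%}")
print(f"With interview:    {with_interview:.1%}")
```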
