Prosecution Insights
Last updated: April 19, 2026
Application No. 18/295,665

RECOMMENDING BACKGROUNDS BASED ON USER INTENT

Non-Final OA — §101, §103
Filed
Apr 04, 2023
Examiner
TRAN, AMY NMN
Art Unit
2126
Tech Center
2100 — Computer Architecture & Software
Assignee
Adobe Inc.
OA Round
1 (Non-Final)
36%
Grant Probability
At Risk
1-2
OA Rounds
5y 2m
To Grant
84%
With Interview

Examiner Intelligence

Grants only 36% of cases
36%
Career Allow Rate
10 granted / 28 resolved
-19.3% vs TC avg
Strong +48% interview lift
+47.9%
Interview Lift
resolved cases with interview
Typical timeline
5y 2m
Avg Prosecution
24 currently pending
Career history
52
Total Applications
across all art units
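The headline allow rate follows directly from the grant counts shown above; a quick arithmetic check (assuming "resolved" means granted plus abandoned or otherwise terminated cases):

```python
# Career allow rate from the counts in the Examiner Intelligence panel
granted = 10
resolved = 28
allow_rate = 100 * granted / resolved
print(f"{allow_rate:.1f}%")  # 35.7%, displayed as 36%
```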

Statute-Specific Performance

§101
32.5%
-7.5% vs TC avg
§103
44.2%
+4.2% vs TC avg
§102
6.0%
-34.0% vs TC avg
§112
15.6%
-24.4% vs TC avg
Black line = Tech Center average estimate • Based on career data from 28 resolved cases
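The per-statute deltas are internally consistent with a single Tech Center baseline: subtracting each delta from the examiner's rate recovers the same ~40% average in every row. A sketch of that check (the 40% baseline is inferred from the numbers above, not stated by the dashboard):

```python
# (examiner allow rate %, delta vs. TC average) per statute, from the chart above
stats = {"101": (32.5, -7.5), "103": (44.2, +4.2),
         "102": (6.0, -34.0), "112": (15.6, -24.4)}
for statute, (rate, delta) in stats.items():
    implied_tc_avg = rate - delta  # examiner rate minus delta = TC baseline
    print(f"§{statute}: implied TC average = {implied_tc_avg:.1f}%")
# every statute implies the same ~40.0% Tech Center baseline
```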

Office Action

§101 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 01/14/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. § 101 for containing an abstract idea without significantly more.

Regarding claim 1:

Step 1 – Is the claim to a process, machine, manufacture, or composition of matter? Yes, the claim is a process.

Step 2A – Prong 1 – Does the claim recite an abstract idea, law of nature, or natural phenomenon? Yes, the claim recites an abstract idea.

determining one or more candidate background embeddings based on a similarity between the intent embedding and a plurality of candidate background embeddings in embedding space – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

identifying one or more recommended background images based on one or more background classes corresponding to the one or more candidate background embeddings – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).
Step 2A – Prong 2 – Does the claim recite additional elements that integrate the judicial exception into a practical application? No, there are no additional elements that integrate the judicial exception into a practical application. The additional elements are:

obtaining a design context – This limitation is directed to insignificant extra-solution activity (see MPEP 2106.05(g)).

generating, [by an embedding generator], an intent embedding based on the design context – This limitation merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and therefore fails to integrate the exception into a practical application.

by an embedding generator – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Step 2B – Does the claim recite additional elements that amount to significantly more than the judicial exception? No, there are no additional elements that amount to significantly more than the judicial exception. The additional elements are:

obtaining a design context – This limitation is directed to receiving or transmitting data over a network. The courts have recognized receiving or transmitting data over a network as well understood, routine, and conventional functions when they are claimed in a merely generic manner (e.g., at a high level of generality) or as insignificant extra-solution activity (see MPEP 2106.05(d) II.).

generating, [by an embedding generator], an intent embedding based on the design context – This limitation merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and therefore fails to integrate the exception into a practical application.

by an embedding generator – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Regarding claim 2: Claim 2 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The additional limitations:

determining an intent from the design context – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 3: Claim 3 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 2, which includes an abstract idea (see rejection for claim 2). The additional limitations:

generating the intent embedding from the intent – This limitation merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and therefore fails to integrate the exception into a practical application.

Regarding claim 4: Claim 4 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1).
The additional limitations:

wherein determining one or more candidate background embeddings based on a similarity between the intent embedding and a plurality of candidate background embeddings in embedding space, further comprises: – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

calculating a distance metric between the intent embedding and the plurality of candidate background embeddings in the embedding space; and – This limitation is directed to a mathematical calculation (see MPEP 2106.04(a)(2) I. C.).

selecting the one or more candidate background embeddings based on the distance metric. – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 5: Claim 5 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The additional limitations:

wherein identifying one or more recommended background images based on one or more background classes corresponding to the one or more candidate background embeddings, further comprises: – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

searching an image library using the one or more background classes. – This limitation is directed to the abstract idea of a mental process (including an observation, evaluation, judgment, opinion) which can be performed in the human mind, or by a human using pen and paper (see MPEP 2106.04(a)(2) III. C.).

Regarding claim 6: Claim 6 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The additional limitations:

wherein the embedding generator is a transformer network and – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

wherein the embedding generator is trained using a triplet loss on a training dataset comprising a plurality of sets of background class and query pairs. – This limitation merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and therefore fails to integrate the exception into a practical application.

Regarding claim 7: Claim 7 is rejected under 35 U.S.C. § 101 because the claimed invention is directed to an abstract idea without significantly more. The claim is dependent on claim 1, which includes an abstract idea (see rejection for claim 1). The additional limitations:

presenting, [via a user interface], the one or more recommended background images by adding it to the design context. – This limitation merely adds the words "apply it" (or an equivalent) to the judicial exception, amounts to mere instructions to implement an abstract idea on a computer, or merely uses a computer as a tool to perform an abstract idea (see MPEP 2106.05(f)), and therefore fails to integrate the exception into a practical application.

via a user interface – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).
Independent claims 8 and 15 are analogous claims; therefore, the same rejection and rationale apply to them. In addition, claims 8 and 15 recite additional elements analyzed under Step 2A – Prong Two and Step 2B:

Claim 8: A non-transitory computer-readable medium storing executable instructions, which when executed by a processing device, cause the processing device to perform operations comprising – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Claim 15: a memory component; and – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)). a processing device coupled to the memory component, the processing device to perform operations comprising: – This limitation is directed to a computer merely used as a tool to perform an existing process (see MPEP 2106.05(f)(2)).

Dependent claims 9 and 16, as described above, are analogous claims to claim 2; therefore, the same rejection and rationale apply to them. Dependent claims 10 and 17, as described above, are analogous claims to claim 3; therefore, the same rejection and rationale apply to them. Dependent claims 11 and 18, as described above, are analogous claims to claim 4; therefore, the same rejection and rationale apply to them. Dependent claims 12 and 19, as described above, are analogous claims to claim 5; therefore, the same rejection and rationale apply to them. Dependent claims 13 and 20, as described above, are analogous claims to claim 6; therefore, the same rejection and rationale apply to them. Dependent claim 14, as described above, is an analogous claim to claim 7; therefore, the same rejection and rationale apply to it.

Viewed as a whole, these additional claim elements do not provide meaningful limitations to transform the abstract idea into a patent-eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Therefore, the claims are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-5, 7-12 and 14-19 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US 9,411,830 B2) (hereafter referred to as "Mei") in view of Zhang et al. (US 11,138,285 B2) (hereafter referred to as "Zhang").

Regarding Claim 1, Mei teaches a method comprising:

obtaining a design context; (Mei, Col. 4, Lines 10-14: "mobile device 112 receives a natural sentence input via a microphone and voice processor to initiate a voice query, as shown at 118. For example, a mobile device 112 receives a sentence like "find an image with a lake, the sky, and a tree," as illustrated at 118.")

determining one or more candidate background embeddings based on a similarity between the intent embedding and a plurality of candidate background embeddings in embedding space; and (Mei, Col. 11, Lines 44-50: "For example, image search module 538, which can be executed by processor 504, can identify image search results based on vector matching of one or more image patches that make up the composite visual query. Image search module 538 can make results of the image search available to be displayed on the screen of mobile device 112."; Col. 13, Lines 31-35: "In at least one implementation, a clustering-based approach based on visual features and a similarity metric is used to identify candidate images for a given entity by exploiting a known image database and results from image search engines.") [Examiner's note: the "embeddings" here are being interpreted as the vector matching of one or more image patches]

identifying one or more recommended background images based on one or more background classes corresponding to the one or more candidate background embeddings. (Mei, Col. 16, Lines 21-38: "the interactive multi-modal image search tool selects the centers of a predetermined number of images from the top clusters (e.g., the top 10) as candidate images for this entity. For example, potential candidate images showing different subjects may have tags that match an entity. While the potential candidate images may be collected by searching for a certain tag, the interactive multi-modal image search tool can cluster these potential candidate images into groups according to their appearance to identify representative images of the different subjects presented in the images. The interactive multi-modal image search tool can rank the groups, for example, according to the number of images in the respective groups, such that the group with the largest number of images is ranked first. In addition, in some instances, the interactive multi-modal image search tool retains a predetermined number, e.g., the top ten or the top five, groups deemed most representative. In some instances the number of groups retained is user configurable.")

Mei fails to teach: generating, by an embedding generator, an intent embedding based on the design context. However, Zhang explicitly teaches this limitation: (Zhang, Col. 19, Lines 18-22: "In block 1206, the intent-processing functionality 108 maps, using the machine-trained intent encoder component 104, the input expression into an input expression intent vector (IEIV), the IEIV corresponding to a distributed representation of the intent within an intent vector space.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Mei and Zhang. Mei teaches a facility for visual search on a mobile device that takes advantage of multi-modal input, including touch input on the mobile device. Zhang teaches an intent encoder trained using search logs. One of ordinary skill would have been motivated to combine Mei and Zhang to improve semantic normalization, which helps place input queries near each other in vector space so the system behaves consistently even when phrasing changes.

Regarding Claim 2, the combination of Mei and Zhang explicitly discloses all the limitations of Claim 1 (as shown in the rejection above). Mei in view of Zhang further discloses: determining an intent from the design context. (Mei, Col. 4, Lines 12-23: "a mobile device 112 receives a sentence like "find an image with a lake, the sky, and a tree," as illustrated at 118. The system employs a speech recognition (SR) engine 120 to transfer the speech received at 118 to a piece of text. The system then employs entity extraction engine 122 to extract entities, which are nouns, from the text. As a result, the tool recognizes "lake," "sky," and "tree" as three entities from lexicon 124. An image clustering engine 126 identifies candidate images from an image database 128 that correspond to each of the three entities and that can be used as respective image patches to represent the recognized entities.")

Regarding Claim 3, the combination of Mei and Zhang explicitly discloses all the limitations of Claim 2 (as shown in the rejection above).
Mei in view of Zhang further discloses: generating the intent embedding from the intent. (Zhang, Col. 19, Lines 14-25: "the intent-processing functionality 108 receives an input expression submitted by a current user via an input device, the current user submitting the input expression with an intent to accomplish an objective. In block 1206, the intent-processing functionality 108 maps, using the machine-trained intent encoder component 104, the input expression into an input expression intent vector (IEIV), the IEIV corresponding to a distributed representation of the intent within an intent vector space. In block 1208, the intent-processing functionality 108 uses an information retrieval (IR) engine to process the input expression, to produce an IR result based, at least in part, on the IEIV.") [Examiner's note: "an intent embedding" is being interpreted as "an input expression intent vector"]

Regarding Claim 4, the combination of Mei and Zhang explicitly discloses all the limitations of Claim 1 (as shown in the rejection above). Mei in view of Zhang further discloses: wherein determining one or more candidate background embeddings based on a similarity between the intent embedding and a plurality of candidate background embeddings in embedding space, further comprises:

calculating a distance metric between the intent embedding and the plurality of candidate background embeddings in the embedding space; and (Zhang, Col. 6, Lines 35-43: "the neighbor search component 118 finds any neighbor query intent vectors (NEIVs) within a specified distance of a given intent vector (which serves as a search key), such as the intent vector associated with the input query Iq1. The neighbor search component 118 can determine the distance between the two intent vectors in intent vector space using cosine similarity or some other distance metric. The cosine similarity between any two vectors (A, B) is defined as (A·B)/(‖A‖ ‖B‖).")

selecting the one or more candidate background embeddings based on the distance metric. (Mei, Col. 3, Lines 38-46: "The mobile interactive multi-modal image search tool described herein provides a context-aware approach to image search that takes into consideration the spatial relationship among separate images, which are treated as image patches, e.g., small sub-images that represent visual words. The mobile interactive multi-modal image search tool presents an interface for a new search mode that enables users to formulate a composite query image by selecting particular candidate images"; Col. 14, Lines 52-57: "in one implementation, the interactive multi-modal image search tool weights visual words based on relative distance of their respective image patches from the center of the image, with image patches that are closer to the center being more heavily weighted than those that are farther from the center."; Col. 16, Lines 21-24: "At block 814, the interactive multi-modal image search tool selects the centers of a predetermined number of images from the top clusters (e.g., the top 10) as candidate images for this entity")

Regarding Claim 5, the combination of Mei and Zhang explicitly discloses all the limitations of Claim 1 (as shown in the rejection above). Mei in view of Zhang further discloses: wherein identifying one or more recommended background images based on one or more background classes corresponding to the one or more candidate background embeddings, further comprises: searching an image library using the one or more background classes. (Mei, Col. 4, Lines 20-23: "An image clustering engine 126 identifies candidate images from an image database 128 that correspond to each of the three entities and that can be used as respective image patches to represent the recognized entities."; Col. 4, Lines 35-38: "The interactive multi-modal image search tool exploits the composite visual query to search for relevant images from image database 128 or in some instances from other sources such as the Internet.") [Examiner's note: "an image library" is being interpreted as the "image database"]

Regarding Claim 7, the combination of Mei and Zhang explicitly discloses all the limitations of Claim 1 (as shown in the rejection above). Mei in view of Zhang further discloses: presenting, via a user interface, the one or more recommended background images by adding it to the design context. (Mei, Col. 5, Lines 59-67: "Meanwhile, candidate images for the entities can be presented on the screen of mobile device 112 as shown at 208. In the example shown, candidate images for one entity, "tree," are presented in a single horizontal ribbon format, from which a particular image is being selected by dragging onto a canvas area 210 of the screen of mobile device 112. Meanwhile, particular candidate images for the entities "lake" and "sky" have already been selected via dragging onto a canvas area 210 of the screen of mobile device 112.")

Referring to independent claims 8 and 15, they are rejected on the same basis as independent claim 1 since they are analogous claims. In addition, Claim 8 recites the additional limitation: A non-transitory computer-readable medium storing executable instructions, which when executed by a processing device, cause the processing device to perform operations comprising: (Mei, Col. 9, Lines 10-15: "An operating system (OS) 512, a browser application 514, a global positioning system (GPS) module 516, a compass module 518, an interactive multi-modal image search tool 520, and any number of other applications 522 are stored in memory 510 as computer-readable instructions, and are executed, at least in part, on processor 504.")

Claim 15 recites the additional limitation: a memory component; and a processing device coupled to the memory component, the processing device to perform operations comprising: (Mei, Col. 9, Lines 10-15, quoted above.)

Referring to dependent claims 9 and 16, they are rejected on the same basis as dependent claim 2 since they are analogous claims. Referring to dependent claims 10 and 17, they are rejected on the same basis as dependent claim 3 since they are analogous claims. Referring to dependent claims 11 and 18, they are rejected on the same basis as dependent claim 4 since they are analogous claims. Referring to dependent claims 12 and 19, they are rejected on the same basis as dependent claim 5 since they are analogous claims. Referring to dependent claim 14, it is rejected on the same basis as dependent claim 7 since it is an analogous claim.

Claims 6, 13 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US 9,411,830 B2) (hereafter referred to as "Mei") in view of Zhang et al. (US 11,138,285 B2) (hereafter referred to as "Zhang"), and further in view of ZhangNPL et al. ("Composed query image retrieval based on triangle area triple loss function and combining CNN with transformer") (hereafter referred to as "ZhangNPL").

Regarding Claim 6, the combination of Mei and Zhang discloses all the limitations of Claim 1 (as shown in the rejection above). Mei in view of Zhang fails to disclose: wherein the embedding generator is a transformer network and wherein the embedding generator is trained using a triplet loss on a training dataset comprising a plurality of sets of background class and query pairs. However, ZhangNPL explicitly discloses this limitation: (ZhangNPL, Pg. 2, ¶[1]: "We combine CNN with Transformer to capture local and edge feature information of reference images, which can reduce the loss of information. Specifically, the local feature information of reference images is extracted by CNN. Meanwhile, the edge feature information of reference images is focused through Transformer."; Pg. 10, ¶[3]: "As shown in Tables 7 and 8, "Ours(Ed)" refers to training our network model by Triplet Loss Function, Euclidean distance as sample distance measurement. "Ours(Cd)" refers to training our network model by Triplet Loss Function, Cosine distance as sample distance measurement. "Ours" refers to training our network model by Triangle Area Triplet Loss Function.")

It would have been obvious to one of ordinary skill in the art before the effective filing date of the present application to combine Mei, Zhang and ZhangNPL. Mei teaches a facility for visual search on a mobile device that takes advantage of multi-modal input, including touch input on the mobile device. Zhang teaches an intent encoder trained using search logs. ZhangNPL teaches combining a CNN with a Transformer to simultaneously extract local and edge features of reference images, which can effectively reduce the loss of reference image information. One of ordinary skill would have been motivated to combine Mei, Zhang and ZhangNPL to reduce the loss of information.

Referring to dependent claims 13 and 20, they are rejected on the same basis as dependent claim 6 since they are analogous claims.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMY TRAN, whose telephone number is (571) 270-0693. The examiner can normally be reached Monday - Friday, 7:30 am - 5:00 pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, David Yi, can be reached at (571) 270-7519. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/AMY TRAN/
Examiner, Art Unit 2126

/DAVID YI/
Supervisory Patent Examiner, Art Unit 2126
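The claim language the rejection dissects — embed an intent, rank candidate background embeddings by a distance metric, map the nearest embeddings to background classes — can be illustrated with a minimal sketch. The vectors, class names, and top-k policy below are invented for illustration only; this is not the applicant's implementation:

```python
import math

def cosine_similarity(a, b):
    # (A·B) / (||A|| ||B||), the metric quoted from Zhang in the rejection
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def recommend_backgrounds(intent_embedding, candidates, k=2):
    """Rank candidate background embeddings by similarity to the intent
    embedding and return the top-k background classes."""
    ranked = sorted(candidates,
                    key=lambda c: cosine_similarity(intent_embedding, candidates[c]),
                    reverse=True)
    return ranked[:k]

# Hypothetical 3-d embeddings keyed by background class
candidates = {
    "beach":  [0.8, 0.2, 0.1],
    "forest": [0.7, 0.3, 0.0],
    "studio": [0.0, 0.1, 0.9],
}
intent = [0.9, 0.1, 0.0]  # e.g. an "outdoor product shot" intent
print(recommend_backgrounds(intent, candidates))  # ['beach', 'forest']
```

The examiner maps this distance-metric step to the §101 "mathematical calculation" and "mental process" groupings, which is why the calculation itself, however implemented, carries no eligibility weight under Step 2A.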

Prosecution Timeline

Apr 04, 2023
Application Filed
Feb 19, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602582
DYNAMIC DISTRIBUTED TRAINING OF MACHINE LEARNING MODELS
2y 5m to grant Granted Apr 14, 2026
Patent 12468932
IDENTIFYING RELATED MESSAGES IN A NATURAL LANGUAGE INTERACTION
2y 5m to grant Granted Nov 11, 2025
Patent 12462185
SCENE GRAMMAR BASED REINFORCEMENT LEARNING IN AGENT TRAINING
2y 5m to grant Granted Nov 04, 2025
Patent 12423589
TRAINING DECISION TREE-BASED PREDICTIVE MODELS
2y 5m to grant Granted Sep 23, 2025
Patent 12288074
GENERATING AND PROVIDING PROPOSED DIGITAL ACTIONS IN HIGH-DIMENSIONAL ACTION SPACES USING REINFORCEMENT LEARNING MODELS
2y 5m to grant Granted Apr 29, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
36%
Grant Probability
84%
With Interview (+47.9%)
5y 2m
Median Time to Grant
Low
PTA Risk
Based on 28 resolved cases by this examiner. Grant probability derived from career allow rate.
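The "With Interview" figure is consistent with simply adding the examiner's interview lift, in percentage points, to the baseline grant probability. This is an assumption about how the projection is combined, not a documented formula:

```python
baseline = 36.0        # grant probability from career allow rate (%)
interview_lift = 47.9  # percentage-point lift observed with an interview
with_interview = baseline + interview_lift
print(f"{with_interview:.0f}%")  # 84%
```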
