Prosecution Insights
Last updated: April 19, 2026
Application No. 17/319,776

INTENT-INFORMED RECOMMENDATIONS USING CLUSTERS OF FEATURES WITH MACHINE LEARNING

Status: Final Rejection (§103, §112)
Filed: May 13, 2021
Examiner: RAHMAN, IBRAHIM
Art Unit: 2122
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Adobe Inc.
OA Round: 4 (Final)
Grant Probability: 0% (At Risk)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 3m
Grant Probability with Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 10 resolved; -55.0% vs TC avg). Grants only 0% of cases.
Interview Lift: +0.0% (minimal lift with vs. without interview, among resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 29 currently pending
Career History: 39 total applications across all art units

Statute-Specific Performance

§101: 35.8% (-4.2% vs TC avg)
§103: 28.7% (-11.3% vs TC avg)
§102: 22.1% (-17.9% vs TC avg)
§112: 13.2% (-26.8% vs TC avg)
Tech Center averages are estimates; based on career data from 10 resolved cases.

Office Action

§103 §112
Detailed Action

This action is in response to the amendment filed 08/13/2025 for application 17/319,776, in which: Claims 21 and 23-40 were previously pending. Claims 21, 29, and 36 are independent claims and are currently amended. Claim 22 was previously cancelled. Claims 27, 30, 35, and 37 are now cancelled. Claims 21, 23-26, 28-29, 31-34, 36, and 38-40 are currently pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Regarding the Claim Objections: Applicant's amendments to Claim 29 have overcome the previous objection due to a grammatical error. Thus, the objection to Claim 29 has been withdrawn.

Regarding the 35 U.S.C. § 112 Rejections: Applicant's amendments cancelling Claims 30 and 37 have overcome the previous rejections under 35 U.S.C. 112(d). Thus, the rejections of Claims 30 and 37 have been withdrawn.

Regarding the 35 U.S.C. § 101 Rejections: Applicant's arguments, see Pages 17-20, filed 08/13/2025, with respect to Claims 21-40 have been fully considered and are persuasive. The 35 U.S.C. § 101 rejections of all currently pending claims have been withdrawn.

Response to Arguments

Regarding the 35 U.S.C. § 103 Rejections: Applicant's arguments regarding the 35 U.S.C. § 103 rejections of the previous office action have been fully considered but are unpersuasive. Applicant traverses the § 103 rejections (Pages 12-16) based on the amended claims, asserting that Minh/Dragos do not disclose all limitations of the independent claims. Applicant's arguments with respect to the amended claims not being disclosed within Minh/Dragos have been fully considered but are not persuasive; the rejections of all Claims (including Claim 21, the analogous independent Claims, and all dependent Claims) are maintained and updated as necessitated by the Claim amendments. More details follow in the examiner responses and office action below.
Applicant asserts (Pages 12-14) that Minh in view of Dragos fails to disclose "clustering, by the intent determination module and using a second machine learning network …" Applicant furthers this assertion by noting that the limitation is incorrectly interpreted, as Minh discloses a distinctly different process for generating dynamic conversational responses based on intent clusters. In Minh, the first machine learning model generates intent clusters, and the second machine learning model merely selects a subset of the intent clusters generated by the first machine learning model; the second machine learning model does not perform any clustering. Thus, in Minh, the second machine learning model is trained to classify on the features labeled with a "known intent cluster" generated by the first machine learning model. Amended claim 21 recites (emphasis added) "using a second machine learning network," which is unlike Minh. The two claimed machine learning networks are thus configured entirely differently from those in Minh. Minh does not use the second machine learning network to perform any clustering independently of the first machine learning network. Moreover, as claimed, the first machine learning network does not perform any clustering, and the second machine learning network does not perform any feature encoding or extraction. In order for Minh to function, the first model must cluster the features so that the second model can then select from among the previously generated clusters. Here, by contrast, the second machine learning network performs clustering independently of the first machine learning network, which is used to encode and extract features from layers of the network for use by the second machine learning network during the clustering. This is an entirely different arrangement from Minh, and therefore patentably distinct over Minh.
Dragos is cited for allegedly disclosing using the intent value to process a transaction, but nevertheless fails to cure the factual deficiencies of Minh discussed above. Applicant's arguments with respect to the clustering limitation within the independent claims have been considered but are moot, because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Applicant asserts (Pages 14-15) that Minh in view of Dragos fails to disclose or reasonably suggest "extracting, by the intent determination module, a second feature of the plurality of features from the first machine learning network …" Minh fails to disclose or reasonably suggest the limitation at least because Minh inputs to, and outputs from, the second machine learning model based on the second user action, but does not use the first machine learning network to perform any extraction of features independently of the second machine learning network, as claimed. As noted above, Dragos does not cure the deficiencies of Minh.

Examiner respectfully disagrees. Minh details the extraction via processing the behavioral user data to classify the data within the dataset by encoding to determine and extract features, which leads to generating intent clusters. Dragos does not need to cure factual deficiencies for this limitation within Claim 21, as it is covered within the Minh reference. Therefore, for the reasons given above and in the updated rejections below, the rejections of all Claims (including Claim 21, the analogous independent Claims, and all dependent Claims) are maintained and updated as necessitated by the Claim amendments.

Applicant asserts (Pages 15-16) that Minh in view of Dragos fails to disclose or reasonably suggest "determining, using the intent value, at least one of …" The down-ranking based on "query refinements" disclosed by Dragos is different from the Claim limitation.
Dragos also fails to disclose using the intent value directly, and thus fails to cure the deficiencies of Minh. Applicant requests reconsideration and withdrawal of the rejections under 35 U.S.C. § 103.

Examiner respectfully disagrees. As noted by Dragos, [0025]: "In effect, down-rank the results that, while relevant to the context-free query string, are irrelevant given the currently determined intentions of the user". The down-ranking process is relevant to the location/placement of the queried results, as the most relevant results (based on intent) are shown at the top of the list. The locations are determined by using the intent value directly: the down-ranking is performed on the intent data results, which are ranked by relevancy based on the intent value (meaning the determination of the location is done by using the intent value, as the system's goal is to implement intent-aware search and display results based on these determinations). Therefore, for the reasons given above and in the updated rejections below, the rejections of all Claims (including Claim 21, the analogous independent Claims, and all dependent Claims) are maintained and updated as necessitated by the Claim amendments.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21, 29, and 36 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C.
112, the applicant), regards as the invention. The terms "imminent" and "distant" in Claims 21, 29, and 36 are relative terms which render the claims indefinite. The terms "imminent" and "distant" are not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. As the dependent claims do not cure the deficiencies of the independent claims, all claims are rendered indefinite.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 21, 23-26, 28-29, 31-34, 36, and 38-40 are rejected under 35 U.S.C. 103 as being unpatentable over Minh et al. (US PG PUB 2022/0094648 A1), in view of Dragos et al. (US PG PUB 2009/0228439 A1), in view of Lin et al., "Discovering New Intents via Constrained Deep Adaptive Clustering with Cluster Refinement".
Regarding Claim 21: Minh teaches: A method for generating intent-informed recommendations, the method comprising: (Minh, [0063], "A method for generating dynamic conversational responses using machine learning models using intent clusters, the method comprising". The conversational responses of Minh are being interpreted as analogous to the recommendations in the claim).

encoding, by an intent determination module, a first feature of a plurality of features into a first machine learning network by propagating a first set of behavioral data through a plurality of layers in the first machine learning network, the first machine learning network configured to classify the first set of behavioral data into the plurality of features; (Minh, Fig. 2: 200/204; [0053], "… (FIG. 2)) determines a feature input based on the first user action …"; [0040], "Machine learning model 202 may take inputs 204 and provide outputs 206. The inputs may include multiple datasets such as a training dataset and a test dataset … the plurality of datasets (e.g., inputs 204) may include data subsets related to user data …". As noted in the specification, encoding can be achieved... by propagating a first set of behavioral data through a plurality of layers in the first machine learning network, and will be interpreted as training; behavioral data is defined in the specification as a set of data representing a sequence of events describing interactions between a user and at least one resource, and a feature is defined in the specification as an individual measurable property, pattern, or characteristic within a set of input data. Figure 2 shows the determination of a first feature input based on the first user action (204: behavioral data) by classifying the behavioral data (via 202) into features (206); thus, the first machine learning network (202) is used to classify the input data (behavioral user data) into the plurality of features for better prediction results).
extracting, by the intent determination module, a second feature of the plurality of features from the first machine learning network by applying a second set of behavioral data to the first machine learning network, the first machine learning network further configured to classify the second set of behavioral data into the plurality of features (Minh, Fig. 2: 200 & [0060], "In response to receiving the second user action, the system may determine a second feature input for the second machine learning model based on the second user action. The system may input the second feature input into the second machine learning model. The system may receive a different output from the second machine learning model". As noted in the specification, behavioral data is defined as a set of data representing a sequence of events describing interactions between a user and at least one resource, which leads to interpreting a "second user action" as a second set of behavioral data. Extracting a second feature and determining a second feature are being interpreted as analogous in this scenario, where features are extracted and encoded via the first machine learning network for prediction optimization. The second feature is based on the second user action, where the user action input data (behavioral data) is processed through the first machine learning model.
As more behavioral data becomes available, the input data is applied through the first machine learning network to generate and classify the data into features, including the second set of behavioral data), wherein the first machine learning network is an auto-encoder network trained to extract the second feature of the plurality of features from the first set of behavioral data; (Minh, [0023], "… to generate the first feature input, the system may use a Bidirectional Encoder BERT language model for performing natural language processing … including Semi-supervised Sequence Learning …"; [0045], "… Model 350 may use … autoencoders with bottleneck layers for nonlinear dimensionality reduction". Auto-encoder networks such as BERT, a bidirectional unsupervised network that the examiner interprets as a type of auto-encoder network because the BERT model reduces dimensionality via encoding with the bidirectional transformer, are trained to extract features).

clustering, by the intent determination module …, the first feature encoded in the first machine learning network and the second feature extracted from the first machine learning network into at least one cluster to determine an intent value, the intent value representing a number of times the first feature is encoded into a node of the first machine learning network, …; (Minh, Fig. 2: 200, [0060], "In response to receiving the second user action, the system may determine a second feature input for the second machine learning model based on the second user action. The system may input the second feature input into the second machine learning model. The system may receive a different output from the second machine learning model"; Fig.
4: 404; [0032], "… system 100 may include a second machine learning model, wherein the second machine learning model is trained to select a subset of the plurality of intent clusters from the plurality of intent clusters based on a first feature input, and wherein each intent cluster of the plurality of intent clusters corresponds to a respective intent of a user following the first user action". Fig. 4: 404 shows a step/block within the flowchart for determining an intent value after receiving a user action to generate dynamic conversational responses using machine learning models based on intent clusters. The first machine learning model (network) is used to generate and classify/extract/encode features for intent clusters, where the first feature, second feature, and up to n features are generated based on behavioral data being propagated through the first network. The second machine learning network does not extract new features but selects a subset of intent clusters (interpreted by the examiner as a form of clustering) for responses. The clustering is based on the first feature and corresponds with the second feature (extracted from the first machine learning network, which corresponds to the respective intent of a user following the first user action); thus, the first feature and the second feature are clustered into at least one cluster, as the cluster is a first-feature intent cluster with respect to the second feature (the intent of a user following the first action), providing contextual information since it is used as an input, just like the first feature, into the second machine learning network).

receiving, by a recommendation module, a set of raw user interaction data representing a sequence of events describing interactions on a web page between a user and at least one e-commerce resource; (Minh, FIG. 1 & 4; [0047], "At step 402, process 400 … the system may receive one or more user inputs to a user interface …"; [0020], "… the information (e.g.
, a user action) may include conversation details such as information about a current session, including a channel or platform, e.g., desktop web, iOS, mobile …". FIG. 4 is a flowchart for generating dynamic conversational responses, where step 402 denotes receiving a user action within the system. The user actions received by the system represent the raw user interaction data, which represents a sequence of events, as the system records sequential inputs from the user within the system (thus, a sequence of events describing interactions between the e-commerce resource and the user). E-commerce is interpreted by the examiner as transactions conducted electronically on the internet. The mobile application (which can be accessed on a web page) shown in FIG. 1 captures raw user interaction data and depicts transactions (dynamic conversational responses/recommendations) online).

generating, by the recommendation module, at least one recommendation of the at least one e-commerce resource based on the set of raw user interaction data and the intent value … (Minh, [0014], "FIG. 5 shows a flowchart of the steps involved in generating dynamic conversational responses using machine learning models based on intent clusters, in accordance with one or more embodiments"; [0021], "In some embodiments, the information (e.g., a user action) may include user account information such as … recent transactions … online spending …". The "conversational responses" of Minh are being interpreted as analogous to the recommendations in the claim. The generated recommendations of Minh are based on raw user actions within the system (including the first user action noted in Fig. 5). E-commerce is interpreted by the examiner as transactions conducted electronically on the internet. The mobile application shown in Fig. 1 captures raw user interaction data and depicts transactions (conversational responses/recommendations) online).
Minh fails to explicitly disclose the separate machine learning network utilizing a semi-supervised learning network for generating specific intent clusters based on intent strength via supervised learning. However, Lin teaches: … wherein the second machine learning network includes a semi-supervised learning network seeded with the first set of behavioral data encoded in the first machine learning network, and wherein the clustering includes clustering the first feature and the second feature into an imminent intent cluster using a supervised clustering technique or into a distant intent cluster using an unsupervised clustering technique; (Lin, Page 8363, Column 2, Paragraph 6, "We compare our method with both unsupervised and semi-supervised clustering methods"; Fig. 2: Steps 2.1-2.3 & 'Clustering Layer'; Page 8354, Table 3. Lin teaches a machine learning network that includes a semi-supervised learning network seeded with the labeled and unlabeled data (shown in Fig. 2) of natural language data sets. Table 3 shows the clustering methods applied for both clustering techniques (supervised and unsupervised); thus, the clustering method of CDAC clusters into an imminent or distant cluster).

It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the second machine learning network of Lin to perform the clustering of Minh's dataset and extracted features in an explicit second machine learning model with specific machine learning restrictions for performing the clustering. One having ordinary skill in the art would have been motivated to implement this change before the effective filing date of the claimed invention, as this leads to discovering new intents, handling large numbers of clusters, yielding improvement in classification, and replicating real-life scenarios based on behavior.
(Lin, Page 8361, Column 2, Paragraph 3, "… our work focuses on clustering problems … we try to discover new intents, the test set usually contains both known and unknown classes, and the samples could be easily misclassified as known classes. Our setting is not only closer to reality but also more challenging"; Page 8367, Column 1, Paragraph 3, "… we propose an end-to-end clustering method that uses limited labeled data to guide the clustering process for discovering new intents and further refine the cluster results by forcing the model to learn from the high confidence assignments. Extensive experiments show that our method not only yields significant improvements compared with strong baselines but is also insensitive to the number of clusters …").

Minh/Lin fail to explicitly teach: determining (using the intent value) whether to process a transaction; further determining a location or time to present a recommendation within the GUI; and presenting. However, Dragos teaches: determining whether to process a transaction, based on the intent value, (Dragos, [0012], "FIG. 3 illustrates an example search system that employs intent-based processing"; [0030], "The intent inference engine 202 analyzes the inputs 204-210 and automatically produces output 212 that can be employed to refine or modify searches with a user's determined intent". The intent-based processing of Dragos determines whether to process a transaction (the examiner interprets a transaction as an executed task). The transaction is being interpreted as refining the query (executing a task) based on using the intent value. Thus, Dragos employs intent-based processing to determine whether to refine a query (process a transaction)).
and further determining, using the intent value, at least one of: (1) a location where the at least one recommendation is to be presented within a graphical user interface of the web page, or (2) a time the at least one recommendation is to be presented within the graphical user interface of the web page; and (Dragos, [0023], "By modifying search capabilities with the user's inferred intent, search results 150 can be presented that are closer to the user's goals and thus provide a more efficient search experience"; [0025], "In effect, down-rank the results that, while relevant to the context-free query string, are irrelevant given the currently determined intentions of the user". The recommendations within Dragos are the search query results that are presented to the user within the GUI on a web page. The recommendations are presented to the user within a ranked list. A location is interpreted by the examiner as a particular place or position. The ranked list, and the rearrangement of the list based on user intent, uses the intent value to determine the location where the results should be listed within the graphical user interface of the web page, with the top of the list being most relevant and thus recommended first).

causing, by the recommendation module, the at least one recommendation to be presented at the location and/or the time in relation to the at least one e-commerce resource via the graphical user interface of the web page. (Dragos, [0023], "By modifying search capabilities with the user's inferred intent, search results 150 can be presented that are closer to the user's goals and thus provide a more efficient search experience"; FIG. 2: 210; [0029], "At 210, substantially any data the user interacts with can be used for intent …". The ranked list presents the recommendations at a location (the listed ranked results within the UI shown in FIG.
8) and the dynamic results are presented at a time in relation (real/dynamic time) based on context resources (including e-commerce resources via FIG. 2: 210). The context resources are used to determine user intent and cause the recommendations to be presented in specific locations within the ranked list (most likely to click to least likely to click, based on intent)). It would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to utilize the intent-based presentation taught by Dragos with the systems/methods of Minh/Lin. One having ordinary skill in the art would have been motivated to implement this change before the effective filing date of the claimed invention, as this leads to improving the user's perception of the recommendations (Dragos, [0025], "… the system 100 allows using the determined intent information to improve the perceived relevance of the results").

Regarding Claim 23: Minh/Lin/Dragos teach the method of Claim 21 (and thus the rejection of Claim 21 is incorporated). Minh further teaches: determining, by the recommendation module, an audience in relation to the intent value, the audience representing at least one user who generated at least a portion of the raw user interaction data; (Minh, Fig. 4 & Fig. 5, which show flowcharts of steps in generating dynamic conversational responses using machine learning models based on intent clusters; the intent clusters innately will represent at least one user who generated at least a portion of the raw user interaction data). and causing, by the recommendation module, the at least one recommendation to be presented to the audience in relation to the at least one e-commerce resource via a graphical user interface (Minh, [0010], "FIG. 1 shows an illustrative user interface for presenting dynamic conversational responses using machine learning models based on intent clusters, in accordance with one or more embodiments").
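Editor's note: for readers outside the machine learning space, the distinction the Claim 21 rejection turns on, clustering features into an "imminent" intent cluster with a supervised technique versus a "distant" intent cluster with an unsupervised technique, can be illustrated with a toy sketch. The intent labels, feature values, and the nearest-centroid/k-means choices below are hypothetical illustrations only; they are not drawn from Minh, Lin, or the claims.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-D feature vectors; in the claim language these would be
# features encoded/extracted by the first machine learning network.
labeled = {                       # seed examples with known intents
    "buy_now":  np.array([[0.9, 0.8], [1.0, 0.9]]),
    "checkout": np.array([[0.2, 1.4], [0.3, 1.5]]),
}
unlabeled = rng.normal(size=(20, 2)) + 4.0  # features with unknown intent

def supervised_assign(x):
    """Supervised clustering: assign x to the nearest labeled-intent centroid."""
    centroids = {k: v.mean(axis=0) for k, v in labeled.items()}
    return min(centroids, key=lambda k: float(np.linalg.norm(x - centroids[k])))

def unsupervised_assign(points, k=2, iters=10):
    """Unsupervised clustering: plain k-means over the unlabeled features."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

print(supervised_assign(np.array([0.95, 0.85])))  # lands in a known ("imminent") intent
print(unsupervised_assign(unlabeled))             # discovered ("distant") groupings
```

Lin's CDAC+ method is far more involved (deep representation learning with cluster refinement); the sketch only illustrates the supervised/unsupervised split the examiner maps to the imminent/distant limitation.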
Regarding Claim 24: Minh/Lin/Dragos teach the method of Claim 21 (and thus the rejection of Claim 21 is incorporated). Minh further teaches: wherein encoding the first feature further includes propagating n events in the first set of behavioral data into at least one of the layers of the machine learning network with m neurons, where m < n, thereby reducing a dimensionality of the first set of behavioral data (Minh, Fig. 3 & [0045], "A bottleneck layer (e.g., block 358) is a layer that contains few neural units compared to the previous layers. Model 350 may use a bottleneck layer to obtain a representation of the input with reduced dimensionality. An example of this is the use of autoencoders with bottleneck layers for nonlinear dimensionality reduction").

Regarding Claim 25: Minh/Lin/Dragos teach the method of Claim 21 (and thus the rejection of Claim 21 is incorporated). Minh further teaches: wherein generating the at least one recommendation includes applying a collaborative filter weighted by the intent value to the raw user interaction data (Minh, Fig. 4: 404 & [0046], "FIG. 4 shows a flowchart of the steps involved in generating dynamic conversational responses using machine learning models based on intent clusters, in accordance with one or more embodiments" & [0060], "In response to receiving the second user action, the system may determine a second feature input for the second machine learning model based on the second user action … The system may select, based on the different output, a different dynamic conversational response from the plurality of dynamic conversational responses that corresponds to a different subset of the plurality of intent clusters". The use of intent clusters will innately apply a collaborative filter weighted by the intent value to the raw user interaction data, as the conversational response will consider both the intent value and the raw user interaction data).
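Editor's note: the bottleneck dimensionality reduction at issue in Claim 24 (propagating n events into a layer of m neurons, m < n) can be sketched with a toy linear auto-encoder. The weights here are random placeholders, not learned; a trained auto-encoder would fit them by minimizing reconstruction error. Nothing below is taken from Minh or the application.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 8  # events per behavioral-data sample (input width)
m = 3  # neurons in the bottleneck layer, with m < n

# Hypothetical untrained weights, for shape illustration only.
W_enc = rng.normal(size=(n, m))
W_dec = rng.normal(size=(m, n))

def encode(x: np.ndarray) -> np.ndarray:
    """Propagate a sample through the bottleneck: n -> m dimensions."""
    return np.tanh(x @ W_enc)

def decode(z: np.ndarray) -> np.ndarray:
    """Reconstruct an n-dimensional representation from the bottleneck."""
    return z @ W_dec

x = rng.normal(size=(1, n))  # one behavioral-data sample of n events
z = encode(x)                # reduced-dimensionality representation
x_hat = decode(z)            # reconstruction back to input width
assert z.shape == (1, m)
assert x_hat.shape == (1, n)
```

The assertion on `z.shape` is the m < n reduction the claim recites; the decoder exists only so that reconstruction error could, in a real system, drive training.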
Regarding Claim 26: Minh/Lin/Dragos teach the method of Claim 21 (and thus the rejection of Claim 21 is incorporated). Minh further teaches: wherein the first machine learning network includes an unsupervised auto-encoder neural network. (Minh, [0045], "Model 350 may use a bottleneck layer to obtain a representation of the input with reduced dimensionality. An example of this is the use of autoencoders with bottleneck layers for nonlinear dimensionality reduction" & [0004], "… methods and systems may include a first machine learning model, wherein the first machine learning model is trained to cluster a plurality of specific intents into a plurality of intent clusters through unsupervised hierarchical clustering").

Regarding Claim 28: Minh/Lin/Dragos teach the method of Claim 21 (and thus the rejection of Claim 21 is incorporated). Minh further teaches: wherein the first feature represents one or more interactions between at least one member of a first group of users and at least one e-commerce resource, and wherein the second feature represents one or more interactions between at least one member of a second group of users and the at least one e-commerce resource. (Minh, Fig. 2: 200).

Regarding Claims 29 and 31-34: Claims 29 and 31-34 incorporate substantively all the limitations of Claims 21 and 23-26 in a system (Minh, [0006], "To overcome these technical challenges, the methods and systems disclosed herein are powered through multiple machine learning models that determine intent clusters") and recite no new limitations; thus, Claims 29 and 31-34 are rejected for the reasons set forth in the rejections of Claims 21 and 23-26, respectively.
Regarding Claims 36 and 38-40: Claim 36 incorporates substantively all the limitations of Claims 21 and 28 in a computer program product (Minh, [0036], "Additionally, the devices in system 200 may run an application (or another suitable program)") and further recites including one or more non-transitory machine-readable mediums having instructions encoded thereon that when executed by at least one processor cause a process to be carried out (Minh, [0037], "Each of these devices may also include electronic storages. The electronic storages may include non-transitory storage media that electronically stores information"); thus, Claim 36 is rejected for the reasons set forth in the rejections of Claims 21 and 28. Claims 38 and 39 recite further limitations of the computer program product of Independent Claim 36, incorporate substantively all the limitations of Claims 23 and 24, and recite no new limitations; thus, Claims 38 and 39 are rejected for the reasons set forth in the rejections of Claims 23 and 24, respectively. Claim 40 recites further limitations of the computer program product of Independent Claim 36, incorporates substantively all the limitations of Claims 25 and 26, and recites no new limitations; thus, Claim 40 is rejected for the reasons set forth in the rejections of Claims 25 and 26.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IBRAHIM RAHMAN, whose telephone number is (703) 756-1646. The examiner can normally be reached M-F, 8am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kakali Chaki, can be reached at (571) 272-3719. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/I.R./
Examiner, Art Unit 2122

/KAKALI CHAKI/
Supervisory Patent Examiner, Art Unit 2122

Prosecution Timeline

May 13, 2021
Application Filed
Jun 14, 2024
Non-Final Rejection — §103, §112
Sep 10, 2024
Interview Requested
Sep 18, 2024
Applicant Interview (Telephonic)
Sep 18, 2024
Examiner Interview Summary
Sep 20, 2024
Response Filed
Jan 07, 2025
Final Rejection — §103, §112
Feb 26, 2025
Interview Requested
Mar 06, 2025
Examiner Interview Summary
Mar 06, 2025
Applicant Interview (Telephonic)
Apr 08, 2025
Request for Continued Examination
Apr 14, 2025
Response after Non-Final Action
May 13, 2025
Non-Final Rejection — §103, §112
Aug 04, 2025
Interview Requested
Aug 11, 2025
Applicant Interview (Telephonic)
Aug 11, 2025
Examiner Interview Summary
Aug 13, 2025
Response Filed
Nov 26, 2025
Final Rejection — §103, §112
Jan 11, 2026
Interview Requested

Prosecution Projections

5-6
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 10 resolved cases by this examiner. Grant probability derived from career allow rate.