Prosecution Insights
Last updated: April 19, 2026
Application No. 18/955,969

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND CONVERSATION SYSTEM

Non-Final OA — §101, §102, §103
Filed: Nov 21, 2024
Examiner: LEE, JENNIFER V
Art Unit: 3688
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Ricoh Company Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 25% (At Risk)
OA Rounds: 1-2
To Grant: 4y 3m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 25% — grants only 25% of cases (59 granted / 232 resolved; -26.6% vs TC avg)
Interview Lift: +41.5% for resolved cases with interview (strong ~+42% lift)
Typical Timeline: 4y 3m average prosecution
Career History: 260 total applications across all art units; 28 currently pending
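The headline figures above can be cross-checked with a few lines of arithmetic. This is a sketch only: it assumes the dashboard rounds to whole percentage points and that the "with interview" figure is the career allow rate plus the interview lift, which matches the numbers shown but is not documented.

```python
# Cross-check of the examiner statistics shown above.
granted, resolved = 59, 232
allow_rate = granted / resolved * 100       # career allow rate, in percent
print(round(allow_rate))                    # 25

interview_lift = 41.5                       # percentage points, per the dashboard
with_interview = allow_rate + interview_lift  # assumed additive model
print(round(with_interview))                # 67
```

Both rounded values line up with the dashboard's 25% grant probability and 67% with-interview figure.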

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)
Tech Center average is an estimate • Based on career data from 232 resolved cases
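Each statute pairs the examiner's rate with a delta against the Tech Center average, so the implied TC baseline can be recovered by subtraction. A small sketch (assuming the deltas are simple differences, examiner rate minus TC average):

```python
# Recover the implied Tech Center average for each statute.
stats = {  # statute: (examiner rate %, delta vs TC avg)
    "101": (30.1, -9.9),
    "103": (32.6, -7.4),
    "102": (13.5, -26.5),
    "112": (21.8, -18.2),
}
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta  # delta = examiner - TC average
    print(f"§{statute}: TC avg ≈ {tc_avg:.1f}%")
```

All four statutes imply the same ~40.0% Tech Center estimate, which suggests a single baseline is used across the chart.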

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. Claims 1-12 have been examined in this application. This communication is the first action on the merits.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Step 1. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.

Step 2A – Prong One. If the claims fall within one of the statutory categories, it must then be determined whether the claims recite an abstract idea, law of nature, or natural phenomenon.

Step 2A – Prong Two. If the claims recite an abstract idea, law of nature, or natural phenomenon, it must then be determined whether the claims recite additional elements that integrate the judicial exception into a practical application. If the claims do not recite additional elements that integrate the judicial exception into a practical application, then the claims are directed to a judicial exception.

Step 2B. If the claims are directed to a judicial exception, it must be evaluated whether the claims recite additional elements that amount to an inventive concept (i.e., “significantly more”) than the recited judicial exception.

In the instant case, claims 1-9, 11, and 12 are directed to a machine and claim 10 is directed to a process.
A claim “recites” an abstract idea if there are identifiable limitations that fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106. In the instant case, claim 12, and similarly claims 1, 10, and 11, recites the steps of: a plurality of models including a large-scale model and a task-specific model different from the large-scale model, the task-specific model having been acquired specialized in a specific task; and a history of conversations in which a conversation agent participates; acquire conversation data, the history of conversations being based on the conversation data; select one or more models from among the plurality of models based on the history of conversations and create a response message of the conversation agent using output data of the selected model; and transmit the response message -- these steps set forth mental processes, particularly concepts performed in the human mind or by a human using a pen and paper, including, inter alia, the observation and evaluation of information.

Further, the limitations of the claims are not indicative of integration into a practical application. Taking the independent claim elements separately, the additional elements of performing the steps via a memory that stores, a large-scale language model, by machine learning, processing circuitry configured, and a user terminal -- merely implement the abstract idea on a computer environment. Considered in combination, the steps of Applicant’s method add nothing that is not already present when the steps are considered separately. Additionally, taking the dependent claim elements separately, the additional elements of performing the steps via an engine also merely implement the abstract idea on a computer environment. Considered in combination, the steps of Applicant’s method add nothing that is not already present when the steps are considered separately.

Thus, claims 1-12 are directed to an abstract idea.
Regarding the independent claims, the technical elements of performing the steps via a memory that stores, a large-scale language model, by machine learning, processing circuitry configured, and a user terminal -- merely implement the abstract idea on a computer environment. Additionally, regarding the dependent claims, the technical elements of performing the steps via an engine also merely implement the abstract idea on a computer environment. When considering the elements and combinations of elements, the claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not amount to an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of a computer itself; the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment; the claims merely amount to the application of, or instructions to apply, the abstract idea on a computer; or the claims amount to nothing more than requiring a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry. The analysis above applies to all statutory categories of invention. Accordingly, claims 1-12 are rejected as ineligible for patenting under 35 USC 101 based upon the same rationale.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 2, 3, 8, and 10-12 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Raux (US PGP 2020/0027443).

As per claim 1, Raux teaches an information processing apparatus comprising: a memory that stores: (Raux: [0106]; [0112]-[0113]) a plurality of models including a large-scale language model and a task-specific model different from the large-scale language model, the task-specific model having been acquired by machine learning specialized in a specific task; and (Raux: [0011]-[0012] ((i) a conversational subsystem that is configured to generate outputs that are conversational and task-independent and (ii) a task subsystem that is configured to generate outputs that are task-specific, i.e., that are specific to the particular task.); [0084] (outputs generated by both the conversational machine learning model 146 and the task machine learning model)) a history of conversations in which a conversation agent participates; and (Raux: [0063] (In this case, the conversational neural network 146 can store information from previously processed conversational inputs and access that information when generating later conversational outputs through its internal state.)) processing circuitry configured to select one or more models from among the plurality of models based on the history of conversations and create a response message of the conversation agent using output data of the selected model. (Raux: [0069]-[0071] (In the example configuration of FIG.
2A, the gating subsystem 160 receives the conversational output 148 and determines whether to provide the conversational intent defined by the conversational output 148 as the semantic output 132 or to use the task subsystem 150 to generate a task output to be included in the semantic output 132 instead of or in addition to the conversational intent.))

As per claim 2, Raux teaches wherein the processing circuitry is configured to infer an intention of the conversations based on the history of conversations and select one or more models among the plurality of models based on the intention of the conversations. (Raux: [0063] (In some other implementations, the conversational machine learning model 146 is a recurrent neural network, e.g., an LSTM neural network, gated recurrent unit (GRU) recurrent neural network, or other recurrent neural network, that maintains an internal state and updates the internal state as part of generating each conversational output. In this case, the conversational neural network 146 can store information from previously processed conversational inputs and access that information when generating later conversational outputs through its internal state.); [0069]-[0071] (In some implementations, the gating subsystem 160 determines how to proceed with respect to generating the semantic output 132 by applying a set of rules. For example, when the output includes a score distribution, the set of rules can specify that the gating subsystem 160 select the intent-value combination having the highest score and provide the intent-value combination as the semantic output 132 if sufficient uncertainty exists as to any of the possible values in the updated state or in the semantic input. For example, sufficient uncertainty can be found to exist if any of the confidence scores in the updated state or the semantic input fall below a threshold score.
If sufficient uncertainty does not exist, the gating subsystem 160 determines to use a task output generated by the task subsystem as the semantic output.))

As per claim 3, Raux teaches wherein the history of conversations is one of the history of conversations between users and the history of conversations between the user and the conversation agent. (Raux: [0063] (In some other implementations, the conversational machine learning model 146 is a recurrent neural network, e.g., an LSTM neural network, gated recurrent unit (GRU) recurrent neural network, or other recurrent neural network, that maintains an internal state and updates the internal state as part of generating each conversational output. In this case, the conversational neural network 146 can store information from previously processed conversational inputs and access that information when generating later conversational outputs through its internal state.); [0069]-[0071]))

As per claim 8, Raux teaches wherein the processing circuitry is configured to acquire voice data acquired in the conversations from a user terminal, and transmit the response message including voice data to the user terminal. (Raux: [0023]-[0024] (During a given conversation with a user 102 of a user device 104, the dialogue system 100 receives a speech input 106 from the user device 104, e.g., over a data communication network, and, in response, generates a speech output 172 and provides the speech output 172 for playback on the user device 104, e.g., also over the data communication network.))

As per claim 10, this claim is substantially similar to claim 1 and is therefore rejected in the same manner as this claim, as set forth above.

As per claim 11, this claim is substantially similar to claim 1 and is therefore rejected in the same manner as this claim, as set forth above.
Additionally, claim 11 teaches a user terminal, wherein the circuitry of the information processing apparatus is configured to acquire conversation data from the user terminal, the history of conversations being based on the conversation data, and transmit the response message to the user terminal. (Raux: [0023] (During a given conversation with a user 102 of a user device 104, the dialogue system 100 receives a speech input 106 from the user device 104, e.g., over a data communication network, and, in response, generates a speech output 172 and provides the speech output 172 for playback on the user device 104, e.g., also over the data communication network.); [0114]; [0118])

As per claim 12, this claim is substantially similar to claim 11 and is therefore rejected in the same manner as this claim, as set forth above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4-7 and 9 are rejected under 35 U.S.C.
103 as being unpatentable over Raux (US PGP 2020/0027443) in view of Wu (US PGP 2018/0189857).

As per claim 4, Raux teaches the invention of claim 1 as set forth above. Raux does not explicitly disclose the following known technique which is taught by Wu: wherein the task-specific model is a recommendation engine. (Wu: [0021] (According to additional examples natural language processing and machine learning may be utilized in categorizing past user input and determining appropriate products and services as recommendations for a user.); [0022] (For example, a determination may be made from one or more query intention models that a user would prefer to see information related to men's clothing based on a determination that a user is male. A determination may be made to recommend renting one set of movies to a user that is determined to fall into a younger age group, and a determination may be made to recommend renting a second set of movies to a user that is determined to be in an older age group. According to another example a determination may be made that a user would prefer to receive a recommendation to view a specific sporting event (e.g., an Australian team vs. a U.S. team) based on a determined country of origin.)) This known technique is applicable to the method of Raux as they both share characteristics and capabilities, namely, they are directed to conversational models. One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Wu would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Wu to the teachings of Raux would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such recommendation features into similar methods.
Further, applying the recommendation engine to the task-specific model of Raux would have been recognized by those of ordinary skill in the art as resulting in an improved method that would allow personalized helpful product and service recommendations to potential consumers (Wu: Para [0001]-[0002]; [0014]-[0019]).

As per claim 5, Raux teaches the invention of claim 4 as set forth above. Additionally, Raux/Wu teach wherein the conversation agent is a virtual sales person that supports business negotiations and the recommendation engine is a recommendation engine that recommends a product. (Wu: [0019] According to examples one or more signals from user input may be analyzed in determining various traits and characteristics of a user that may be utilized in determining appropriate products and services to surface as recommendations for a user during conversational AI interaction.); [0032] (FIG. 1 illustrates an exemplary schematic diagram of a conversational AI environment 100 for providing product and service recommendations. Conversational AI environment 100 includes user input context 102, user profile 108, user sentiment analysis context 110, product extraction and ranking context 116 and recommendation 126. In user input context 102 a user such as user 104 may provide one or more inputs to a conversational AI system via a computing device such computing device 106. The conversational AI systems described herein may provide conversational responses to user input through chat bots which may be associated with businesses, universities, non-profit organizations, and the like.
Chat bots and other conversational AI's may provide text and vocal conversational dialog to a user based on extraction and analysis of content from websites associated with those entities, applications associated with those entities and personalized digital assistants associated with those entities.); [0021] (According to additional examples natural language processing and machine learning may be utilized in categorizing past user input and determining appropriate products and services as recommendations for a user.); [0022] (For example, a determination may be made from one or more query intention models that a user would prefer to see information related to men's clothing based on a determination that a user is male. A determination may be made to recommend renting one set of movies to a user that is determined to fall into a younger age group, and a determination may be made to recommend renting a second set of movies to a user that is determined to be in an older age group. According to another example a determination may be made that a user would prefer to receive a recommendation to view a specific sporting event (e.g., an Australian team vs. a U.S. team) based on a determined country of origin.)) The motivation for applying the known techniques of Wu to the teachings of Raux is the same as that set forth above, in the rejection of Claim 4.

As per claim 6, Raux teaches the invention of claim 1 as set forth above. Raux does not explicitly disclose the following known techniques which are taught by Wu: wherein the conversations are performed between a sales person and a customer, and (Wu: [0019] According to examples one or more signals from user input may be analyzed in determining various traits and characteristics of a user that may be utilized in determining appropriate products and services to surface as recommendations for a user during conversational AI interaction.); [0032] (FIG.
1 illustrates an exemplary schematic diagram of a conversational AI environment 100 for providing product and service recommendations. Conversational AI environment 100 includes user input context 102, user profile 108, user sentiment analysis context 110, product extraction and ranking context 116 and recommendation 126. In user input context 102 a user such as user 104 may provide one or more inputs to a conversational AI system via a computing device such computing device 106. The conversational AI systems described herein may provide conversational responses to user input through chat bots which may be associated with businesses, universities, non-profit organizations, and the like. Chat bots and other conversational AI's may provide text and vocal conversational dialog to a user based on extraction and analysis of content from websites associated with those entities, applications associated with those entities and personalized digital assistants associated with those entities.)) wherein the processing circuitry is configured to select the one or more models among the plurality of models based on information on the customer in addition to the history of conversations. (Raux: [0063] (In some other implementations, the conversational machine learning model 146 is a recurrent neural network, e.g., an LSTM neural network, gated recurrent unit (GRU) recurrent neural network, or other recurrent neural network, that maintains an internal state and updates the internal state as part of generating each conversational output. In this case, the conversational neural network 146 can store information from previously processed conversational inputs and access that information when generating later conversational outputs through its internal state.); [0069]-[0071] (In some implementations, the gating subsystem 160 determines how to proceed with respect to generating the semantic output 132 by applying a set of rules. 
For example, when the output includes a score distribution, the set of rules can specify that the gating subsystem 160 select the intent-value combination having the highest score and provide the intent-value combination as the semantic output 132 if sufficient uncertainty exists as to any of the possible values in the updated state or in the semantic input. For example, sufficient uncertainty can be found to exist if any of the confidence scores in the updated state or the semantic input fall below a threshold score. If sufficient uncertainty does not exist, the gating subsystem 160 determines to use a task output generated by the task subsystem as the semantic output.)) The motivation for applying the known techniques of Wu to the teachings of Raux is the same as that set forth above, in the rejection of Claim 4.

As per claim 7, Raux teaches the invention of claim 1 as set forth above. Raux does not explicitly disclose the following known techniques which are taught by Wu: wherein the conversations are performed between a sales person and a customer, and (Wu: [0019] According to examples one or more signals from user input may be analyzed in determining various traits and characteristics of a user that may be utilized in determining appropriate products and services to surface as recommendations for a user during conversational AI interaction.); [0032] (FIG. 1 illustrates an exemplary schematic diagram of a conversational AI environment 100 for providing product and service recommendations. Conversational AI environment 100 includes user input context 102, user profile 108, user sentiment analysis context 110, product extraction and ranking context 116 and recommendation 126. In user input context 102 a user such as user 104 may provide one or more inputs to a conversational AI system via a computing device such computing device 106.
The conversational AI systems described herein may provide conversational responses to user input through chat bots which may be associated with businesses, universities, non-profit organizations, and the like. Chat bots and other conversational AI's may provide text and vocal conversational dialog to a user based on extraction and analysis of content from websites associated with those entities, applications associated with those entities and personalized digital assistants associated with those entities.)) wherein the processing circuitry is configured to select, as the one or more models, a recommendation engine that recommends a product other than the product recommended to the customer by the sales person. (Wu: [0023]-[0028]; [0034]-[0036] (User input from user 104 may be provided to one or more computing devices (such as user sentiment analysis server device 114), via network 112, for sentiment analysis regarding one or more products or services that may be recommended to user 104 via a computing device such as computing device 128 during recommendation context 126. User sentiment analysis server 114 may extract one or more keywords from a user input and/or analyze user input with one or more language models and thereby make determinations regarding a user's positive, neutral or negative sentiment towards one or more products or services which may be recommended to a user based on the extracted input. Such determinations may be made based on a single user input or a back-and-forth dialog received during conversational AI. According to examples user 104 may provide input such as “I'm thirsty but so busy today” and a response to user 104 may be provided by a chat bot such as “It's time to stop for a coffee! How would you like a Pikes Place roast coffee from a Starbucks around the corner at this address: [ADDRESS]?” Further dialog may indicate that user 104 currently has a negative sentiment towards such a product.
Continuing from this example, a user may answer “No thanks” and user sentiment analysis may determine that the user is not currently interested in the product offering. Additional dialog may determine that user 104 is interested in other products. For example, user 104 may provide input such as “do you have XYZ brand of coffee” and user sentiment analysis may determine that the user is interested in having XYZ brand of coffee shipped to them and/or nearby locations that sell XYZ brand of coffee provided to them. In the specific example described above with regard to user sentiment for coffee recommendations where user 104 has provided input such as “do you have XYZ brand of coffee” one or more recommendations relating to XYZ brand of coffee may be ranked and subsequently provided to user 104 based on the ranking and a conversational AI response may be provided such as “Of course, I'll recommend a 1000 mL pack of XYZ coffee for you which can be purchased at this link: [LINK].”) The motivation for applying the known techniques of Wu to the teachings of Raux is the same as that set forth above, in the rejection of Claim 4.

As per claim 9, Raux teaches the invention of claim 1 as set forth above.
Raux does not explicitly disclose the following known techniques which are taught by Wu: wherein the processing circuitry is configured to select the one or more models from among the plurality of models further based on an intention of the conversations, the intention being one of a proposal for a specific product, a proposal for a product related to an issue after presenting a candidate for the issue, and a chat, wherein when the intention is the proposal for the specific product, the response message includes a product to solve an issue of a customer using the task-specific model, when the intention is the proposal for the product related to the issue after presenting the candidate for the issue, the response message includes a product to solve the candidate for the issue using the task-specific model and the large-scale language model, and when the intention is the chat, the response message includes the chat using the large-scale language model. (Wu: [0023]-[0028]; [0032]-[0036] (User input from user 104 may be provided to one or more computing devices (such as user sentiment analysis server device 114), via network 112, for sentiment analysis regarding one or more products or services that may be recommended to user 104 via a computing device such as computing device 128 during recommendation context 126. User sentiment analysis server 114 may extract one or more keywords from a user input and/or analyze user input with one or more language models and thereby make determinations regarding a user's positive, neutral or negative sentiment towards one or more products or services which may be recommended to a user based on the extracted input. Such determinations may be made based on a single user input or a back-and-forth dialog received during conversational AI. According to examples user 104 may provide input such as “I'm thirsty but so busy today” and a response to user 104 may be provided by a chat bot such as “It's time to stop for a coffee!
How would you like a Pikes Place roast coffee from a Starbucks around the corner at this address: [ADDRESS]?” Further dialog may indicate that user 104 currently has a negative sentiment towards such a product. Continuing from this example, a user may answer “No thanks” and user sentiment analysis may determine that the user is not currently interested in the product offering. Additional dialog may determine that user 104 is interested in other products. For example, user 104 may provide input such as “do you have XYZ brand of coffee” and user sentiment analysis may determine that the user is interested in having XYZ brand of coffee shipped to them and/or nearby locations that sell XYZ brand of coffee provided to them. Product extraction and ranking context 116 includes product store 118 and computing devices (such as product ranking server devices 120, 122 and 124) for ranking products and services for recommendation to a user based on extracted and analyzed user input and a match analysis of those products and services to a user based on analysis of current user input and information regarding user characteristics and preferences determined from a user profile such as user profile 108. In the specific example described above with regard to user sentiment for coffee recommendations where user 104 has provided input such as “do you have XYZ brand of coffee” one or more recommendations relating to XYZ brand of coffee may be ranked and subsequently provided to user 104 based on the ranking and a conversational AI response may be provided such as “Of course, I'll recommend a 1000 mL pack of XYZ coffee for you which can be purchased at this link: [LINK].”)) The motivation for applying the known techniques of Wu to the teachings of Raux is the same as that set forth above, in the rejection of Claim 4.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Liu, Yuanxing, et al.
"Conversational recommender system and large language model are made for each other in E-commerce pre-sales dialogue." Findings of the Association for Computational Linguistics: EMNLP 2023. 2023.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER V LEE whose telephone number is (571)272-4778. The examiner can normally be reached Monday - Friday 9AM - 5PM EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JEFFREY A. SMITH, can be reached at (571)272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER V LEE/
Examiner, Art Unit 3688

/Jeffrey A. Smith/
Supervisory Patent Examiner, Art Unit 3688
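The three-way intent routing recited in claim 9 (as characterized in the rejection) reduces to a small dispatch over conversation intent. The sketch below is purely illustrative: the model stubs, intent labels, and function names are assumptions for exposition, not the application's actual implementation.

```python
# Illustrative routing per claim 9's three intent categories:
# specific product -> task-specific model; issue candidate -> both
# models; chat -> large-scale language model only.

def large_scale_model(history: list) -> str:
    # Hypothetical stand-in for the large-scale language model.
    return "chat reply based on: " + history[-1]

def task_specific_model(history: list) -> str:
    # Hypothetical stand-in for the task-specific recommendation model.
    return "product recommendation for: " + history[-1]

def create_response(intent: str, history: list) -> str:
    """Select model(s) and build a response message by intent."""
    if intent == "specific_product":
        return task_specific_model(history)
    if intent == "issue_candidate":
        # Combine both models' outputs for the issue-candidate case.
        return task_specific_model(history) + " / " + large_scale_model(history)
    return large_scale_model(history)  # default: chat
```

For example, `create_response("chat", ["hello"])` falls through to the large-scale model, while `"issue_candidate"` concatenates both models' outputs, mirroring the claim's combined use of the task-specific and large-scale models.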

Prosecution Timeline

Nov 21, 2024
Application Filed
Feb 11, 2026
Non-Final Rejection — §101, §102, §103
Apr 06, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602716
SYSTEMS AND METHODS FOR GENERATING RECOMMENDATIONS BASED ON ONLINE HISTORY INFORMATION AND GEOSPATIAL DATA
2y 5m to grant • Granted Apr 14, 2026
Patent 12548069
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR AUGMENTED REALITY-BASED FACE AND CLOTHING EFFECT
2y 5m to grant • Granted Feb 10, 2026
Patent 12541782
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR PRODUCT OBJECT PUBLISHING AND CONCURRENT IMAGE RECOGNITION
2y 5m to grant • Granted Feb 03, 2026
Patent 12461976
METHOD AND SYSTEM FOR CAPTURING DATA FROM REQUESTS TRANSMITTED ON WEBSITES
2y 5m to grant • Granted Nov 04, 2025
Patent 12462289
DEVICE, METHOD, AND COMPUTER-READABLE MEDIA FOR RECOMMENDATION NETWORKING BASED ON CONNECTIONS, CHARACTERISTICS, AND ASSETS USING MACHINE LEARNING
2y 5m to grant • Granted Nov 04, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 25%
With Interview (+41.5%): 67%
Median Time to Grant: 4y 3m
PTA Risk: Low
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
