Prosecution Insights
Last updated: April 19, 2026
Application No. 18/490,444

FRICTION METRIC FOR RESOLUTION OF CUSTOMER ISSUES

Final Rejection: §101, §103
Filed: Oct 19, 2023
Examiner: DIVELBISS, MATTHEW H
Art Unit: 3624
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Wells Fargo Bank N.A.
OA Round: 4 (Final)
Grant Probability: 23% (At Risk)
OA Rounds: 5-6
To Grant: 4y 1m
With Interview: 46%

Examiner Intelligence

Career Allow Rate: 23% (83 granted / 367 resolved; -29.4% vs TC avg)
Interview Lift: strong, +23.4% among resolved cases with an interview
Avg Prosecution: 4y 1m typical timeline (50 currently pending)
Total Applications: 417 across all art units

Statute-Specific Performance

§101: 37.0% (-3.0% vs TC avg)
§103: 43.5% (+3.5% vs TC avg)
§102: 10.2% (-29.8% vs TC avg)
§112: 6.9% (-33.1% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 367 resolved cases
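If each "vs TC avg" delta above is read as a simple difference from the Tech Center average estimate, the four figures are mutually consistent and imply a baseline of roughly 40% for every statute. A quick check (that reading of the deltas is an assumption, not something stated by the source data):

```python
# Figures from the table above; interpreting "vs TC avg" as a plain difference
# from the Tech Center average estimate is an assumption for this check.
rates  = {"101": 37.0, "103": 43.5, "102": 10.2, "112": 6.9}
deltas = {"101": -3.0, "103": 3.5, "102": -29.8, "112": -33.1}

for statute, rate in rates.items():
    implied_baseline = rate - deltas[statute]
    print(f"Section {statute}: implied TC average ~ {implied_baseline:.1f}%")  # ~40.0% in each case
```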

Office Action

§101 §103
DETAILED ACTION

The following is a Final Office action. In response to Examiner's communication of 10/21/2025, Applicant, on 1/20/2026, amended claims 1, 8, and 15. Claims 1, 3-8, 10-16, and 18-20 are pending in this application and have been rejected below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's amendments are acknowledged. The 35 U.S.C. 101 rejections of claims 1, 3-8, 10-16, and 18-20 regarding abstract ideas are maintained in light of Applicant's amendments and explanations. Revised 35 U.S.C. 103 rejections of claims 1, 3-8, 10-16, and 18-20 are applied in light of Applicant's amendments and explanations.

Claim Rejections - 35 U.S.C. § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1, 3-8, 10-16, and 18-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Under the broadest reasonable interpretation of the claimed invention, Examiner finds that Applicant invented a method and system for assessing customer friction during interactions with a product or service. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:

Step 1: The claims are directed to a statutory category, namely a "method" (claims 1, 3-7) and a "system" (claims 8, 10-16, and 18-20).

Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1: A method for assessing customer friction during interactions with a product or service, the method comprising: accessing behavioral data sources, including: (i) a first source capturing event-level data marking customer interactions within the product or service; (ii) a second source recording user session replays that offer a granular visualization of user behavior; (iii) a third source providing application performance monitoring; calculating a friction metric by: (i) analyzing data extracted from the behavioral data sources; (ii) assigning weighted values to at least one of events, user behaviors, or performance indicators based on a correlation with an increase in the customer friction… allocates the weighted values based upon a relevance to the customer friction; (iii) periodically updating … the friction metric to capture real-time changes in customer behavior as a customer engages with the product or service that mirrors shifts in the customer behavior; processing… the behavioral data streams in real-time from the first source, second source, and third source to generate an updated friction metric that mirrors real-time shifts in customer behavior; deducing… a quantitative assessment that signifies potential friction issues that may manifest in a foreseeable future; launching a root cause analysis when the updated friction metric surpasses a threshold … to identify specific elements causing the customer friction based upon the quantitative assessment…; communicating the report to a response team, and enabling remedial actions based on the specific elements.
Independent claims 8 and 15 recite substantially similar claim language. Dependent claims 3-7, 10-14, 16, and 18-20 recite the same or similar abstract idea(s) as independent claims 1, 8, and 15, with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.

The limitations in claims 1, 3-8, 10-16, and 18-20 above fall well within the groupings of subject matter identified by the courts as being abstract concepts. Specifically, the claims are found to correspond to the category of: "Certain methods of organizing human activity - fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)," as the limitations identified above are directed to assessing customer friction during interactions with a product or service and thus amount to a method of organizing human activity, including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or "Mental processes - concepts performed in the human mind (including an observation, evaluation, judgment, opinion)," as the limitations identified above include mere data observations, evaluations, judgments, and/or opinions, e.g., user observation and evaluation by assessing customer friction during interactions with a product or service, which is capable of being performed mentally and/or using pen and paper.
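As an editorial aside before Prong 2: read purely as data-processing steps, the claim 1 limitations quoted above amount to a weighted scoring pipeline over three behavioral feeds with a threshold-gated root cause step. The Python sketch below is a minimal illustration of that reading only; every name, weight, and threshold in it is hypothetical and comes from neither the application nor the cited art.

```python
from dataclasses import dataclass

# Hypothetical weights expressing each signal's assumed correlation with increased
# customer friction; the claim leaves the actual values and allocation method open.
WEIGHTS = {"error_events": 0.5, "rage_clicks": 0.3, "apm_latency": 0.2}
FRICTION_THRESHOLD = 0.7  # illustrative trigger for the root cause analysis step

@dataclass
class BehavioralSnapshot:
    error_events: float  # first source: event-level interaction data
    rage_clicks: float   # second source: session-replay-derived behavior signal
    apm_latency: float   # third source: application performance monitoring

def friction_metric(snapshot: BehavioralSnapshot) -> float:
    """Weighted combination of the three behavioral sources."""
    return sum(w * getattr(snapshot, name) for name, w in WEIGHTS.items())

def root_cause_analysis(snapshot: BehavioralSnapshot, metric: float) -> dict:
    # Illustrative stand-in: flag the signal contributing most to the metric.
    worst = max(WEIGHTS, key=lambda name: WEIGHTS[name] * getattr(snapshot, name))
    return {"metric": round(metric, 3), "suspected_element": worst}

def process_stream(snapshots) -> None:
    """Periodically recompute the metric and gate root cause analysis on a threshold."""
    for snapshot in snapshots:  # stand-in for real-time updates as behavior shifts
        metric = friction_metric(snapshot)
        if metric > FRICTION_THRESHOLD:  # "launching a root cause analysis when ... surpasses a threshold"
            report = root_cause_analysis(snapshot, metric)
            print("notify response team:", report)  # placeholder for the communication step

if __name__ == "__main__":
    process_stream([BehavioralSnapshot(0.2, 0.1, 0.3), BehavioralSnapshot(0.9, 0.8, 0.7)])
```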
Step 2A - Prong 2: Claims 1, 3-8, 10-16, and 18-20 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application. Specifically, the claims recite the additional elements of: "A computer system for assessing customer friction during interactions with a product or service, comprising: one or more processors; and non-transitory computer readable storage media encoding instructions which, when executed by the one or more processors, causes the computer system to: / A computer program product residing on a non-transitory computer readable storage medium having a plurality of instructions stored thereon, which when executed by a processor, cause the processor to perform operations for assessing customer friction during interactions with a product or service, comprising:" (claims 1, 8, and 15); "defining a machine learning model with a neural network including: an input layer including input neurons configured to receive data from behavioral data sources, with at least one input neuron for each of the behavioral data sources; a hidden layer including hidden neurons configured to process the data, wherein the hidden neurons perform operations that produce outputs based on weighted inputs and bias adjustments; and an output layer including output neurons configured to produce a friction metric as an output value, wherein the friction metric is based on the weighted inputs and the bias adjustments from the hidden layer; wherein each of the input neurons establishes a connection with each of the hidden neurons to form an interconnected architecture," (claims 1, 8, and 15); and "including processing the data using the hidden layer of the machine learning model… wherein the machine learning model … by the machine learning model… using the machine learning model," (claims 1, 8, and 15).

However, the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application. Furthermore, the high-level recitation of receiving data from a generic "computer system" or the application of machine learning is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)), and thus further fails to integrate the abstract idea into a practical application.
Step 2B: Claims 1, 3-8, 10-16, and 18-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements, as described above with respect to Step 2A Prong 2, merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of assessing customer friction during interactions with a product or service using a "computer system," as explained above, and/or perform insignificant extra-solution activity, e.g., data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Therefore, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, similarly fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to assessing customer friction during interactions with a product or service. Claims 1, 3-8, 10-16, and 18-20 are accordingly rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea(s)) without significantly more.

Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis. For further authority and guidance, see MPEP § 2106 and https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 3-8, 10-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2020/0304364 to Tapia et al.
(hereafter referred to as Tapia) in view of U.S. Patent Application Publication Number 2021/0089979 to Beck et al. (hereafter referred to as Beck) and in further view of U.S. Patent Application Publication Number 2023/0351435 to Wright et al. (hereafter referred to as Wright). As per claim 1, Tapia teaches: A method for assessing customer friction during interactions with a product or service, the method comprising: (Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network. At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth.). defining a machine learning model with a neural network including: (Paragraph Number [0031] teaches once the data from the social media data collections are obtained via data adapters, a data mining algorithm of the data management platform 102 may extract words, terms, phrases, quotes, or ratings that are relevant to the operational conditions or performance status of the nodes, components, and/or services of the wireless carrier network. The data mining algorithm may use both machine learning and non-machine learning techniques such as decision tree learning, association rule learning, artificial neural networks, inductive logic, Support Vector Machines (SVMs), clustering, Bayesian networks, reinforcement learning, representation learning, similarity and metric learning, and sparse dictionary learning to extract the patterns. In one example, the data management platform 102 may discover a pattern of web blog posting that indicate users are dissatisfied with an aspect of a service provided by the wireless carrier network at a particular geographical location. In another example, the data management platform 102 may discover a pattern of message feed postings from multiple users that indicate a specific type of user device has a high error rate when used in conjunction with the wireless carrier network). accessing the behavioral data sources by the input layer of the machine learning model, including: (i) a first source capturing event-level data marking the interactions within the product or service (Paragraph Number [0028] teaches the trouble ticket data source 112 may include data on issues with the components or operations of the wireless carrier network. In some instances, network trouble tickets may be automatically generated by software agents that monitor the health and performance of the wireless carrier network. In other instances, subscriber trouble tickets may be manually inputted by customers and/or customer care representative to describe issues experienced by the customers. In some instances, subscriber trouble tickets may be inputted as a result of a customer interaction with a chatbot or on a social media service such as Twitter® or the like. In some instances, trouble tickets may be generated and inputted by embodiments of the present invention in response to the embodiments detecting a failure. 
The trouble ticket data source 112 may further include data on the identities of the administrators, resolution reports for the issues, statistics for each type or category of issues reported, statistics on issue resolution rates, and/or so forth). (iii) a third source providing application performance monitoring (Paragraph Number [0029] teaches the alarm data source 114 may include alerts for the wireless carrier network that are generated based on predetermined alert rules by a status monitoring application of the network. An alert rule may specify that an alert is to be triggered when one or more conditions with respect to the operations of the network occurs. The conditions may be specific faults or issues that are detected with components of the network, deviation of actual performance indicators from predetermined threshold performance values, a number of user complaints regarding a network component, network node, or network service reaching or failing to reach a predetermined threshold, and/or so forth. An alert may also be triggered by a subscriber event such as a change in which plan a subscriber is enrolled in, a change in which device a subscriber is using, or performance degradation specific to the subscriber. Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network. At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth. (See also Paragraph Number [0113])). calculating a friction metric by: (i) analyzing data extracted from the behavioral data sources (Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network. At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth). including processing the data using the hidden layer of the machine learning model (Paragraph Number [0022] teaches the comprehensive analysis of user device performance data and network performance data of a wireless carrier network on a granular level may enable the discovery of root causes of quality of service issues that are invisible to conventional data analysis techniques. Accordingly, such analysis may pinpoint the root cause of a quality of service issue to a specific device or network component. Further, the use of a machine learning model during the analysis may enable the automatic resolution of customer complaints. Such automatic resolution may reduce issue resolution time while increase issue resolution rate. The techniques described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following FIGS. 1-14). 
wherein the output layer of the machine learning model (Paragraph Number [0022] teaches the comprehensive analysis of user device performance data and network performance data of a wireless carrier network on a granular level may enable the discovery of root causes of quality of service issues that are invisible to conventional data analysis techniques. Accordingly, such analysis may pinpoint the root cause of a quality of service issue to a specific device or network component. Further, the use of a machine learning model during the analysis may enable the automatic resolution of customer complaints. Such automatic resolution may reduce issue resolution time while increase issue resolution rate. The techniques described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following FIGS. 1-14. Paragraph Number [0057] teaches the model training module 220 may configure the rules engine 224 to modify the algorithm selection rules during retraining. The modifications to the algorithm selection rules may change a range of training error measurement values that correspond to a type machine of learning algorithm, cause specific ranges of training error measurement values to match to different types of machine learning algorithms, and/or so forth. In this way, the model training module 220 may generated a modified trained machine learning model based on the feedback). (iii) periodically updating, by the machine learning model, the friction metric to capture real-time changes in customer behavior as a customer engages with the product or service that mirrors shifts in the customer behavior (Paragraph Number [0033] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The data reports 120 may provide comprehensive or end-to-end analysis results that aids in the resolution of quality of service issues for the wireless carrier network. For example, the data reports 120 may provide capacity upgrade recommendations, pinpoint malfunctions in device components or network components, provide real-time detection and alerting of quality of service issues, provide suggestions of new geolocations for the installation of small network cells within the wireless carrier network, and/or so forth. Paragraph Number [0057] teaches the model training module 220 may configure the rules engine 224 to modify the algorithm selection rules during retraining. The modifications to the algorithm selection rules may change a range of training error measurement values that correspond to a type machine of learning algorithm, cause specific ranges of training error measurement values to match to different types of machine learning algorithms, and/or so forth. In this way, the model training module 220 may generated a modified trained machine learning model based on the feedback). processing, by the machine learning model, the behavioral data streams in real-time from the first source, second source, and third source (Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network. 
At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth. Paragraph Number [0033] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The data reports 120 may provide comprehensive or end-to-end analysis results that aids in the resolution of quality of service issues for the wireless carrier network. For example, the data reports 120 may provide capacity upgrade recommendations, pinpoint malfunctions in device components or network components, provide real-time detection and alerting of quality of service issues, provide suggestions of new geolocations for the installation of small network cells within the wireless carrier network, and/or so forth. (See also Paragraph Number [0054])). to generate an updated friction metric that mirrors real-time shifts in customer behavior (Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network. At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth. Paragraph Number [0054] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The analytic applications 104 may have built in application user interfaces that simplify the data querying and requesting process such that status data and troubleshooting solutions may be provided via the application user interfaces. The application user interfaces of the analytic applications 104 may be displayed by the dashboard application 124. The analytic applications may process real time or non-real time data, in which data from multiple data sources may be aggregated or converged. The data reports 120 may provide real time or non-time views of device and network status data based on the performance data from the data sources 110-118. Accordingly, a user may use the data reports 120 to continuously or periodically monitor the statuses pertaining to all aspects of the wireless carrier network. The aspects may include the statuses of wireless carrier network itself, the network components of the network, user devices that are using the wireless carrier network, and/or device components of the user devices). deducing, by the machine learning model, a quantitative assessment that signifies potential friction issues that may manifest in a foreseeable future (Paragraph Number [0066] teaches the trouble ticket clustering 512 may enable the automatic customer complaint resolution application 500 to provide clustered trouble ticket data scores for different regions, such as different neighborhoods. 
The trouble ticket clustering 512 may group or link trouble tickets having a common originating cause into a master trouble ticket. In an embodiment, the trouble ticket clustering 512 may group similar tickets using clustering techniques in order to facilitate labeling into root cause categories, thereby simplifying training of a machine learning model. The data analysis may further involve individual ticket analysis 514 to resolve tickets. The individual ticket analysis 514 may include the analysis of associated data to 516 for individual trouble tickets. For example, the associated data may include user KPIs, network KPIs, alerts, network component health indicators, and/or so forth. Thus, by using a ticket resolution logic 518 that includes one or more trained machine learning models, the automatic customer complaint resolution application 500 may determine a root cause for resolving the trouble ticket. Paragraph Number [0119] teaches the analytic application may analyze the performance data using the trained machine learning model to predict a potential issue for one or more additional user devices that use the wireless carrier network. For example, the analysis of the performance data may indicate that a potential issue existing for a specific type of user devices due to hardware or software component similarity of the specific type to user devices that are found to be experiencing a particular issue). launching a root cause analysis when the updated friction metric surpasses a threshold (Paragraph Number [0091] teaches an analytic application may identify a root cause for an issue affects one or more subscribers of a wireless carrier network based on a set of live performance data using the machine learning model. The analytic application may further generate a solution for the root cause using a solutions database. In various embodiments, the live performance data may be real time or non-real time data pertaining to one or more network components of the wireless carrier network and/or one or more device components of the user devices that are using the wireless carrier network. Paragraph Number [0113] teaches the analytic application may analyze the performance of the specific set of device components and network components to input one or more components that negatively impacted a quality of service experienced by the user during the instance. For example, a component may be determined to have negatively impacted the quality of service when a performance metric of the component is below a predetermined performance threshold. In another example, the component may be determined to have negatively impacted the quality of service when the component is a bottleneck that is responsible for the biggest delay experienced by the user during the usage instance. In an additional example, the component may be determined to have negatively impacted the quality of service when the component experienced a rate of error that is higher than a maximum error threshold. (See also Paragraph Number [0020])). using the machine learning model to identify specific elements causing the customer friction (Paragraph Number [0022] teaches the comprehensive analysis of user device performance data and network performance data of a wireless carrier network on a granular level may enable the discovery of root causes of quality of service issues that are invisible to conventional data analysis techniques. 
Accordingly, such analysis may pinpoint the root cause of a quality of service issue to a specific device or network component. Further, the use of a machine learning model during the analysis may enable the automatic resolution of customer complaints. Such automatic resolution may reduce issue resolution time while increase issue resolution rate. The techniques described herein may be implemented in a number of ways. Example implementations are provided below with reference to the following FIGS. 1-14. Paragraph Number [0052] teaches following the application of a selected machine learning algorithm to the training corpus, the model training module 220 may determine a training error measurement of the machine learning model. The training error measurement may indicate the accuracy of the machine learning model in generating a solution. Accordingly, if the training error measurement exceeds a training error threshold, the model training module 220 may use a rules engine 224 to select an additional type of machine learning algorithm based on a magnitude of the training error measurement). based upon the quantitative assessment (Paragraph Number [0066] teaches the trouble ticket clustering 512 may enable the automatic customer complaint resolution application 500 to provide clustered trouble ticket data scores for different regions, such as different neighborhoods. The trouble ticket clustering 512 may group or link trouble tickets having a common originating cause into a master trouble ticket. In an embodiment, the trouble ticket clustering 512 may group similar tickets using clustering techniques in order to facilitate labeling into root cause categories, thereby simplifying training of a machine learning model. The data analysis may further involve individual ticket analysis 514 to resolve tickets. The individual ticket analysis 514 may include the analysis of associated data to 516 for individual trouble tickets. For example, the associated data may include user KPIs, network KPIs, alerts, network component health indicators, and/or so forth. Thus, by using a ticket resolution logic 518 that includes one or more trained machine learning models, the automatic customer complaint resolution application 500 may determine a root cause for resolving the trouble ticket. Paragraph Number [0119] teaches the analytic application may analyze the performance data using the trained machine learning model to predict a potential issue for one or more additional user devices that use the wireless carrier network. For example, the analysis of the performance data may indicate that a potential issue existing for a specific type of user devices due to hardware or software component similarity of the specific type to user devices that are found to be experiencing a particular issue). generating a report providing pertinent transaction information related to the interactions (Paragraph Number [0054] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The analytic applications 104 may have built in application user interfaces that simplify the data querying and requesting process such that status data and troubleshooting solutions may be provided via the application user interfaces. The application user interfaces of the analytic applications 104 may be displayed by the dashboard application 124. 
The analytic applications may process real time or non-real time data, in which data from multiple data sources may be aggregated or converged. The data reports 120 may provide real time or non-time views of device and network status data based on the performance data from the data sources 110-118. Accordingly, a user may use the data reports 120 to continuously or periodically monitor the statuses pertaining to all aspects of the wireless carrier network. The aspects may include the statuses of wireless carrier network itself, the network components of the network, user devices that are using the wireless carrier network, and/or device components of the user devices). communicating the report to a response team (Paragraph Number [0054] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The analytic applications 104 may have built in application user interfaces that simplify the data querying and requesting process such that status data and troubleshooting solutions may be provided via the application user interfaces. The application user interfaces of the analytic applications 104 may be displayed by the dashboard application 124. The analytic applications may process real time or non-real time data, in which data from multiple data sources may be aggregated or converged. The data reports 120 may provide real time or non-time views of device and network status data based on the performance data from the data sources 110-118. Accordingly, a user may use the data reports 120 to continuously or periodically monitor the statuses pertaining to all aspects of the wireless carrier network. The aspects may include the statuses of wireless carrier network itself, the network components of the network, user devices that are using the wireless carrier network, and/or device components of the user devices). enabling remedial actions based on the specific elements (Paragraph Number [0114] teaches the analytic application may provide data on the one or more components that negatively affected the quality of service for presentation. The presentation of such data may enable a user to initiate remediation measures to correct the problem with the one or more components. In various embodiments, the analytic application may provide the data on the one or more components via an application user interface. Paragraph Number [0121] teaches the analytic application may track the performance of the one or more additional user devices to detect occurrence of the potential issue on at least one additional user device. If the analytic application detects that the potential issue actually occurred, the analytic application may directly take remediation action or cause another application component of the wireless carrier network to take remediation action. The remediation action may include sending another alert message to a subscriber that is using an additional user device, informing a network engineer to contact the subscriber in order to resolve the issue, automatically terminating service to the additional user device until the issue is resolved, automatically pushing a software update to the additional user device to fix the issue, and/or so forth). 
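The Tapia passages relied on above (e.g., paragraphs [0091], [0113], and [0121]) describe flagging a component whose performance metric falls below a predetermined threshold or whose error rate exceeds a maximum, then triggering remediation. A compact sketch of that logic follows; the component names, limits, and remediation stub are invented for illustration and are not taken from Tapia.

```python
# Illustrative limits; the cited Tapia paragraphs refer to "predetermined" thresholds
# without giving numbers, so these values are invented for the sketch.
MIN_PERFORMANCE = 0.8  # performance metric below this is treated as degrading QoS
MAX_ERROR_RATE = 0.05  # error rate above this is treated as degrading QoS

def flag_negatively_impacting_components(components: dict) -> list:
    """Return the components that would be treated as hurting quality of service."""
    return [
        name
        for name, stats in components.items()
        if stats["performance"] < MIN_PERFORMANCE or stats["error_rate"] > MAX_ERROR_RATE
    ]

def remediate(component: str) -> None:
    # Stand-in for the remediation actions Tapia lists (alerting a subscriber,
    # informing an engineer, pushing a software update, and so on).
    print(f"open remediation workflow for {component}")

if __name__ == "__main__":
    components = {
        "backhaul_7": {"performance": 0.92, "error_rate": 0.01},
        "small_cell_12": {"performance": 0.64, "error_rate": 0.09},
    }
    for name in flag_negatively_impacting_components(components):
        remediate(name)  # flags only small_cell_12 with these invented numbers
```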
Tapia teaches assessing customer friction during interactions with a product or service but does not explicitly teach recording, analyzing, and weighting prior user behavior associated with a particular user which is taught by the following citations from Beck: (ii) a second source recording user session replays that offer a granular visualization of user behavior (Paragraph Number [0062] teaches the Social Media Listening 114 may include an additional computer-automated process, where at least one computer processor executes specific computer instructions stored in computer memory, and causes the processor to access, review and automatically assess public social media posts on the internet about the company and/or company's customer relations or employment practices. It may access bulletin board where employees or customers share their experiences or provide reviews and/or comments about companies, company products or services. Similarly, the Media Coverage 115 may involve the access, review and automatic assessment of various media coverage on companies being analyzed by the present system). (ii) assigning weighted values to at least one of events, user behaviors, or performance indicators based on a correlation with an increase in the customer friction (Paragraph Number [0087] teaches sample weights and calculations of individual frustrations can be calculated as illustrated in FIG 2, which illustrates the Sample Vulnerability Score Calculations at the Individual Frustration and Company Level with Corresponding Weights. In at least one embodiment, the Vulnerability Scores are calculated to range from 0 to 10, with 0 signifying a frustration does not occur at all and has no impact on perceptions of uniqueness, or on actual behaviors around sharing, deepening and switching. A score of 10 signifies that a frustration occurs frequently and has significant impact on perceptions of uniqueness and on actual behaviors around sharing, relationship deepening and switching. Paragraph Number [0088] teaches the weights that may be used for each frustration metric are: Frequency 210=1, Uniqueness 220=1. Sharing 230=2, Impact 240=3, and Switching 250=3. Weights by definition for all components should add up to 10. The method used to arrive at the specific weights is automated and involves execution of a regression analysis and accounts for wider industry trends in helping to automatically predict switching behavior). allocates the weighted values based upon a relevance to the customer friction (Paragraph Number [0087] teaches sample weights and calculations of individual frustrations can be calculated as illustrated in FIG 2, which illustrates the Sample Vulnerability Score Calculations at the Individual Frustration and Company Level with Corresponding Weights. In at least one embodiment, the Vulnerability Scores are calculated to range from 0 to 10, with 0 signifying a frustration does not occur at all and has no impact on perceptions of uniqueness, or on actual behaviors around sharing, deepening and switching. A score of 10 signifies that a frustration occurs frequently and has significant impact on perceptions of uniqueness and on actual behaviors around sharing, relationship deepening and switching. Paragraph Number [0088] teaches the weights that may be used for each frustration metric are: Frequency 210=1, Uniqueness 220=1. Sharing 230=2, Impact 240=3, and Switching 250=3. Weights by definition for all components should add up to 10. 
The method used to arrive at the specific weights is automated and involves execution of a regression analysis and accounts for wider industry trends in helping to automatically predict switching behavior). Both Tapia and Beck are directed to assessing customer friction. Tapia discloses assessing customer friction during interactions with a product or service. Beck improves upon Tapia by disclosing recording, analyzing, and weighting prior user behavior associated with a particular user. One of ordinary skill in the art would be motivated to further include recording, analyzing, and weighting prior user behavior associated with a particular user, to efficiently review customer interaction with specific employees and make a determination as to success or failure of the interaction and then utilize it as instruction for future interactions with that employee. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of assessing customer friction during interactions with a product or service in Tapia to further utilize recording, analyzing, and weighting prior user behavior associated with a particular user as disclosed in Beck, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Tapia teaches assessing customer friction during interactions with a product or service but does not explicitly teach a neural network with multiple layers each having disparate functions which is taught by the following citations from Wright: input layer including input neurons configured to receive data from behavioral data sources (Paragraph Number [0065] teaches a feedforward network (see, e.g., feedforward network 260 referenced in FIG. 2A) may include a topography with a hidden layer 264 between an input layer 262 and an output layer 266. The input layer 262, having nodes commonly referenced in FIG. 2A as input nodes 204 for convenience, communicates input data, variables, matrices, or the like to the hidden layer 264, having nodes 274. The hidden layer 264 generates a representation and/or transformation of the input data into a form that is suitable for generating output data. Paragraph Number [0103] teaches a personal data set associated with any individual user 110 may accordingly include entries of any of the different types of data disclosed hereinabove, including entries relating to demographic data, behavioral data, and response data. Each entry of the personal data set may be representative of one of the demographic traits of the user 110, one of the behavioral traits of the user 110, one of the behavioral traits of the computing system 206, or one of the responses of the user 110 to a corresponding query. The number or types of entries available in each personal data set may vary among users 110 depending on the relationship to the enterprise system 200 and the availability of such data, as well as the participation of such users 110 in responding to such queries as a result of participation in the corresponding contest/survey. Some entries of the personal data set of some users 110 may accordingly be empty or may include assumed or predicted data, as desired, when utilized by the corresponding machine learning program. (See also Paragraph Number [0064])). 
with at least one input neuron for each of the behavioral data sources (Paragraph Number [0103] teaches a personal data set associated with any individual user 110 may accordingly include entries of any of the different types of data disclosed hereinabove, including entries relating to demographic data, behavioral data, and response data. Each entry of the personal data set may be representative of one of the demographic traits of the user 110, one of the behavioral traits of the user 110, one of the behavioral traits of the computing system 206, or one of the responses of the user 110 to a corresponding query. The number or types of entries available in each personal data set may vary among users 110 depending on the relationship to the enterprise system 200 and the availability of such data, as well as the participation of such users 110 in responding to such queries as a result of participation in the corresponding contest/survey. Some entries of the personal data set of some users 110 may accordingly be empty or may include assumed or predicted data, as desired, when utilized by the corresponding machine learning program. (See also Paragraph Number [0064])). a hidden layer including hidden neurons configured to process the data wherein the hidden neurons perform operations that produce outputs based on weighted inputs and bias adjustments (Paragraph Number [0065] teaches a feedforward network (see, e.g., feedforward network 260 referenced in FIG. 2A) may include a topography with a hidden layer 264 between an input layer 262 and an output layer 266. The input layer 262, having nodes commonly referenced in FIG. 2A as input nodes 204 for convenience, communicates input data, variables, matrices, or the like to the hidden layer 264, having nodes 274. The hidden layer 264 generates a representation and/or transformation of the input data into a form that is suitable for generating output data. Adjacent layers of the topography are connected at the edges of the nodes of the respective layers, but nodes within a layer typically are not separated by an edge. In at least one embodiment of such a feedforward network, data is communicated to the nodes 204 of the input layer, which then communicates the data to the hidden layer 264. The hidden layer 264 may be configured to determine the state of the nodes in the respective layers and assign weight coefficients or parameters of the nodes based on the edges separating each of the layers, e.g., an activation function implemented between the input data communicated from the input layer 262 and the output data communicated to the nodes 276 of the output layer 266. It should be appreciated that the form of the output from the neural network may generally depend on the type of model represented by the algorithm. Although the feedforward network 260 of FIG. 2A expressly includes a single hidden layer 264, other embodiments of feedforward networks within the scope of the descriptions can include any number of hidden layers. The hidden layers are intermediate the input and output layers and are generally where all or most of the computation is done). an output layer including output neurons configured to produce a friction metric as an output value wherein the friction metric is based on the weighted inputs and the bias adjustments from the hidden layer (Paragraph Number [0065] teaches a feedforward network (see, e.g., feedforward network 260 referenced in FIG. 
2A) may include a topography with a hidden layer 264 between an input layer 262 and an output layer 266. The input layer 262, having nodes commonly referenced in FIG. 2A as input nodes 204 for convenience, communicates input data, variables, matrices, or the like to the hidden layer 264, having nodes 274. The hidden layer 264 generates a representation and/or transformation of the input data into a form that is suitable for generating output data. Adjacent layers of the topography are connected at the edges of the nodes of the respective layers, but nodes within a layer typically are not separated by an edge. In at least one embodiment of such a feedforward network, data is communicated to the nodes 204 of the input layer, which then communicates the data to the hidden layer 264. The hidden layer 264 may be configured to determine the state of the nodes in the respective layers and assign weight coefficients or parameters of the nodes based on the edges separating each of the layers. (See Tapia Paragraph Numbers [0033], [0057], and [0106] for teachings in regard to friction metrics)). wherein each of the input neurons establishes a connection with each of the hidden neurons to form an interconnected architecture (Paragraph Number [0064] teaches one type of algorithm suitable for use in machine learning modules as described herein is an artificial neural network or neural network, taking inspiration from biological neural networks. An artificial neural network can, in a sense, learn to perform tasks by processing examples, without being programmed with any task-specific rules. A neural network generally includes connected units, neurons, or nodes (e.g., connected by synapses) and may allow for the machine learning program to improve performance. A neural network may define a network of functions, which have a graphical relationship. As an example, a feedforward network may be utilized, e.g., an acyclic graph with nodes arranged in layers). Both the combination of Tapia and Beck and Wright are directed to implementation of machine learning algorithms. The combination of Tapia and Beck discloses assessing customer friction during interactions with a product or service. Wright improves upon the combination of Tapia and Beck by disclosing utilizing a neural network with multiple layers each having disparate functions. One of ordinary skill in the art would be motivated to further include utilizing a neural network with multiple layers each having disparate functions, to efficiently apply a robust model to the gathered data to arrive at successful remedial actions to be implemented. Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of assessing customer friction during interactions with a product or service in the combination of Tapia and Beck to further utilize a neural network with multiple layers each having disparate functions as disclosed in Wright, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. 
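The layered architecture recited in claims 1, 8, and 15 and mapped to Wright (one input neuron per behavioral data source, a fully connected hidden layer applying weighted inputs and bias adjustments, and an output layer emitting the friction metric) corresponds to a small feedforward network. A minimal NumPy sketch under that reading is shown below; the layer sizes, random weights, and activation choices are assumptions rather than details taken from the application or from Wright.

```python
import numpy as np

rng = np.random.default_rng(0)

# One input neuron per behavioral data source (event data, session replays, APM),
# a single fully connected hidden layer, and one output neuron for the friction metric.
N_INPUTS, N_HIDDEN, N_OUTPUTS = 3, 4, 1

W1 = rng.normal(size=(N_HIDDEN, N_INPUTS))   # every input neuron connects to every hidden neuron
b1 = np.zeros(N_HIDDEN)                      # hidden-layer bias adjustments
W2 = rng.normal(size=(N_OUTPUTS, N_HIDDEN))  # hidden-to-output weights
b2 = np.zeros(N_OUTPUTS)

def friction_metric(x: np.ndarray) -> float:
    """Forward pass: weighted inputs plus bias through the hidden layer, then the output layer."""
    hidden = np.tanh(W1 @ x + b1)                       # hidden neurons: outputs from weighted inputs + bias
    output = 1.0 / (1.0 + np.exp(-(W2 @ hidden + b2)))  # squash to a 0-1 friction score (assumed convention)
    return float(output[0])

# Example reading from the three behavioral sources; the values are hypothetical.
print(friction_metric(np.array([0.2, 0.7, 0.4])))
```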
As per claim 8, Tapia teaches: A computer system for assessing customer friction during interactions with a product or service, comprising: one or more processors; and non-transitory computer readable storage media encoding instructions which, when executed by the one or more processors, causes the computer system to (Paragraph Number [0038] teaches the memory 206 may be implemented using computer-readable media, such as computer storage media. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communications media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), high-definition multimedia/data storage disks, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism). The remainder of the claim language is substantially similar to that found in claim 1 and is rejected for the same reasons put forth in regard to claim 1. As per claim 15, Tapia teaches: A computer program product residing on a non-transitory computer readable storage medium having a plurality of instructions stored thereon, which when executed by a processor, cause the processor to perform operations for assessing customer friction during interactions with a product or service, comprising: (Paragraph Number [0038] teaches the memory 206 may be implemented using computer-readable media, such as computer storage media. Computer-readable media includes, at least, two types of computer-readable media, namely computer storage media and communications media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD), high-definition multimedia/data storage disks, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information for access by a computing device. In contrast, communication media may embody computer-readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave, or other transmission mechanism. Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network.
At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth). The remainder of the claim language is substantially similar to that found in claim 1 and is rejected for the same reasons put forth in regard to claim 1. As per claims 3, 10, and 18, the combination of Tapia, Beck, and Wright teaches each of the limitations of claims 1, 8, and 15 respectively. Tapia teaches assessing customer friction during interactions with a product or service but does not explicitly teach recording, analyzing, and weighting prior user behavior associated with a particular user which is taught by the following citations from Beck: wherein the second source is designed to document user session replays, offering a detailed perspective on user behavior (Paragraph Number [0062] teaches the Social Media Listening 114 may include an additional computer-automated process, where at least one computer processor executes specific computer instructions stored in computer memory, and causes the processor to access, review and automatically assess public social media posts on the internet about the company and/or company's customer relations or employment practices. It may access bulletin board where employees or customers share their experiences or provide reviews and/or comments about companies, company products or services. Similarly, the Media Coverage 115 may involve the access, review and automatic assessment of various media coverage on companies being analyzed by the present system. Paragraph Number [0082] teaches the process step of Surveying and Modeling Frustrations 160 may involve testing, by executing computer instructions by a processor, the frustration factors with consumers, along with value of current relationship and behaviors related to switching and cancellation decisions. This is also replicated for internal applications related to employees quitting their given employer. (See also Paragraph Numbers [0087]-[0088])). A person of ordinary skill would combine these references as described in regard to claim 1. As per claims 4, 11, and 19, the combination of Tapia, Beck, and Wright teaches each of the limitations of claims 1, 8, and 15 respectively. In addition, Tapia teaches: wherein the third source is adapted to provide application performance insights, including systemic or infrastructure-level information. (Paragraph Number [0036] teaches a block diagram showing various components of a data management platform and a performance management engine that performs distributed multi-data source performance management. The data management platform 102 and the analytic applications 104 may be implemented by one or more computing nodes 108 of a distributed processing computing infrastructure. The number of computing nodes 108 may be scaled up and down by a distributed processing control algorithm based on the data processing demands of the data management platform 102 and/or the analytic applications 104. Paragraph Number [0106] teaches a flow diagram of an example process 1200 for tracking the performance of network devices in relation to small network cells, macro cells, and backhauls of the wireless carrier network.
At block 1202, an analytic application may retrieve quality of service metrics for user devices of subscribers as the user devices access a wireless carrier network via one or more small network cells in a geographical area. In various embodiments, the quality of service metrics may include call establishment delays, MOS of call audio quality, records of one-way audio problems, records of call drops, and/or so forth. (See also Paragraph Number [0113])).

As per claims 5, 12, and 20, the combination of Tapia, Beck, and Wright teaches each of the limitations of claims 1, 8, and 15 respectively. In addition, Tapia teaches: wherein the root cause analysis further identifies potential complications or areas of dissatisfaction experienced by customers during their interactions with the product or service. (Paragraph Number [0091] teaches an analytic application may identify a root cause for an issue that affects one or more subscribers of a wireless carrier network based on a set of live performance data using the machine learning model. The analytic application may further generate a solution for the root cause using a solutions database. In various embodiments, the live performance data may be real time or non-real time data pertaining to one or more network components of the wireless carrier network and/or one or more device components of the user devices that are using the wireless carrier network. Paragraph Number [0113] teaches the analytic application may analyze the performance of the specific set of device components and network components to pinpoint one or more components that negatively impacted a quality of service experienced by the user during the instance. For example, a component may be determined to have negatively impacted the quality of service when a performance metric of the component is below a predetermined performance threshold. In another example, the component may be determined to have negatively impacted the quality of service when the component is a bottleneck that is responsible for the biggest delay experienced by the user during the usage instance. In an additional example, the component may be determined to have negatively impacted the quality of service when the component experienced a rate of error that is higher than a maximum error threshold. (See also Paragraph Number [0020])).
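The three examples the examiner quotes from Tapia's Paragraph [0113] boil down to simple rule-based checks on per-component telemetry. The following is a minimal sketch of that kind of logic, not code from the application or from Tapia; the class, function, field names, and thresholds are invented for illustration only.

```python
from dataclasses import dataclass

# Hypothetical illustration of the rule-based checks described in the quoted
# Tapia passage: a component "negatively impacts" quality of service if its
# performance metric falls below a threshold, if it is the bottleneck
# (largest delay), or if its error rate exceeds a maximum. All names and
# numbers here are assumptions made for this sketch.

@dataclass
class Component:
    name: str
    performance_metric: float   # e.g., a normalized throughput score
    delay_ms: float             # delay contributed during the usage instance
    error_rate: float           # fraction of failed operations

def find_negative_impact(components, perf_threshold=0.5, max_error_rate=0.05):
    flagged = []
    bottleneck = max(components, key=lambda c: c.delay_ms)
    for c in components:
        reasons = []
        if c.performance_metric < perf_threshold:
            reasons.append("performance below threshold")
        if c is bottleneck:
            reasons.append("bottleneck (largest delay)")
        if c.error_rate > max_error_rate:
            reasons.append("error rate above maximum")
        if reasons:
            flagged.append((c.name, reasons))
    return flagged

if __name__ == "__main__":
    parts = [
        Component("radio_link", 0.82, 40.0, 0.01),
        Component("backhaul", 0.45, 310.0, 0.02),
        Component("core_gateway", 0.90, 25.0, 0.08),
    ]
    for name, reasons in find_negative_impact(parts):
        print(name, "->", ", ".join(reasons))
```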
As per claims 6 and 13, the combination of Tapia, Beck, and Wright teaches each of the limitations of claims 1 and 8 respectively. In addition, Tapia teaches: wherein the report details at least one of: (i) a specific nature or type of error or issue; (ii) devices on which the product or service was accessed or utilized; (iii) identities or characteristics of affected customers; and (iv) pertinent transaction information related to the interactions. (Paragraph Number [0054] teaches the analytic applications 104 may analyze the multiple sources of data obtained by the data management platform 102 to generate data reports 120 and troubleshoot solutions 122. The analytic applications 104 may have built in application user interfaces that simplify the data querying and requesting process such that status data and troubleshooting solutions may be provided via the application user interfaces. The application user interfaces of the analytic applications 104 may be displayed by the dashboard application 124. The analytic applications may process real time or non-real time data, in which data from multiple data sources may be aggregated or converged. The data reports 120 may provide real time or non-real time views of device and network status data based on the performance data from the data sources 110-118. Accordingly, a user may use the data reports 120 to continuously or periodically monitor the statuses pertaining to all aspects of the wireless carrier network. The aspects may include the statuses of the wireless carrier network itself, the network components of the network, user devices that are using the wireless carrier network, and/or device components of the user devices. (Examiner asserts that this section teaches at least options i and ii)).

As per claims 7, 14, and 16, the combination of Tapia, Beck, and Wright teaches each of the limitations of claims 1, 8, and 15 respectively. In addition, Tapia teaches: wherein a validity of the friction metric is confirmed by at least one of: (i) juxtaposing the friction metric with customer survey and feedback data from customers; (ii) correlating the friction metric with customer satisfaction scores; or (iii) drawing parallels through a comparative evaluation with incident management protocols, including monitoring of telephonic activity and assessments of social media interactions. (Paragraph Number [0075] teaches the real-time data may include subscriber trouble tickets inputted to the trouble ticket data source 112 as a result of a customer interaction with a chatbot or on a social media service such as Twitter® or the like. In such a case, at block 610 the analytic application may present the generated solution to the customer via the chatbot or social media service. The generated solution may be a solution that may be implemented by the customer, such as by changing a customer-controlled setting on a customer device. Paragraph Number [0104] teaches the analytic application may obtain social media data indicating at least one geolocation at which the deployment of one or more small network cells is desired. In various embodiments, the social media data may include social postings on blog web pages, message feed web pages, web forums, and/or electronic bulletin boards. The social postings may highlight network problems with the wireless carrier network as experienced by different subscribers at various geolocations. (Examiner asserts that this section teaches at least options i and iv)).
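Option (ii) of this validation limitation is, at bottom, a correlation check between the computed friction metric and an independent satisfaction signal. Below is a minimal sketch of one plausible way to run that check, assuming paired friction and CSAT observations and a Pearson correlation; none of this comes from the application or the cited art, and the data and names are invented for illustration.

```python
import math

# Hypothetical validation check: a strong negative correlation between the
# friction metric and customer satisfaction (CSAT) survey scores would tend
# to corroborate the metric. The data and the Pearson formulation are
# assumptions made for this sketch.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

friction_metric = [0.12, 0.35, 0.80, 0.55, 0.20, 0.90]   # higher = more friction
csat_scores     = [4.6,  4.1,  2.3,  3.2,  4.4,  1.9]    # 1-5 survey scores

r = pearson(friction_metric, csat_scores)
print(f"Pearson r = {r:.2f}")  # strongly negative for this toy data
print("metric corroborated by CSAT" if r < -0.7 else "weak or no corroboration")
```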
Response to Arguments

Applicant's arguments filed 1/20/2026 have been fully considered but they are not persuasive. Applicant argues that the claims are eligible under 35 USC 101. (See Applicant's Remarks, 1/20/2026, pgs. 10-14). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulate the abstract concepts are recited in the above 35 USC 101 rejection. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations and provide direction for either a person or machine to follow at some future time (including monitoring information associated with agent/customer interactions). The claims do not recite any particular use for these determinations and directions that improve upon the underlying computer technology (in this instance the computer software, processor, and memory). Instead, Examiner asserts that the additional elements in the claim language are only used as an implementation of the abstract concepts utilizing technology. The concepts described in the limitations, when taken both as a whole and individually, are not meaningfully different than those found by the courts to be abstract ideas and are similarly considered to be certain methods of organizing human activity, such as managing personal behavior or relationships or interactions between people, including social activities, teaching, and following rules or instructions. The steps are then encapsulated into a particular technological environment by executing these steps upon a computer processor and utilizing features such as a computer interface or sending and receiving data over a network or displaying information via a computerized graphical user interface. However, sending and receiving of information over a network and execution of algorithms (including assessing customer friction and implementation of applied machine learning algorithms) on a computer are utilized only to facilitate the abstract concepts (i.e. selecting data on an interface, publishing/displaying information, etc.). As such, Examiner asserts that the implementation of the abstract concepts recited by the claims utilizes computer technology in a way that is considered to be generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims.

Applicant argues that the previously cited reference does not teach the newly amended portions, including the new limitations recited by the independent claims. (See Applicant's Remarks, 1/20/2026, pgs. 14-15). Examiner respectfully disagrees. Examiner notes that new citations from the previously cited Tapia reference have been applied to the newly presented claim limitations as indicated above in the new 103 rejections. As such, Applicant's arguments directed towards the previous rejection are moot. In response to Applicant's arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 103 rejections presented above.

Conclusion

Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H DIVELBISS whose telephone number is (571)270-0166. The examiner can normally be reached from 7:30 am to 6:00 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jerry O'Connor, can be reached on (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about PAIR, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/M. H. D./
Examiner, Art Unit 3624

/Jerry O'Connor/
Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

Oct 19, 2023
Application Filed
May 01, 2025
Non-Final Rejection — §101, §103
Jul 16, 2025
Interview Requested
Jul 29, 2025
Examiner Interview Summary
Jul 29, 2025
Applicant Interview (Telephonic)
Aug 06, 2025
Response Filed
Aug 21, 2025
Final Rejection — §101, §103
Sep 22, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Oct 15, 2025
Non-Final Rejection — §101, §103
Jan 06, 2026
Interview Requested
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 20, 2026
Response Filed
Jan 20, 2026
Examiner Interview Summary
Feb 25, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572889
Optimization of Large-scale Industrial Value Chains
2y 5m to grant · Granted Mar 10, 2026
Patent 12503000
OPTIMIZATION PROCEDURE FOR THE ENERGY MANAGEMENT OF A SOLAR ENERGY INSTALLATION WITH STORAGE MEANS IN COMBINATION WITH THE CHARGING OF AN ELECTRIC VEHICLE AND SYSTEM
2y 5m to grant · Granted Dec 23, 2025
Patent 12493860
WASTE MANAGEMENT SYSTEM AND METHOD
2y 5m to grant · Granted Dec 09, 2025
Patent 12482011
FAMILIARITY DEGREE ESTIMATION APPARATUS, FAMILIARITY DEGREE ESTIMATION METHOD, AND RECORDING MEDIUM
2y 5m to grant · Granted Nov 25, 2025
Patent 12450574
METHOD FOR WASTE MANAGEMENT UTILIZING ARTIFICAL NEURAL NETWORK SYSTEM
2y 5m to grant · Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 23%
Grant Probability with Interview: 46% (+23.4%)
Median Time to Grant: 4y 1m
PTA Risk: High
Based on 367 resolved cases by this examiner. Grant probability derived from career allow rate.
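The projection figures appear to follow simple arithmetic from the examiner statistics: the baseline grant probability tracks the career allow rate, and the with-interview figure is roughly that baseline plus the interview lift in percentage points. The snippet below reproduces that arithmetic as a sanity check; how the tool actually computes these projections is an assumption on our part.

```python
# Reproducing the projection arithmetic shown above (assumed, not confirmed):
# baseline grant probability = career allow rate; with-interview probability
# adds the stated interview lift in percentage points.

career_allow_rate = 0.23      # examiner's career allow rate (367 resolved cases)
interview_lift    = 0.234     # +23.4 percentage points observed with an interview

baseline = career_allow_rate
with_interview = baseline + interview_lift

print(f"Baseline grant probability:     {baseline:.0%}")        # ~23%
print(f"Grant probability w/ interview: {with_interview:.0%}")  # ~46%
```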
