Prosecution Insights
Last updated: April 19, 2026
Application No. 18/704,489

COMPUTERIZED WEIGHTED PROBLEM IMPACT SCORE CALCULATION SYSTEM AND A METHOD THEREOF

Non-Final OA: §101, §102, §103, §112
Filed: Apr 25, 2024
Examiner: OBAID, HAMZEH M
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: BAHCESEHIR UNIVERSITESI
OA Round: 1 (Non-Final)
Grant Probability: 39% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 0m
Grant Probability with Interview: 59%

Examiner Intelligence

Career Allow Rate: 39% (grants only 39% of cases; 66 granted / 169 resolved; -12.9% vs TC avg)
Interview Lift: +19.9% across resolved cases with interview (strong, roughly +20%)
Typical Timeline: 3y 0m average prosecution; 46 applications currently pending
Career History: 215 total applications across all art units

Statute-Specific Performance

§101: 27.6% (-12.4% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 9.5% (-30.5% vs TC avg)
§112: 10.0% (-30.0% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 169 resolved cases
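As a sanity check, the headline figures above are consistent with the raw counts. A minimal Python sketch of the arithmetic (the TC-average value is back-computed from the stated delta, an assumption; the dashboard's exact rounding rules are unknown):

```python
# Career allow rate from the raw counts shown above
granted, resolved = 66, 169
allow_rate = granted / resolved     # ~0.3905, displayed as 39%

# Tech Center average implied by the "-12.9% vs TC avg" delta
# (assumption: the delta is in percentage points)
tc_avg = allow_rate + 0.129         # ~0.52

# Interview lift: grant probability rises from ~39% to 59% with an interview
lift = 0.59 - allow_rate            # ~0.20, displayed as +19.9%

print(f"allow rate {allow_rate:.1%}, implied TC avg {tc_avg:.1%}, lift {lift:+.1%}")
```

The displayed "+19.9%" matches the unrounded difference, which suggests the dashboard computes the lift before rounding the base rate to 39%.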

Office Action

Rejection grounds: §101, §102, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

DETAILED ACTION

This is a non-final, first office action on the merits. Claims 1-25 are pending.

Specification

Applicant is reminded of the proper content of an abstract of the disclosure. A patent abstract is a concise statement of the technical disclosure of the patent and should include that which is new in the art to which the invention pertains. The abstract should not refer to purported merits or speculative applications of the invention and should not compare the invention with the prior art. If the patent is of a basic nature, the entire technical disclosure may be new in the art, and the abstract should be directed to the entire disclosure. If the patent is in the nature of an improvement in an old apparatus, process, product, or composition, the abstract should include the technical disclosure of the improvement. The abstract should also mention by way of example any preferred modifications or alternatives. Where applicable, the abstract should include the following: (1) if a machine or apparatus, its organization and operation; (2) if an article, its method of making; (3) if a chemical compound, its identity and use; (4) if a mixture, its ingredients; (5) if a process, the steps. Extensive mechanical and design details of an apparatus should not be included in the abstract. The abstract should be in narrative form and generally limited to a single paragraph within the range of 50 to 150 words in length. See MPEP § 608.01(b) for guidelines for the preparation of patent abstracts.

Claim Objections

Claims 1-25 are objected to because of the following informalities: claims 1-25 recite the acronym UX without first expanding it; the claims should recite user experience (UX). Appropriate correction is required.
Claim 2 is objected to because of the following informalities: claim 2 contains bullet points for each limitation; to be consistent with independent claim 1, the examiner recommends that the applicant remove the bullet points. Appropriate correction is required.

Claims 12-15 and 19-21 are objected to because of the following informalities: claims 12-15 and 19-21 contain the formula "C-WPI/PImax"; the examiner recommends that the applicant define C-WPI/PImax, for example: C-WPI/PImax, wherein C-WPI is the computerized weighted problem impact score and PImax is the maximum possible impact score. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 1 recites the limitations "the user experience improvement", "the system under review", "the heuristic evaluation", "the server (130)", "the severity of UX problem;", "the improvement priority", "the users via", "the maximum possible", and "the impact value". There is insufficient antecedent basis for these limitations in the claim.

Claim 2 recites the limitations "the detected UX problem", "the interface", "the application (120)", "the severity level", "the level of importance", "the answers", "the questions asked", "the server (130)", "the number of problem", "the examined system", "the descriptions", and "the database (140)". There is insufficient antecedent basis for these limitations in the claim.

Claim Rejections - 35 USC § 101

35 U.S.C.
§ 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-25 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to non-statutory subject matter, specifically an abstract idea without a practical application or significantly more than the abstract idea.

Under the 35 U.S.C. §101 subject matter eligibility two-part analysis, Step 1 addresses whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter. See MPEP §2106.03. If the claim does fall within one of the statutory categories, it must then be determined in Step 2A [prong 1] whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, or abstract idea). See MPEP §2106.04. If the claim is directed toward a judicial exception, it must then be determined in Step 2A [prong 2] whether the judicial exception is integrated into a practical application. See MPEP §2106.04(d). Finally, if the judicial exception is not integrated into a practical application, it must additionally be determined in Step 2B whether the claim recites "significantly more" than the abstract idea. See MPEP §2106.05.

Examiner note: The Office's 2019 Revised Patent Subject Matter Eligibility Guidance (2019 PEG) is currently found in the Ninth Edition, Revision 10.2019 (revised June 2020) of the Manual of Patent Examining Procedure (MPEP), specifically incorporated in MPEP §2106.03 through MPEP §2106.07(c).

Regarding Step 1: Claims 1-6 are directed toward a method (process), claims 8-13 are directed to a system (machine), and claims 15-23 are directed to a non-transitory medium (machine). Thus, all claims fall within one of the four statutory categories as required by Step 1.
Regarding Step 2A [prong 1]: Claims 1-25 are directed toward the judicial exception of an abstract idea.

Regarding independent claim 1, the bolded limitations emphasized below correspond to the abstract ideas of the claimed invention:

Claim 1. A computerized weighted problem impact score calculation system (100) that allows for evaluating the user experience improvement priority for the system under review, based on user experience problems identified in the heuristic evaluation, characterized by comprising;

at least one electronic device (110) enabling that at least one application (120) is executed thereon, and enabling users to enter and exchange data,

at least one database (140) that records computerized weighted problem impact score information calculated by the server (130) and questions to be asked to UX experts in order to determine the severity of UX problems;

at least one application (120) that allows UX experts to ask questions through an interface to determine the severity of the UX problems they have identified, and that provides the weighted problem impact score calculated by the server (130) and the improvement priority level information to be presented to the users via an interface,

at least one server (130) that allows for calculating the improvement priority level by being in communication with the application (120) over the electronic device (110) and by calculating the maximum possible problems impact score and computerized weighted problem impact score with the impact value given to the severity levels and UX problems over the application (120).

Regarding independent claim 2, the bolded limitations emphasized below correspond to the abstract ideas of the claimed invention:

Claim 2.
A computerized weighted problem impact score calculation method, characterized by comprising the process steps of;

entering the detected UX problems by UX experts into the interface presented by the application (120),

asking questions to UX experts by the application (120) to determine the severity level of UX problems,

determining the level of importance of UX problems according to the answers given to the questions asked by the server (130),

entering the impact value determined by the UX experts for the problem severity levels into the interface offered by the application (120),

calculating the weighted problem impact score by the server (130) by using the number of problems and the impact values,

receiving the high severity impact value as the maximum possible problem impact score by the server (130),

determining the improvement priority level for the examined system by the server (130) by using the maximum possible problem impact score and the computerized weighted problem impact score,

saving the descriptions and severity levels of the detected UX problems, the calculated improvement priority level, the maximum possible problem impact score, and the computerized weighted problem impact score into the database (140),

presenting the system improvement priority level to the UX experts by the application (120).

The Applicant's Specification, titled "COMPUTERIZED WEIGHTED PROBLEM IMPACT SCORE CALCULATION SYSTEM AND A METHOD THEREOF", summarizes the invention as follows: "In summary, the present disclosure relates to methods and systems for calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts" (Spec. page 1).
As the bolded claim limitations above demonstrate, independent claims 1 and 2 recite the abstract idea of a calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts. This is considered certain methods of organizing human activity because the bolded claim limitations pertain to (i) fundamental economic principles or practices (including mitigating risk) and (ii) commercial or legal interactions. See MPEP §2106.04(a)(2)(II).

Applicant's claims as recited above provide a business solution of calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts. Applicant's claimed invention pertains to fundamental economic principles or practices and commercial/legal interactions because the limitations recite such a calculation, which pertains to "mitigating risk" and "agreements in the form of contracts; legal obligations; behaviors; business relations" expressly categorized under commercial/legal interactions. See MPEP §2106.04(a)(2)(II).

Also, Applicant's claims as recited above recite steps of mathematical calculation that allow for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts.
Applicant's claimed invention pertains to mathematical relationships, mathematical formulas or equations, and/or mathematical calculations because the limitations recite a calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts, which pertains to "mathematical relationships, mathematical formulas or equations and/or mathematical calculations" expressly categorized under Mathematical Concepts. See MPEP §2106.04(a)(2)(II).

Dependent claims 3-25 further reiterate the same abstract ideas with further embellishments, such as:

Claim 3, characterized by comprising; database (140) that allows for storing the improvement priority level information calculated by the server (130).

Claim 4, characterized by comprising application (100) that enables UX experts to enter the UX problems they have identified and the impact values depending on the severity level of these problems through the interface.

Claim 5, characterized by comprising the application (120) that enables determining the severity level when a question asked to the UX specialist is answered yes, and the next question to be asked when the answer is no.

Claim 6, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem prevents the completion of the task or changes user preference, and if the answer is yes, the severity level to be determined as high, and the next question to be asked if the answer is no.

Claim 7, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem negatively affects user preference by causing low user performance and low user satisfaction, and if the answer to this question is yes, the severity level to be determined as medium, and the next question to be asked if the answer is no.
Claim 8, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem is only visual or partially affects user performance, and if the answer is yes, the severity level to be determined as low, and if the answer is no, an error message to be presented on the application (120) as "this problem may not be a UX problem, please reevaluate".

Claim 9, characterized by comprising the application (120) that allows UX experts to enter impact values for each of the low, medium, and high severity levels.

Claim 10, characterized by comprising the server (130) that allows for calculating the computerized weighted problem impact score by summing the values obtained by multiplying the number of high severity UX problems by the high severity impact value, multiplying the number of medium severity UX problems by the medium severity impact value, and multiplying the number of low severity UX problems by the low severity impact value, and by dividing this sum by the total number of problems.

Claim 11, characterized by comprising the server (130) ensuring that the impact value determined for high severity is taken as the maximum possible impact score.

Claim 12, characterized by comprising the server (130) that allows for determining the improvement priority level as "very high" if at least one high severity UX problem has been detected or if the C-WPI/PImax value is greater than 0.75.

Claim 13, characterized by comprising the server (130) that allows for determining the improvement priority level as "high" if at least one high severity UX problem has been detected or if the C-WPI/PImax value is between 0.50 and 0.75.

Claim 14, characterized by comprising the server (130) that allows for determining the improvement priority level as "medium" if a high severity UX problem has not been detected or if the C-WPI/PImax value is between 0.25 and 0.50.
Claim 15, characterized by comprising the server (130) that allows for determining the improvement priority level as "low" if a high severity UX problem has not been detected or if the C-WPI/PImax value is lower than 0.25.

Claim 16, characterized in that, in the process step of asking questions to UX experts by the application (120) to determine the severity of UX problems, the UX experts are asked by the application (120) whether this problem is preventing the completion of the task or changing user preference, and if the answer to this question is yes, the severity level is determined as high; if the answer is no, the next question is asked.

Claim 17, characterized in that, in the process step of asking questions to UX experts by the application (120) to determine the severity level of UX problems, the application (120) asks the UX expert whether this problem affects user preference negatively by leading to low user performance and low user satisfaction, and if the answer to this question is yes, the severity level is determined as medium; if the answer is no, the next question is asked.

Claim 18, characterized in that, in the process step of asking questions to UX experts by the application (120) to determine the severity level of UX problems, the application (120) asks the UX expert whether this problem is just visual or partially affects user performance, and if the answer to this question is yes, the severity level is determined as low, and if the answer to this question is no, an error message is presented on the application (120) as "this problem may not be a UX problem, please reevaluate".
Claim 19, characterized in that, in the process step of determining the improvement priority level for the examined system by the server (130) by using the maximum possible problem impact score and the computerized weighted problem impact score, if at least one UX problem of high severity has been detected by the server (130), or if the value of C-WPI/PImax is greater than 0.75, the improvement priority level is determined as "very high".

Claim 20, characterized in that, in the same process step, if at least one UX problem of high severity has been detected by the server (130), or if the value of C-WPI/PImax is between 0.50 and 0.75, the improvement priority level is determined as "high".

Claim 21, characterized in that, in the same process step, if a high severity UX problem has not been detected by the server (130) and if the value of C-WPI/PImax is between 0.25 and 0.50, the improvement priority level is determined as "medium".

Claim 22, characterized in that, in the same process step, if a high severity UX problem has not been detected by the server (130) and if the value of C-WPI/PImax is less than 0.25, the improvement priority level is determined as "low".
Claim 23, characterized in that, in the process step of calculating the weighted problem impact score by the server (130) by using the impact values according to the number of problems and the severity levels thereof, the computerized weighted problem impact score is calculated by summing the values obtained by multiplying the number of high severity UX problems by the high severity impact value, multiplying the number of medium severity UX problems by the medium severity impact value, and multiplying the number of low severity UX problems by the low severity impact value, and by dividing this sum by the total number of problems.

Claim 24, characterized in that, in the process step of receiving the high severity impact value as the maximum possible problem impact score (PImax) by the server (130), the impact value determined by the server (130) for the high severity UX problems is accepted as the maximum possible problem impact score.

Claim 25, characterized in that, in the process step of entering the UX problem descriptions determined by the UX experts into the interface offered by the application (120), UX problems are problems such as: interface design that will cause user error; terms and icons used in the design that are not compatible with those used in reality; failure to provide the user with the function of understanding and undoing an incorrect operation; visual design that does not provide enough feedback to the user and does not show whether a process is progressing; deficiencies in the navigation of the website; designs that require unnecessary processing; not remembering the user's previous actions and constantly asking for the same information; color and shape choices that are complex and challenging in interface design; and error messages that do not properly prompt the user.

These dependent claims are nonetheless directed towards fundamentally the same abstract ideas as indicated for independent claims 1 and 2.
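Claims 10-15 and their method counterparts 19-24 specify the scoring algorithm completely. A minimal Python sketch of that logic (function and variable names are illustrative, not from the application; note that claims 12/19 and 13/20 both trigger on "at least one high severity problem", which this sketch resolves by testing the "very high" branch first):

```python
def severity(prevents_task: bool, hurts_performance: bool, visual_only: bool) -> str:
    """Severity decision tree paraphrased from claims 6-8 / 16-18."""
    if prevents_task:        # prevents task completion or changes user preference
        return "high"
    if hurts_performance:    # causes low user performance / satisfaction
        return "medium"
    if visual_only:          # only visual or partially affects performance
        return "low"
    raise ValueError("this problem may not be a UX problem, please reevaluate")

def c_wpi(counts: dict, impact: dict) -> float:
    """Claims 10/23: sum of (count x impact) per severity, over total problems."""
    total = sum(counts.values())
    return sum(counts[s] * impact[s] for s in counts) / total

def priority(score: float, pi_max: float, any_high: bool) -> str:
    """Claims 12-15 / 19-22: map the C-WPI/PImax ratio to a priority level."""
    ratio = score / pi_max
    if any_high or ratio > 0.75:
        return "very high"
    if ratio >= 0.50:
        return "high"
    if ratio >= 0.25:
        return "medium"
    return "low"

# Worked example: 2 medium and 3 low problems; impact values chosen arbitrarily
counts = {"high": 0, "medium": 2, "low": 3}
impact = {"high": 8.0, "medium": 4.0, "low": 1.0}
score = c_wpi(counts, impact)                        # (2*4 + 3*1) / 5 = 2.2
pi_max = impact["high"]                              # claims 11/24: PImax = high-severity impact value
level = priority(score, pi_max, counts["high"] > 0)  # 2.2 / 8 = 0.275 -> "medium"
```

The sketch also makes the examiner's characterization concrete: the claimed operations reduce to a weighted average and a threshold lookup, i.e., mathematical calculations performed on expert-supplied inputs.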
Regarding Step 2A [prong 2]: Claims 1-25 fail to integrate the abstract idea into a practical application. Independent claims 1 and 2 include the following additional elements, which do not amount to a practical application:

Claim 1: a calculation system; at least one electronic device (110) enabling that at least one application (120) is executed thereon; at least one database (140); an interface; the server (130); at least one application (120); electronic device (110).

Claim 2: at least one electronic device (110) enabling that at least one application (120) is executed thereon; at least one database (140); an interface; the server (130); at least one application (120); electronic device (110).

The bolded limitations recited above in independent claims 1 and 2 pertain to additional elements which merely provide an abstract-idea-based solution implemented with computer hardware and software components. These additional elements fail to integrate the abstract idea into a practical application because there are (1) no actual improvements to the functioning of a computer, (2) nor to any other technology or technical field, (3) nor do the claims apply the judicial exception with, or by use of, a particular machine, (4) nor do the claims provide a transformation or reduction of a particular article to a different state or thing, (5) nor provide other meaningful limitations beyond generally linking the use of the judicial exception to a particular technological environment, in view of MPEP §2106.04(d)(1) and §2106.05(a-c & e-h), (6) nor do the claims apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition, in view of MPEP §2106.04(d)(2).
The Specification provides a high level of generality regarding the additional elements claimed, without sufficient detail or specific implementation structure so as to limit the abstract idea; see, for instance, fig. 1. Nothing in the Specification describes the specific operations recited in claim 1 (and similarly claim 2) as particularly invoking any inventive programming, or requiring any specialized computer hardware or other inventive computer components, i.e., a particular machine, or that the claimed invention is somehow implemented using any specialized element other than all-purpose computer components to perform the recited computer functions. The claimed invention is merely directed to utilizing computer technology as a tool for solving a business problem of data analytics. Nowhere in the Specification does the Applicant emphasize additional hardware and/or software elements which provide an actual improvement in computer functionality, or to a technology or technical field, other than using these elements as a computational tool to automate and perform the abstract idea. See MPEP §2106.05(a & e).

The relevant question under Step 2A [prong 2] is not whether the claimed invention itself is a practical application; instead, the question is whether the claimed invention includes additional elements beyond the judicial exception that integrate the judicial exception into a practical application by imposing a meaningful limit on the judicial exception. This is not the case with Applicant's claimed invention, which merely pertains to steps for a calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts, using the additional computer elements as a tool to perform the abstract idea and merely linking the use of the abstract idea to a particular technological environment. See MPEP §2106.04 and §2106.05(f-h).
Alternatively, the Office has long considered data gathering, analysis, and data output to be insignificant extra-solution activity, and these additional elements do not impose any meaningful limits on practicing the abstract idea. See MPEP §2106.04 and §2106.05(g). Thus, the additional elements recited above fail to provide an actual improvement in computer functionality, or to a technology or technical field. See MPEP §2106.04(d)(1) and §2106.05(a & e). Instead, the recited additional elements merely limit the invention to a technological environment in which the abstract concept identified above is implemented utilizing the computational tools provided by the additional elements to automate and perform the abstract idea, which is insufficient to provide a practical application since the additional elements do no more than generally link the use of the abstract idea to a particular technological environment. See MPEP §2106.04. Automating the recited claimed features as a combination of computer instructions implemented by computer hardware and/or software elements as recited above does not qualify an otherwise unpatentable abstract idea as patent eligible.

The current invention is a calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts.
When considered in combination, the claims do not amount to improvements of the functioning of a computer, or to any technology or technical field. Applicant's limitations as recited above do nothing more than supplement the abstract idea using additional hardware/software computer components as a tool to perform the abstract idea and generally link the use of the abstract idea to a technological environment, which is not sufficient to integrate the judicial exception into a practical application since they do not impose any meaningful limits.

Dependent claims 3-25 merely incorporate the additional elements recited above, along with further embellishments of the abstract idea of independent claims 1 and 2 respectively; these features only serve to further limit the abstract idea of independent claims 1 and 2. Furthermore, merely using the computer as a tool to apply instructions of the abstract idea does nothing more than provide insignificant extra-solution activity, since the steps amount to data gathering, analysis, and outputting. They do not pertain to a technological problem being solved in a meaningful way beyond generally linking the use of the judicial exception to a particular technological environment, and the limitations fail to achieve an actual improvement in computer functionality or an improvement in specific technology other than using the computer as a tool to perform the abstract idea. Therefore, the additional elements recited in the claimed invention, individually and in combination, fail to integrate the recited judicial exception into any practical application.
Regarding Step 2B: Claims 1-25 do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As described above with respect to Step 2A [prong 2], the additional elements of claims 1 and 2 include a calculation system, at least one electronic device (110) enabling that at least one application (120) is executed thereon, at least one database (140), an interface, the server (130), at least one application (120), and electronic device (110). The displaying interface and storing of data merely amount to a general purpose computer used to apply the abstract idea(s) (MPEP §2106.05(f)) and/or perform insignificant extra-solution activity, e.g., data retrieval and storage, as described above (MPEP §2106.05(g)), which are further merely well-understood, routine, and conventional activities as evidenced by MPEP §2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality).

Therefore, the combination and arrangement of the above-identified additional elements, when analyzed under Step 2B, similarly fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to a calculation that allows for evaluating the user experience improvement priority for a system under review based on different inputs and questions and displaying the system improvement priority to the UX experts. Claims 1-25 are accordingly rejected under 35 USC 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
REJECTIONS BASED ON PRIOR ART

Examiner Note: Some rejections will be followed or begun by an "EN" that denotes an examiner note. This is a place to further explain a rejection.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-4, 9, 11, and 24 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Prachi, EP 2642444 (hereinafter Prachi).

Regarding Claim 1:

A computerized weighted problem impact score calculation system (100) that allows for evaluating the user experience improvement priority for the system under review, based on user experience problems identified in the heuristic evaluation (Prachi [0049], "Heuristic analysis"), characterized by comprising; (Prachi [0005], "evaluating a UXM score …. Levels based on the UXM score …. A pre-defined threshold value". Prachi [0010], "user experience (UX) refers to a user's perceptions and responses that result from the use or anticipated use of a product, a system, or a service. The UX of the product depends upon, for example, user's requirements, user's emotions, expectations from the software product, and context of use".
EN: further in [0036-0037], “communicate with the external repository through the interface(s) to obtain information from the data 118”) at least one electronic device (110) enabling that at least one application (120) is executed thereon, and enabling users to enter and exchange data, (Prachi [0012], “checklists, and user test methodologies for identifying flaws in the usability of the software products”. EN: Prachi [0042-0045] disclose enabling users to enter and exchange data.) at least one database (140) that records computerized weighted problem impact score information calculated by the server (130) and questions to be asked to UX experts in order to determine the severity of UX problems; (Prachi [0043-0047], “the assignment module 120 may facilitate assigning a weight to each of the UX parameters corresponding to the importance of the each of the one or more UX parameters. In cases, where no weight is assigned to a UX parameter, the assignment module 120 may define a default weight of '1' to that UX parameter … plurality of questions that may be used for creating an assessment questionnaire”.) At least one application (120) that allows UX experts to ask questions through an interface to determine the severity of the UX problems they have identified, and that provides the weighted problem impact score calculated by the server (130) and the improvement priority level information to be presented to the users via an interface, (EN: Examiner interpretation with regard to the severity and priority is “low, medium, or high” in view of dependent claim 9. See Prachi [0022], “higher hierarchical UXM level in a similar manner. However, if the UXM score of the software product is less than the pre-defined threshold value the software product may be categorized in a lower hierarchical UXM level, if any. 
For example, if the software product is being assessed for L 1 and the UXM score of the software product is less than the pre-defined threshold value for that level, the software product will be considered as not usable as L 1 is the lowest hierarchical UXM level. In remaining three levels of the hierarchy, if the UXM score for that level is less than the pre-defined threshold value, the software product is categorized in a lower hierarchical UXM level”. [0042-0048], “each attribute may be further elaborated in the form of questions that may be asked to the user across the hierarchical UXM levels for accessing the software product. It will be understood that although Table 3 has been described with reference to the attributes of one UX parameter, the subject matter is implemented to do the same for the remaining parameters as mentioned in Table 1. In accordance with the present implementation, the UXMM may be configured as a scalable model. In other words, the UXMM may facilitate addition of hierarchical UXM levels, various UX parameters, and attributes at a later point in time based on the requirement of the software product”.) at least one server (130) that allows for calculating the improvement priority level by being in communication with the application (120) over the electronic device (110) and by calculating the maximum possible problems impact score and computerized weighted problem impact score with the impact value given to the severity levels and UX problems over the application (120). (Prachi [0069-0078], “indicating a highest level … each of the one or more UX parameters may include a plurality of attributes. The plurality of attributes may be understood as characteristics of each of the one or more UX parameters. For example, the UXMM may include a plurality of review questions based on which the software product may be assessed for each of the one or more UX parameters associated with the hierarchical UXM levels. 
The plurality of attributes may be provided ratings by expert reviewers, such as through the user device 104. The ratings may be provided based on various analysis techniques that may be conducted for identifying the UXM level of the software product. The analysis techniques that may be employed by the UXMM may include heuristic analysis, an expert evaluation, a mock-scenario based testing, a competitor benchmarking technique, and analysis of emotions of the users … defined to be 50%. Further, assuming that the evaluation module 122 has calculated the UXM score of the software product for L 1 to be 79%.”) Regarding Claim 2: A computerized weighted problem impact score calculation method, characterized by comprising the process steps of; Entering the detected UX problems by UX experts into the interface presented by the application (120), (Prachi [0005], “evaluating a UXM score …. Levels based on the UXM score …. A pre-defined threshold value”. Prachi [0010], “user experience (UX) refers to a user's perceptions and responses that result from the use or anticipated use of a product, a system, or a service. The UX of the product depends upon, for example, user's requirements, user's emotions, expectations from the software product, and context of use”. EN: further in [0036-0037], “communicate with the external repository through the interface(s) to obtain information from the data 118”) Asking questions to UX experts by the application (120) to determine the severity level of UX problems, (Prachi [0043-0047], “the assignment module 120 may facilitate assigning a weight to each of the UX parameters corresponding to the importance of the each of the one or more UX parameters. In cases, where no weight is assigned to a UX parameter, the assignment module 120 may define a default weight of '1' to that UX parameter … plurality of questions that may be used for creating an assessment questionnaire”.) 
Determining the level of importance of UX problems according to the answers given to the questions asked by the server (130), Entering the impact value determined by the UX experts for the problem severity levels into the interface offered by the application (120), Calculating the weighted problem impact score by the server (130) by using the number of problems and the impact values, (EN: Examiner interpretation with regard to the severity and priority is “low, medium, or high” in view of dependent claim 9. See Prachi [0022], “higher hierarchical UXM level in a similar manner. However, if the UXM score of the software product is less than the pre-defined threshold value the software product may be categorized in a lower hierarchical UXM level, if any. For example, if the software product is being assessed for L 1 and the UXM score of the software product is less than the pre-defined threshold value for that level, the software product will be considered as not usable as L 1 is the lowest hierarchical UXM level. In remaining three levels of the hierarchy, if the UXM score for that level is less than the pre-defined threshold value, the software product is categorized in a lower hierarchical UXM level”. [0042-0048], “each attribute may be further elaborated in the form of questions that may be asked to the user across the hierarchical UXM levels for accessing the software product. It will be understood that although Table 3 has been described with reference to the attributes of one UX parameter, the subject matter is implemented to do the same for the remaining parameters as mentioned in Table 1. In accordance with the present implementation, the UXMM may be configured as a scalable model. In other words, the UXMM may facilitate addition of hierarchical UXM levels, various UX parameters, and attributes at a later point in time based on the requirement of the software product”.) 
Receiving the high severity impact value as the maximum possible problem impact score by the server (130), Determining the improvement priority level for the examined system by the server (130) by using the maximum possible problem impact score and the computerized weighted problem impact score, (Prachi [0069-0078], “indicating a highest level … each of the one or more UX parameters may include a plurality of attributes. The plurality of attributes may be understood as characteristics of each of the one or more UX parameters. For example, the UXMM may include a plurality of review questions based on which the software product may be assessed for each of the one or more UX parameters associated with the hierarchical UXM levels. The plurality of attributes may be provided ratings by expert reviewers, such as through the user device 104. The ratings may be provided based on various analysis techniques that may be conducted for identifying the UXM level of the software product. The analysis techniques that may be employed by the UXMM may include heuristic analysis, an expert evaluation, a mock-scenario based testing, a competitor benchmarking technique, and analysis of emotions of the users … defined to be 50%. Further, assuming that the evaluation module 122 has calculated the UXM score of the software product for L 1 to be 79%.”) Saving the descriptions and severity levels of the detected UX problems, the calculated improvement priority level, the maximum possible problem impact score, and the computerized weighted problem impact score into the database (140), Presenting the system improvement priority level to the UX experts by the application (120). (Prachi [0043-0047], “the assignment module 120 may facilitate assigning a weight to each of the UX parameters corresponding to the importance of the each of the one or more UX parameters. 
In cases, where no weight is assigned to a UX parameter, the assignment module 120 may define a default weight of '1' to that UX parameter … plurality of questions that may be used for creating an assessment questionnaire”.) Regarding Claim 3: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising; database (140) that allows for storing the improvement priority level information calculated by the server (130). (Prachi [0031], “may store data in various format”. Prachi [0055], “the experts may provide ratings to the each of the plurality of attributes based on the responses received from the users or from the volunteers. The assignment module 120 may also be configured to store the ratings (EN: score) provided to different attributes associated with each of the UX parameter as ratings 130”. Also, see [0036], [0039]. EN: calculating a score and storing it.) Regarding Claim 4: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising application (120) that enables UX experts to enter the UX problems they have identified and the impact values depending on the severity level of these problems through the interface. (Prachi [0017-0019], “addressing a particular problem and may not be useful otherwise. In case the software product is unable to meet the conditions of L 1, the software product may not qualify as usable. The software product may be categorized in L2, by the UXMM, if the software product is assessed to be useful; however, the software product may lack differentiating aspects with respect to its competitors. Similarly, the software product may be categorized in L3 if the software product may be found to be a market leader and may have an edge over the competitors. Finally, the software product may be categorized in L4 if the users are extremely satisfied to use the software product. 
In one implementation, the system of the present subject matter may be configured to assess the software product for a UXM level only when the software product has been assessed to belong to a lower UXM level, if any. For example, a software product is assessed for L2 only when the software product has met the criteria for L 1. This may facilitate in assessing the increasing maturity of the UX of the software product with each hierarchical UXM level”.) Regarding Claim 9: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising the application (120) that allows UX experts to enter impact values for each of the low, medium, and high severity levels. (See Prachi fig. 1A [0069], “with L 1 indicating a lowest level of UXM (EN: low), L2 indicating a first intermediate level of UXM (EN: medium), L3 indicating a second intermediate level of UXM, and L4 indicating a highest level of UXM (EN: high)”.) Regarding Claim 11: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising the server (130) ensuring that the impact value determined for high severity is taken as the maximum possible impact score. (Prachi [0042-0044], “the highest level, such as L4, of the hierarchical UXM levels represents delight that may be experienced by the users”.) Regarding Claim 24: A computerized weighted problem impact score calculation method according to Claim 2, characterized in that; in the process step of receiving the high severity impact value as the maximum possible problem impact score (Plmax) by the server (130), the impact value determined by server (130) for the high severity UX problems is accepted as the maximum possible problem impact score. (Prachi [0042-0044], “the highest level, such as L4, of the hierarchical UXM levels represents delight that may be experienced by the users”.) Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 
103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 5-8 and 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Prachi EP 2642444 (hereinafter Prachi) in view of Kopikare US 2019/0066136 (hereinafter Kopikare). Regarding Claim 5: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising the application (120) that enables determining the severity level when the questions asked to the UX specialist is answered yes, and the next question to be asked when the answer is no (120). (Kopikare figures 8 and 13, [0043-0045], “an indication of an answer, an actual answer, and/or an attachment. For example, a response to a multiple-choice question may include a selection of one of the available answer choices associated with the multiple-choice question. As another example, a response may include a numerical value, letter, or symbol that corresponds to an available answer choice (EN: yes or no). In some cases, a response may include a numerical value that is the actual answer to a corresponding survey question”. EN: figures 8 and 18 disclose options based on user response to question(s).) 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert and generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]). Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). Regarding Claim 6: A computerized weighted problem impact score calculation system (100) according to Claim 1, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem prevents the completion of the task or changes user preference, and if the answer is yes, the severity level to be determined as high, and the next question to be asked if the answer is no. (Kopikare [0132], “provide a follow-up question in response to determining that the sentiment associated with topic B satisfies a particular logic rule (e.g., the sentiment is positive, the sentiment is above a particular threshold, etc.). On the other hand, in response to determining that the sentiment (or other response feature) does not satisfy the particular logic rule (or determining that the sentiment satisfies a different logic rule), the conversational survey system 106 performs act 1016b to provide a different follow-up question”. Also, [0137], figure 13, 
“while not providing follow-up questions” (EN: not completing).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert and generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]). Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). Regarding Claim 7: A computerized weighted problem impact score calculation system (100) according to Claim 6, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem negatively affects user preference by causing low user performance and low user satisfaction, and if the answer to this question is yes, the severity level to be determined as medium, and the next question to be asked if the answer is no. (Kopikare [0068], “where more negative sentiments correlate to more negative opinions and more positive sentiments correlate to more positive opinions. Additionally, a response feature can include a magnitude. As used herein, the term "magnitude" reflects an effort that a respondent (e.g., respondent 122) goes through to provide a survey response, and may be represented by a magnitude score or rating. For example, a magnitude may refer to a length of a response, a composition time of a response, an input type of a response (e.g., text, image, voice, etc.), or a combination thereof. 
For instance, a magnitude may refer to a score to reflect a number of characters or length of a portion of a response pertaining to a particular entity. A magnitude may be on a scale from 0 to 100 (or some other fixed scale) or else may be an open-ended rating that correlates to an amount of effort expended to express an opinion relating to a particular entity”. Kopikare [0071-0073], “upon identifying a negative sentiment associated with a particular product (act 212), the conversational survey system 106 may determine that a logical condition associated with the particular product is triggered by the survey response. Since the sentiment toward the product is negative, the conversational survey system 106 may generate a follow-up question that is different from a potential follow-up question”.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert and generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]). Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). 
Regarding Claim 8: A computerized weighted problem impact score calculation system (100) according to Claim 7, characterized by comprising the application (120) that enables the UX expert to be asked whether this problem is only visual or partially affects user performance, and if the answer is yes, the severity level to be determined as low (120), and if the answer is no, an error message to be presented on the application (120) as "this problem may not be a UX problem, please reevaluate". (Kopikare [0073-0077], “conversational survey system 106 identifies a positive sentiment toward a particular topic, the conversational survey system 106 may generate a follow-up question that includes a thank-you message ("We are so glad you enjoyed product A."). In either case, based on determining which logical condition is triggered within the survey flow, the conversational survey system 106 generates a follow-up question that matches the logical condition”. Also, see figure 13, [0031].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert, generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]), and generating a message display to a user. Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). 
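The yes/no triage recited in claims 6-8 (and mirrored in method claims 16-18) forms a simple decision tree: each "yes" answer fixes a severity level, each "no" falls through to the next question, and a final "no" yields the claimed re-evaluation message. The following is an illustrative sketch of that claimed flow, not code from the application; the function name, parameter names, and return values are hypothetical.

```python
# Hypothetical sketch of the severity-triage flow of claims 6-8.
# Each boolean is the UX expert's yes/no answer to one claimed question.

def triage_severity(prevents_task: bool,
                    lowers_performance_or_satisfaction: bool,
                    visual_or_partial_only: bool) -> str:
    """Map the three yes/no expert answers to a claimed severity level."""
    if prevents_task:
        # Claim 6: problem prevents task completion or changes user preference.
        return "high"
    if lowers_performance_or_satisfaction:
        # Claim 7: problem causes low user performance and low satisfaction.
        return "medium"
    if visual_or_partial_only:
        # Claim 8: problem is only visual or partially affects performance.
        return "low"
    # Claim 8, final "no": the claimed error message.
    return "this problem may not be a UX problem, please reevaluate"
```

For example, a problem that blocks task completion maps directly to `triage_severity(True, False, False)`, i.e., "high", while three "no" answers produce the claimed re-evaluation prompt.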
Regarding Claim 16: A computerized weighted problem impact score calculation method according to Claim 2, characterized in that; in the process step of asking questions to UX experts by the application (120) to determine the severity of UX problems, (Kopikare figures 8 and 13, [0043-0045], “an indication of an answer, an actual answer, and/or an attachment. For example, a response to a multiple-choice question may include a selection of one of the available answer choices associated with the multiple-choice question. As another example, a response may include a numerical value, letter, or symbol that corresponds to an available answer choice (EN: yes or no). In some cases, a response may include a numerical value that is the actual answer to a corresponding survey question”. EN: figures 8 and 18 disclose options based on user response to question(s).) the UX experts are asked by the application (120) whether this problem is preventing the completion of the task or changing user preference, and if the answer to this question is yes, the severity level is determined as high; if the answer is no, the next question is asked. (Kopikare [0132], “provide a follow-up question in response to determining that the sentiment associated with topic B satisfies a particular logic rule (e.g., the sentiment is positive, the sentiment is above a particular threshold, etc.). On the other hand, in response to determining that the sentiment (or other response feature) does not satisfy the particular logic rule (or determining that the sentiment satisfies a different logic rule), the conversational survey system 106 performs act 1016b to provide a different follow-up question”. Also, [0137], figure 13, 
“while not providing follow-up questions” (EN: not completing).) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert and generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]). Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). Regarding Claim 17: A computerized weighted problem impact score calculation method according to Claim 2, characterized in that; in the process step of asking questions to UX experts by the application (120) to determine the severity level of UX problems, the application (120) asks the UX expert whether this problem affects user preference negatively by leading to low user performance and low user satisfaction, and if the answer to this question is yes, the severity level is determined as medium; if the answer is no, the next question is asked. (Kopikare [0068], “where more negative sentiments correlate to more negative opinions and more positive sentiments correlate to more positive opinions. Additionally, a response feature can include a magnitude. As used herein, the term "magnitude" reflects an effort that a respondent (e.g., respondent 122) goes through to provide a survey response, and may be represented by a magnitude score or rating. 
For example, a magnitude may refer to a length of a response, a composition time of a response, an input type of a response (e.g., text, image, voice, etc.), or a combination thereof. For instance, a magnitude may refer to a score to reflect a number of characters or length of a portion of a response pertaining to a particular entity. A magnitude may be on a scale from 0 to 100 (or some other fixed scale) or else may be an open-ended rating that correlates to an amount of effort expended to express an opinion relating to a particular entity”. Kopikare [0071-0073], “upon identifying a negative sentiment associated with a particular product (act 212), the conversational survey system 106 may determine that a logical condition associated with the particular product is triggered by the survey response. Since the sentiment toward the product is negative, the conversational survey system 106 may generate a follow-up question that is different from a potential follow-up question”.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert and generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]). Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). 
Regarding Claim 18: A computerized weighted problem impact score calculation method according to Claim 2, characterized in that; in the process step of asking questions to UX experts by the application (120) to determine the severity level of UX problems, the application (120) asks the UX expert whether this problem is just visual or partially affects user performance, and if the answer to this question is yes, the severity level is determined as low, and if the answer to this question is no, presenting an error message on the application (120) as "this problem may not be a UX problem, please reevaluate". (Kopikare [0073-0077], “conversational survey system 106 identifies a positive sentiment toward a particular topic, the conversational survey system 106 may generate a follow-up question that includes a thank-you message ("We are so glad you enjoyed product A."). In either case, based on determining which logical condition is triggered within the survey flow, the conversational survey system 106 generates a follow-up question that matches the logical condition”. Also, see figure 13, [0031].) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Prachi to include the feature as taught by Kopikare, in order to enable determining the severity level based on the answer from the UX expert, generating a follow-up question based on the answer (Kopikare figures 8 and 13, [0043-0045]), and generating a message display to a user. Further, the claimed invention is merely a combination of old elements in a similar field of endeavor and, in the combination, each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Prachi and Kopikare, the results of the combination were predictable (see MPEP 2143 A). 
Allowable Subject Matter Regarding the 35 USC 103 rejection, no art rejection has been put forth for claims 10, 12-15, 19-23, and 25. The closest prior art to the invention includes Prachi Sakhardande EP 2642444: User experience maturity level assessment; Kopikare US 2019/0066136: Providing a conversational digital survey by generating digital survey questions based on digital survey response; Muto et al. US 2014/0289016: Enhancement of root cause analysis of consumer feedback using micro-surveys and applications thereof; and Alves, Rui, Pedro Valente, and Nuno Jardim Nunes. "The state of user experience evaluation practice." Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. 2014. None of the prior art of record, taken individually or in combination, teaches, inter alia, the claimed invention as detailed in claims 10 and 23, “characterized by comprising the server (130) that allows for calculating the computerized weighted problem impact score by summing the values obtained by multiplying the number of high severity UX problems by the high severity impact value; multiplying the number of medium severity UX problems by the medium severity impact value; and multiplying the number of low severity UX problems by the low severity impact value and by dividing this sum by the total number of problems”. Claims 12-15 and 19-23, “characterized by comprising the server (130) that allows for determining the improvement priority level as "very high" if at least one high severity UX problem has been detected or if the C-WPI/Plmax value is greater than 0.75; 
characterized by comprising the server (130) that allows for determining the improvement priority level as "high" if at least one high severity UX problem has been detected or if the C-WPI/Plmax value is between 0.50 and 0.75; characterized by comprising the server (130) that allows for determining the improvement priority level as "medium" if high severity UX problem has not been detected or if the C-WPI/Plmax value is between 0.25 and 0.50; characterized by comprising the server (130) that allows for determining the improvement priority level as "low" if high severity UX problem has not been detected or if the C-WPI/Plmax value is lower than 0.25”; and claim 25, characterized in that; in the process step of entering the UX problem descriptions determined by the UX experts into the interface offered by the application (120), UX problems are the problems such as interface design that will cause user error, that the terms and icons used in the design are not compatible with those used in reality, failure to provide the user with the function of understanding and undoing the incorrect operation, visual design that does not provide enough feedback to the user and does not show whether the process is progressing or not, deficiencies in the navigation of the website, designs that require unnecessary processing, not remembering the user's previous actions and constantly asking for the same information, color and shape choices that are complex and challenging in interface design, error messages that do not properly prompt the user”. The reason for not applying any rejection under 35 USC 102/103 to claims 10, 12-15, and 19-25 in the instant application is that the prior art of record fails to teach the overall combination as claimed. Therefore, it would not have been obvious to one of ordinary skill in the art to modify the prior art to meet the combination above without impermissible hindsight, and one of ordinary skill would have had no reason to do so. 
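The allowed formula of claims 10 and 23 is a severity-weighted average: each severity count is multiplied by its impact value, the products are summed, and the sum is divided by the total number of problems. Claims 12-15 and 19-23 then map the ratio of that score to the maximum possible problem impact score (the high-severity impact value, per claims 11 and 24) onto a priority level. The following is an illustrative sketch of that claimed arithmetic; the function and variable names are hypothetical, and the handling of the exact boundary values 0.25, 0.50, and 0.75 is an assumption, since the claims as quoted leave the interval endpoints open.

```python
def weighted_problem_impact_score(n_high, n_med, n_low,
                                  i_high, i_med, i_low):
    """Claims 10/23: sum of (problem count x impact value) per severity
    level, divided by the total number of problems."""
    total = n_high + n_med + n_low
    return (n_high * i_high + n_med * i_med + n_low * i_low) / total

def improvement_priority(score, max_score, n_high):
    """Claims 12-15/19-23: map the C-WPI/Plmax ratio to a priority level.
    A detected high-severity problem forces "very high" here; the claim
    language makes it a trigger for both "very high" and "high", so the
    highest level is checked first (an assumption about precedence)."""
    ratio = score / max_score
    if n_high >= 1 or ratio > 0.75:
        return "very high"
    if ratio >= 0.50:
        return "high"
    if ratio >= 0.25:
        return "medium"
    return "low"
```

For example, with no high-severity problems, two medium and two low problems, and impact values 4/2/1, the score is (2*2 + 2*1) / 4 = 1.5, the ratio is 1.5 / 4 = 0.375, and the sketched mapping yields "medium".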
Upon further searching, the examiner could not identify any prior art teaching these limitations. The prior art of record, alone or in combination, neither anticipates, reasonably teaches, nor renders obvious the Applicant's claimed invention.

Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Iryanti, Emi, Suning Kusumawardani, and Indriana Hidayah. "Determining Priority of Ten Usability Heuristic Using Consistent Fuzzy Preference Relations." 2021 9th International Conference on Cyber and IT Service Management (CITSM). IEEE, 2021.
Alves, Rui, Pedro Valente, and Nuno Jardim Nunes. "The state of user experience evaluation practice." Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational. 2014.
Schwerin et al. US 2021/0374777: Customer Loyalty Dashboard.
Rockwell et al. US 2018/0374106: System for establishing ideal experience framework.
Packer US 2017/0024753: System and method for performing a quality assessment by segmenting and analyzing verbatims.
Williams et al. US 2016/0203500: System for providing remote processing and interaction with artificial survey administrator.
Muto et al. US 2014/0289016: Enhancement of root cause analysis of consumer feedback using micro-surveys and applications thereof.
Killow et al. US 2014/0095697: Heuristic analysis of responses to user requests.
Wagner US 2012/0259676: Methods and apparatus to model consumer choice sourcing.
Kasravi et al. US 2009/0070160: Quantitative alignment of business offerings with the expectations of a business prospect.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HAMZEH OBAID, whose telephone number is (313) 446-4941. The examiner can normally be reached M-F, 8 am-5 pm EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Patricia Munson, can be reached at (571) 270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HAMZEH OBAID/
Primary Examiner, Art Unit 3624

Prosecution Timeline

Apr 25, 2024
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591835
BUILDING SYSTEM WITH BUILDING HEALTH RECOMMENDATIONS
2y 5m to grant Granted Mar 31, 2026
Patent 12561749
FIELD SURVEY SYSTEM
2y 5m to grant Granted Feb 24, 2026
Patent 12536571
DYNAMIC SERVICE QUALITY ADJUSTMENTS BASED ON CAUSAL ESTIMATES OF SERVICE QUALITY SENSITIVITY
2y 5m to grant Granted Jan 27, 2026
Patent 12505396
MACHINE LEARNED ENTITY ISSUE MODELS FOR CENTRALIZED DATABASE PREDICTIONS
2y 5m to grant Granted Dec 23, 2025
Patent 12488293
MANAGING FACILITY AND PRODUCTION OPERATIONS ACROSS ENTERPRISE OPERATIONS TO ACHIEVE SUSTAINABILITY GOALS
2y 5m to grant Granted Dec 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
39%
Grant Probability
59%
With Interview (+19.9%)
3y 0m
Median Time to Grant
Low
PTA Risk
Based on 169 resolved cases by this examiner. Grant probability derived from career allow rate.
