DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1-4, 6, 8-11, 13, and 15-18 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Referring to claims 1, 8, 15, and consequently their dependent claims, it is unclear where original support lies for “identify a team via the MLM, based on rank, the summary of the set of elements of the received dataset, and context describing an environment identified within the summary of the set of elements, to process the received dataset, wherein the team identified via the MLM has previously encountered sets of elements matching the generated summary”. While the specification was found to provide support for a summary including context describing the environment (paragraph 83), examiner is unable to find support for that context being a basis for identifying a team.
Further, while examiner has found support for “Identification of the team may be conducted by matching specialties/expertise of teams to the received dataset and previously encountered datasets by the identified team.” (paragraph 85), this does not appear to be “matching the generated summary”.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-4, 6, 8-11, 13, 15-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
At Step 1, because no statutory category rejection was set forth above, the claims have been determined to fall within a statutory category.
At Step 2A, Prong One, referring to claims 1, 8, and 15, there is disclosed a generic computer that receives data comprising a set of elements, ranks the dataset using a machine learning model, generates a summary of the set of elements via an artificial intelligence engine, identifies a team via the MLM, and transmits the dataset to the team identified. Claim 1 is recited herein: “A system for routing data transmissions using machine learning models and enriching data using artificial intelligence, the system comprising: at least one non-transitory storage device; and at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to: receive a dataset comprising a set of elements, wherein the set of elements at least partially comprises an incident report; rank the received dataset among a plurality of datasets via a machine learning model (MLM), wherein ranking the received dataset determines priority of the received dataset within the plurality of datasets; generate a summary of the set of elements within the received dataset via an artificial intelligence engine, wherein the summary of the set of elements further comprises references to previously encountered datasets associated with the incident report; identify a team via the MLM, based on rank, the summary of the set of elements of the received dataset, and context describing an environment identified within the summary of the set of elements, to process the received dataset, wherein the team identified via the MLM has previously encountered sets of elements matching the generated summary; and transmit the received dataset to the team identified by the MLM.” Claims 8 and 15 are similar.
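For illustration only, and not as part of the record, the recited steps can be reduced to the following sketch. Every class, function, and name here is a hypothetical stand-in introduced by the editor; the claim itself recites only the outcomes of ranking, summarizing, and identifying, not any particular mechanism, which is why trivial stub implementations suffice to satisfy every recited step:

```python
from dataclasses import dataclass

@dataclass
class Dataset:
    """The claimed dataset; its elements at least partially comprise an incident report."""
    elements: list
    priority: int = 0

class StubMLM:
    """Hypothetical stand-in for the claimed machine learning model (MLM)."""
    def rank(self, ds: Dataset, others: list) -> int:
        # The claim recites only that ranking determines priority, not how.
        return len(ds.elements)

    def identify_team(self, rank: int, summary: str) -> str:
        # The claim recites only the outcome of team identification.
        return "team-a" if "remote mount" in summary else "team-b"

class StubAI:
    """Hypothetical stand-in for the claimed artificial intelligence engine."""
    def summarize(self, ds: Dataset) -> str:
        return " ".join(map(str, ds.elements))

def route(ds: Dataset, mlm: StubMLM, ai: StubAI) -> str:
    """Receive, rank, summarize, identify a team, and 'transmit' (return the team)."""
    rank = mlm.rank(ds, [])
    summary = ai.summarize(ds)
    return mlm.identify_team(rank, summary)

assert route(Dataset(["remote mount failure"]), StubMLM(), StubAI()) == "team-a"
```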
The limitations of ranking, generating, and identifying, as crafted, are processes that, under their broadest reasonable interpretation, cover performance of the limitations in the mind but for the recitation of additional elements that do not integrate the judicial exception into a practical application. That is, nothing in these claim elements precludes the steps from practically being performed in the mind, possibly with the aid of pen and paper. For example, these limitations encompass observation, evaluation, judgment, or opinion.
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of additional elements that do not integrate the judicial exception into a practical application, then it falls within the "Mental Processes" grouping of abstract ideas. Accordingly, the claim recites an abstract idea.
At Step 2A, Prong Two, this judicial exception is not integrated into a practical application. In particular, the claim additionally recites a generic computer, receiving data, use of an MLM, use of an AI engine, and transmission of data.
In each of the limitations, the computer is recited at a high level of generality. This amounts to no more than mere instructions to apply the exception using a generic computer. See MPEP 2106.05(f).
The limitations of receiving and transmitting are mere data gathering and output recited at a high level of generality, and thus are insignificant extra-solution activity. See MPEP 2106.05(g) (“whether the limitation is significant”). In addition, all uses of the recited judicial exceptions require such data gathering and output, and, as such, these limitations do not impose any meaningful limits on the claim. These limitations amount to necessary data gathering and outputting. See MPEP 2106.05.
The limitations of use of the MLM and AI engine provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f). MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception.
The judicial exception of ranking and generating is performed using the MLM and AI engine. The MLM and AI engines are used to generally apply the abstract idea without placing any limits on how the MLM or AI engine function. Rather, these limitations only recite the outcome of ranking and generating and do not include any details about how the ranking or generating are accomplished. See MPEP 2106.05(f).
The recitation of ranking and generating also merely indicates a field of use or technological environment in which the judicial exception is performed. Although the additional use of the MLM and AI engine limits the identified judicial exceptions, this type of limitation merely confines the use of the abstract idea to a particular technological environment (AI) and thus fails to add an inventive concept to the claims. See MPEP 2106.05(h).
Even when viewed in combination, the additional elements in this claim do no more than automate the mental processes a person may perform, using the computer components as a tool. Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claim is directed to an abstract idea.
At Step 2B, the claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of a generic computer, receiving and transmitting data, and use of an MLM and AI engine amount to no more than mere instructions to apply the exception using generic computer components. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept.
The additional elements of using the MLM and AI engine are at best mere instructions to “apply” the abstract ideas, which cannot provide an inventive concept. See MPEP 2106.05(f).
Additional elements of receiving and transmitting were both found to be insignificant extra-solution activity in Step 2A, Prong Two, because they were determined to be insignificant limitations as necessary data gathering and outputting. However, a conclusion that an additional element is insignificant extra-solution activity in Step 2A, Prong Two should be re-evaluated in Step 2B. See MPEP 2106.05, subsection I.A. At Step 2B, the evaluation of the insignificant extra-solution activity consideration takes into account whether or not the extra-solution activity is well understood, routine, and conventional in the field. See MPEP 2106.05(g). As discussed in Step 2A, Prong Two above, the recitations of receiving and transmitting are recited at a high level of generality. These elements amount to receiving or transmitting data over a network and are well-understood, routine, conventional activity. See MPEP 2106.05(d), subsection II.
As discussed in Step 2A, Prong Two above, the recitation of a computer to perform limitations amounts to no more than mere instructions to apply the exception using a generic computer component.
Even when considered in combination, these additional elements represent mere instructions to implement an abstract idea or other exception on a computer and insignificant extra-solution activity, which do not provide an inventive concept.
Further referring to claims 2-4, these claims further recite steps of observation, evaluation, judgment, or opinion.
Further referring to claim 6, this claim likewise further recites steps of observation, evaluation, judgment, or opinion.
Referring to claims 9-11, 13, and 16-18, see claims 2-4, 6 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 8-11, and 15-18 are rejected under 35 U.S.C. 103 as being unpatentable over US20210014136 to Rath in view of US5666481 to Lewis.
Referring to claim 1, Rath discloses a system for routing data transmissions using machine learning models and enriching data using artificial intelligence, the system comprising: at least one non-transitory storage device; and at least one processing device coupled to the at least one non-transitory storage device, wherein the at least one processing device is configured to:
receive a dataset comprising a set of elements, wherein the set of elements at least partially comprises an incident report (Figure 4, 414, “Receive support ticket”.);
rank the received dataset among a plurality of datasets via a machine learning model (MLM), wherein ranking the received dataset determines priority of the received dataset within the plurality of datasets (Paragraph 64, “The production support tickets assignment system 332 can achieve optimal workload for a highly skilled and experienced support agent by reassigning, on an as-needed basis, the lower priority support tickets that the expert support agent currently has in their queue. The production server 314 can determine the priority of a support ticket by the support ticket's urgency or by the life stage of the support ticket, such as whether the central issue been resolved. If a series of complex support tickets are received and are assigned to an expert support agent, the expert support agent's workload might become full quickly. However, when critical support tickets are received, the production support tickets assignment system 332 can reassign some of the complex support tickets in the expert support agent's current queue to other support agents. This reassignment may be useful because during the period of support ticket ownership, the expert support agent should have worked the complex support tickets towards resolution, on account of being adept in handling similar complex support tickets in the past.” Paragraph 101, “Subsequent to scoring support agents, the support agent scores are used to assign a support ticket to a support agent from at least one of the sets of support agents, block 428. The system optimally assigns a support ticket to a support agent. 
By way of example and without limitation, this can include the production support tickets assignment system 332 assigning the open support ticket 200 to Dana because Dana's score of 90 is higher than Bob's score of 45, which reflects that Dana is the only support agent who is sufficiently qualified to resolve the open support ticket 200 since Dana has the experiences which Bob lacks resolving the high complexity support tickets such as the open support ticket 200”.);
generate a summary of the set of elements within the dataset via an artificial intelligence engine (Paragraph 94-96, “After being trained, a support ticket is received, block 414. The system receives a support ticket for determining the support ticket's topic(s) and complexity. For example, and without limitation, this can include the production support tickets assignment system 332 receiving support tickets, which includes the support ticket 200 that contains the subsequent communication 202 and the support ticket's metadata 204, as depicted by FIG. 2. Following the receipt of a support ticket, a topic of the support ticket is determined, block 416. The system identifies a support ticket's topic(s). By way of example and without limitation, this can include the production server 314 applying the support ticket topics machine-learning model 336 to the analysis of the support ticket 200 by the natural language processor machine-learning model 334, thereby identifying a remote mount problem as the topic of the support ticket 200. Having received a support ticket, a complexity of the support ticket is estimated, block 418. The system estimates a support ticket's complexity. In embodiments, this can include the production server 314 applying the support ticket complexities machine-learning model 338 to the natural language analysis of the support ticket 200 by the natural language processor machine-learning model 334, thereby estimating a high complexity from the customer's description of their remote mount problem because the support ticket 200 includes multiple machine language error messages.”);
identify a team via the MLM, based on rank, the summary of the set of elements of the received dataset, and context describing an environment identified within the summary of the set of elements to process the received dataset, wherein the team identified via the MLM has previously encountered sets of elements matching the generated summary; and transmit the dataset to the team identified by the MLM (Paragraph 97-101, “Subsequent to identifying a topic of a support ticket, a first set of support agents are identified who have skills handling the topic of the support ticket, block 420. The system identifies support agent who have skills to handle the support ticket's topic(s). For example, and without limitation, this can include the production server 314 applying the support agent topical skills machine-learning model 340 to the summary rows for the support agents in the support agent-topical skills matrix to identify Bob and Dana as support agents who have skills handling the remote mount problem of the open support ticket 200. After estimating a support ticket's complexity, a machine-learning model identifies a second set of support agents who have experiences resolving support tickets of the estimated complexity, block 422. The system identifies support agents who have experiences resolving support ticket of the estimated complexity. By way of example and without limitation, this can include the production server 314 applying the support agent complexity experiences machine-learning model 340 to the summary rows for the support agents in the support agent-complexity experiences matrix to identify only Dana as a support agent who has experiences resolving support tickets that have the high complexity of the remote mount problem described in the open support ticket 200. 
Following identification of support agents who can handle a support ticket's topic and resolve support tickets of the estimated complexity, workload availabilities of some of the identified support agents are projected for the support ticket, block 424. The system projects the workload availabilities of identified support agents. In embodiments, this can include the production server 312 applying the support ticket projected workloads machine-learning model 342 to the support agent-workload data structure to project the workloads of Bob and Dana, who were identified as support agents who had skills to handle remote mount problems. The production server 314 also references the support agent-availability data structure to verify that Bob and Dana will have projected availabilities to be assigned the open support ticket 200, such that Bob and Dana both have the projected workload availability for accepting assignment of the open support ticket. Having identified available support agents, support agent scores are generated based on the support agents' skills handling the topic of the support tickets, experiences resolving support tickets of the estimated complexity, and projected workload availabilities for the support ticket, block 426. The system scores support agents for optimal assignment of a support ticket. For example, and without limitation, this can include the production support tickets assignment system 332 generating a score of 45 for assigning the open support ticket 200 to Bob. This score of 45 is based on Bob having skills handling the remote mount problem of the open support ticket 200, no experiences resolving high complexity support tickets such as the open support ticket 200, a projected workload that would have permitted resolving a high complexity support ticket, and the projected availability to be assigned the open support ticket 200. 
Additionally, the production support tickets assignment system 332 generates a score of 90 for assigning the open support ticket 200 to Dana. This score of 90 is based on Dana having skills handling the remote mount problem of the open support ticket 200, experiences resolving high complexity support tickets such as the open support ticket 200, a projected workload that would have permitted resolving the high complexity support ticket 200, and the projected availability to be assigned the open support ticket 200. Subsequent to scoring support agents, the support agent scores are used to assign a support ticket to a support agent from at least one of the sets of support agents, block 428. The system optimally assigns a support ticket to a support agent. By way of example and without limitation, this can include the production support tickets assignment system 332 assigning the open support ticket 200 to Dana because Dana's score of 90 is higher than Bob's score of 45, which reflects that Dana is the only support agent who is sufficiently qualified to resolve the open support ticket 200 since Dana has the experiences which Bob lacks resolving the high complexity support tickets such as the open support ticket 200”. Paragraph 64, “The production support tickets assignment system 332 can achieve optimal workload for a highly skilled and experienced support agent by reassigning, on an as-needed basis, the lower priority support tickets that the expert support agent currently has in their queue. The production server 314 can determine the priority of a support ticket by the support ticket's urgency or by the life stage of the support ticket, such as whether the central issue been resolved. If a series of complex support tickets are received and are assigned to an expert support agent, the expert support agent's workload might become full quickly. 
However, when critical support tickets are received, the production support tickets assignment system 332 can reassign some of the complex support tickets in the expert support agent's current queue to other support agents. This reassignment may be useful because during the period of support ticket ownership, the expert support agent should have worked the complex support tickets towards resolution, on account of being adept in handling similar complex support tickets in the past.” Wherein, for example, a “remote mount problem” is context describing an environment and the agent having topical skills that match a problem is descriptive of “previously encountered sets of elements matching the generated summary” of Rath’s paragraph 95.).
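For illustration only, and not as part of the record, the assignment flow Rath describes in paragraphs 97-101 can be sketched as follows. Agents are scored on topical skill, complexity experience, and projected availability, and the ticket is assigned to the highest-scoring agent. The weighting below is a hypothetical assumption by the editor; Rath discloses only the resulting example scores (Bob: 45, Dana: 90), not a formula:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    has_topic_skill: bool            # e.g. handles "remote mount" problems
    has_complexity_experience: bool  # has resolved high-complexity tickets
    is_available: bool               # projected workload permits assignment

def score(agent: Agent) -> int:
    """Hypothetical weighting chosen to reproduce Rath's example scores."""
    if not agent.is_available:
        return 0
    s = 0
    if agent.has_topic_skill:
        s += 45
    if agent.has_complexity_experience:
        s += 45
    return s

def assign(agents: list) -> Agent:
    """Assign the open ticket to the highest-scoring agent (Rath, block 428)."""
    return max(agents, key=score)

bob = Agent("Bob", True, False, True)   # scores 45 in Rath's example
dana = Agent("Dana", True, True, True)  # scores 90 in Rath's example
assert assign([bob, dana]).name == "Dana"
```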
Although Rath does not specifically disclose wherein the summary of the set of elements further comprises references to previously encountered datasets associated with the incident report, this is very well known in the art. In a related field of computing, an example of this is shown by Lewis, from the abstract, “An improved method and apparatus of resolving faults in a communications network. The preferred system uses a trouble ticket data structure to describe communications network faults. Completed trouble tickets are stored in a library and when an outstanding trouble ticket is received, the system uses at least one determinator to correlate the outstanding communications network fault to data fields in the set of data fields of the trouble ticket data structure to determine which completed trouble tickets in the library are relevant to the outstanding communications network fault. The system retrieves a set of completed trouble tickets from the library that are similar to the outstanding trouble ticket and uses at least a portion of the resolution from at least one completed trouble ticket to provide a resolution of the outstanding trouble ticket. The determinators may be macros, rules, a decision tree derived from an information theoretic induction algorithm and/or a neural network memory derived from a neural network learning algorithm. The system may adapt the resolution from a retrieved trouble ticket to provide the resolution using null adaptation, parameterized adaptation, abstraction/respecialization adaptation, or critic-based adaptation techniques.” It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to reference previously encountered datasets because, as shown by Lewis, this may allow an outstanding issue to be resolved.
Referring to claim 2, Rath discloses wherein individual elements within the received dataset are ranked on priority according to a predetermined set of indicators (Paragraph 64, “The production support tickets assignment system 332 can achieve optimal workload for a highly skilled and experienced support agent by reassigning, on an as-needed basis, the lower priority support tickets that the expert support agent currently has in their queue. The production server 314 can determine the priority of a support ticket by the support ticket's urgency or by the life stage of the support ticket, such as whether the central issue been resolved. If a series of complex support tickets are received and are assigned to an expert support agent, the expert support agent's workload might become full quickly. However, when critical support tickets are received, the production support tickets assignment system 332 can reassign some of the complex support tickets in the expert support agent's current queue to other support agents. This reassignment may be useful because during the period of support ticket ownership, the expert support agent should have worked the complex support tickets towards resolution, on account of being adept in handling similar complex support tickets in the past.”).
Referring to claim 3, Rath discloses wherein identification of the team via the MLM further comprises determining the team from a set of teams compatible with the received dataset based on rank and the summary of the set of elements of the received dataset (Paragraph 97-101, “Subsequent to identifying a topic of a support ticket, a first set of support agents are identified who have skills handling the topic of the support ticket, block 420. The system identifies support agent who have skills to handle the support ticket's topic(s). For example, and without limitation, this can include the production server 314 applying the support agent topical skills machine-learning model 340 to the summary rows for the support agents in the support agent-topical skills matrix to identify Bob and Dana as support agents who have skills handling the remote mount problem of the open support ticket 200. After estimating a support ticket's complexity, a machine-learning model identifies a second set of support agents who have experiences resolving support tickets of the estimated complexity, block 422. The system identifies support agents who have experiences resolving support ticket of the estimated complexity. By way of example and without limitation, this can include the production server 314 applying the support agent complexity experiences machine-learning model 340 to the summary rows for the support agents in the support agent-complexity experiences matrix to identify only Dana as a support agent who has experiences resolving support tickets that have the high complexity of the remote mount problem described in the open support ticket 200. Following identification of support agents who can handle a support ticket's topic and resolve support tickets of the estimated complexity, workload availabilities of some of the identified support agents are projected for the support ticket, block 424. The system projects the workload availabilities of identified support agents. 
In embodiments, this can include the production server 312 applying the support ticket projected workloads machine-learning model 342 to the support agent-workload data structure to project the workloads of Bob and Dana, who were identified as support agents who had skills to handle remote mount problems. The production server 314 also references the support agent-availability data structure to verify that Bob and Dana will have projected availabilities to be assigned the open support ticket 200, such that Bob and Dana both have the projected workload availability for accepting assignment of the open support ticket. Having identified available support agents, support agent scores are generated based on the support agents' skills handling the topic of the support tickets, experiences resolving support tickets of the estimated complexity, and projected workload availabilities for the support ticket, block 426. The system scores support agents for optimal assignment of a support ticket. For example, and without limitation, this can include the production support tickets assignment system 332 generating a score of 45 for assigning the open support ticket 200 to Bob. This score of 45 is based on Bob having skills handling the remote mount problem of the open support ticket 200, no experiences resolving high complexity support tickets such as the open support ticket 200, a projected workload that would have permitted resolving a high complexity support ticket, and the projected availability to be assigned the open support ticket 200. Additionally, the production support tickets assignment system 332 generates a score of 90 for assigning the open support ticket 200 to Dana. 
This score of 90 is based on Dana having skills handling the remote mount problem of the open support ticket 200, experiences resolving high complexity support tickets such as the open support ticket 200, a projected workload that would have permitted resolving the high complexity support ticket 200, and the projected availability to be assigned the open support ticket 200. Subsequent to scoring support agents, the support agent scores are used to assign a support ticket to a support agent from at least one of the sets of support agents, block 428. The system optimally assigns a support ticket to a support agent. By way of example and without limitation, this can include the production support tickets assignment system 332 assigning the open support ticket 200 to Dana because Dana's score of 90 is higher than Bob's score of 45, which reflects that Dana is the only support agent who is sufficiently qualified to resolve the open support ticket 200 since Dana has the experiences which Bob lacks resolving the high complexity support tickets such as the open support ticket 200”. Paragraph 64, “The production support tickets assignment system 332 can achieve optimal workload for a highly skilled and experienced support agent by reassigning, on an as-needed basis, the lower priority support tickets that the expert support agent currently has in their queue. The production server 314 can determine the priority of a support ticket by the support ticket's urgency or by the life stage of the support ticket, such as whether the central issue been resolved. If a series of complex support tickets are received and are assigned to an expert support agent, the expert support agent's workload might become full quickly. However, when critical support tickets are received, the production support tickets assignment system 332 can reassign some of the complex support tickets in the expert support agent's current queue to other support agents. 
This reassignment may be useful because during the period of support ticket ownership, the expert support agent should have worked the complex support tickets towards resolution, on account of being adept in handling similar complex support tickets in the past.”).
Referring to claim 4, Rath discloses wherein the summary of the set of elements generated by the artificial intelligence engine provides a context for the received dataset (Paragraph 94-96, “After being trained, a support ticket is received, block 414. The system receives a support ticket for determining the support ticket's topic(s) and complexity. For example, and without limitation, this can include the production support tickets assignment system 332 receiving support tickets, which includes the support ticket 200 that contains the subsequent communication 202 and the support ticket's metadata 204, as depicted by FIG. 2. Following the receipt of a support ticket, a topic of the support ticket is determined, block 416. The system identifies a support ticket's topic(s). By way of example and without limitation, this can include the production server 314 applying the support ticket topics machine-learning model 336 to the analysis of the support ticket 200 by the natural language processor machine-learning model 334, thereby identifying a remote mount problem as the topic of the support ticket 200. Having received a support ticket, a complexity of the support ticket is estimated, block 418. The system estimates a support ticket's complexity. In embodiments, this can include the production server 314 applying the support ticket complexities machine-learning model 338 to the natural language analysis of the support ticket 200 by the natural language processor machine-learning model 334, thereby estimating a high complexity from the customer's description of their remote mount problem because the support ticket 200 includes multiple machine language error messages.”).
Referring to claims 8-11, 15-18, see rejection of claims 1-4 above.
Claim(s) 6 and 13 is/are rejected under 35 U.S.C. 103 as being unpatentable over Rath and Lewis as applied to claims 1 and 8 above, and further in view of Official Notice (root cause analysis).
Referring to claims 6 and 13, although Rath does not specifically disclose wherein the summary of the set of elements identifies potential causes of the incident report, identifying a root cause is very well known in the art. Examiner takes Official Notice that root cause analysis is well known in the related field of computing. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to identify a cause because doing so aids in diagnosis and resolution.
Response to Arguments
Applicant's arguments filed 23 December 2025 have been fully considered but they are not persuasive.
Regarding Applicant’s characterization (page 9) of their invention as addressing “routing data transmissions using machine learning models and enriching data using artificial intelligence”, the invention is more accurately described as using AI to determine a team and then assigning work to that team. As shown above in the 101 rejection, while data is transmitted, this transmission is extra-solution activity and is well-understood, routine, and conventional.
Regarding Applicant’s argument (page 10) that the claims provide a “specific manner” for this determination of a team, the recited steps merely describe observation, evaluation, judgment, or opinion.
Regarding Applicant’s argument (pages 10-11) that the invention improves by requiring fewer steps, providing more accuracy, removing manual input, and determining an optimal amount, these at best describe improvements in processes of observation, evaluation, judgment, or opinion. As to the removal of manual data entry, it is not evident where this even occurs, but even had it been present, it at best describes an additional element of data input/reception, an extra-solution activity.
Regarding Applicant’s argument (page 12) that the claims recite additional elements that are significantly more, Applicant again summarizes the claims as routing data transmissions using AI, which was deemed insignificant extra-solution activity above; no particular argument beyond the allegation is given here.
Applicant then refers to McRO (page 12), emphasizing examination of the claims as a whole and in an ordered combination, and further arguing that McRO supports the position that describing a “specific way” to solve the problem renders the claims eligible. In rejecting the claims, Examiner considered the ordering and the claims as a whole. As claimed, the invention merely receives data, analyzes it (with and without AI), and then outputs a result of the analysis. Specifically regarding the order of the MLM and AI engine steps, as written and as understood, the steps can be performed in either order, but the results from both must be available prior to making a decision that is based on those results. Beyond the mere identification of their use, the MLM and AI engine steps do nothing to further the abstract mental steps except to automate them by machine. To the extent present, the steps of receiving and transmitting merely perform I/O for analysis and output. Nothing unconventional or unexpected is presented. To the extent that a “specific” solution is presented, it only describes steps of observation, evaluation, judgment, or opinion implemented by a generic machine.
Applicant further argues (page 12) that, under Bascom, a non-conventional and non-generic arrangement of known, conventional pieces can render the claims eligible. See the analysis immediately above. Applicant then further argues that “software-based inventions that improve the performance of the computer itself” can be eligible. The claimed invention does not improve the performance of a computer itself but rather, at best, improves the performance of observation, evaluation, judgment, or opinion implemented on a generic machine.
Regarding Applicant’s argument (pages 13-14) that Rath and Lewis do not teach the newly claimed features of the invention, see the 112 and art rejections above. On page 14, Applicant further points to paragraphs 84-85, both in support of and in apparent reliance on a fleshing out of the claims. As noted above, Examiner has not found substantive support for the amended language. Applicant may supply reasoning more clearly in the absence of verbatim support. Regardless, as claimed, the language is not found to distinguish.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US20240275699, see abstract.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GABRIEL L CHU whose telephone number is (571)272-3656. The examiner can normally be reached weekdays 8 am to 5 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ashish Thomas can be reached at (571)272-0631. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GABRIEL CHU/ Primary Examiner, Art Unit 2114