Prosecution Insights
Last updated: April 19, 2026
Application No. 17/334,209

SYSTEMS AND METHODS FOR CONTEXTUAL PARTICIPATION FOR REMOTE EVENTS

Final Rejection (§101, §103)
Filed: May 28, 2021
Examiner: VASAT, PETER S
Art Unit: 3715
Tech Center: 3700 (Mechanical Engineering & Manufacturing)
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 4 (Final)
Grant Probability: 50% (Moderate)
Expected OA Rounds: 5-6
Estimated Time to Grant: 4y 1m
Grant Probability With Interview: 83%

Examiner Intelligence

Career Allow Rate: 50% (grants 50% of resolved cases; 200 granted / 397 resolved; -19.6% vs TC avg)
Interview Lift: +32.9% (strong; allowance rate for resolved cases with vs. without interview)
Avg Prosecution: 4y 1m (typical timeline; 32 applications currently pending)
Total Applications: 429 (career history, across all art units)
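The cards above can be cross-checked with a little arithmetic. The snippet below is an illustrative reconstruction using only the figures shown; treating the interview lift as a simple difference in allowance rates is our assumption, not something the report documents.

```python
# Reconstruct the examiner cards above (illustrative; formulas are assumed).
granted, resolved = 200, 397
career_allow_rate = granted / resolved      # ~50.4%, shown as "50%"

with_interview = 0.83                       # "With Interview: 83%"
interview_lift = 0.329                      # "+32.9% Interview Lift"
implied_without = with_interview - interview_lift   # baseline without interview

print(f"Career allow rate:          {career_allow_rate:.1%}")   # 50.4%
print(f"Implied rate w/o interview: {implied_without:.1%}")     # 50.1%
```

The implied no-interview baseline (~50.1%) sits right at the career allow rate, which is consistent with interviews driving essentially all of the gap up to 83%.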

Statute-Specific Performance

§101: 6.1% (-33.9% vs TC avg)
§103: 47.0% (+7.0% vs TC avg)
§102: 17.9% (-22.1% vs TC avg)
§112: 20.3% (-19.7% vs TC avg)
Deltas are measured against the Tech Center average estimate; based on career data from 397 resolved cases.
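The four deltas above all point back to a single baseline. A quick check (our reconstruction, using only the figures in the table and assuming "vs TC avg" is a simple difference) recovers the same ~40% Tech Center average estimate from every row:

```python
# Recover the implied Tech Center average from each statute's rate and delta.
# Figures are copied from the table above; the reconstruction is an assumption.
rates = {
    "101": (6.1, -33.9),
    "103": (47.0, +7.0),
    "102": (17.9, -22.1),
    "112": (20.3, -19.7),
}
for statute, (rate, delta) in rates.items():
    tc_avg = rate - delta   # examiner rate minus delta recovers the baseline
    print(f"§{statute}: implied TC avg = {tc_avg:.1f}%")
```

Every statute implies the same ~40.0% baseline, consistent with one Tech Center average estimate being used across the whole chart.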

Office Action

Grounds for rejection: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

This Office action is in response to arguments and amendments entered on August 28, 2025 for patent application 17/334,209, originally filed on May 28, 2021. Claims 1, 3, and 18 are amended. Claims 1-20 are pending. The first Office action of July 17, 2024, the second Office action of January 13, 2025, and the third Office action of May 28, 2025 are fully incorporated by reference into this Office action.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: "Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title."

Claims 1-20 are rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Claim 1 is directed to "a device" (i.e., a machine); claim 13 is directed to "a non-transitory machine-readable medium" (i.e., a machine); and claim 18 is directed to "a method" (i.e., a process); hence the claims are directed to one of the four statutory categories (i.e., process, machine, manufacture, or composition of matter).
In other words, Step 1 of the subject-matter eligibility analysis is "Yes." However, the claims are drawn to an abstract idea of "facilitating performance of operations," in the form of "certain methods of organizing human activity," in terms of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or reasonably in the form of "mental processes," in terms of processes that can be performed in the human mind (including an observation, evaluation, judgment, or opinion). Regardless, the claims are reasonably understood as either "certain methods of organizing human activity" or "mental processes," which recite the following limitations:

Per claim 1: "facilitating establishing a communication session … between the device and a first user device; receiving … a first network code from the first user device; handshaking between the device and the first user device based on the first network code resulting in a first handshake; based on the first handshake, establishing the communication session between the device and the first user device; engaging in first communications … wherein the first communications comprise a first visual representation of first actions performed by a first user, and wherein the first communications are sent from the first user device to the device; facilitating establishing the communication session, over the communication network, between the device and a second user device; receiving, over the communication network, a second network code from the second user device; handshaking between the device and the second user device based on the second network code resulting in a second handshake; based on the second handshake, establishing the communication session between the device and the second user device; engaging in second communications between the device and the second user device … wherein the second communications comprise a second visual representation of second actions performed by a second user, and wherein the second communications occur substantially simultaneously with the first communications; making a first determination via a first machine learning algorithm associated with the first user, based at least in part upon the first visual representation, whether a performance of a first task by the first user has been completed; making a second determination via a second machine learning algorithm associated with the second user, based at least in part upon the second visual representation, whether the performance of the first task by the second user has been completed, wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm; responsive to the first determination being that the performance of the first task by the first user has been completed, prompting an instructor to provide an indication of a next task to be performed by the first user; and responsive to the second determination being that the performance of the first task by the second user has not been completed, prompting the instructor to provide additional instructions to the second user to aid the second user in the performance of the first task."
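As a reading aid, the claim-1 operations recited above reduce to a simple control flow: handshake with each user device using its network code, run that user's own machine learning model over that user's visual representation, and branch the instructor prompt on whether the task is complete. The sketch below is purely illustrative; every function and variable name is hypothetical, and nothing in the claim dictates any particular implementation.

```python
# Illustrative control-flow sketch of the claim-1 operations recited above.
# All names are hypothetical; the claim does not specify an implementation.

def handshake(device_id, user_device_id, network_code):
    """Stand-in for the claimed handshake: accept the code, open a session."""
    if not network_code:
        raise ValueError("no network code received")
    return {"device": device_id, "peer": user_device_id, "open": True}

def handle_user(device_id, user_device_id, network_code, ml_model, visual_rep, task):
    session = handshake(device_id, user_device_id, network_code)
    assert session["open"]
    # Per-user machine learning determination over that user's visual representation.
    task_completed = ml_model(visual_rep, task)
    if task_completed:
        return "prompt instructor: indicate the next task"
    return "prompt instructor: provide additional instructions for the current task"

# Two users, each with a *different* ML algorithm, as the claim requires.
model_for_user_1 = lambda rep, task: True    # judges the task complete
model_for_user_2 = lambda rep, task: False   # judges the task incomplete

print(handle_user("srv", "dev-1", "code-1", model_for_user_1, ["frame"], "task-1"))
# → prompt instructor: indicate the next task
print(handle_user("srv", "dev-2", "code-2", model_for_user_2, ["frame"], "task-1"))
# → prompt instructor: provide additional instructions for the current task
```

The branch structure makes the examiner's characterization easier to evaluate: the handshake and session steps are generic networking, while the two conditional prompts are the limitations the Office Action treats as organizing human activity.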
Per claim 13: "facilitating establishing a communication session … between the device and a first user device; receiving … a first network code from the first user device; handshaking between the device and the first user device based on the first network code resulting in a first handshake; based on the first handshake, establishing the communication session between the device and the first user device; receiving from the first user device …, a first visual representation of a first action performed by a first user; facilitating establishing the communication session, over the communication network, between the device and a second user device; receiving, over the communication network, a second network code from the second user device; handshaking between the device and the second user device based on the second network code resulting in a second handshake; based on the second handshake, establishing the communication session between the processing system and the second user device; receiving from the second user device … a second visual representation of a second action performed by a second user; engaging in a first machine learning algorithm associated with the first user to determine whether, based at least in part upon the first visual representation, a performance of a first portion of a sequential process has been completed by the first user; engaging in a second machine learning algorithm associated with the second user to determine whether, based at least in part upon the second visual representation, the performance of the first portion of the sequential process has been completed by the second user, wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm; responsive to a first determination that the performance of the first portion of the sequential process by the first user has been completed, prompting an instruction provider to provide an indication of a next portion of the sequential process to be performed by the first user; and responsive to a second determination that the performance of the first portion of the sequential process by the second user has not been completed, prompting the instruction provider to provide additional instructions to the second user to aid the second user in performing the first portion of the sequential process."

Per claim 18: "facilitating establishing a communication session … between the processing system and a first end user device; receiving … a first network code from the first user device; handshaking … between the processing system and the first end user device based on the first network code resulting in a first handshake; based on the first handshake, establishing … the communication session between the processing system and the first end user device; facilitating … establishing the communication session, over the communication network, between the processing system and a second end user device; receiving …, over the communication network, a second network code from the second end user device; handshaking … between the processing system and the second end user device based on the second network code resulting in a second handshake; based on the second handshake, establishing … the communication session between the processing system and the second end user device; receiving, by the processing system, a plurality of video feeds, each of the plurality of video feeds over the communication session, each of the plurality of video feeds being provided by a respective one of a plurality of end user devices, wherein the plurality of end user devices comprises the first end user device and the second end user device; determining by the processing system, via a first machine learning algorithm associated with a first user of the first end user device, for a first video feed of the plurality of video feeds, in which particular sequential process of a plurality of potential sequential processes the first user is engaged; determining by the processing system, via a second machine learning algorithm associated with a second user of the second end user device, for a second video feed of the video feeds, in which particular sequential process of the potential sequential processes the second user is engaged, wherein the particular sequential process in which the second user is engaged is different from the particular sequential process in which the first user is engaged, and wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm; prompting a first instructor who is associated with the particular sequential process in which the first user is engaged to provide to the first user first instructions on how to perform a next stage of the particular sequential process in which the first user is engaged; and prompting a second instructor who is associated with the particular sequential process in which the second user is engaged to provide to the second user additional instructions on how to perform a current stage of the particular sequential process in which the second user is engaged."

These limitations simply describe a process of data gathering and manipulation, which is sufficiently analogous to "collecting information, analyzing it, and displaying certain results of the collection and analysis" (Electric Power Group, LLC v. Alstom, 830 F.3d 1350, 119 U.S.P.Q.2d 1739 (Fed. Cir. 2016)). Furthermore, these limitations simply describe a process of facilitating communication between people that falls under "certain methods of organizing human activity," in terms of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions). Hence, these limitations are akin to subject matter that has been identified among the non-limiting examples of abstract ideas.
In other words, Step 2A, Prong 1 of the subject-matter eligibility analysis is "Yes." Furthermore, this judicial exception is not integrated into a practical application because: (a) it does not improve the functioning of a computer or any other technology or technical field; (b) it does not apply the judicial exception to effect a particular treatment or prophylaxis for a disease or medical condition; (c) it does not apply the judicial exception with, or by use of, a particular machine; (d) it does not effect a transformation or reduction of a particular article to a different state or thing; and (e) it does not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the exception to a particular technological environment, such that the claims as a whole are more than a drafting effort designed to monopolize the exception. Namely, the Applicant's claimed elements of "a communication network," "a network code," "a handshake," "a processing system including a processor," "a memory," "a first user device," "a first end user device," "a second user device," "a second end user device," "a first communication channel," "a second communication channel," "a first machine learning algorithm," and "a second machine learning algorithm" are merely claimed to generally link the use of a judicial exception (e.g., pre-solution activity of data gathering and post-solution activity of presenting data) to (1) a particular technological environment or (2) a field of use, per MPEP § 2106.05(h); and amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, per MPEP § 2106.05(f).
In other words, the claimed "facilitating performance of operations" does not provide a practical application; thus Step 2A, Prong 2 of the subject-matter eligibility analysis is "No." Likewise, the claims do not include additional elements that, either alone or in combination, are sufficient to amount to significantly more than the judicial exception because, to the extent that, e.g., "a communication network," "a network code," "a handshake," "a processing system including a processor," "a memory," "a first user device," "a first end user device," "a second user device," "a second end user device," "a first communication channel," "a second communication channel," "a first machine learning algorithm," and "a second machine learning algorithm" are claimed, these are generic, well-known, and conventional data-gathering computing elements. As evidence that these are generic, well-known, and conventional data-gathering computing elements, the Applicant's specification describes them in a manner that indicates the additional elements are sufficiently well-known that the specification need not describe the particulars of such additional elements to satisfy 35 U.S.C. § 112(a), per MPEP § 2106.07(a)(III)(a). As such, the Examiner's evidentiary burden under the Berkheimer memorandum is satisfied. Specifically, the Applicant's claimed "a communication network" is described in paras. [0085] and [0106] as follows: "[0085]: In contrast to traditional network elements - which are typically integrated to perform a single function, the virtualized communication network employs virtual network elements (VNEs) 330, 332, 334, etc. that perform some or all of the functions of network elements 150, 152, 154, 156, etc.
For example, the network architecture can provide a substrate of networking capability, often called Network Function Virtualization Infrastructure (NFVI) or simply infrastructure, that is capable of being directed with software and Software Defined Networking (SDN) protocols to perform a broad variety of network functions and services." "[0106]: When used in a LAN networking environment, the computer 402 can be connected to the LAN 452 through a wired and/or wireless communication network interface or adapter 456. The adapter 456 can facilitate wired or wireless communication to the LAN 452, which can also comprise a wireless AP disposed thereon for communicating with the adapter 456." This element is reasonably interpreted as a generic communication network, described with no details beyond ubiquitous standard equipment. As such, the claimed limitation of "a communication network" is reasonably understood as not providing anything significantly more. Further, the Applicant's claimed "a network code" and "a handshake" are described in para. [0032], as follows: "[0032]: one device (e.g., the first user device) can be instructed to "handshake" another device (e.g., the device) with a network code when connected. In another example, one device (e.g., the device) can be instructed to "handshake" another device (e.g., the first user device) with a network code when connected. In another example, a background process can be executed by either the first user device or the device of first actions to execute the above "handshake" and proceed to send other operational data (e.g., network keys, power level, etc.)." These elements are reasonably interpreted as generic code performing a common networking function, described with no details beyond ubiquitous standard network code. As such, the claimed limitations of "a network code" and "a handshake" are reasonably understood as not providing anything significantly more.
Further, the Applicant's claimed "a processing system including a processor" and "a memory" are described in paras. [0098] and [0128], as follows: "[0098]: With reference again to FIG. 4, the example environment can comprise a computer 402, the computer 402 comprising a processing unit 404, a system memory 406 and a system bus 408. The system bus 408 couples system components including, but not limited to, the system memory 406 to the processing unit 404. The processing unit 404 can be any of various commercially available processors. Dual microprocessors and other multiprocessor architectures can also be employed as the processing unit 404." "[0128]: Moreover, it will be noted that the disclosed subject matter can be practiced with other computer system configurations, comprising single-processor or multiprocessor computer systems, mini-computing devices, mainframe computers, as well as personal computers, hand-held computing devices (e.g., PDA, phone, smartphone, watch, tablet computers, netbook computers, etc.), microprocessor-based or programmable consumer or industrial electronics, and the like." These elements are reasonably interpreted as generic components of a generic computer, described with no details beyond ubiquitous standard equipment. As such, the claimed limitations of "a processing system including a processor" and "a memory" are reasonably understood as not providing anything significantly more. Likewise, "a first user device" and "a second user device" are described in claims 8, 16, and 20 as follows: "8. The device of claim 1, wherein: the first user device comprises a first desktop computer, a first laptop computer, a first tablet, a first smartphone, or any first combination thereof; and the second user device comprises a second desktop computer, a second laptop computer, a second tablet, a second smartphone, or any second combination thereof." "16.
The non-transitory machine-readable medium of claim 13, wherein: the first user device comprises a first desktop computer, a first laptop computer, a first tablet, a first smartphone, or any first combination thereof; the second user device comprises a second desktop computer, a second laptop computer, a second tablet, a second smartphone, or any second combination thereof; the first visual representation comprises a first image, a first plurality of images, a first video, or any third combination thereof; and the second visual representation comprises a second image, a second plurality of images, a second video, or any fourth combination thereof." "20. The method of claim 18, wherein: each end user device comprises a respective desktop computer, a respective laptop computer, a respective tablet, a respective smartphone, or any respective combination thereof; the plurality of end user devices comprises a first end user device associated with the first user and a second end user device associated with the second user; the method further comprises sending, by the processing system, to the first end user device the first instructions, the first instructions comprising first text, first audio, first video, or any first combination thereof; and the method further comprises sending, by the processing system, to the second end user device the second instructions, the second instructions comprising second text, second audio, second video, or any second combination thereof." These elements are reasonably interpreted as generic computers, described with no details beyond ubiquitous standard equipment. As such, the claimed limitations of "a first user device" and "a second user device" are reasonably understood as not providing anything significantly more. The claimed limitations of "a first end user device" and "a second end user device" are mentioned in para.
[0036] but likewise provide no details beyond "a first user device" and "a second user device". Also, the Applicant's claimed "a first communication channel" and "a second communication channel" are described in para. [0029] and in claim 15, as follows: "[0029]: FIG. 2C is a block diagram illustrating an example, non-limiting embodiment of a system 260 (which can function, for example, fully or partially within the communication network of FIG. 1) in accordance with various aspects described herein. This FIG. 2C shows an example of local tech repairs aided by a remote expert. As seen, each of a plurality of participants 261 utilizes a respective channel 262 (e.g., wireless channel facilitated via use of a respective smartphone, tablet, or the like) to communicate with a remote leader or expert 264 via routing 263 (such routing can be performed, for example, by one or more servers)." "15. The non-transitory machine-readable medium of claim 14, wherein: the first communication channel comprises a first wireless communication channel, a first wired communication channel, or any first combination thereof; and the second communication channel comprises a second wireless communication channel, a second wired communication channel, or any second combination thereof." These elements are reasonably interpreted as generic modes of communication among generic computers, described with no details beyond ubiquitous standard equipment. As such, the claimed limitations of "a first communication channel" and "a second communication channel" are reasonably understood as not providing anything significantly more. Finally, the Applicant's claimed "a first machine learning algorithm" and "a second machine learning algorithm" are described in para.
[0032] as follows: “Next, step 2006 comprises making a first determination via machine learning, based at least in part upon the first visual representation, whether performance of a first task by the first user has been completed. In one example, machine learning detects the correlation of actions taken by the first user device and the second user device that trigger the same communication signal of completion (e.g., both the first and second user devices, upon completion of a task send a network signal and/or audible sound to acknowledge a cable is plugged into a router). In another example, a network signal, poll and/or indicator can be sent to one or more of the first and/or second devices as part of the sequence. An implicit step in the process can involve utilizing that signal to complete the process and machine learning (e.g., via automated, continual testing for both the first and second user devices) would detect that an additional (e.g., concluding or next-step) process is now available. For example, one device (e.g., the first user device) can be instructed to “handshake” another device (e.g., the device) with a network code when connected. In another example, one device (e.g., the device) can be instructed to “handshake” another device (e.g., the first user device) with a network code when connected. In another example, a background process can be executed by either the first user device or the device of first actions to execute the above “handshake” and proceed to send other operational data (e.g. network keys, power level, etc.). In one example, only when the action is correctly completed by the user can the process continue and a machine learning method can determine (e.g., through root cause analysis) which step in the process is at fault (e.g., an action of the user, a failure of the user device, a failure of the user action device, etc.). In one example, the result of this determination is utilized in step 2006. 
Next, step 2008 comprises making a second determination via the machine learning, based at least in part upon the second visual representation, whether performance of the first task by the second user has been completed. In one example, the machine learning of step 2006 is the same (e.g., uses the same machine learning/artificial intelligence algorithm(s)) as the machine learning of step 2008. In another example, the machine learning of step 2006 is different (e.g., uses different machine learning/artificial intelligence algorithm(s)) from the machine learning of step 2008. In another example (wherein the machine learning of step 2006 is different from the machine learning of step 2008), the different machine learning/artificial intelligence algorithm(s) for each of the steps 2006, 2008 can be based upon different users and/or based upon other different scenarios." These elements are described in a manner that indicates they are commercially available or sufficiently well-known, with no details beyond ubiquitous standard equipment. As such, the claimed limitations of "a first machine learning algorithm" and "a second machine learning algorithm" are reasonably understood as not providing anything significantly more. Therefore, Step 2B of the subject-matter eligibility analysis is "No." In addition, dependent claims 2-12, 14-17, and 19-20 do not provide a practical application and are insufficient to amount to significantly more than the judicial exception. As such, dependent claims 2-12, 14-17, and 19-20 are also rejected under 35 U.S.C. § 101 based on their respective dependencies from claims 1, 13, and 18. Therefore, claims 1-20 are rejected under 35 U.S.C. § 101 as being directed to non-statutory subject matter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Shavit (US 2017/0368413) in view of Miller et al. (hereinafter 'Miller', US 2018/0268738). Regarding claim 1, and substantially similar limitations in claim 13, Shavit discloses a device comprising: a processing system including a processor (see para. [0064]: The processing unit 215 may include one or more processors); and a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations (see para. [0065]: The processing unit 215 may be coupled to a memory included in a data-holding sub-system 214. In an embodiment, the memory contains instructions that when executed by the processing unit 215 result in the performance of the methods described), the operations comprising: facilitating establishing a communication session, over a communication network, between the device and a first user device (see para.
[0131]: 522 is schematically describing the electronic block in the example embodiment 500. This block may comprise a processing unit 215, a data repository system 214, I/O subsystem 220, a plurality of wireless or wired communication devices and any other part of the system described in conjunction with FIGS. 1 and 2. Some of the advantageous components may be Bluetooth and NFC wireless communications components for interaction with Smart-phones or WCDs, a processor component and a RF communication component for higher range wireless communications); receiving, over the communication network, a first network code from the first user device (see para. [0065]: the processing unit 215 may include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing unit 215 to perform the various functions described herein); handshaking between the device and the first user device based on the first network code resulting in a first handshake; based on the first handshake, establishing the communication session between the device and the first user device (see para. [0754]: The system can identify the user in several ways: by his input to the system (the input may be vocal—the system can use voice recognition to identify the user, or ask him to say his name), communicating with the user's smart watch or wearable computing device via any wireless communication method known in the art such as NFC, WiFi, Blue-Tooth, ultrasonic, and the like, communicating with the user's mobile device via a wireless communication method known in the art or wired communication.
Handshaking is reasonably understood as a process in which two devices establish a communication link. In para. [0754], the system establishes a communication link with the user's device via a wireless communication method); engaging in first communications between the device and a first user device (see para. [0072]: device that can communicate with the Computing Device), wherein the first communications comprise a first visual representation of first actions performed by a first user (see para. [0067]: present a visual representation of data held by data-holding subsystem. Also see para. [0330]: In another example, the capture device may include two or more physically separated cameras that view a target from different angles, to obtain visual stereo data), and wherein the first communications are sent from the first user device to the device (see para. [1086]: The system can send the robot the information about the user, the training session, the next exercise, and the like. The sending can be done by any interface known in the art, for example Wi-Fi or Bluetooth); facilitating establishing the communication session, over the communication network, between the device and a second user device; receiving, over the communication network, a second network code from the second user device; handshaking between the device and the second user device based on the second network code resulting in a second handshake; based on the second handshake, establishing the communication session between the device and the second user device (see paras. [0131], [0065], and [0754] above; also see para. [0443]: supervising the performance or implementation of the plan by a plurality of users. Also see para. [0762]: The system may do a "Gamification" of the training session.
For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users); engaging in second communications between the device and a second user device during the communication session, wherein the second communications comprise a second visual representation of second actions performed by a second user, wherein the second communications are sent from the second user device to the device, and wherein the second communications occur substantially simultaneously with the first communications (see para. [0443]: supervising the performance or implementation of the plan by a plurality of users. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood as second communications and a second user device); making a first determination via a first machine learning algorithm associated with the first user, based at least in part upon the first visual representation, whether a performance of a first task by the first user has been completed; making a second determination via a second machine learning algorithm associated with the second user, based at least in part upon the second visual representation, whether a performance of the first task by the second user has been completed (see para.
[0387-0389]: A Machine Learning method requires the expected result, a measure of success of the result and the attributes/classes for definition and implementation. Measure of success can be binary, i.e. success or failure, a quantity representing the measure of success… The measure of success can be for example if the ML method output 3D representation pass this step or future steps filtering. The filtering can be done in any of the methods disclosed or shall be disclosed. Another measure of success can be the measure of resemblance of the ML outputs to the actual subject/s as measured by a human); responsive to the first determination being that the performance of the first task by the first user has been completed, prompting an instructor to provide an indication of a next task to be performed by the first user; and responsive to the second determination being that the performance of the first task by the second user has not been completed, prompting the instructor to provide additional instructions to the second user to aid the second user in the performance of the first task (See para. [1100]: A human supervisor may be notified upon detection and para. [1088]: the system may serve as an aid to the human coach. It can give him instructions for all of the above detailed interactions. It can remind him about the training plan, how to perform an exercise, remind him to motivate the user, point him to when and what the user is doing wrong. Also see para. [1116]: current exercise, the number of repetitions required and the number of repetitions done, the number of sets expected and the number of sets done, the next exercise in the session plan or parts of the session plan or all of it, indication for speed of performance, guidance on the exercise done, or remarks from a virtual coach of what to improve—on the fly or in retrospect. Also see para. [1054]: calling for help from a human coach or a robot to teach the exercise).
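For orientation only, the network-code handshake and session-establishment limitation mapped above can be pictured as a minimal sketch. All names below are hypothetical and nothing here is drawn from Shavit's disclosure: the device issues a network code to each user device, and a matching code returned by that device completes the handshake and establishes the session.

```python
# Hypothetical sketch of a network-code handshake; not from the cited references.
import secrets

class Device:
    def __init__(self):
        self.codes = {}     # user_device_id -> issued network code
        self.sessions = {}  # user_device_id -> True once the session is established

    def issue_code(self, user_device_id):
        # The device generates a network code for the user device to echo back.
        code = secrets.token_hex(4)
        self.codes[user_device_id] = code
        return code

    def handshake(self, user_device_id, code):
        # The handshake succeeds only if the returned code matches the issued one;
        # a successful handshake establishes the communication session.
        ok = self.codes.get(user_device_id) == code
        if ok:
            self.sessions[user_device_id] = True
        return ok

device = Device()
first_code = device.issue_code("first_user_device")
second_code = device.issue_code("second_user_device")
assert device.handshake("first_user_device", first_code)    # first handshake
assert device.handshake("second_user_device", second_code)  # second handshake
```

Under this reading, the "first handshake" and "second handshake" are simply two independent code matches, each gating its own session.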
Shavit does not explicitly disclose wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm. However, Miller teaches wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm (see para. [0074]: a ML module receives unlabeled data including CG element data, device data, user preferences, user profile data, and transaction data. The ML module further employs an unsupervised learning method such as “clustering” to identify patterns and organize the unlabeled data into meaningful groups; [0072]: a plurality of ML methods and algorithms may be applied, which may include but are not limited to: linear or logistic regression, instance-based algorithms, regularization algorithms, decision trees, Bayesian networks, cluster analysis, association rule learning, artificial neural networks, deep learning, dimensionality reduction, and support vector machines). Miller is analogous to Shavit, as both are drawn to the art of generating and using a visual representation of the user in a training system. It would have been obvious to try, for one of ordinary skill in the art at the time of filing, to modify the system and method as taught by Shavit to include wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm, as taught by Miller, since doing so enables the determination of whether a task by a user has been completed to be customized to users with different user preferences, profiles, and data (see para. [0072]).
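The point of the combination, a distinct machine learning algorithm per user deciding whether that user's task is complete, can be sketched as follows. The models, features, and thresholds are hypothetical illustrations and are drawn from neither Shavit nor Miller:

```python
# Hypothetical sketch: two different ML-style decision rules, one per user,
# each deciding from a visual-representation feature (here, a repetition count)
# whether the first task has been completed.
import math

def logistic_model(rep_count):
    # First user's algorithm: logistic-regression-style decision, midpoint at 10 reps.
    p = 1.0 / (1.0 + math.exp(-(rep_count - 10)))
    return p > 0.5

def centroid_model(rep_count):
    # Second user's algorithm: nearest-centroid between "done" (12) and "not done" (4).
    return abs(rep_count - 12) < abs(rep_count - 4)

# A different algorithm is associated with each user, per the claim limitation.
models = {"first_user": logistic_model, "second_user": centroid_model}

def task_completed(user, rep_count):
    return models[user](rep_count)

# First determination: the first user's performance is complete, so the instructor
# would be prompted for the next task; second determination: the second user's is
# not, so the instructor would be prompted to provide additional instructions.
print(task_completed("first_user", 11))   # True
print(task_completed("second_user", 5))   # False
```

The per-user model registry is what the rejection attributes to Miller; the completion determinations themselves are what it attributes to Shavit.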
Regarding claim 2, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein the operations further comprise sending to the first user device via the first communications the indication of the next task to be performed by the first user, the indication of the next task to be performed by the first user comprising instructions as to how to perform the next task. (See para. [0755]: the system can load its data and start interfacing with him. It can greet him and the interface may be in the form of a coach communicating with the system or a virtual coach rendered by the system. Also see para. [0829]: exercise list is assigned. Also see para. [1116]: current exercise, the number of repetitions required and the number of repetitions done, the number of sets expected and the number of sets done, the next exercise in the session plan or parts of the session plan or all of it, indication for speed of performance, guidance on the exercise done, or remarks from a virtual coach of what to improve—on the fly or in retrospect). Regarding claim 3, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein the operations further comprise sending to the second user device via the second communications the additional instructions (See para. [0755]: the system can load its data and start interfacing with him. It can greet him and the interface may be in the form of a coach communicating with the system or a virtual coach rendered by the system. Also see para. [1088]: The human coach … can give him instructions for all of the above detailed interactions. It can remind him about the training plan, how to perform an exercise, remind him to motivate the user, point him to when and what the user is doing wrong, and the like. The human coach may request for information on the fly or in retrospect. 
He may also modify data or aspects of the training (for example change exercise, difficulty level, or update the user details) in real-time or in retrospect). Regarding claim 4, the combination of Shavit and Miller teaches the device of claim 3 as above, and Shavit further discloses wherein the second communications further comprise an additional visual representation of additional actions performed by the second user, wherein the additional visual representation is sent from the second user device to the device (see para. [0067]: present a visual representation of data held by data-holding subsystem. Also see para. [0330]: In another example, the capture device may include two or more physically separated cameras that view a target from different angles, to obtain visual stereo data), and wherein the additional actions are performed by the second user in response to the additional instructions that had been sent to the second user device (see para. [0760]: the system may give further explanations about physical exercise … the system may teach the exercise(s) to the user). Regarding claim 5, the combination of Shavit and Miller teaches the device of claim 4 as above, and Shavit further discloses wherein the additional visual representation comprises an image, a video, or any combination thereof (see para. [0331]: a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions).
Regarding claim 6, the combination of Shavit and Miller teaches the device of claim 5 as above, and Shavit further discloses wherein the operations further comprise: making a third determination, based at least in part upon the additional visual representation, whether the performance of the first task by the second user has been completed; responsive to the third determination being that the performance of the first task by the second user has been completed, prompting the instructor to provide the indication of the next task to be performed by the second user (see para. [0443]: supervising the performance or implementation of the plan by a plurality of users. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood as second communications and a second user device. See para. [0829]: exercise list is assigned, and para. [1116]: the next exercise in the session plan or parts of the session plan or all of it, indication for speed of performance, guidance on the exercise done, or remarks from a virtual coach of what to improve, and further in para. [1054]: calling for help from a human coach or a robot to teach the exercise and para. [0786]: if bad performance is seen in later exercises in the session or the user does not finish the session properly.
The next exercise in the plan, guidance on the exercise, remarks from a coach of what to improve, and if bad performance or the user does not finish, calling for help from a human coach are reasonably understood as the instructor providing an indication of the next task after the first task is completed properly). Regarding claim 7, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein each of the first communications and the second communications is carried out via the Internet (See para. [0182]: Another example is—the resources of the smart-phone or WCD can be used to communicate with a cloud data-base. It is reasonably understood that a cloud data-base requires communications carried out via the Internet. Also see para. [0059]: Many training modules 110 may be connected to each other wirelessly or wired via a network). Regarding claim 8, and substantially similar limitations in claim 16, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein: the first user device comprises a first tablet, a first smartphone, or any first combination thereof; and the second user device comprises a second tablet, a second smartphone, or any second combination thereof (See fig. 1, para. [0058 and 0071]: The training module 110 or parts thereof may reside in the wearable computing device 190 or on a mobile computing system such as a Smartphone or tablet. The computing system or parts thereof may reside in a wearable computing device or on a mobile computing system such as a smartphone or tablet).
Regarding claim 9, and substantially similar limitations in claim 16, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein: the first visual representation comprises a first image, a first video, or any first combination thereof; and the second visual representation comprises a second image, a second video, or any second combination thereof (See para. [0053 and 0082]: The sensors may be a camera or an optical sensor that can produce 2D or 3D still or video images. The sensors may further include acoustical LIDAR or RADAR sensors that can produce 2D or 3D still or moving images or mapping in any method known in the art. This data may be incorporated with other sensors data to achieve for example identification of the exact exercise the user is doing or creating an image or skeleton diagram or alike representing the user and its motion. This image or diagram can be 2D or 3D. Also see para. [0331]: a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions). Regarding claim 10, the combination of Shavit and Miller teaches the device of claim 1 as above, and Shavit further discloses wherein the first task and the next task are part of a sequential plurality of tasks to be performed (See para. [0533]: the order of the exercises is set according to his highest body fat percentage measurements in the table above considering muscle pairing. First the muscle group was selected corresponding to the body area with the highest fat percentage from the ones left unallocated. Next if this muscle group has a pair opposite a muscle group according to rule (2f) the paired muscle group will be selected next. 
The selected groups are removed from the selection pool. This process is iterated until the muscle groups are ordered or until no more ordering can take place because of insufficient data or inability of the rules above to determine the order). Regarding claim 11, and substantially similar limitations in claim 17, the combination of Shavit and Miller teaches the device of claim 1 and the non-transitory machine-readable medium of claim 13 as above, and Shavit further discloses wherein the making of the first determination via the first machine learning algorithm is further based upon an abstraction of the performance of the first task, the abstraction of the performance of the first task being based upon a plurality of prior performances of the first task by each respective one of a plurality of prior performers of the first task (See para. [0388-0389]: The expected result from the ML method can be the current step 3D representation/s of the subject/s. It may also be a collection of possible 3D representations. The step may be the first step i.e. Capturing/Creating the first frame of 3D model and/or body image and or skeleton mapping and or body mapping and alike; or one of the following steps. The measure of success can be for example if the ML method output 3D representation pass this step or future steps filtering. The filtering can be done in any of the methods disclosed or shall be disclosed. Another measure of success can be the measure of resemblance of the ML outputs to the actual subject/s as measured by a human, which can be present at the training stage or even after). Regarding claim 12, the combination of Shavit and Miller teaches the device of claim 11 as above, and Shavit further discloses wherein the making of the second determination via the second machine learning algorithm is further based upon the abstraction of the performance of the first task (See para. [0443]: supervising the performance or implementation of the plan by a plurality of users.
Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood to require the second determination. See para. [0388-0389]: The expected result from the ML method can be the current step 3D representation/s of the subject/s. It may also be a collection of possible 3D representations. The step may be the first step i.e. Capturing/Creating the first frame of 3D model and/or body image and or skeleton mapping and or body mapping and alike; or one of the following steps. The measure of success can be for example if the ML method output 3D representation pass this step or future steps filtering. The filtering can be done in any of the methods disclosed or shall be disclosed. Another measure of success can be the measure of resemblance of the ML outputs to the actual subject/s as measured by a human, which can be present at the training stage or even after).
Regarding claim 14, the combination of Shavit and Miller teaches the non-transitory machine-readable medium of claim 13 as above, and Shavit further discloses wherein: the operations further comprise sending to the first user device via the first communication channel the indication of the next portion of the sequential process to be performed by the first user, the indication of the next portion of the sequential process to be performed by the first user comprising first instructions as to how to perform the next portion of the sequential process (See para. [0829]: the following exercise list is assigned. Also see para. [1116]: the next exercise in the session plan or parts of the session plan or all of it, indication for speed of performance, guidance on the exercise done, or remarks from a virtual coach of what to improve), and the first instructions comprising first text, first audio, first video, or any first combination thereof (See para. [0318]: the training module 100 may use the wearable computing device's 190 speakers, microphone, touch screen, and the like, to deliver instructions. Also see para. [1067]: Reports can be given by printing them, by presenting them on screen, by hologram, a video, read by voice, shown as graphs or images or videos, and the like); and the operations further comprise sending to the second user device via the second communication channel the additional instructions, the additional instructions comprising more detailed instructions, as to how to perform the first portion of the sequential process, and the additional instructions comprising second text, second audio, second video, or any second combination thereof (See para. [1067]: During the session the system may further guide the trainee between exercises ... Reports can be given by printing them, by presenting them on screen, by hologram, a video, read by voice, shown as graphs or images or videos, and the like). 
Regarding claim 15, the combination of Shavit and Miller teaches the non-transitory machine-readable medium of claim 14 as above, and Shavit further discloses wherein: the first communication channel comprises a first wireless communication channel, a first wired communication channel, or any first combination thereof; and the second communication channel comprises a second wireless communication channel, a second wired communication channel, or any second combination thereof (See para. [0059]: Many training modules 110 may be connected to each other wirelessly or wired via a network. Also see para. [0131]: a plurality of wireless or wired communication devices. See para. [0443]: supervising the performance or implementation of the plan by a plurality of users. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood to require second communications and a second user device). 
Regarding claim 16, and substantially similar limitations in claims 8 and 9, the combination of Shavit and Miller teaches the non-transitory machine-readable medium of claim 13 as above, and Shavit further discloses wherein: the first user device comprises a first tablet, a first smartphone, or any first combination thereof; the second user device comprises a second tablet, a second smartphone, or any second combination thereof (para. [0058 and 0071]: a mobile computing system such as a Smartphone or tablet. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood to require a second user device comprising a tablet or smartphone); the first visual representation comprises a first image, a first video, or any third combination thereof; and the second visual representation comprises a second image, a second video, or any fourth combination thereof (See para. [0053 and 0082]: camera or an optical sensor that can produce 2D or 3D still or video images… 2D or 3D still or moving images or mapping in any method known in the art. This data may be incorporated with other sensors data to achieve for example identification of the exact exercise the user is doing or creating an image or skeleton diagram or alike representing the user and its motion. Also see para.
[0331]: a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood to require a second visual representation). Regarding claim 18, Shavit teaches a method comprising: facilitating, by a processing system comprising a processor (see para. [0064]), (see para. [0131]: 522 is schematically describing the electronic block in the example embodiment 500. This block may comprise a processing unit 215, a data repository system 214, I/O subsystem 220, a plurality of wireless or wired communication devices and any other part of the system described in conjunction with FIGS. 1 and 2. Some of the advantageous components may be Bluetooth and NFC wireless communications components for interaction with Smart-phones or WCDs, a processor component and a RF communication component for higher range wireless communications); establishing a communication session, over the communication network, between the processing system and a first end user device (see para.
[0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users); receiving, by the processing system, over the communication network, a first network code from the first end user device (see para. [0065]: the processing unit 215 may include machine-readable media for storing software. Software shall be construed broadly to mean any type of instructions, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Instructions may include code (e.g., in source code format, binary code format, executable code format, or any other suitable format of code). The instructions, when executed by the one or more processors, cause the processing unit 215 to perform the various functions described herein); handshaking, by the processing system, between the processing system and the first end user device based on the first network code resulting in a first handshake; based on the first handshake, establishing, by the processing system, the communication session between the processing system and the first end user device (see para. [0754]: The system can identify the user in several ways: by his input to the system (the input may be vocal—the system can use voice recognition to identify the user, or ask him to say his name), communicating with the user's smart watch or wearable computing device via any wireless communication method known in the art such as NFC, WiFi, Blue-Tooth, ultrasonic, and the like, communicating with the user's mobile device via a wireless communication method known in the art or wired communication. Handshaking is reasonably understood as a process in which two devices establish a communication link. In para.
[0754], the system establishes a communication link with the user’s device via a wireless communication method); facilitating, by the processing system, establishing the communication session, over the communication network, between the processing system and a second end user device; receiving, by the processing system, over the communication network, a second network code from the second end user device; handshaking, by the processing system, between the processing system and the second end user device based on the second network code resulting in a second handshake; based on the second handshake, establishing, by the processing system, the communication session between the processing system and the second end user device (see para. [0443]: supervising the performance or implementation of the plan by a plurality of users. Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users. Similarly a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood as second communications and a second end user device); receiving, by the processing system, a plurality of video feeds over the communication session, each of the plurality of video feeds being provided by a respective one of a plurality of end user devices, wherein the plurality of end user devices comprises the first end user device and the second end user device (see para. 
[0331]: In some embodiments, two or more different cameras may be incorporated into an integrated capture device. For example, a depth camera and a video camera (e.g., RGB video camera) may be incorporated into a common capture device. In some embodiments, two or more separate capture devices may be cooperatively used. For example, a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions. Also see para. [0762]: plurality of users … a game with similar real or virtual users); determining by the processing system, for a first video feed of the plurality of video feeds, in which particular sequential process of a plurality of potential sequential processes a first user is engaged (See para. [0388-0389]: The expected result from the ML method can be the current step 3D representation/s of the subject/s. It may also be a collection of possible 3D representations. The step may be the first step i.e. Capturing/Creating the first frame of 3D model and/or body image and or skeleton mapping and or body mapping and alike; or one of the following steps. The measure of success can be for example if the ML method output 3D representation pass this step or future steps filtering. The filtering can be done in any of the methods disclosed or shall be disclosed.
Another measure of success can be the measure of resemblance of the ML outputs to the actual subject/s as measured by a human, which can be present at the training stage or even after); determining by the processing system, for a second video feed of the plurality of video feeds, in which particular sequential process of the potential sequential processes a second user is engaged, the particular sequential process in which the second user is engaged being different from the particular sequential process in which the first user is engaged (See para. [0762]: a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). The real or virtual users are reasonably understood to be the second user, requiring the second video feed. Also see para. [0464]: the order of the exercises is set according to his highest body fat percentage measurements in the table above considering muscle pairing. First the muscle group was selected corresponding to the body area with the highest fat percentage from the ones left unallocated. Next if this muscle group has a pair opposite a muscle group according to rule (2f) the paired muscle group will be selected next. The selected groups are removed from the selection pool. This process is iterated until the muscle groups are ordered or until no more ordering can take place because of insufficient data or inability of the rules above to determine the order); prompting, by the processing system, a first instructor who is associated with the particular sequential process in which the first user is engaged to provide to the first user first instructions on how to perform a next stage of the particular sequential process in which the first user is engaged (See para. [0829]: the following exercise list is assigned. Also see para.
[1116]: the next exercise in the session plan or parts of the session plan or all of it, indication for speed of performance, guidance on the exercise done, or remarks from a virtual coach of what to improve); and prompting, by the processing system, a second instructor who is associated with the particular sequential process in which the second user is engaged to provide to the second user additional instructions on how to perform a current stage of the particular sequential process in which the second user is engaged (See para. [0762]: a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). The real or virtual users are reasonably understood to be the second user). Shavit does not explicitly disclose determining wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm. However, Miller teaches wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm (see para. [0074]: a ML module receives unlabeled data including CG element data, device data, user preferences, user profile data, and transaction data. The ML module further employs an unsupervised learning method such as “clustering” to identify patterns and organize the unlabeled data into meaningful groups; [0072]: a plurality of ML methods and algorithms may be applied, which may include but are not limited to: linear or logistic regression, instance-based algorithms, regularization algorithms, decision trees, Bayesian networks, cluster analysis, association rule learning, artificial neural networks, deep learning, dimensionality reduction, and support vector machines). 
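As a reader's aid, Miller's cited clustering step groups unlabeled data into meaningful groups without supervision. A minimal sketch of that general idea, using a toy 1-D k-means (the data values, initial centers, and function name are illustrative assumptions, not Miller's actual implementation):

```python
# Toy 1-D k-means to illustrate the unsupervised "clustering" concept Miller
# para. [0074] invokes: unlabeled values are organized into groups by
# proximity. Purely illustrative; not taken from Miller or the application.

def kmeans_1d(points, centers, iterations=10):
    groups = {}
    for _ in range(iterations):
        # Assign each point to its nearest current center.
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return groups

# Six unlabeled values separate into two "meaningful groups" by proximity.
clusters = kmeans_1d([1.0, 1.2, 0.8, 9.9, 10.1, 10.0], centers=[0.0, 5.0])
```

A production system would use a library implementation over multidimensional features; the point here is only that unlabeled points organize themselves into groups without any labels.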
Miller is analogous to Shavit, as both are drawn to the art of generating and using a visual representation of the user in a training system. It would have been obvious to try, for one of ordinary skill in the art at the time of filing, to modify the system and method as taught by Shavit to include wherein the second machine learning algorithm is a different machine learning algorithm than the first machine learning algorithm, as taught by Miller, since doing so enables the determination of whether a task by a user has been completed to be customized to users with different user preferences, profiles, and data (see para. [0072]). Regarding claim 19, the combination of Shavit and Miller teaches the method of claim 18 as above, and Shavit further discloses wherein: the determining via the first machine learning algorithm in which particular sequential process of the plurality of potential sequential processes the first user is engaged further comprises generating a first abstraction of a performance of the particular sequential process in which the first user is engaged, the first abstraction of the performance of the particular sequential process in which the first user is engaged being based upon a first plurality of prior performances of the performance of the particular sequential process in which the first user is engaged by each respective one of a first plurality of prior performers of the performance of the particular sequential process in which the first user is engaged (See para. [0388-0389]: The expected result from the ML method can be the current step 3D representation/s of the subject/s. It may also be a collection of possible 3D representations. The step may be the first step, i.e., capturing/creating the first frame of the 3D model and/or body image and/or skeleton mapping and/or body mapping and the like, or one of the following steps. The measure of success can be, for example, if the ML method's output 3D representation passes this step or future steps' filtering.
The filtering can be done in any of the methods disclosed or shall be disclosed. Another measure of success can be the measure of resemblance of the ML outputs to the actual subject/s, as measured by a human, who can be present at the training stage or even after. Also see para. [0974]: training sets are fed into the system. Each set contains a value for each attribute like female, age 34, height 170 [cm] . . . and the desired result. The desired result in this example is achieved by using a human expert coach. The expert coach goes over the attributes in each data set and chooses the most suitable program from the bank. A combination of a few expert coaches can be utilized. The desired result in each data set will be the majority vote. In case there is no majority, either one of the coaches' selected programs will be selected at random, or some priority scheme between the coaches will be used. The human coaches can also give the error or cost function by estimating it by comparing their selection to the ML algorithm selection); and the determining via the second machine learning algorithm in which particular sequential process of the plurality of potential sequential processes the second user is engaged further comprises generating a second abstraction of a performance of the particular sequential process in which the second user is engaged, the second abstraction of the performance of the particular sequential process in which the second user is engaged being based upon a second plurality of prior performances of the performance of the particular sequential process in which the second user is engaged by each respective one of a second plurality of prior performers of the performance of the particular sequential process in which the second user is engaged (Also see para. [0762]: The system may do a “Gamification” of the training session. For example, a user on a treadmill may be simulated to be a part of a race with other real users on other treadmills or virtual users.
Similarly, a trainee doing a strength exercise on a training machine or with an exercise device or without any device or machine, may also be a part of a game with similar real or virtual users (real users' avatars may be displayed to the user if these real users are far away). Also see para. [1111]: A plurality of users can decide about mutual training goal(s). The goal can be, for example, the same for each of the individual participants. The plurality of users and real or virtual users are reasonably understood to require a second abstraction).

Response to Arguments

The Applicant’s arguments filed on August 28, 2025 related to claims 1-20 are fully considered, but are not persuasive.

Claim Rejections - 35 U.S.C. § 101

The Applicant respectfully argues that amendments to independent claims 1, 13, and 18 overcome the § 101 rejections for the following reasons: “Amended independent claim 1… does not, as a whole, encompass the abstract idea alleged by the Office.” The Examiner respectfully disagrees. The independent claims continue to simply describe a process of data gathering and manipulation, which is sufficiently analogous to “collecting information, analyzing it, and displaying certain results of the collection and analysis” (see Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 119 U.S.P.Q.2d 1739 (Fed. Cir. 2016)). Furthermore, these limitations simply describe a process of facilitating communication between people that falls under “certain methods of organizing human activity,” in terms of managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
“Amended independent claim 1 provides for a practical application, namely, prior to establishing a communication session between the device and the first user device, providing security via a handshake… Claim 1 provides an improvement of providing security for a communication session that facilitates completing performance of a task”. The Examiner respectfully disagrees. This judicial exception is not integrated into a practical application because: (a) it does not improve the functioning of a computer or any other technology or technical field; (b) it does not effect a particular treatment or prophylaxis for a disease or medical condition; (c) it does not apply the judicial exception with, or by use of, a particular machine; (d) it does not effect a transformation or reduction of a particular article to a different state or thing; and (e) it does not apply or use the judicial exception in some other meaningful way beyond generally linking the use of the exception to a particular technological environment, such that the claims as a whole are more than a drafting effort designed to monopolize the exception.
Namely, the Applicant’s claimed elements of “a communication network,” “a network code,” “a handshake,” “a processing system including a processor,” “a memory,” “a first user device,” “a first end user device,” “a second user device,” “a second end user device,” “a first communication channel,” “a second communication channel,” “a first machine learning algorithm,” and “a second machine learning algorithm” are merely claimed to generally link the use of a judicial exception (e.g., pre-solution activity of data gathering and post-solution activity of presenting data) to (1) a particular technological environment or (2) field of use, per MPEP § 2106.05(h); and amount to mere instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform an abstract idea, per MPEP § 2106.05(f). In other words, the claimed “facilitating performance of operations” does not provide a practical application. Furthermore, the Examiner disagrees with the Applicant’s assertion that amended independent claim 1 provides an improvement to the technical field of providing security for a communication session that facilitates completing performance of a task. MPEP 2106.04(d)(1) states that “The specification need not explicitly set forth the improvement, but it must describe the invention such that the improvement would be apparent to one of ordinary skill in the art. Conversely, if the specification explicitly sets forth an improvement but in a conclusory manner (i.e., a bare assertion of an improvement without the detail necessary to be apparent to a person of ordinary skill in the art), the examiner should not determine the claim improves technology.” The specification describes the Applicant’s asserted improvement in para. [0032]: “one device (e.g., the first user device) can be instructed to "handshake" another device (e.g., the device) with a network code when connected.
In another example, one device (e.g., the device) can be instructed to "handshake" another device (e.g., the first user device) with a network code when connected. In another example, a background process can be executed by either the first user device or the device of first actions to execute the above "handshake" and proceed to send other operational data (e.g. network keys, power level, etc.)”. The specification does not describe the invention such that the improvement is apparent to one of ordinary skill in the art. Furthermore, “handshaking” in networking is a well-known process in which two devices establish a communication link. As such, the argument is not persuasive. The rejections of claims 1-20 under 35 U.S.C. § 101 are not withdrawn.

Claim Rejections - 35 U.S.C. § 103

The Applicant respectfully argues that Shavit and Miller do not describe or suggest the amended limitations of independent claims 1, 13, and 18. The Examiner respectfully disagrees. Shavit in view of Miller teaches all of the limitations of claims 1-20, as set forth above in the § 103 rejections. As such, the argument is not persuasive. Therefore, the rejections under 35 U.S.C. § 103 are not withdrawn.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JAYMIN J BAEK, whose telephone number is (703) 756-1017. The examiner can normally be reached Monday - Friday, 8-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Peter Vasat, can be reached at (571) 270-7625. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/J.J.B./
Examiner, Art Unit 3715

/PETER S VASAT/
Supervisory Patent Examiner, Art Unit 3715
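For context on the handshake dispute in this Office Action: the connect-time "handshake" pattern the examiner characterizes as well known (one device presenting a network code, the peer verifying it before operational data flows) can be sketched in a few lines. The network code value, function names, and operational-data fields below are hypothetical illustrations, not taken from the application's para. [0032].

```python
# Minimal sketch of a connect-time "handshake": a device offers a network
# code; the peer accepts the session only on a match, then operational data
# (e.g., network keys, power level) may be exchanged. All names and values
# are hypothetical illustrations, not from the application.

EXPECTED_NETWORK_CODE = "NET-1234"  # hypothetical shared network code

def handshake(offered_code: str) -> bool:
    """Accept the session only if the offered network code matches."""
    return offered_code == EXPECTED_NETWORK_CODE

def send_operational_data(offered_code: str, data: dict):
    """After a successful handshake, forward operational data to the peer;
    refuse the session (return None) otherwise."""
    if not handshake(offered_code):
        return None  # session refused; no data exchanged
    return {"accepted": True, **data}

result = send_operational_data("NET-1234", {"power_level": "low"})
```

This is the sense in which handshaking is routine: a code comparison gates the session before any further data is sent.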

Prosecution Timeline

May 28, 2021
Application Filed
Jul 11, 2024
Non-Final Rejection — §101, §103
Oct 09, 2024
Interview Requested
Oct 16, 2024
Applicant Interview (Telephonic)
Oct 16, 2024
Examiner Interview Summary
Oct 17, 2024
Response Filed
Jan 06, 2025
Final Rejection — §101, §103
Mar 27, 2025
Request for Continued Examination
Mar 28, 2025
Response after Non-Final Action
May 19, 2025
Non-Final Rejection — §101, §103
Aug 28, 2025
Response Filed
Jan 06, 2026
Final Rejection — §101, §103
Apr 03, 2026
Interview Requested
Apr 13, 2026
Applicant Interview (Telephonic)
Apr 13, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12544627
DEVICE FOR LEARNING AND TRAINING THE UNDERWATER DOLPHIN KICK AND METHODS
2y 5m to grant Granted Feb 10, 2026
Patent 10820861
ENDOTRACHEAL TUBES AND SYSTEMS AND METHODS FOR EVALUATING BREATHING
2y 5m to grant Granted Nov 03, 2020
Patent 10806643
ABSORBENT LAMINATE WITH MULTIPLE SUBSTRATES
2y 5m to grant Granted Oct 20, 2020
Patent 10799657
POWER MANAGEMENT IN RESPIRATORY TREATMENT APPARATUS
2y 5m to grant Granted Oct 13, 2020
Patent 10765570
(Title unavailable)
2y 5m to grant Granted Sep 08, 2020
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
50%
Grant Probability
83%
With Interview (+32.9%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 397 resolved cases by this examiner. Grant probability derived from career allow rate.
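The "With Interview (+32.9%)" projection appears to be the base grant probability plus the interview lift. A one-line check, assuming the lift is applied additively (an assumption about this page's methodology, which is not stated explicitly):

```python
# Hypothetical reconstruction of the projection arithmetic, assuming the
# interview lift is added directly to the base grant probability.
base_grant_probability = 50.0   # career allow rate, in percent
interview_lift = 32.9           # percentage-point lift with interview
with_interview = round(base_grant_probability + interview_lift)
print(with_interview)  # 83, matching the "With Interview" figure above
```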
