Prosecution Insights
Last updated: April 19, 2026
Application No. 17/931,592

FACILITATING REAL-TIME TRANSPORT PROTOCOL SIGNALING FOR ELASTIC DISTRIBUTED COMPUTING FOR RESOURCE INTENSIVE TASKS IN ADVANCED NETWORKS

Status: Non-Final OA (§103)
Filed: Sep 13, 2022
Examiner: HACKENBERG, RACHEL J
Art Unit: 2454
Tech Center: 2400 — Computer Networks
Assignee: AT&T Intellectual Property I, L.P.
OA Round: 3 (Non-Final)

Grant Probability: 79% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 79% — above average (236 granted / 300 resolved; +20.7% vs TC avg)
Interview Lift: +26.4% in resolved cases with interview — strong
Avg Prosecution: 2y 10m typical timeline; 35 applications currently pending
Career History: 335 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 300 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/20/2026 has been entered.

Response to Arguments

Applicant's arguments filed 01/20/2026 have been fully considered. Applicant argues that the amendments (see below) to the independent claims are not taught by the prior art of record. In response, Examiner respectfully disagrees.

Claim 1 is amended to recite: “… and wherein the facilitating is based on a request sent to the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof to participate in the split processing and a determination that the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has accepted the request;”

Claim 10 is amended to recite: “… and wherein the enabling involves a second request sent to the second device to participate in the split rendering and a determination that the second device has accepted the second request;”

Claim 17 is amended to recite: “… and wherein the migrating is based on a request sent to the elastic computing equipment to participate in task division and a determination that the elastic computing equipment has accepted the request;”

Zou teaches these limitations.
Zou teaches requiring load status from each peer, and that each peer involved has accepted its role in participating in load balancing. See Zou, [0041]: As the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0053]: Load status information must be collected from all edge nodes. For example, all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load.

Applicant further argues that the amendments (see below) to the independent claims are not taught by the prior art of record. In response, Examiner respectfully agrees.

Claim 1 is amended to recite: “… determining, by the network equipment, whether at least one equipment of the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has successfully performed an assigned task of the split processing; assessing, by the network equipment, a penalty against the at least one equipment based on a determination that the at least one equipment has not successfully performed the assigned task.”

Claim 10 is amended to recite: “… determining whether the second device has successfully performed an assigned task of the split rendering; and assessing a penalty against the second device based on a determination that the second device has not successfully performed the assigned task.”

Claim 17 is amended to recite: “… determining whether the elastic computing equipment has successfully performed an assigned task; and assessing a penalty against the elastic computing equipment based on a determination that the elastic computing equipment has not successfully performed the assigned task.”

An updated search was conducted, and prior art was discovered
to read on the amendments to the independent claims: US 6917979 B1 (Dutra). Zou still teaches most of the limitations of the independent claims.

Regarding Claim 1, Zou teaches load status heartbeat messages ([0026]). However, Zou (as modified by Lee & Yamazaki) is silent on determining, by the network equipment, whether at least one equipment of the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has successfully performed an assigned task of the split processing; and assessing, by the network equipment, a penalty against the at least one equipment based on a determination that the at least one equipment has not successfully performed the assigned task.

Dutra teaches these limitations. See Dutra, Col 5 ln 35-39: SLA clauses are transformed into penalty value-generating parameters, allowing the Policy-based Delivery Processing System to assign a value to the question "how much penalty will the Service Provider incur if the recipient-job being prioritized fails to meet an SLA clause?". Col 13 ln 51-59: The communication between Queue Manager 508 and Policy Manager 510 is one whereby jobs are passed to Policy Manager 510 for penalty assessment, and the penalty value, plus any delay in job delivery start time, is returned. The penalty value assessment algorithm is a mathematical function dependent on values assigned to the SLA attributes described above, as well as the current time, and the relation between the current time and the expiration of any expected delivery time. Col 8 ln 62-67: The present invention also includes a means for identifying successes and failures of meeting SLA guarantees, notifying subscribers of such successes and failures as each recipient-job delivery is disposed of finally, and generating records for billing systems to identify such successes and failures.

It would have been obvious to modify Zou (as modified by Lee & Yamazaki) per Dutra, as it would allow the combined system to provide business incentives for work completion in a timely manner.

Please see the updated rejection below:

Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0327506 A1 (Zou) in view of US 2013/0173819 A1 (Lee), further in view of JP 2016149630 A (Yamazaki), and further in view of US 6917979 B1 (Dutra).

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0327506 A1 (Zou) in view of US 2013/0173819 A1 (Lee), further in view of JP 2016149630 A (Yamazaki), further in view of US 6917979 B1 (Dutra), and further in view of US 2022/0255988 A1 (Salmasi).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5, 7-14, and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0327506 A1 (Zou) in view of US 2013/0173819 A1 (Lee), further in view of JP 2016149630 A (Yamazaki), and further in view of US 6917979 B1 (Dutra).

Regarding Claim 1:

Zou teaches a method, comprising: receiving, by network equipment ([0058] ES2) comprising a processor ([0134] FIG. 9, processor 900), a signal (i.e., load status heart-beat message(s)) transmitted via a transport protocol packet (i.e., internet protocols). ([0024] edge nodes 110a-c may be implemented with the following capabilities: [0026] (ii) A scalable dynamic replication peer selection algorithm based on the real-time load status from all edge compute server nodes, where load status is shared using a common mechanism such as broadcast/multicast of heart-beat messages; [0134] FIG. 9, Processor 900 is an example of a type of hardware device that can be used in connection with the embodiments described throughout this disclosure. Transport protocol: [0094] Any number of network protocols and communications standards may be used, wherein each protocol and standard is designed to address specific objectives. [0096] The respective IoT networks may also operate with use of a variety of network and internet application protocols.)

Zou further teaches: based on the signal (i.e., load status heart-beat message(s)), determining, by the network equipment, that a task executing at a user equipment ([0058] ES1) utilizes more resources than a defined level of resources. ([0041] FIG. 3: An edge video streaming architecture 300 that leverages dynamic resource rebalancing to avoid dropping frames. In the illustrated embodiment, for example, an edge node 310 (e.g., edge server ES1) is performing object identification and tracking on a video stream 304 captured by a camera 302, but as the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0074] The edge node may detect a resource overload if the receive buffer is full, or if the receive buffer otherwise exceeds a memory utilization threshold (e.g., the percentage of the receive buffer's overall capacity that is currently being used exceeds a threshold). Alternatively, any other metric may also be used to detect when the edge node's resources have become overloaded.)

Zou also teaches: based on the determining, facilitating ([0053]), by the network equipment ([0058] ES2), split processing of the task between the user equipment ([0058] ES1) and one or more other user equipment, one or more edge equipment, one or more cloud-based equipment, or a combination thereof ([0058] ES2). ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.)
Zou likewise teaches the amended limitation: and wherein the facilitating is based on a request sent to the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof to participate in the split processing and a determination that the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has accepted the request. ([0041] As the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0053] Load status information must be collected from all edge nodes. For example, all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load.) Load status is required from each peer, and each peer involved has accepted its role in participating in load balancing.

Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou is silent on a signal transmitted via a transport protocol header, wherein the transport protocol header is included in a first packet of a stream of packets.

Lee teaches, in the same field of endeavor, a streaming data input unit configured to receive a plurality of streaming content groups sent by the streaming server (Abstract). Lee also teaches a signal transmitted via a transport protocol header, wherein the transport protocol header is included in a first packet of a stream of packets (i.e., an RTP Header Extension). ([0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou per Lee to include a signal transmitted via a transport protocol header, wherein the transport protocol header is included in a first packet of a stream of packets. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Zou teaches load status heartbeat messages ([0026]). However, Zou (as modified by Lee) is silent on wherein at least one additional packet of the stream of packets includes an additional signal that is indicative of resource sufficiency of the user equipment.

Yamazaki teaches, in the same field of endeavor, a resource assignment device, a resource assignment method, and a resource assignment program ([0001]). Yamazaki also teaches wherein at least one additional packet of the stream of packets includes an additional signal that is indicative of resource sufficiency of the user equipment. ([0029] The "processing capacity" and the "remaining processing capacity" indicating the remaining resources that can be allocated for processing the flow are stored. FIG. 4: the device information storage unit 122 stores the throughput indicating the processing capacity per unit time as the "processing capacity" or the "remaining processing capacity". [0037] The resource assignment unit 133 performs assignment processing based on various information stored in the traffic information storage unit 121.
[0040][0041] The resource assignment unit 133 may pass to the setting unit 134 information on SCs set in a series of processes by the assignment process. The setting unit 134 transmits, to the processing apparatus 10, information indicating which transmission rate the packet received at the flow identification apparatus 50 is to be transmitted.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Lee) per Yamazaki to include wherein at least one additional packet of the stream of packets includes an additional signal that is indicative of resource sufficiency of the user equipment. It would have been advantageous to include these details, as it would allow the combined system to provide dynamic offloading decisions by monitoring the split processing and updating, within the data stream, the amount of resources that remain.

Zou teaches load status heartbeat messages ([0026]). However, Zou (as modified by Lee & Yamazaki) is silent on determining, by the network equipment, whether at least one equipment of the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has successfully performed an assigned task of the split processing; and assessing, by the network equipment, a penalty against the at least one equipment based on a determination that the at least one equipment has not successfully performed the assigned task.

Dutra teaches, in the same field of endeavor, compliance with subscriber job delivery requirements (Abstract). Dutra also teaches determining, by the network equipment, whether at least one equipment of the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has successfully performed an assigned task of the split processing; and assessing, by the network equipment, a penalty against the at least one equipment based on a determination that the at least one equipment has not successfully performed the assigned task. (Col 5 ln 35-39: SLA clauses are transformed into penalty value-generating parameters, allowing the Policy-based Delivery Processing System to assign a value to the question "how much penalty will the Service Provider incur if the recipient-job being prioritized fails to meet an SLA clause?". Col 13 ln 51-59: The communication between Queue Manager 508 and Policy Manager 510 is one whereby jobs are passed to Policy Manager 510 for penalty assessment, and the penalty value, plus any delay in job delivery start time, is returned. The penalty value assessment algorithm is a mathematical function dependent on values assigned to the SLA attributes described above, as well as the current time, and the relation between the current time and the expiration of any expected delivery time. Col 8 ln 62-67: The present invention also includes a means for identifying successes and failures of meeting SLA guarantees, notifying subscribers of such successes and failures as each recipient-job delivery is disposed of finally, and generating records for billing systems to identify such successes and failures.)
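Dutra's penalty-value assessment, as quoted, is a mathematical function of SLA attributes, the current time, and the expected delivery deadline. A minimal sketch of such a function follows; the lateness rate, the flat failure surcharge, and the function signature are illustrative assumptions, not Dutra's actual algorithm.

```python
def assess_penalty(rate: float, deadline: float, completed: bool, now: float) -> float:
    """Illustrative SLA penalty assessment in the style Dutra describes:
    a function of SLA attributes (here a lateness rate and a failure surcharge),
    the current time, and the expected delivery deadline.

    rate      -- penalty units per second of lateness (from the SLA clause)
    deadline  -- epoch seconds by which delivery was guaranteed
    completed -- whether the assigned task was successfully performed
    now       -- current epoch seconds
    """
    if completed and now <= deadline:
        return 0.0                               # SLA met: no penalty assessed
    lateness = max(0.0, now - deadline)
    surcharge = 0.0 if completed else 100.0      # flat charge for outright failure (assumed)
    return rate * lateness + surcharge

# A failed task, 30 s past the deadline, at 0.5 units/s:
print(assess_penalty(0.5, deadline=1000.0, completed=False, now=1030.0))  # 115.0
```

This mirrors the claims' structure: the penalty is assessed only on a determination that the assigned task was not successfully performed (or was late), and grows with the SLA attributes in play.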
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Lee & Yamazaki) per Dutra to include determining, by the network equipment, whether at least one equipment of the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof has successfully performed an assigned task of the split processing; and assessing, by the network equipment, a penalty against the at least one equipment based on a determination that the at least one equipment has not successfully performed the assigned task. It would have been advantageous to include these details, as it would allow the combined system to provide business incentives for work completion in a timely manner.

Regarding Claim 2: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou (as modified by Yamazaki & Dutra) is silent on wherein offloading of the split processing of the task involves use of a header extension.

Lee teaches wherein offloading of the split processing of the task involves use of a header extension. ([0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140, wherein different source streaming contents have unique identification codes, i.e. source streaming content identifiers, as identification labels. When a source streaming content is added into or removed from the input streaming content group, the streaming data input unit 220 can update the stream input table 270. [0067] The content identifying labels and tables may be transmitted in the RTP header extension and the packet of the Real-time Transport Control Protocol (RTCP), wherein each source streaming content has its unique identifier as the identifying label.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Yamazaki & Dutra) per Lee to include wherein offloading of the split processing of the task involves use of a header extension. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Regarding Claim 3: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 2 as described. Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou (as modified by Yamazaki & Dutra) is silent on wherein the header extension relates divided tasks associated with the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof.

Lee teaches wherein the header extension relates divided tasks associated with the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof. ([0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140, wherein different source streaming contents have unique identification codes, i.e. source streaming content identifiers, as identification labels. When a source streaming content is added into or removed from the input streaming content group, the streaming data input unit 220 can update the stream input table 270. [0067] The content identifying labels and tables may be transmitted in the RTP header extension and the packet of the Real-time Transport Control Protocol (RTCP), wherein each source streaming content has its unique identifier as the identifying label.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Yamazaki & Dutra) per Lee to include wherein the header extension relates divided tasks associated with the one or more other user equipment, the one or more edge equipment, the one or more cloud-based equipment, or the combination thereof. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Regarding Claim 4: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou (as modified by Yamazaki & Dutra) is silent on wherein the receiving comprises receiving a transport protocol header extension from the user equipment.

Lee teaches wherein the receiving comprises receiving a transport protocol header extension (i.e., an RTP Header Extension) from the user equipment. ([0006] The user can use various kinds of network-connected devices 14, such as desktop personal computers (Desktop PCs), notebooks, tablet PCs, mobile phones, as a carrier to watch streaming content.
When the user watches an individual streaming content, the user first establishes a streaming connection with the streaming server 12 via a network 10 through the connection control protocol, such as the Real-Time Streaming Protocol (RTSP). [0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Yamazaki & Dutra) per Lee to include wherein the receiving comprises receiving a transport protocol header extension from the user equipment. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Regarding Claim 5: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches wherein the receiving comprises receiving information indicative of a latency tolerance defined (i.e., low latency desired) for the task executing at the user equipment. ([0063][0084] The overloaded node 310 may deliver the offloaded video segment to the peer node 320 using a fast replication mechanism, which may be designed to achieve low latency using a "zero-copy" implementation that avoids memory copy operations.)

Regarding Claim 7: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches wherein the signal transmitted via the transport protocol packet is a broadcast signal that comprises resource distribution information indicative of a resource distribution applicable to the task executing at the user equipment. ([0026] (ii) A scalable dynamic replication peer selection algorithm based on the real-time load status from all edge compute server nodes, where load status is shared using a common mechanism such as broadcast/multicast of heart-beat messages;)

Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou (as modified by Yamazaki & Dutra) is silent on a signal transmitted via a transport protocol header. Lee teaches sending data via a transport protocol header (i.e., an RTP Header Extension). ([0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Yamazaki & Dutra) per Lee to include sending data via a transport protocol header. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Regarding Claim 8: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches utilizing a transport protocol ([0094][0096]). However, Zou (as modified by Yamazaki & Dutra) is silent on wherein the transport protocol header is a real-time transport protocol header. Lee teaches wherein the transport protocol header is a real-time transport protocol header (i.e., an RTP Header Extension). ([0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.) It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Zou (as modified by Yamazaki & Dutra) per Lee to include wherein the transport protocol header is a real-time transport protocol header. It would have been advantageous to include these details, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status via a transport header extension.

Regarding Claim 9: Zou (as modified by Lee & Yamazaki & Dutra) teaches the invention of claim 1 as described. Zou teaches wherein the user equipment is an internet of things device. ([0090] FIGS. 5-8 illustrate examples of Internet-of-Things (IoT) networks and devices that can be used in accordance with embodiments disclosed herein. For example, the operations and functionality described throughout this disclosure may be embodied by an IoT device or machine in the example form of an electronic processing system.)
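The RTP header-extension mechanism Lee relies on throughout claims 2-8 is the general mechanism standardized in RFC 8285: one-byte extension elements carried in an extension block whose "defined by profile" field is 0xBEDE. A minimal sketch of packing and recovering a small payload — here a hypothetical load-status byte under an arbitrarily chosen extension ID — might look like this; the ID and payload semantics are assumptions for illustration, not from Lee.

```python
import struct

BEDE = 0xBEDE  # "defined by profile" value marking a one-byte-header extension block

def build_extension(elements: dict[int, bytes]) -> bytes:
    """Pack {ext_id: payload} into an RTP one-byte header-extension block (RFC 8285)."""
    body = b""
    for ext_id, data in elements.items():
        assert 1 <= ext_id <= 14 and 1 <= len(data) <= 16
        # Element byte: 4-bit ID in the high nibble, (length - 1) in the low nibble.
        body += bytes([(ext_id << 4) | (len(data) - 1)]) + data
    body += b"\x00" * ((-len(body)) % 4)              # zero-pad to a 32-bit boundary
    return struct.pack("!HH", BEDE, len(body) // 4) + body

def parse_extension(blob: bytes) -> dict[int, bytes]:
    """Inverse of build_extension: recover {ext_id: payload}."""
    pattern, words = struct.unpack("!HH", blob[:4])
    assert pattern == BEDE
    body, out, i = blob[4:4 + 4 * words], {}, 0
    while i < len(body):
        if body[i] == 0:                              # padding byte
            i += 1
            continue
        ext_id, length = body[i] >> 4, (body[i] & 0x0F) + 1
        out[ext_id] = body[i + 1:i + 1 + length]
        i += 1 + length
    return out

# Hypothetical extension ID 5 carrying one byte of load status (87%):
blob = build_extension({5: bytes([87])})
assert parse_extension(blob) == {5: bytes([87])}
```

The design point the Examiner's rationale leans on is visible here: the extension rides inside an ordinary RTP packet stream, so status can be signaled without a separate control channel.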
Regarding Claim 10: Zou teaches A system, comprising: a processor (Fig 9, processor 900); and a memory (Fig 9, memory 902) that stores executable instructions (Fig 9, code 904) that, when executed by the processor, facilitate performance of operations, ([0137] Code 904, which may be one or more instructions to be executed by processor 900, may be stored in memory 902, or may be stored in software, hardware, firmware.) comprising: receiving information indicative of a request for split rendering on behalf of an application executing at a user equipment ([0058] ES1), wherein the information is received via a transport protocol packet; ([0024] edge nodes 110a-c may be implemented with the following capabilities: [0026] (ii) A scalable dynamic replication peer selection algorithm based on the real-time load status from all edge compute server nodes, where load status is shared using common mechanism such as broadcast/multicast of heart-beat messages; [0134] Fig 9, Processor 900 is an example of a type of hardware device that can be used in connection with the embodiments described throughout this disclosure. Transport protocol: [0094] Any number of network protocols and communications standards may be used, wherein each protocol and standard is designed to address specific objectives. [0096] The respective IoT networks may also operate with use of a variety of network and internet application protocols.) based on the information indicative of the request, determining that an available computational resource capacity at the user equipment ([0058] ES1) is insufficient to process the application executing at the user equipment ([0058] ES1); ([0041] FIG. 3 An edge video streaming architecture 300 that leverages dynamic resource rebalancing to avoid dropping frames. 
In the illustrated embodiment, for example, an edge node 310 (e.g., edge server ES1) is performing object identification and tracking on a video stream 304 captured by a camera 302, but as the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0074] The edge node may detect a resource overload if the receive buffer is full, or if the receive buffer otherwise exceeds a memory utilization threshold (e.g., the percentage of the receive buffer's overall capacity that is currently being used exceeds a threshold). Alternatively, any other metric may also be used to detect when the edge node's resources have become overloaded.) based on the determining, enabling a first rendering of a first portion of the application at a first device ([0058] ES2, ES1) and a second rendering of a second portion of the application at a second device ([0058] ESn) different from the first device. ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.) and wherein the enabling involves a second request sent to the second device to participate in the split rendering and a determination that the second device has accepted the second request; ([0041] As the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames.
[0053] Load status information must be collected from all edge nodes. For example, all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load.) Load status is required from each peer, and each peer involved has accepted its role in participating with load balancing. Zou teaches on utilizing a transport protocol ([0094][0096]). However, Zou is silent on a signal transmitted via a transport protocol header extension, wherein the transport protocol header extension is included in a first packet of a stream of packets. Lee teaches on a signal transmitted via a transport protocol header extension and wherein the transport protocol header extension is included in a first packet of a stream of packets (i.e., RTP Header Extension). ([0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.) It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou per Lee to include a signal transmitted via a transport protocol header extension and wherein the transport protocol header extension is included in a first packet of a stream of packets. It would have been advantageous to include these details as discussed above, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status by utilizing a transport header extension. Zou teaches on load status heartbeat messages ([0026]).
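The rebalancing mechanism Zou describes (overload detection per [0074], peer selection from shared load status per [0053] and [0059]) can be pictured with a short sketch. The 0.80 threshold, the load-status field, and the function names are illustrative assumptions, not figures from the reference.

```python
OVERLOAD_THRESHOLD = 0.80  # assumed fraction of receive-buffer capacity

def is_overloaded(buffer_used: int, buffer_capacity: int) -> bool:
    """Zou [0074]-style check: overload when receive-buffer utilization
    exceeds a memory utilization threshold."""
    return buffer_used / buffer_capacity >= OVERLOAD_THRESHOLD

def peer_selection_set(heartbeats: dict[str, float], m: int) -> list[str]:
    """Zou [0059]-style selection: rank peers by the load reported in their
    heartbeat messages and keep the first m as offload candidates, where m
    is a preconfigured load balancing factor."""
    ranked = sorted(heartbeats, key=heartbeats.get)  # least-loaded first
    return ranked[:m]
```

For example, an overloaded ES1 holding heartbeats `{"ES2": 0.35, "ES3": 0.60, "ESn": 0.90}` with m=2 would offload to ES2 or ES3, which is the sense in which each candidate peer's shared status gates its participation.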
However, Zou (as modified by Lee) is silent on wherein at least one additional packet of the stream of packets includes additional information that is indicative of computational resource sufficiency of the user equipment. Yamazaki teaches wherein at least one additional packet of the stream of packets includes additional information that is indicative of computational resource sufficiency of the user equipment. ([0029] The "processing capacity" and the "remaining processing capacity" indicating the remaining resources that can be allocated for processing the flow are stored. FIG. 4, the device information storage unit 122 stores the throughput indicating the processing capacity per unit time as the “processing capacity” or the “remaining processing capacity”. [0037] The resource assignment unit 133 performs assignment processing based on various information stored in the traffic information storage unit 121. [0040][0041] The resource assignment unit 133 may pass to the setting unit 134 information on SCs set in a series of processes by the assignment process. The setting unit 134 transmits, to the processing apparatus 10, information indicating which transmission rate the packet received to the flow identification apparatus 50 is to be transmitted.) It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee) by modifying Zou per Yamazaki to include wherein at least one additional packet of the stream of packets includes additional information that is indicative of computational resource sufficiency of the user equipment. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide dynamic offloading decisions by monitoring the split processing and updating within the data stream the amount of resources that are remaining. Zou teaches on load status heartbeat messages ([0026]). 
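Yamazaki's teaching of subsequent packets carrying a "remaining processing capacity" figure can be sketched as follows. The packet shape, the throughput units, and the sufficiency check are illustrative assumptions; only the notion of re-checking remaining capacity against a task's demand comes from the citation.

```python
from dataclasses import dataclass

@dataclass
class StatusPacket:
    """Hypothetical in-stream status update (shape is an assumption)."""
    seq: int
    remaining_capacity: float  # e.g., throughput units still assignable

def is_sufficient(pkt: StatusPacket, task_demand: float) -> bool:
    """True if the reporting equipment can still absorb the task."""
    return pkt.remaining_capacity >= task_demand

# Each additional packet in the stream updates the picture of resource
# sufficiency, so the offloading decision can be re-evaluated continuously.
stream = [StatusPacket(1, 120.0), StatusPacket(2, 45.0), StatusPacket(3, 5.0)]
decisions = [is_sufficient(p, task_demand=50.0) for p in stream]
```

Here the first update indicates sufficiency and the later ones do not, which is the dynamic-offloading behavior the examiner's motivation statement points to.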
However, Zou (as modified by Lee & Yamazaki) is silent on determining whether the second device has successfully performed an assigned task of the split rendering; and assessing a penalty against the second device based on a determination that the second device has not successfully performed the assigned task. Dutra teaches determining whether the second device has successfully performed an assigned task of the split rendering; and assessing a penalty against the second device based on a determination that the second device has not successfully performed the assigned task. (Col 5 ln 35-39, SLA clauses are transformed into penalty value-generating parameters, allowing the Policy-based Delivery Processing System to assign a value to the question "how much penalty will the Service Provider incur if the recipient-job being prioritized fails to meet an SLA clause?". Col 13 ln 51-59, The communication between Queue Manager 508 and Policy Manager 510 is one whereby jobs are passed to Policy Manager 510 for penalty assessment, and the penalty value, plus any delay in job delivery start time, is returned. The penalty value assessment algorithm is a mathematical function dependent on values assigned to the SLA attributes described above, as well as the current time, and the relation between the current time and the expiration of any expected delivery time. Col 8 ln 62-67, The present invention also includes a means for identifying successes and failures of meeting SLA guarantees, notifying subscribers of such successes and failures as each recipient-job delivery is disposed of finally, and generating records for billing systems to identify such successes and failures.) 
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee & Yamazaki) by modifying Zou per Dutra to include determining whether the second device has successfully performed an assigned task of the split rendering; and assessing a penalty against the second device based on a determination that the second device has not successfully performed the assigned task. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide business incentives for work completion in a timely manner. Regarding Claim 11: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches wherein the first rendering comprises a first execution of the first portion of the application at the first device ([0058] ES1), and wherein the second rendering comprises a second execution of the second portion of the application at the second device ([0058] ES2, ESn). ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load. [0050] (8) Peer edge node 320 (ES2) receives the replicated video segment k+2 from edge node 310 (ES1), and peer edge node 320 (ES2) performs the requisite compute tasks on that video segment (e.g., object identification and tracking) on behalf of edge node 310 (ES1).) Portion is the next video segment as being offloaded. 
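Dutra's penalty assessment is described as a mathematical function of SLA attributes, the current time, and the expected delivery time (Col 13). A purely illustrative linear form, with all parameter names and the lateness formula assumed rather than taken from Dutra, might look like:

```python
def assess_penalty(completed: bool, finish_time: float, deadline: float,
                   penalty_rate: float, base_penalty: float) -> float:
    """Return the penalty assessed against the assigned device: zero when
    the task completed on time, otherwise a base amount plus a rate scaled
    by how late (or incomplete) delivery was. Linear form is an assumption."""
    if completed and finish_time <= deadline:
        return 0.0
    lateness = max(0.0, finish_time - deadline)
    return base_penalty + penalty_rate * lateness
```

A task finished before the deadline incurs nothing, while a late or failed task incurs the base penalty plus the time-scaled component, mirroring the "how much penalty will the Service Provider incur" question the SLA parameters answer.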
Regarding Claim 12: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches wherein the first device is the user equipment ([0058] ES1), and wherein the second device is network equipment ([0058] ES2, ESn). ([0090] FIGS. 5-8 illustrate examples of Internet-of-Things (IoT) networks and devices that can be used in accordance with embodiments disclosed herein. For example, the operations and functionality described throughout this disclosure may be embodied by an IoT device or machine in the example form of an electronic processing system.) Regarding Claim 13: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches wherein the user equipment is first user equipment, wherein the first device is the first user equipment ([0058] ES1), and wherein the second device is a second user equipment ([0058] ES2, ESn). ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.) Regarding Claim 14: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches wherein the first device is edge equipment ([0058] ES1), and wherein the second device is cloud equipment ([0058] ES2, ESn). ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. 
If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0015] The functionality of video streaming system 100 and/or edge nodes 110a-c can be distributed across any combination of devices and components deployed throughout an edge-to-cloud network topology, including at the edge, in the cloud, and/or anywhere in between in the "fog." [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.) Regarding Claim 16: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches wherein the user equipment is classified as an internet of everything device. ([0090] FIGS. 5-8 illustrate examples of Internet-of-Things (IoT) networks and devices that can be used in accordance with embodiments disclosed herein. For example, the operations and functionality described throughout this disclosure may be embodied by an IoT device or machine in the example form of an electronic processing system.) Regarding Claim 17: Zou teaches A non-transitory machine-readable medium (Fig 9, memory 902), comprising executable instructions that, when executed by a processor (Fig 9, processor 900), facilitate performance of operations, ([0137] Code 904, which may be one or more instructions to be executed by processor 900, may be stored in memory 902, or may be stored in software, hardware, firmware.) comprising: receiving an indication, via a real-time transport protocol packet, that a resource-intensive application is to be enabled (i.e., certain video frames) for a user equipment via an elastic computing system; ([0041] FIG. 3 An edge video streaming architecture 300 that leverages dynamic resource rebalancing to avoid dropping frames.
In the illustrated embodiment, for example, an edge node 310 (e.g., edge server ES1) is performing object identification and tracking on a video stream 304 captured by a camera 302, but as the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0074] The edge node may detect a resource overload if the receive buffer is full, or if the receive buffer otherwise exceeds a memory utilization threshold (e.g., the percentage of the receive buffer's overall capacity that is currently being used exceeds a threshold). Alternatively, any other metric may also be used to detect when the edge node's resources have become overloaded.) The indication is that the edge node is becoming overloaded (processing of a resource-intensive application). migrating (i.e., offloading) the resource-intensive application from the user equipment to elastic computing equipment associated with the elastic computing system. ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.)
and wherein the migrating is based on a request sent to the elastic computing equipment to participate in task division and a determination that the elastic computing equipment has accepted the request; ([0041] As the edge node 310 becomes overloaded, it offloads the processing of certain video frames to another peer edge node 320 (e.g., edge server ES2) to avoid dropping the frames. [0053] Load status information must be collected from all edge nodes. For example, all edge compute nodes involved in this collaborative video analytics pipeline must share their system load status to allow overloaded edge nodes to choose optimal peer edge nodes for offloading compute tasks and rebalancing the overall load.) Load status is required from each peer, and each peer involved has accepted its role in participating with load balancing. Zou teaches on utilizing a transport protocol ([0094][0096]). However, Zou is silent on receiving an indication, via a real-time transport protocol header extension and wherein the real-time transport protocol header extension is included in a first packet of a stream of packets. Lee teaches on receiving an indication, via a real-time transport protocol header extension and wherein the real-time transport protocol header extension is included in a first packet of a stream of packets (i.e., RTP Header Extension). ([0041] The receiving or sending of data may be controlled through Real-time Transport Protocol (RTP) and Real-time Transport Control Protocol (RTCP). [0043] The stream input table 270 can be transmitted using the Real-time Transport Protocol header extension (RTP Header Extension) and Real-time Transport Control Protocol (RTCP) packets within the stream of input streaming content group 140.)
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou per Lee to include receiving an indication, via a real-time transport protocol header extension and wherein the real-time transport protocol header extension is included in a first packet of a stream of packets. It would have been advantageous to include these details as discussed above, as it would allow the modified system to provide flexibility for many types of systems/devices by using well-known and widely used protocols to transmit device processing status by utilizing a transport header extension. Zou teaches on load status heartbeat messages ([0026]). However, Zou (as modified by Lee) is silent on wherein at least one additional packet of the stream of packets includes an additional indication of resource sufficiency of the user equipment. Yamazaki teaches wherein at least one additional packet of the stream of packets includes an additional indication of resource sufficiency of the user equipment. ([0029] The "processing capacity" and the "remaining processing capacity" indicating the remaining resources that can be allocated for processing the flow are stored. FIG. 4, the device information storage unit 122 stores the throughput indicating the processing capacity per unit time as the “processing capacity” or the “remaining processing capacity”. [0037] The resource assignment unit 133 performs assignment processing based on various information stored in the traffic information storage unit 121. [0040][0041] The resource assignment unit 133 may pass to the setting unit 134 information on SCs set in a series of processes by the assignment process. The setting unit 134 transmits, to the processing apparatus 10, information indicating which transmission rate the packet received to the flow identification apparatus 50 is to be transmitted.) 
It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee) by modifying Zou per Yamazaki to include wherein at least one additional packet of the stream of packets includes an additional indication of resource sufficiency of the user equipment. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide dynamic offloading decisions by monitoring the split processing and updating within the data stream the amount of resources that are remaining. Zou teaches on load status heartbeat messages ([0026]). However, Zou (as modified by Lee & Yamazaki) is silent on determining whether the elastic computing equipment has successfully performed an assigned task; and assessing a penalty against the elastic computing equipment based on a determination that the elastic computing equipment has not successfully performed the assigned task. Dutra teaches determining whether the elastic computing equipment has successfully performed an assigned task; and assessing a penalty against the elastic computing equipment based on a determination that the elastic computing equipment has not successfully performed the assigned task. (Col 5 ln 35-39, SLA clauses are transformed into penalty value-generating parameters, allowing the Policy-based Delivery Processing System to assign a value to the question "how much penalty will the Service Provider incur if the recipient-job being prioritized fails to meet an SLA clause?". Col 13 ln 51-59, The communication between Queue Manager 508 and Policy Manager 510 is one whereby jobs are passed to Policy Manager 510 for penalty assessment, and the penalty value, plus any delay in job delivery start time, is returned. 
The penalty value assessment algorithm is a mathematical function dependent on values assigned to the SLA attributes described above, as well as the current time, and the relation between the current time and the expiration of any expected delivery time. Col 8 ln 62-67, The present invention also includes a means for identifying successes and failures of meeting SLA guarantees, notifying subscribers of such successes and failures as each recipient-job delivery is disposed of finally, and generating records for billing systems to identify such successes and failures.) It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee & Yamazaki) by modifying Zou per Dutra to include determining whether the elastic computing equipment has successfully performed an assigned task; and assessing a penalty against the elastic computing equipment based on a determination that the elastic computing equipment has not successfully performed the assigned task. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide business incentives for work completion in a timely manner. Regarding Claim 18: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 17 as described. Zou teaches wherein the user equipment is a first user equipment ([0058] ES1), and wherein the elastic computing equipment of the elastic computing system comprises a second user equipment, network equipment, or both the second user equipment and the network equipment ([0058] ES2, ESn). ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. 
If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.) Regarding Claim 19: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 17 as described. Zou teaches wherein the network equipment ([0058] ES1, ES2, ESn) comprises edge equipment and cloud equipment. ([0014] The cameras 102a-c capture video footage of their respective surroundings, and that video footage is then streamed to the edge nodes 110a-c (e.g., via the network switch 120) for further processing. If one of the edge nodes 110a-c becomes overloaded, however, a portion of its video processing workload can be dynamically offloaded to other edge nodes 110a-c to prevent video frames from being dropped. [0015] The functionality of video streaming system 100 and/or edge nodes 110a-c can be distributed across any combination of devices and components deployed throughout an edge-to-cloud network topology, including at the edge, in the cloud, and/or anywhere in between in the "fog." [0053] The peer selection algorithm uses the load status of all available edge nodes to select the appropriate peer node to handle the offloaded processing and rebalance the overall processing load.) Regarding Claim 20: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 17 as described. Zou teaches wherein the migrating comprises enabling distributed cooperative computing for the user equipment. ([0016] FIGS. 2A through 2C are system block diagrams illustrating distributed edge computing systems. [0059] (3) Generate the peer selection set EP as a subset of E', where EP contains the first m elements from E'. 
The value of m is a preconfigured load balancing factor that can be changed at runtime. As an example, if the total number of edge nodes is 10, then m may be set to a value of 4 (e.g., m=4 for n=10). This allows the load to be distributed evenly without potentially overloading other servers.) Claim(s) 6, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2019/0327506 A1 (Zou) in view of US 2013/0173819 A1 (Lee), further in view of JP 2016149630 A (Yamazaki), further in view of US 6917979 B1 (Dutra), and further in view of US 2022/0255988 A1 (Salmasi). Regarding Claim 6: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 1 as described. Zou teaches on receiving information indicative of a task executing at the user equipment ([0041][0074]). However, Zou (as modified by Lee & Yamazaki & Dutra) is silent on wherein the receiving comprises receiving priority information indicative of a priority of the task executing at the user equipment. Salmasi teaches, in the same field of endeavor, an edge computing system configured to dynamically offload tasks from a user device to an edge device, Abstract. Salmasi also teaches wherein the receiving comprises receiving priority information indicative of a priority of the task executing at the user equipment. ([0076] In some embodiments, the edge computing system may be configured to determine whether there are multiple resource requests. In some embodiments, the edge computing system may be configured to determine the order in which each application is able to obtain the required resources in response to determining that there are multiple resource requests. In some embodiments the edge computing system may be configured to restrict one application in favor of another application that has a higher priority either in resource requirements, latency requirements or policy based decisions.
[0088] As such, in various embodiments, the HMD 100 may be configured to perform all processing locally on the processor 120 in the HMD 100, offload all of the main processing to a processor in another computing device, or split the main processing operations between the processor 120 in the HMD 100 and the processor in the other computing device. In some embodiments, the "other" computing device may be a user computing device, an edge device, or a cloud server.) It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee & Yamazaki & Dutra) by modifying Zou per Salmasi to include wherein the receiving comprises receiving priority information indicative of a priority of the task executing at the user equipment. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide load balancing based on application priority allowing for QoS implementation. Regarding Claim 15: Zou (as modified by Lee & Yamazaki & Dutra) teaches on the invention of claim 10 as described. Zou teaches on offloading processing of a video segment ([0014][0053]). However, Zou (as modified by Lee & Yamazaki & Dutra) is silent on wherein the operations further comprise: identifying the first portion with first identification information and the second portion with second identification information, wherein the first identification information and the second identification information enable a mapping between the first portion and the second portion. Salmasi teaches wherein the operations further comprise: identifying the first portion with first identification information and the second portion with second identification information, wherein the first identification information and the second identification information enable a mapping between the first portion and the second portion. 
([0249] in some embodiments, the edge device may send a capabilities message identifying portions of the software application that could be run on the edge device in response to determining that the edge device is capable of running at least one portion of the software application. The capabilities message may include information identifying the specific portions or functions that it can (or cannot) perform. The application controller may use this information to determine whether to assign tasks associated with the XR application to edge device (e.g., based on availability of resources in the edge computing system, etc.) and/or to determine the tasks that are to be assigned to the edge device.) It would have been obvious to a person with ordinary skill in the art before the effective filing date of the claimed invention, to modify Zou (as modified by Lee & Yamazaki & Dutra) by modifying Zou per Salmasi to include wherein the operations further comprise: identifying the first portion with first identification information and the second portion with second identification information, wherein the first identification information and the second identification information enable a mapping between the first portion and the second portion. It would have been advantageous to include these details as discussed above, as it would allow the combined system to provide assurance of proper task division tracking. Conclusion & Contact Information Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL J HACKENBERG whose telephone number is (571)272-5417. The examiner can normally be reached 9am-5pm M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Glenton B Burgess can be reached on (571)272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /RACHEL J HACKENBERG/Primary Examiner, Art Unit 2454

Prosecution Timeline

Sep 13, 2022
Application Filed
May 03, 2025
Non-Final Rejection — §103
Aug 05, 2025
Response Filed
Nov 08, 2025
Final Rejection — §103
Jan 14, 2026
Applicant Interview (Telephonic)
Jan 14, 2026
Examiner Interview Summary
Jan 20, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 06, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587464
FAULT INJECTION CONFIGURATION EQUIVALENCY TESTING
2y 5m to grant Granted Mar 24, 2026
Patent 12580819
DETERMINING SERVICE GROUP CAPACITY BASED ON AN AGGREGATE RISK METRIC
2y 5m to grant Granted Mar 17, 2026
Patent 12500823
SYSTEM AND METHOD FOR ENTERPRISE-WIDE DATA UTILIZATION TRACKING AND RISK REPORTING
2y 5m to grant Granted Dec 16, 2025
Patent 12495001
CAPACITY AWARE LOAD PACKING FOR LAYER-4 LOAD BALANCER
2y 5m to grant Granted Dec 09, 2025
Patent 12470508
RESTRICTING MESSAGE NOTIFICATIONS AND CONVERSATIONS BASED ON DEVICE TYPE, MESSAGE CATEGORY, AND TIME PERIOD
2y 5m to grant Granted Nov 11, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
79%
Grant Probability
99%
With Interview (+26.4%)
2y 10m
Median Time to Grant
High
PTA Risk
Based on 300 resolved cases by this examiner. Grant probability derived from career allow rate.
