Prosecution Insights
Last updated: April 19, 2026
Application No. 17/796,061

METHOD FOR INSTANTIATING A NETWORK SERVICE AND CORRESPONDING APPARATUS

Status: Final Rejection (§103)
Filed: Jul 28, 2022
Examiner: SAMLUK, JESSE PAUL
Art Unit: 2411
Tech Center: 2400 — Computer Networks
Assignee: InterDigital CE Patent Holdings
OA Round: 4 (Final)

Grant Probability: 45% (Moderate)
Projected OA Rounds: 5-6
Projected Time to Grant: 3y 3m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 45% (grants 45% of resolved cases; 23 granted / 51 resolved; -12.9% vs TC avg)
Interview Lift: +47.7% (strong; allow rate of resolved cases with interview vs. without)
Typical Timeline: 3y 3m avg prosecution; 49 currently pending
Career History: 100 total applications across all art units

Statute-Specific Performance

§101: 0.6% (-39.4% vs TC avg)
§103: 69.5% (+29.5% vs TC avg)
§102: 20.4% (-19.6% vs TC avg)
§112: 8.8% (-31.2% vs TC avg)

TC avg = Tech Center average estimate • Based on career data from 51 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 30-31 and 38-39 are rejected under 35 U.S.C. § 103 as being unpatentable over Drew et al. (U.S. Pat. Pub. 2017/0318097), herein referred to as “Drew”, in view of Hyoudou (U.S. Pat. Pub. 2019/0132252), and in view of Koo (U.S. Pat. Pub. 2016/0366014).

Regarding Claim 30, Drew discloses: A method for transmitting packets between virtual network function instances (vnf instances) of a computing node comprising the vnf instances, the vnf instances being interconnected

[0104] Mixed integer programming solution 796 depicted in FIG. 7 may depict a mixed integer programming solution 796 for β=0, for which the objective constitutes reducing node resource utilization over all the switches (e.g., core switch 704 and aggregation switched 706-1 . . . 706-N) and VNFs 712-1 . . . 712-N. Mixed integer programming solution 796 may utilize all eight servers 710-1 . . . 710-N to host the VNFs 712-1 . . . 712-N. The service chains 714-1 . . . 714-N may be split across multiple paths in the network infrastructure, in order to distribute traffic evenly.
The highest computational resource usage over the network infrastructure is approximately sixty-three percent.

Note: The VNFs are connected to aggregation switches, which are connected to a core switch. These switches are “nodes”, and the service chains are “links”.

splitting chaining information relative to the vnf instances

[0104] Mixed integer programming solution 796 depicted in FIG. 7 may depict a mixed integer programming solution 796 for β=0, for which the objective constitutes reducing node resource utilization over all the switches (e.g., core switch 704 and aggregation switched 706-1 . . . 706-N) and VNFs 712-1 . . . 712-N. Mixed integer programming solution 796 may utilize all eight servers 710-1 . . . 710-N to host the VNFs 712-1 . . . 712-N. The service chains 714-1 . . . 714-N may be split across multiple paths in the network infrastructure, in order to distribute traffic evenly. The highest computational resource usage over the network infrastructure is approximately sixty-three percent.

the routing information comprising information for forwarding, by each vnf instance, packets output by the respective vnf instance based on identifier information in the packets

[0013] As used herein, a service chain may include a sequence of actions such as requests for execution of a task. The sequence of actions may be a sequence of actions specified in a request, for example, from a network user or administrator. The actions may be specified in a stream of data packets traversing the network infrastructure. The stream of data packets may include units of data utilized in Internet Protocols (IP) transmissions for data navigating a network. The actions associated with the stream of data packets may correspond to or be accomplished through corresponding network functions. The service chains may specify an order in which the actions will be performed.
Further, the service chains may specify the order in which a corresponding series of functions will be utilized to execute the specified order of actions. As such, the service chain may specify a sequence and/or order of VNFs to be visited by the network traffic for the given service chain and/or a plurality of service chains.

[0079] In a software application utilizing the Transmission Control Protocol (TCP) for transferring network packets, TCP may ensure that all packets of a service chain will arrive at a destination. If any packet is dropped during transmission, TCP may resend the packets from the source until the packet reaches the destination. The expected latency computation may factor in a packet's expected queuing delay at each node included in the VNF placement as well as extra time incurred due to resent packets. E(T_{1→n}) may represent the expected latency for a packet to visit the sequence of nodes as {1, 2, . . . , n} in N_c, for n=1, 2, . . . , |N_c|.

Note: Paragraph [0013] is being used to demonstrate what is contained in a packet (units of data in IP transmissions). TCP is then used to ensure all packets are transmitted from the source until the packet reaches each node in the VNF path, of which there are a plurality of VNFs in the reference.

and transmitting, by each vnf instance, information in the packet and according to the routing information received by the respective vnf instance

[0013] As used herein, a service chain may include a sequence of actions such as requests for execution of a task. The sequence of actions may be a sequence of actions specified in a request, for example, from a network user or administrator. The actions may be specified in a stream of data packets traversing the network infrastructure. The stream of data packets may include units of data utilized in Internet Protocols (IP) transmissions for data navigating a network. The actions associated with the stream of data packets may correspond to or be accomplished through corresponding network functions. The service chains may specify an order in which the actions will be performed. Further, the service chains may specify the order in which a corresponding series of functions will be utilized to execute the specified order of actions. As such, the service chain may specify a sequence and/or order of VNFs to be visited by the network traffic for the given service chain and/or a plurality of service chains.

[0079] In a software application utilizing the Transmission Control Protocol (TCP) for transferring network packets, TCP may ensure that all packets of a service chain will arrive at a destination. If any packet is dropped during transmission, TCP may resend the packets from the source until the packet reaches the destination. The expected latency computation may factor in a packet's expected queuing delay at each node included in the VNF placement as well as extra time incurred due to resent packets. E(T_{1→n}) may represent the expected latency for a packet to visit the sequence of nodes as {1, 2, . . . , n} in N_c, for n=1, 2, . . . , |N_c|.

Note: Paragraph [0013] is being used to demonstrate what is contained in a packet (units of data in IP transmissions). TCP is then used to ensure all packets are transmitted from the source until the packet reaches each node in the VNF path/service chain, of which there are a plurality of VNFs in the reference. Since there are service chains in the reference, each packet is transmitted through a chain that connects the VNFs.

Drew does not disclose vnf instances being interconnected inside the computing node, communication links in the computing node, and the vnf instances in the computing node forming a chain in the computing node.
However, Hyoudou discloses a computing node comprising the vnf instances, vnf instances being interconnected inside the computing node, communication links in the computing node, and the vnf instances in the computing node forming a chain in the computing node.

[0031] First, a description will be given of a transmission suppression instruction by an NFV device according to the embodiment. FIG. 1 is an explanatory diagram of a transmission suppression instruction by an NFV device according to an embodiment. In FIG. 1, an NFV device 1 is an information processing apparatus in which three VNFs 30 denoted by VNF#1 to VNF#3 and a virtual switch 20 operate.

[0032] The NFV device 1 includes two physical ports 10 denoted by pP#1 and pP#2. The virtual switch 20 includes eight virtual ports 21 denoted by vP#1 to vP#8. The VNF 30 includes two virtual network interface cards 31 denoted by vNIC#1 and vNIC#2. NW-A and NW-B are external networks 2.

Drew and Hyoudou are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew to include the concept of having VNFs contained and interconnected within a computing node as taught by Hyoudou so as to minimize latency between interconnected VNFs.

Drew does not disclose transmitting the routing information for each vnf instance to a corresponding vnf instance. However, Koo discloses transmitting the routing information for each vnf instance to a corresponding vnf instance.

[0081] Referring to FIG. 6, the MANO located at a KT domain “kt” provides a descriptor VNFCD #10 for a peer VNF instance Z to a MANO located at a SKT domain “skt”, provides descriptors VNFCD #50 and VNFCD #71 to a peer VNF instance X in the KT domain, and provides descriptors VNFCD #40 and VNFCD #71 to a peer VNF instance Y in the same domain. In addition, in order to allow the peer VNF instances X, Y, Z to access shared VNF components in the VNF instances A, C and D, the MANO in the KT domain transmits an update command for routing information and other access authorities to the firewall, the switch and the VNF instances A, C and D.

Drew and Koo are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew to include the concept of transmitting the routing information for each vnf instance to a corresponding vnf instance as taught by Koo so as to minimize latency between interconnected VNFs.

Regarding Claim 31, Drew discloses: The method according to claim 30, wherein the chaining information relative to the vnf instances in the

[0020] FIG. 1 illustrates an example of a VNF placement scenario 100. The VNF placement scenario 100 includes a physical network topology 102. The physical network topology 102 may include a graph of the physical infrastructure of a network including a core switch 104 root node, a plurality of aggregation switches 106-1 . . . 106-N, a plurality of servers 108-1 . . . 108-N, and a plurality of top-of-rack (TOR) switches 110-1 . . . 110-N.

[0029] The initial VNF mapping may include a mapping of the routing of service chain traffic data flows through the physical infrastructure of the network. The initial VNF mapping may also include a mapping of the placement of corresponding VNFs, determined from the service chains, to the physical infrastructure of the network.

Interpretation: Any chain between VNFs is a graph. The graph concept is illustrated by way of the service chains, of which there are many. Paragraph [0029] highlights the routing information (“routing of service chain traffic”).

Drew does not disclose the vnf instances in the computing node forming a chain in the computing node via the communication links in the computing node. However, Hyoudou discloses the vnf instances in the computing node forming a chain in the computing node via the communication links in the computing node.

[0031] First, a description will be given of a transmission suppression instruction by an NFV device according to the embodiment. FIG. 1 is an explanatory diagram of a transmission suppression instruction by an NFV device according to an embodiment. In FIG. 1, an NFV device 1 is an information processing apparatus in which three VNFs 30 denoted by VNF#1 to VNF#3 and a virtual switch 20 operate.

[0032] The NFV device 1 includes two physical ports 10 denoted by pP#1 and pP#2. The virtual switch 20 includes eight virtual ports 21 denoted by vP#1 to vP#8. The VNF 30 includes two virtual network interface cards 31 denoted by vNIC#1 and vNIC#2. NW-A and NW-B are external networks 2.

Drew and Hyoudou are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew to include the concept of having VNFs contained and interconnected within a computing node as taught by Hyoudou so as to minimize latency between interconnected VNFs.

Regarding Claim 38, Claim 38 is rejected on the same grounds of rejection set forth in claim 30. Drew discloses: A device for transmitting packets between virtual network function instances (vnf instances) of a computing node comprising the vnf instances, the vnf instances being interconnected

[0104] Mixed integer programming solution 796 depicted in FIG. 7 may depict a mixed integer programming solution 796 for β=0, for which the objective constitutes reducing node resource utilization over all the switches (e.g., core switch 704 and aggregation switched 706-1 . . . 706-N) and VNFs 712-1 . . . 712-N. Mixed integer programming solution 796 may utilize all eight servers 710-1 . . . 710-N to host the VNFs 712-1 . . . 712-N. The service chains 714-1 . . . 714-N may be split across multiple paths in the network infrastructure, in order to distribute traffic evenly. The highest computational resource usage over the network infrastructure is approximately sixty-three percent.

Note: The VNFs are connected to aggregation switches, which are connected to a core switch. These switches are “nodes”, and the service chains are “links”.

split chaining information relative to the vnf instances in the

[0104] Mixed integer programming solution 796 depicted in FIG. 7 may depict a mixed integer programming solution 796 for β=0, for which the objective constitutes reducing node resource utilization over all the switches (e.g., core switch 704 and aggregation switched 706-1 . . . 706-N) and VNFs 712-1 . . . 712-N. Mixed integer programming solution 796 may utilize all eight servers 710-1 . . . 710-N to host the VNFs 712-1 . . . 712-N. The service chains 714-1 . . . 714-N may be split across multiple paths in the network infrastructure, in order to distribute traffic evenly. The highest computational resource usage over the network infrastructure is approximately sixty-three percent.

the routing information comprising information for forwarding, by each vnf instance, packets output by the respective vnf instance based on identifier information in the packets

[0013] As used herein, a service chain may include a sequence of actions such as requests for execution of a task. The sequence of actions may be a sequence of actions specified in a request, for example, from a network user or administrator. The actions may be specified in a stream of data packets traversing the network infrastructure. The stream of data packets may include units of data utilized in Internet Protocols (IP) transmissions for data navigating a network.
The actions associated with the stream of data packets may correspond to or be accomplished through corresponding network functions. The service chains may specify an order in which the actions will be performed. Further, the service chains may specify the order in which a corresponding series of functions will be utilized to execute the specified order of actions. As such, the service chain may specify a sequence and/or order of VNFs to be visited by the network traffic for the given service chain and/or a plurality of service chains.

[0079] In a software application utilizing the Transmission Control Protocol (TCP) for transferring network packets, TCP may ensure that all packets of a service chain will arrive at a destination. If any packet is dropped during transmission, TCP may resend the packets from the source until the packet reaches the destination. The expected latency computation may factor in a packet's expected queuing delay at each node included in the VNF placement as well as extra time incurred due to resent packets. E(T_{1→n}) may represent the expected latency for a packet to visit the sequence of nodes as {1, 2, . . . , n} in N_c, for n=1, 2, . . . , |N_c|.

Note: Paragraph [0013] is being used to demonstrate what is contained in a packet (units of data in IP transmissions). TCP is then used to ensure all packets are transmitted from the source until the packet reaches each node in the VNF path, of which there are a plurality of VNFs in the reference.

and transmit, by each vnf instance

[0013] As used herein, a service chain may include a sequence of actions such as requests for execution of a task. The sequence of actions may be a sequence of actions specified in a request, for example, from a network user or administrator. The actions may be specified in a stream of data packets traversing the network infrastructure. The stream of data packets may include units of data utilized in Internet Protocols (IP) transmissions for data navigating a network. The actions associated with the stream of data packets may correspond to or be accomplished through corresponding network functions. The service chains may specify an order in which the actions will be performed. Further, the service chains may specify the order in which a corresponding series of functions will be utilized to execute the specified order of actions. As such, the service chain may specify a sequence and/or order of VNFs to be visited by the network traffic for the given service chain and/or a plurality of service chains.

[0079] In a software application utilizing the Transmission Control Protocol (TCP) for transferring network packets, TCP may ensure that all packets of a service chain will arrive at a destination. If any packet is dropped during transmission, TCP may resend the packets from the source until the packet reaches the destination. The expected latency computation may factor in a packet's expected queuing delay at each node included in the VNF placement as well as extra time incurred due to resent packets. E(T_{1→n}) may represent the expected latency for a packet to visit the sequence of nodes as {1, 2, . . . , n} in N_c, for n=1, 2, . . . , |N_c|.

Note: Paragraph [0013] is being used to demonstrate what is contained in a packet (units of data in IP transmissions). TCP is then used to ensure all packets are transmitted from the source until the packet reaches each node in the VNF path/service chain, of which there are a plurality of VNFs in the reference. Since there are service chains in the reference, each packet is transmitted through a chain that connects the VNFs.

Drew does not disclose vnf instances being interconnected inside the computing node, communication links in the computing node, and the vnf instances in the computing node forming a chain in the computing node.
However, Hyoudou discloses a computing node comprising the vnf instances, vnf instances being interconnected inside the computing node, communication links in the computing node, and the vnf instances in the computing node forming a chain in the computing node.

[0031] First, a description will be given of a transmission suppression instruction by an NFV device according to the embodiment. FIG. 1 is an explanatory diagram of a transmission suppression instruction by an NFV device according to an embodiment. In FIG. 1, an NFV device 1 is an information processing apparatus in which three VNFs 30 denoted by VNF#1 to VNF#3 and a virtual switch 20 operate.

[0032] The NFV device 1 includes two physical ports 10 denoted by pP#1 and pP#2. The virtual switch 20 includes eight virtual ports 21 denoted by vP#1 to vP#8. The VNF 30 includes two virtual network interface cards 31 denoted by vNIC#1 and vNIC#2. NW-A and NW-B are external networks 2.

Drew and Hyoudou are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew to include the concept of having VNFs contained and interconnected within a computing node as taught by Hyoudou so as to minimize latency between interconnected VNFs.

Drew does not disclose transmitting the routing information for each vnf instance to a corresponding vnf instance. However, Koo discloses transmitting the routing information for each vnf instance to a corresponding vnf instance.

[0081] Referring to FIG. 6, the MANO located at a KT domain “kt” provides a descriptor VNFCD #10 for a peer VNF instance Z to a MANO located at a SKT domain “skt”, provides descriptors VNFCD #50 and VNFCD #71 to a peer VNF instance X in the KT domain, and provides descriptors VNFCD #40 and VNFCD #71 to a peer VNF instance Y in the same domain.
In addition, in order to allow the peer VNF instances X, Y, Z to access shared VNF components in the VNF instances A, C and D, the MANO in the KT domain transmits an update command for routing information and other access authorities to the firewall, the switch and the VNF instances A, C and D.

Drew and Koo are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew to include the concept of transmitting the routing information for each vnf instance to a corresponding vnf instance as taught by Koo so as to minimize latency between interconnected VNFs.

Regarding Claim 39, Claim 39 is rejected on the same grounds of rejection set forth in claim 31.

Claims 32-36 and 40-43 are rejected under 35 U.S.C. § 103 as being unpatentable over Drew in view of Hyoudou and Koo, and further in view of Li and Liang (U.S. Pat. Pub. 2022/0109633), herein referred to as “Li.”

Regarding Claim 32, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 32. However, Li discloses: The method according to claim 30, wherein the method is implemented by at least one network entity corresponding to at least one vnf instance implementing a session management function, the at least one vnf instance corresponding to a control plane network entity.

[0054] Shown as traffic flow 212, the requirements or requests (or configuration information or parameters) from AF 202 (respectively, the OAM 210) may indicate that the traffic from the UE 102 should be routed through a service function chain 216 (e.g. deployed locally in one or multiple local instances of the DN 242) and the Final Destination 218 (e.g. an application server in the DN 242 or a UE). If the control plane function receiving the requirements or requests from the AF 202 or OAM 210 is a PCF 206, the control plane function (i.e.
the PCF 206) may generate policies based on the requirement or request and provide 232 the polices to an SMF 204 that is serving a PDU Session associated to the traffic flow 212 or 214 or carrying the traffic flow 212 or 214. Per the polices received from the PCF 206, or per the configuration information/parameters from OAM 210, the SMF 204 may configure UPF(s) accordingly by sending 234 configuration parameters or rules to the UPF(s), such as attributes in N4 forwarding action rules and packet detection rules or simply sending the forwarding action rules and/or the packet detection rules to the UPF(s). The configuration parameters or rules may be generated by the SMF 204 based on polices received from the PCF 206 or the configuration information/parameters from OAM 210.

Interpretation: The system (OAM) indicates to a network service function chain certain requirements. These requirements are based on the SMF and control plane function.

Drew in view of Hyoudou, Koo, and Li are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having an SMF and control plane function as taught by Li so as to provide different routing options (service function chain, traffic steering, and application server discovery) (paragraph [0029]).

Regarding Claim 33, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 33. However, Li discloses: The method according to claim 30, wherein the method is implemented by at least one network entity corresponding to at least one vnf instance instantiated in user plane functions, in a local data network or in a data network in one of at least one wireless transmit-receive unit and at least one application server.

[0004] The service function chain may be deployed locally close to the edge of the network (e.g.
in one or multiple local instances of the data network (DN)) while the final destination may be located in the DN or may be a UE. When routing traffic through the service function chain, e.g. from a first DPF to a second DPF, the traffic may have to be routed via user plane functions (UPFs) in the 3GPP system, such that the traffic may be routed from the first DPF to a UPF and then to the second DPF and so on.

Interpretation: The function chain can route traffic by way of a UPF in a local network (local instance of a DN).

Drew in view of Hyoudou, Koo, and Li are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having a UPF and a local data network as taught by Li so as to provide different routing options (service function chain, traffic steering, and application server discovery) (paragraph [0029]).

Regarding Claim 34, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 34. However, Li discloses: The method according to claim 30, wherein the routing information is configured by a session management function, onto user plane functions, using packet detection rules and forward action rules.

[0012] Another aspect of the disclosure provides for a method of steering traffic of at least one packet data unit (PDU), by at least one user plane function (UPF). The method includes receiving rules, from a session management function (SMF), the rules including at least one packet detection rule (PDR) and at least one forwarding action rule (FAR); the at least one PDR indicating that the at least one PDU is from a data processing function (DPF) of a service function chain. The method further includes receiving the at least one PDU from the first DPF of a service function chain. The method further includes detecting according to the PDRs that the at least one PDU is from a first DPF of a service function chain. The method further includes sending the at least one PDU according to the at least one FAR. In some embodiments the step of sending the at least one PDU according to the at least one FAR includes sending the at least one PDU to a second DPF of the service function chain.

Interpretation: Steering traffic of the PDU (routing information) is configured by the SMF onto the UPF, which involves the PDR and FAR.

Drew in view of Hyoudou, Koo, and Li are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having an SMF, UPF, PDR, and FAR as taught by Li so as to provide different routing options (service function chain, traffic steering, and application server discovery) (paragraph [0029]).

Regarding Claim 35, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 35. However, Li discloses: The method according to claim 30, wherein the routing information is configured by a session management function, onto wireless transmit-receive units, using protocol configuration options and/or quality of service profiles, transmitted using non-access stratum messages, being any of a protocol data unit session establishment and a protocol data unit session modification command.

[0219] The AS change notification message may be sent from the SMF to the UE in the form of a NAS message, e.g. an SMF NAS message, or by being included in a NAS message sent to the UE. In this case, the NAS message is sent from the SMF to the AMF and then forwarded by the AMF to the UE via the RAN node serving the UE. In some embodiments, when sending the NAS message to the AMF, the SMF may send information associated to the NAS message to the AMF, e.g. information identifying the related PDU Session (such as PDU Session ID) and/or information indicating the purpose of the NAS message (such as an indication indicating an AS change or AS IP address change or includes a message notifying AS change or AS IP address change). The AMF may store the associated information locally, e.g. as part of the context of the UE. The AS change notification message may, alternatively, be sent via Short Message Service (SMS). In this case, the SMF may send the AS change notification message to the SMSF (short message service function) serving the UE, and then the SMSF may transfer the AS change notification message to the UE via a short message, e.g. the AS change notification message being included in the short message sent from the SMSF to the UE. The short message may include the AS change notification message or include information included in (or associated with) the AS change notification message.

Interpretation: The PDU session (which contains IP data/routing information) is sent (configured) by the SMF. In this case the PDU session contains information (options) transmitted by way of a NAS message.

Drew in view of Hyoudou, Koo, and Li are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having an SMF transmit using a NAS message as taught by Li so as to provide different routing options (service function chain, traffic steering, and application server discovery) (paragraph [0029]).

Regarding Claim 36, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 36.
However, Li discloses: The method according to claim 30, wherein the routing information is configured by a session management function, onto application servers, at a local data network, or a data network.

[0054] Shown as traffic flow 212, the requirements or requests (or configuration information or parameters) from AF 202 (respectively, the OAM 210) may indicate that the traffic from the UE 102 should be routed through a service function chain 216 (e.g. deployed locally in one or multiple local instances of the DN 242) and the Final Destination 218 (e.g. an application server in the DN 242 or a UE). If the control plane function receiving the requirements or requests from the AF 202 or OAM 210 is a PCF 206, the control plane function (i.e. the PCF 206) may generate policies based on the requirement or request and provide 232 the polices to an SMF 204 that is serving a PDU Session associated to the traffic flow 212 or 214 or carrying the traffic flow 212 or 214. Per the polices received from the PCF 206, or per the configuration information/parameters from OAM 210, the SMF 204 may configure UPF(s) accordingly by sending 234 configuration parameters or rules to the UPF(s), such as attributes in N4 forwarding action rules and packet detection rules or simply sending the forwarding action rules and/or the packet detection rules to the UPF(s). The configuration parameters or rules may be generated by the SMF 204 based on polices received from the PCF 206 or the configuration information/parameters from OAM 210.

Interpretation: The system (OAM) indicates to a network service function chain certain requirements. These requirements are based on the SMF and part of a local data network.

Drew in view of Hyoudou, Koo, and Li are considered to be analogous because they pertain to communication networks.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having an SMF and a local data network as taught by Li so as to provide different routing options (service function chain, traffic steering, and application server discovery) (paragraph [0029]).

Regarding Claim 40, Claim 40 is rejected on the same grounds of rejection set forth in claim 32. Regarding Claim 41, Claim 41 is rejected on the same grounds of rejection set forth in claim 33. Regarding Claim 42, Claim 42 is rejected on the same grounds of rejection set forth in claim 34. Regarding Claim 43, Claim 43 is rejected on the same grounds of rejection set forth in claim 35.

Claims 37 and 44 are rejected under 35 U.S.C. § 103 as being unpatentable over Drew in view of Hyoudou and Koo, and further in view of Drake et al. (U.S. Pat. Pub. 2018/0091420), herein referred to as “Drake.”

Regarding Claim 37, Drew in view of Hyoudou and Koo does not fully disclose all the limitations of Claim 37. However, Drake discloses: The method according to claim 30, wherein one of said vnf instances implements a packet classifier function, the packet classifier function inserting, into packets input into the network service, the identifier information, based on a packet property.

[0167] Computing device 200 may receive a packet. In some cases, the packet has already been classified to a service function chain. In some cases, a service function instance hosted by computing device 200 classifies the packet to the service function chain. Computing device 200 determines the packet is classified to the service function chain (706). For example, the computing device 200 may determine a service path identifier of a network service header matches the service path identifier in the service function chain route.
Because the packet is classified to the service function chain, computing device 200 may use the service function chain route that defines the service function chain to determine the next service function instance to process the packet. Computing device 200 may determine that a service function item in the service function chain route specifies the service function type and the service identifier that, in combination, identify the next service function instance according to the received service function instance route (708). Computing device 200 therefore responsively sends the packet to a computing device that hosts the next service function instance (710).

Interpretation: The service function (VNF per paragraph [0034]) classifies the packet into the chain. The packet's header, along with the path identifier, assists in processing the packet.

Drew in view of Hyoudou, Koo, and Drake are considered to be analogous because they pertain to communication networks. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Drew in view of Hyoudou and Koo to include the concept of having a packet classifier function as taught by Drake so as to simplify configuring or modifying a service function chain (paragraph [0007]).

Regarding Claim 44, Claim 44 is rejected on the same grounds of rejection set forth in claim 37.

Response to Arguments

Applicant's response filed on December 5, 2025 is acknowledged. There are no amended claims, no new claims, and no canceled claims. Claims 30-44 are pending. Applicant's arguments with respect to claims 30 and 38 have been fully considered but are unpersuasive.

First, in response to applicant's argument that the examiner's conclusion of obviousness is based upon improper hindsight reasoning, it must be recognized that any judgment on obviousness is in a sense necessarily a reconstruction based upon hindsight reasoning.
But so long as it takes into account only knowledge which was within the level of ordinary skill at the time the claimed invention was made, and does not include knowledge gleaned only from the applicant's disclosure, such a reconstruction is proper. See In re McLaughlin, 443 F.2d 1392, 170 USPQ 209 (CCPA 1971).

At particular issue is whether Drew in combination with Hyoudou “teaches away” from the claim scope. Hyoudou is introduced to disclose interconnected vnf instances forming a chain inside a computing node, which is interpreted broadly as the NFV device. Applicant takes the stance that, due to Drew's load balancing, Hyoudou conflicts with the proposed combination. Id. at 8. However, Hyoudou is brought in as a combinable reference to show the features of the stated claim rather than distribution. Thus, this argument is without merit.

Applicant also takes issue with the Koo reference, arguing that because it is a “top-down update from a central controller” it does not read on the claims. Id. at 9. The update command is transmitted for the purpose of routing information for VNF instances, and therefore reads on the claim.

Second, Applicant contends that the concept of the invention is the “splitting of chaining . . . information into routing information per each vnf instance.” Id. at 9. Applicant further argues that the splitting action creates elementary graphs. However, this specific definition is not present in claim 30. As a suggestion to the Applicant, this feature should be stated in the claim.

Conclusion

THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JESSE P. SAMLUK whose telephone number is (571)270-5607. The examiner can normally be reached M-F 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Derrick Ferris, can be reached on 571-272-3123. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JESSE P. SAMLUK/
Examiner, Art Unit 2411

/DERRICK W FERRIS/
Supervisory Patent Examiner, Art Unit 2411
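For readers parsing the Drake mechanism the examiner relies on for claims 37 and 44 (classification step 706, next-instance lookup step 708), the logic can be sketched roughly as follows. This is an illustrative sketch only: the names (ServiceFunctionItem, Packet, next_service_function, chain_route) are hypothetical and appear in none of the cited references, which describe the mechanism only in prose.

```python
# Illustrative sketch of NSH-style service-chain classification, loosely
# following Drake [0167]. All identifiers here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ServiceFunctionItem:
    sf_type: str        # service function type, e.g. "firewall"
    sf_identifier: int  # selects a concrete instance of that type

@dataclass
class Packet:
    service_path_id: int  # carried in the network service header
    service_index: int    # current position within the chain
    payload: bytes = b""

def next_service_function(packet: Packet,
                          chain_route: dict) -> Optional[ServiceFunctionItem]:
    """Return the next service function instance for a classified packet,
    or None if the packet is not on this chain or the chain is exhausted."""
    # Step (706): the packet belongs to the chain only when the service
    # path identifier in its header matches the chain route's identifier.
    if packet.service_path_id != chain_route["service_path_id"]:
        return None
    items = chain_route["items"]
    if packet.service_index >= len(items):
        return None  # chain fully traversed
    # Step (708): the (type, identifier) pair at this position identifies
    # the next service function instance to receive the packet (710).
    return items[packet.service_index]
```

Under this sketch, a packet whose header carries a non-matching path identifier is simply not claimed by the chain, which is the "determines the packet is classified" check the examiner maps onto the packet classifier limitation.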

Prosecution Timeline

Jul 28, 2022
Application Filed
Nov 20, 2024
Non-Final Rejection — §103
Feb 26, 2025
Response Filed
May 29, 2025
Final Rejection — §103
Aug 20, 2025
Request for Continued Examination
Aug 26, 2025
Response after Non-Final Action
Sep 03, 2025
Non-Final Rejection — §103
Dec 05, 2025
Response Filed
Feb 13, 2026
Final Rejection — §103
Mar 20, 2026
Interview Requested
Apr 02, 2026
Examiner Interview Summary
Apr 02, 2026
Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12513738
PREAMBLE DETECTION DURING A RANDOM ACCESS PROCEDURE
2y 5m to grant Granted Dec 30, 2025
Patent 12464525
TRANSMITTING METHOD AND RECEIVING METHOD FOR CONTROL INFORMATION, USER EQUIPMENT AND BASE STATION
2y 5m to grant Granted Nov 04, 2025
Patent 12375389
SAFETY NET ENGINE FOR MACHINE LEARNING-BASED NETWORK AUTOMATION
2y 5m to grant Granted Jul 29, 2025
Patent 12376156
METHODS AND APPARATUSES FOR A RANDOM ACCESS CHANNEL (RACH) STRUCTURE
2y 5m to grant Granted Jul 29, 2025
Patent 12231971
USER EQUIPMENT AND BASE STATION INVOLVED IN A HANDOVER
2y 5m to grant Granted Feb 18, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
45%
Grant Probability
93%
With Interview (+47.7%)
3y 3m
Median Time to Grant
High
PTA Risk
Based on 51 resolved cases by this examiner. Grant probability derived from career allow rate.
