Prosecution Insights
Last updated: April 19, 2026
Application No. 17/517,080

COMPUTING DEVICE WITH ETHERNET CONNECTIVITY FOR VIRTUAL MACHINES ON SEVERAL SYSTEMS ON A CHIP

Status: Final Rejection under §103
Filed: Nov 02, 2021
Examiner: CHU JOY, JORGE A
Art Unit: 2195
Tech Center: 2100 — Computer Architecture & Software
Assignee: Elektrobit Automotive GmbH
OA Round: 4 (Final)

Grant Probability: 77% (Favorable)
OA Rounds: 5-6
To Grant: 3y 1m
With Interview: 99%
Examiner Intelligence

Career Allow Rate: 77% — above average (314 granted / 408 resolved; +22.0% vs TC avg)
Interview Lift: +37.3% — strong lift among resolved cases with an interview vs without
Typical Timeline: 3y 1m average prosecution; 41 currently pending
Career History: 449 total applications across all art units

Statute-Specific Performance

§101: 11.0% (-29.0% vs TC avg)
§103: 55.3% (+15.3% vs TC avg)
§102: 3.2% (-36.8% vs TC avg)
§112: 19.6% (-20.4% vs TC avg)

Based on career data from 408 resolved cases; comparisons are against Tech Center average estimates.

Office Action

DETAILED ACTION

Claims 1, 4, 6-15, 18, and 20-28 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 U.S.C. § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 6-14 are rejected under 35 U.S.C. 103 as being unpatentable over Sanzgiri et al. (US 2013/0074066 A1) in view of Nainar et al. (US 2017/0279712 A1), in view of Pandey (US 2010/0232443 A1), and in further view of Kothari et al. (US 9,654,396 B2). Sanzgiri, Nainar, Pandey and Kothari were cited in the previous Office Action.

Regarding claim 1, Sanzgiri teaches the invention substantially as claimed, including a computing device comprising two or more systems on a chip (Fig. 3, host modules 310 and 320; [0029] systems on a chip (SOCs)), each system on a chip comprising one or more virtual machines (Fig. 3, VMs 350(1..m) and 360(1..n) from host modules 310 and 320), wherein one system on a chip provides a connection to an Ethernet network (Fig. 1; [0015] local area network; [0022] Virtual Ethernet Module (VEM)), wherein each system on a chip comprises an instance of a virtual switch (Fig. 3, virtual switches 330 and 340), and to provide a virtual Ethernet link to each virtual machine of the respective system on a chip ([0022] The virtual switches 280 and 285 manage any interfaces needed for the VMs. In one example, the virtual switches 280 and 285 may be a software-based Virtual Ethernet Module (VEM) which runs in conjunction with the hypervisor to provide VM services, e.g., switching operations, Quality of Service (QoS) functions, as well as security and monitoring functions.).

Sanzgiri does not explicitly teach a distributed virtual switch, wherein the virtual machines are connected via a virtual Ethernet link, wherein each instance of a distributed virtual switch is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip, wherein the switch is a PCIe switch with or without a non-transparent bridge functionality, and wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch.

However, Nainar teaches a distributed virtual switch, wherein each instance of a distributed virtual switch is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip ([0048] FIG. 4 is a simplified block diagram illustrating a communication system 400 for distributed service chaining in a network environment according to one or more examples of the present Specification. FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.; [0068]; [0097] SoC).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Nainar with the teachings of Sanzgiri to utilize a distributed virtual switch to allow a VM to communicate through a network. The modification would have been motivated by the desire of combining known elements to yield predictable results.

Neither Sanzgiri nor Nainar explicitly teaches wherein the virtual machines are connected via a virtual Ethernet link, wherein the switch is a PCIe switch with or without a non-transparent bridge functionality, and wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch.
However, Pandey teaches wherein the virtual machines are connected via a virtual Ethernet link ([0055] At step 182, the eSwitch 120 receives an Ethernet data frame, defined as a frame that is typically bridged without changing the state of the bridging device. The data frame is received on a function (physical or virtual) of the IOV device 24 over a virtual link 36 (e.g., PCIe link) or on a downlink port 44 of the switching fabric 100 of the external switch 16. For example, the data frame may originate from the hypervisor 30 or directly from a virtual machine 32, with one of the virtual machines (e.g., 32-1) being the source of the data frame, and another virtual machine (e.g., 32-2) on the same physical machine 12 being the destination. As other examples, a data frame can arrive destined to another virtual machine on another eSwitch through a core switch, or destined to another virtual machine on another IOV device connected to the same eSwitch. To determine whether the original packet is an intra-server communication, the IOV device 24 compares the destination MAC address with the MAC addresses in its lookup table 42. A matching address tells the IOV device 24 to treat the arriving data frame differently from an outgoing data targeted to a remote destination (a physical machine other than the physical machine from which the data frame originates).) and wherein the switch is a PCIe switch with or without a non-transparent bridge functionality ([0041] FIG. 3 shows an embodiment of a logical eSwitch 120 produced in response to configuring the bridging function of the IOV device 24 from the external switching device 16. The logical eSwitch 120 represents an aggregation of bridging capability comprised of the bridging function 38 of the IOV device 24 and the switching capability of the switching fabric 100 of the external switching device 16. Although only one bridging function is shown, the logical eSwitch 120 can include a bridging function for each IOV device with which the external switching device is in communication. In brief overview, the bridging function 38 receives frames from the virtual machines 32 over virtual links 36. Based on the configuration of the IOV device, the bridging function 38 performs the switching for some received frames, while passing other received frames, signified by arrows 124, through to the switching fabric 100 for switching. Passed-through frames arrive at the external switching device through physical port 44-1.).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Pandey with the teachings of Sanzgiri and Nainar to utilize a switch with a bridging function to enable the VMs to communicate. The modification would have been motivated as it is well known in the art to bridge VMs so that they can communicate with each other.

Neither Sanzgiri, Nainar, nor Pandey explicitly teaches wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch. However, Kothari teaches wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch (Col. 3, lines 34-54: The techniques herein provide a complete peer-to-peer DVS with peer discovery mechanism… Each of the peer nodes has a switch that supports a flow table and an action table.
The device facilitates a connection between a switch of a first peer node and a switch of a second peer node, and maintains the flow table and the action table of each of the peer nodes, such that the flow tables and the action tables are kept in synchronization with one another across each of the peer nodes via a distributed hash table.; Col. 4, lines 23-37: The switch 220 may be operable to communicate with a second peer node in the network, or more particularly, a switch of the second peer node. In this regard, the switch 220 may establish a secure communication link, e.g., secure data path tunnel, between itself and the switch of the second peer node.; Col. 6, line 65 through Col. 7, line 6).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Kothari of a DVS instance that is able to discover other DVS instances in other hosts with the SoCs as taught by Sanzgiri, Nainar and Pandey to allow for synchronization of available DVSs in a network and establishing secure connections. The modification would have been motivated by the desire of providing for a peer-to-peer, controller-less, distributed virtual switch, which uses such a syncing mechanism. As a result, there is no single point of failure since redundancy is built into the system. Moreover, new data plane engines and switches can join/leave the system gracefully (see Kothari’s Col. 7, lines 34-39).

Regarding claim 6, Pandey teaches wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to handle frame transmission requests to local virtual machines using data transfer ([0055] At step 182, the eSwitch 120 receives an Ethernet data frame, defined as a frame that is typically bridged without changing the state of the bridging device. The data frame is received on a function (physical or virtual) of the IOV device 24 over a virtual link 36 (e.g., PCIe link) or on a downlink port 44 of the switching fabric 100 of the external switch 16. For example, the data frame may originate from the hypervisor 30 or directly from a virtual machine 32, with one of the virtual machines (e.g., 32-1) being the source of the data frame, and another virtual machine (e.g., 32-2) on the same physical machine 12 being the destination).

Regarding claim 7, Pandey teaches wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to serve frame transmission requests to virtual machines on a target system on a chip by forwarding the request to the instance of the distributed virtual switch on the target system on a chip and providing frame metadata including a data source address, a destination address, or a VLAN tag ([0055] At step 182, the eSwitch 120 receives an Ethernet data frame, defined as a frame that is typically bridged without changing the state of the bridging device. The data frame is received on a function (physical or virtual) of the IOV device 24 over a virtual link 36 (e.g., PCIe link) or on a downlink port 44 of the switching fabric 100 of the external switch 16. For example, the data frame may originate from the hypervisor 30 or directly from a virtual machine 32, with one of the virtual machines (e.g., 32-1) being the source of the data frame, and another virtual machine (e.g., 32-2) on the same physical machine 12 being the destination. As other examples, a data frame can arrive destined to another virtual machine on another eSwitch through a core switch, or destined to another virtual machine on another IOV device connected to the same eSwitch. To determine whether the original packet is an intra-server communication, the IOV device 24 compares the destination MAC address with the MAC addresses in its lookup table 42. A matching address tells the IOV device 24 to treat the arriving data frame differently from an outgoing data targeted to a remote destination (a physical machine other than the physical machine from which the data frame originates)).

Regarding claim 8, Nainar teaches wherein the instances of the distributed virtual switch are configured to provide a spatial isolation of the communication related to the virtual machines, or to provide a temporal isolation between the virtual machines with regard to Ethernet communication ([0048]; [0068] VEMs 420 can include virtual interfaces (e.g., virtual equivalent of physical network access ports) that maintain network configuration attributes, security, and statistics across mobility events, and may be dynamically provisioned within virtualized networks based on network policies stored in DVS 414 as a result of VM provisioning operations by a hypervisor management layer. VEMs 422 may follow virtual network interface cards (vNICs) when VMs move from one physical server to another. The movement can be performed while maintaining port configuration and state, including NetFlow, port statistics, and any Switched Port Analyzer (SPAN) session. By virtualizing the network access port with DPs 424(2)-424(6), transparent mobility of VMs across different physical servers and different physical access-layer switches within an enterprise network may be possible. SFPs 424(1)-424(3) may provide intelligent traffic steering (e.g., flow classification and redirection), and fast path offload for policy enforcement of flows. SFPs 424(1)-424(3) may be configured for multi-tenancy, providing traffic steering and fast path offload on a per-tenant basis. Although only three SFPs 424(1)-424(3) are illustrated in FIG. 4, any number of SFPs may be provided within the broad scope of the embodiments of communication system 400.; [0097]).
Regarding claim 9, Sanzgiri teaches wherein the instances of the distributed virtual switch (DVS) are configured to scan outgoing and incoming Ethernet traffic from and to each virtual machine for metadata ([0015] VMs have virtual network interface cards (vNICs) that connect to the virtual switch 130 much like physical devices connect to physical switches via physical cables. The vNICs are managed by host devices. Traffic received by the virtual switch 130 from the VMs over the vNICs as well as the traffic transmitted to the VMs by virtual switch 130 complies with policies configured on the vNICs. These policies specify, for instance, the virtual local area network (VLAN) or VLANs for the interface, access control lists (ACLs), Quality of Service (QoS) policies, and a variety of controls for the features supported by the virtual switch 130. A common way to apply a configuration to an interface is for the network administrator to encapsulate policies into port profiles and assign names to these port profiles. The virtual switch software exports these names to a VMA running on a server within data center 125 where they appear as port groups.; [0017] In some contexts port profiles are applied such that all VMs in the target virtual network get the same port profile. While this results in the correct connectivity or "plumbing" (e.g., the VMs are connected to the same VLAN or virtual network segment) it does not allow individual vNICs in the virtual network to be further customized. Accordingly, it is not possible to customize a port profile for any given vNIC. For example, automatically generated port profiles make it impossible to specify a better QoS, e.g., a QoS profile, for a specific VM or make it impossible to assign a particular ACL to a VM that may better correspond to the VM's function. As another example, if an administrator desires that an Internet Protocol (IP) source guard feature be applied to untrusted VMs, there is still no mechanism to distinguish trusted interfaces from untrusted ones. In other words, the automated nature of port profile assignment results in all the interfaces having to be treated uniformly by the virtual switch in a single, "one-size-fits-all" configuration template set up ahead of time by the network administrator.; [0022] The virtual switches 280 and 285 manage any interfaces needed for the VMs. In one example, the virtual switches 280 and 285 may be a software-based Virtual Ethernet Module (VEM) which runs in conjunction with the hypervisor to provide VM services, e.g., switching operations, Quality of Service (QoS) functions, as well as security and monitoring functions.). In addition, Nainar discusses DVS in at least [0048].

Regarding claim 10, Sanzgiri teaches wherein the instances of the distributed virtual switch (DVS) are configured to scan ingress traffic and egress traffic and to perform plausibility checks ([0015]; [0017]; [0022] The virtual switches 280 and 285 manage any interfaces needed for the VMs. In one example, the virtual switches 280 and 285 may be a software-based Virtual Ethernet Module (VEM) which runs in conjunction with the hypervisor to provide VM services, e.g., switching operations, Quality of Service (QoS) functions, as well as security and monitoring functions). In addition, Nainar discusses DVS in at least [0048].

Regarding claim 11, Nainar teaches wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network has exclusive access to an Ethernet network device ([0068]; [0097] All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.).

Regarding claim 12, Sanzgiri teaches wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to serve frame transmission requests to the Ethernet network by forwarding the request to the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network ([0015] VMs have virtual network interface cards (vNICs) that connect to the virtual switch 130 much like physical devices connect to physical switches via physical cables. The vNICs are managed by host devices. Traffic received by the virtual switch 130 from the VMs over the vNICs as well as the traffic transmitted to the VMs by virtual switch 130 complies with policies configured on the vNICs. These policies specify, for instance, the virtual local area network (VLAN) or VLANs for the interface, access control lists (ACLs), Quality of Service (QoS) policies, and a variety of controls for the features supported by the virtual switch 130.; [0022] The virtual switches 280 and 285 manage any interfaces needed for the VMs. In one example, the virtual switches 280 and 285 may be a software-based Virtual Ethernet Module (VEM) which runs in conjunction with the hypervisor to provide VM services, e.g., switching operations, Quality of Service (QoS) functions, as well as security and monitoring functions.).

Regarding claim 13, Chandrashekhar teaches wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network is configured to manage fetching data targeted to this Ethernet network from local virtual machines and from instances of the distributed virtual switch of remote systems on a chip ([0026] In some embodiments, virtual switches and routers implement distributed virtual (logical) switches and/or distributed virtual (logical) routers (collectively virtual (logical) forwarding elements) along with different host computers. Distributed virtual switches and/or distributed virtual routers in some embodiments are implemented as a single logical switch or logical router, respectively, across the different host computers. In some embodiments, a distributed virtual (logical) forwarding element performs the logical packet processing (i.e., processing for the logical network including the virtual (logical) forwarding elements) at a first hop virtual switch and/or virtual router.; [0043] As a result of such route additions, packets that are destined for local VMs are forwarded by a virtual router such as virtual router 114 to those VMs, while packets destined for VMs that are not local to the host computer are sent over a network interface (e.g., network interface 118) of L3 network 120 and via the L3 network 120 (e.g., virtual or physical routers inside L3 network 120) to the appropriate host computer or gateway, based on the association of the non-local VM's IP address to a network interface of L3 network 120, and ultimately to the non-local VM.).
Regarding claim 14, Chandrashekhar teaches wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network is configured to serve received frames from the Ethernet network to local virtual machines using data transfer and to remote virtual machines by forwarding the frame metadata to the instance of the distributed virtual switch of the target system on a chip ([0043] As a result of such route additions, packets that are destined for local VMs are forwarded by a virtual router such as virtual router 114 to those VMs, while packets destined for VMs that are not local to the host computer are sent over a network interface (e.g., network interface 118) of L3 network 120 and via the L3 network 120 (e.g., virtual or physical routers inside L3 network 120) to the appropriate host computer or gateway, based on the association of the non-local VM's IP address to a network interface of L3 network 120, and ultimately to the non-local VM.).

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Sanzgiri, Nainar, Pandey and Kothari, as applied to claim 1 above, and in further view of Chandrashekhar et al. (US 2019/0036868 A1). Chandrashekhar was cited in the previous Office Action.

Regarding claim 4, Sanzgiri teaches in [0022] “a software-based Virtual Ethernet Module,” but neither Sanzgiri, Nainar, Pandey, nor Kothari expressly teaches wherein the instances of the distributed virtual switch are software components that are executed in a privileged mode. However, Chandrashekhar teaches wherein the instances of the distributed virtual switch are software components that are executed in a privileged mode ([0026] In some embodiments, virtual switch 113, virtual routers 114 and 115, and agent 117, as well as physical device drivers, may execute in privileged virtual machine(s), which are often referred to variously as a “Domain zero,” “root-partition,” or “parent-partition.” Virtual routers 114 and 115, and virtual switch 113 in some embodiments are implemented in a single module implementing both switching and routing functionality like traditional physical routers. In some embodiments, virtual switches and routers implement distributed virtual (logical) switches and/or distributed virtual (logical) routers (collectively virtual (logical) forwarding elements) along with different host computers. Distributed virtual switches and/or distributed virtual routers in some embodiments are implemented as a single logical switch or logical router, respectively, across the different host computers. In some embodiments, a distributed virtual (logical) forwarding element performs the logical packet processing (i.e., processing for the logical network including the virtual (logical) forwarding elements) at a first hop virtual switch and/or virtual router.). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chandrashekhar with the teachings of Sanzgiri, Nainar, Pandey and Kothari to implement a distributed virtual switch among different VMMs/parent partitions. The modification would have been motivated by the desire of distributing network management load among available resources.

Claims 15 and 18-28 are rejected under 35 U.S.C. 103 as being unpatentable over Sanzgiri, Nainar, Pandey and Kothari, in further view of Reddy et al. (US 2010/0278076 A1). Sanzgiri, Nainar, Pandey and Reddy were cited in the previous Office Action.
Regarding claim 15, it is a system-type claim having similar limitations as claim 1 above. Therefore, it is rejected under the same rationale. The additional limitation "A vehicle, characterized in that the vehicle comprises a computing device comprising two or more systems on a chip" is taught by Reddy in at least [0023]: FIG. 1 is a block diagram illustrating an exemplary network system 2. Network system 2 includes a network 3 that includes a set of network switches 4A through 4N (collectively, "switches 4") and a set of network devices 6A through 6G (collectively, "devices 6"). Devices 6 represent other devices located within the topology of network 3, and may be routers, personal computers, servers, network security devices, television set-top boxes, mobile telephones, mainframe computers, super computers, network devices integrated into vehicles, or other types of network devices. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Reddy with the teachings of Sanzgiri, Nainar, Pandey and Kothari to utilize network techniques in vehicles. The modification would have been motivated by the desire of applying known computing techniques/methods in vehicle development.

Regarding claim 18, it is a system-type claim having similar limitations as claim 4 above. Therefore, it is rejected under the same rationale.
Regarding claim 20, it is a system-type claim having similar limitations as claim 6 above. Therefore, it is rejected under the same rationale.
Regarding claim 21, it is a system-type claim having similar limitations as claim 7 above. Therefore, it is rejected under the same rationale.
Regarding claim 22, it is a system-type claim having similar limitations as claim 8 above. Therefore, it is rejected under the same rationale.
Regarding claim 23, it is a system-type claim having similar limitations as claim 9 above. Therefore, it is rejected under the same rationale.
Regarding claim 24, it is a system-type claim having similar limitations as claim 10 above. Therefore, it is rejected under the same rationale.
Regarding claim 25, it is a system-type claim having similar limitations as claim 11 above. Therefore, it is rejected under the same rationale.
Regarding claim 26, it is a system-type claim having similar limitations as claim 12 above. Therefore, it is rejected under the same rationale.
Regarding claim 27, it is a system-type claim having similar limitations as claim 13 above. Therefore, it is rejected under the same rationale.
Regarding claim 28, it is a system-type claim having similar limitations as claim 14 above. Therefore, it is rejected under the same rationale.

Response to Arguments

Applicant's arguments filed 11/19/2025 have been fully considered but they are not persuasive. In Remarks, Applicant argues:

(I) However, it is respectfully suggested that neither paragraph [0055] of Pandey nor the references as a whole discloses, teaches, or even suggests the explicit use of a PCIe switch. While paragraph [0055] does describe an eSwitch 120 that receives an Ethernet data frame via "a virtual link 36 (e.g., PCIe link)", the reference does not describe a switch that utilizes the PCI Express standard.

(II) While Kothari et al. does appear to disclose a peer-to-peer distributed virtual switch ("DVS"), the reference is silent as to the "establish[ment] of a dedicated communication channel to each other instance of the [DVS]" (emphasis added). Simply put, Kothari et al. does not disclose anything about establishing a dedicated communication channel. The advantages of a dedicated communication channel are described in paragraph [0016] of the present application. These advantages include a reduction in the need to copy Ethernet frames, which reduces the CPU load on the system on a chip and increases data throughput. The combination of Kothari et al. with Sanzgiri et al., Nainar et al., and Pandey fails to provide such an advantage.

In view of the above, Examiner submits the following:

As to point (I), Examiner respectfully disagrees with the Applicant. Pandey as cited teaches an eSwitch 120, which is defined in [0041] as “the logical eSwitch 120 can include a bridging function for each IOV device with which the external switching device is in communication”. Further, the IOV device is defined in [0028] as “The physical network interface or IOV device 24 is generally a network I/O device that provides support in hardware, software, or a combination thereof for any form of I/O virtualization (IOV). Examples of the IOV device 24 include, but are not limited to, PCI-SIG-compliant SR-IOV devices and non-SR-IOV devices, PCI-SIG-compliant MR-IOV devices, multi-queue NICs (network interface controllers), I/O adapters, converged NICs, and converged network adapters (CNA). These IOV devices provide various deployment options for configuring the network topology of a virtualized data center. They can provide physical functions to a non-virtualized operating system (OS), which are visible as multiple sub-NICs to the OS, and virtual functions to virtual machines (VMs), which are visible as virtual NICs (vNICs) to the VMs or as virtual Host Bus Adapters (vHBAs) for storage. These physical and virtual functions available to the CPU of a server are seen as virtual ports or v-ports, when exposed to an external switching device.” As such, Applicant’s argument is not persuasive and the rejection is maintained.

As to point (II), Examiner respectfully disagrees with the Applicant for at least the following reason. Kothari as cited teaches that each peer has an individual switch that is used to establish secure data tunnels between itself and another switch in the peer-to-peer DVS system, which can be done to other switches in peer nodes.
Accordingly, Kothari reasonably teaches the argued limitation "establish[ment] of a dedicated communication channel to each other instance of the [DVS]".

Further, in response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., a reduction in the need to copy Ethernet frames, which reduces the CPU load at the system on a chip and increases data throughput) are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JORGE A CHU JOY-DAVILA, whose telephone number is (571) 270-0692. The examiner can normally be reached Monday-Friday, 6:00 am-5:00 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Aimee J Li, can be reached at (571) 272-4169. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JORGE A CHU JOY-DAVILA/
Primary Examiner, Art Unit 2195
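For orientation on the disputed point (II): the limitation at issue — each virtual-switch instance establishing a dedicated communication channel to each other instance — describes a full mesh of point-to-point channels between switch instances. A minimal sketch of that topology, with all names hypothetical (this illustrates only the claimed connectivity pattern, not the application's or Kothari's actual implementation):

```python
from itertools import combinations

class VSwitchInstance:
    """One virtual-switch instance running on a single system on a chip."""

    def __init__(self, soc_id: str):
        self.soc_id = soc_id
        # One dedicated channel endpoint per peer instance.
        self.channels: dict[str, "VSwitchInstance"] = {}

    def connect(self, peer: "VSwitchInstance") -> None:
        # A dedicated channel is point-to-point: each side records the other.
        self.channels[peer.soc_id] = peer
        peer.channels[self.soc_id] = self

def build_full_mesh(switches: list[VSwitchInstance]) -> None:
    """Every instance establishes one dedicated channel to every other instance."""
    for a, b in combinations(switches, 2):
        a.connect(b)

switches = [VSwitchInstance(f"soc{i}") for i in range(4)]
build_full_mesh(switches)
# With n instances, each holds n-1 channels: 4 instances, 3 channels apiece.
assert all(len(s.channels) == 3 for s in switches)
```

The full mesh is what distinguishes this from a hub-and-spoke design: frames pass directly between the two instances involved, which is the basis for the copy-reduction advantage Applicant cites from paragraph [0016].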

Prosecution Timeline

Nov 02, 2021
Application Filed
Mar 29, 2024
Non-Final Rejection — §103
Jul 10, 2024
Response Filed
Oct 27, 2024
Final Rejection — §103
Mar 31, 2025
Request for Continued Examination
Apr 02, 2025
Response after Non-Final Action
May 14, 2025
Non-Final Rejection — §103
Nov 19, 2025
Response Filed
Mar 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602244
OFFLOADING PROCESSING TASKS TO DECOUPLED ACCELERATORS FOR INCREASING PERFORMANCE IN A SYSTEM ON A CHIP
2y 5m to grant Granted Apr 14, 2026
Patent 12596565
USER ASSIGNED NETWORK INTERFACE QUEUES
2y 5m to grant Granted Apr 07, 2026
Patent 12591821
DYNAMIC ADJUSTMENT OF WELL PLAN SCHEDULES ON DIFFERENT HIERARCHICAL LEVELS BASED ON SUBSYSTEMS ACHIEVING A DESIRED STATE
2y 5m to grant Granted Mar 31, 2026
Patent 12585490
MIGRATING VIRTUAL MACHINES WHILE PERFORMING MIDDLEBOX SERVICE OPERATIONS AT A PNIC
2y 5m to grant Granted Mar 24, 2026
Patent 12579065
LIGHTWEIGHT KERNEL DRIVER FOR VIRTUALIZED STORAGE
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+37.3%)
3y 1m
Median Time to Grant
High
PTA Risk
Based on 408 resolved cases by this examiner. Grant probability derived from career allow rate.
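The headline figure is plain arithmetic on the examiner's career record stated above (314 granted of 408 resolved). A quick check, assuming simple rounding to the nearest percent:

```python
granted, resolved = 314, 408
allow_rate = granted / resolved        # career allow rate as a fraction
print(round(allow_rate * 100))         # prints 77, matching the 77% shown
```

The with-interview and OA-round projections are not reproducible from the figures on this page alone, since they depend on how the interview subset is segmented.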
