DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/9/2025 has been entered.
Claims 1-20 are pending and are presented for examination.
Response to Arguments
Applicant's arguments filed regarding claim 1 (pages 9-10) state: “While Breban et al. does use the word “Ethernet”, the reference does not disclose, teach, or suggest any interface with an Ethernet network. Specifically, the first paragraph on page 524 merely states that “[S]ince the mostly used network in server environments is Ethernet, the virtualization methods for the CAN interface are derived from state-of-the-art techniques used for the Ethernet interface.” The reference then goes on to describe a CAN interface in explicit detail. Those of ordinary skill in the art appreciate that an Ethernet network and a CAN (Controller Area Network) are quite different. The other references cited by the examiner also fail to disclose, teach, or suggest that each SoC provides a connection to an Ethernet network, that each SoC is connected via a virtual Ethernet link, and that each SoC includes an instance of a distributed virtual switch which is configured to provide a virtualized access to the Ethernet network. As such, Breaban et al. and Nainar et al., even if properly combinable, do not disclose, teach, or suggest each and every element of claim 1, it is respectfully suggested that the rejection of claim 1 has been traversed and that this claim is allowable.”
In response, the examiner points out that Breaban (hereafter Gabriela) in view of Maroney and further in view of Nainar teaches each SoC providing a connection to an Ethernet network such that each SoC is connected via a virtual Ethernet link. For example, the SR-IOV of both Gabriela and Maroney discloses virtualization of a PCIe device, with the virtualized PCIe protocol used for point-to-point communication.
[Gabriela page 524], “To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet.”
Maroney similarly discloses SR-IOV with a root SoC and end point SoCs with virtual machines, utilizing point-to-point data links for communication via virtualized PCIe (i.e., Ethernet).
[Maroney paragraph 14], “Each of the multiple interface ports supports single root virtualization. For example, multiple single root input/output virtualization (SR-IOV) enabled interface ports can be provided by the memory sub-system to enable access to multiple host systems without a need for a separate switch. An interface port can be a PCIe port or a physical port.”
Nainar also teaches each SoC comprising one or more virtual machines and each SoC comprising an instance of a distributed virtual switch to provide virtualized access to the Ethernet network.
[Nainar paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.
Furthermore, PCIe is well known and commonly used for Ethernet adapters, as shown below (https://en.wikipedia.org/wiki/PCI_Express): “PCIe is commonly used to connect graphics cards, sound cards, Wi-Fi and Ethernet adapters, and storage devices such as solid-state drives and hard disk drives.” “Conceptually, the PCI Express bus is a high-speed serial replacement of the older PCI/PCI-X bus.[8] One of the key differences between the PCI Express bus and the older PCI is the bus topology; PCI uses a shared parallel bus architecture, in which the PCI host and all devices share a common set of address, data, and control lines. In contrast, PCI Express is based on point-to-point topology, with separate serial links connecting every device to the root complex (host).” “This allows the PCI Express bus to serve both cost-sensitive applications where high throughput is not needed, and performance-critical applications such as 3D graphics, networking (10 Gigabit Ethernet or multiport Gigabit Ethernet), and enterprise storage (SAS or Fibre Channel).”
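For illustration only, the point-to-point character of PCIe described above can be sketched in a short, self-contained model (all names here are hypothetical and drawn from no cited reference): the root complex (host) holds a separate link per device, in contrast to a shared parallel bus.

```python
# Illustrative toy model (hypothetical names): a PCIe-style point-to-point
# fabric, where the root complex keeps a dedicated serial link per device,
# so traffic to one device never traverses another device's link.

class RootComplex:
    """Host side of a point-to-point fabric."""
    def __init__(self):
        self.links = {}  # device name -> that device's dedicated link

    def attach(self, device_name):
        # Each device gets its own link; nothing is shared between devices.
        self.links[device_name] = []

    def send(self, device_name, payload):
        # Payloads are delivered only on the addressed device's link.
        self.links[device_name].append(payload)

root = RootComplex()
for dev in ("ethernet_adapter", "nvme_ssd", "gpu"):
    root.attach(dev)

root.send("ethernet_adapter", "frame-1")
root.send("nvme_ssd", "block-write")

# Only the Ethernet adapter's link carries the Ethernet frame.
assert root.links["ethernet_adapter"] == ["frame-1"]
assert root.links["nvme_ssd"] == ["block-write"]
assert root.links["gpu"] == []
```

The contrast with a shared bus is that here no `links` entry is ever visible to, or contended for by, another device.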
Therefore, the argument is not persuasive.
Claim Rejections - 35 USC § 112
Claim 1 (and similarly claims 10 and 15) recites: “a root system on chip (SoC)”, “at least one end point SoC”, “each SoC”, and “at least one SoC of the SoCs”. Since the claim distinguishes various SoCs (i.e., the root SoC, the at least one end point SoC, and the at least one SoC of the SoCs), it is unclear whether “each SoC” refers to all SoCs (the root SoC, the at least one end point SoC, and the at least one SoC) or to each SoC of just the at least one end point SoC.
Claim 8 (and similarly claims 12 and 13) recites: “the instances of the distributed virtual switch”. There is insufficient antecedent basis for this limitation in the claim. It is unclear which particular instances “the instances of the distributed virtual switch” refers to.
Claim 12 recites: “the SoC”. There is insufficient antecedent basis for this limitation in the claim. It is unclear which SoC “the SoC” refers to.
Claims 2-14 and 16-20 are rejected based on the rejection of the respective claims from which they depend.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1, 2, 15 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Time synchronization for an emulated CAN device on a Multi-Processor System on Chip (Gabriela Breaban) (ScienceDirect, July 2017) (hereafter Gabriela) in view of Maroney et al. (Pub 20200192848) and further in view of Nainar et al. (Pub 20170279712) (hereafter Nainar).
As per claim 1, Gabriela teaches:
A computing device comprising a root system on chip (SoC) and at least one end point SoC that is connected to the root SoC with a point-to-point data link, ([Page 523], Multi-Processor System on Chip (MPSoC), one can choose to include a hardware controller and search for virtualization solutions, or, as an alternative, a given communication service can be obtained by implementing it in software on top of an existing interface. We call the latter solution software emulation. The emulated interface can then be further shared through virtualization… To the best of our knowledge, the possibility of designing a CAN interface on a MPSoC platform that scales depending on the number of applications and cores has not been addressed in literature. [Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface.) each SoC comprising one or more virtual machines, wherein at least one of the SoCs provides a connection to an Ethernet network, ([Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] Consequently, since the mostly used network in server environments is Ethernet, the virtualization methods for the CAN interface are derived from state-of-the-art techniques used for the Ethernet interface [9] . 
Virtual platforms have been introduced for isolating resources on a multi-processor platform and allocating them to individual applications [8]. [Page 525], In this case, we use a dedicated core to implement a CAN device, which operates as a CAN gateway at 100 kbit/s bit rate. As this core is not shared with other applications, the CAN controller runs bare-metal. Each of the other cores runs two applications. To send and receive CAN messages, the cores use the NoC for the communication with the dedicated CAN core.) and wherein the one or more virtual machines of each SoC are connected via a virtual Ethernet link, and in that each SoC comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective SoC. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 526], PTP can use several communication protocols such as Ethernet, PROFINET, UDP, etc.)
Although Gabriela teaches virtualization of an Ethernet link and implicitly discloses a root SoC connected with an endpoint SoC via a point-to-point data link [page 524], Gabriela does not explicitly recite a root system on chip (SoC) and an end point SoC (i.e., SR-IOV), or a distributed virtual switch.
Maroney teaches root system on chip (SoC), end point SoC (i.e. SR-IOV) (single root virtualization) ([Paragraph 2], The present disclosure generally relates to a memory system, and more specifically, relates to memory sub-system with multiple ports having single root virtualization. [Paragraph 11], Aspects of the present disclosure are directed to supporting multiple ports having single root input/output (I/O) virtualization in a memory sub-system. [Paragraph 12], In a conventional memory sub-system, a single interface port can be used to transmit data between the memory sub-system and a host system. Multiple hosts (e.g., different system on a chip (SOC) devices) with multiple virtual machines can interact with the memory sub-system. A virtual machine can be an emulation of a physical host system or other such physical resources of a host system. Thus, the memory sub-system can be used to store and retrieve data for the different virtual machines that are provided by the multiple host systems. In order to manage the transmission of data from the memory devices of the memory sub-system to the different virtual machines at the different host systems, the storage resources of the memory sub-system can be shared through the use of a single interface port that utilizes a single root input/output virtualization (SR-IOV). In some embodiments, the SR-IOV can provide the isolation of the resources of an interface, such as the Peripheral Component Interconnect Express (PCIe), which is used to read data from and write data to the memory sub-system by the different virtual machines. For example, the SR-IOV can provide different virtual functions (VFs) that are each assigned or used by a separate virtual machine. [Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. 
Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes… [Paragraph 63], At operation 530, the processing logic detects a first virtual machine (VM) and a second VM running on the host SOC. In implementations, each VM of the host SOC can be assigned a dedicated VF of the PCIe port, in order for the VM to access a corresponding portion of the storage space of the memory sub-system.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gabriela, wherein Multi-Processor System(s) on Chip using the SR-IOV (Single Root Input/Output Virtualization) specification communicate point-to-point and each system on chip (SoC) comprises virtual machines that use virtualized links (i.e., I/O virtualization of Ethernet) to provide virtualized access to the Ethernet network, with the teachings of Maroney, wherein SR-IOV comprises a root SoC and end point SoCs having virtual machines and likewise uses point-to-point communication. Doing so would enhance the teachings of Gabriela: implementing a root SoC and end point SoCs with point-to-point communication allows sharing of resources through use of a single interface port that utilizes SR-IOV, wherein SR-IOV can provide isolation of resources and provide different virtual functions that are each assigned to or used by separate virtual machines. [Maroney paragraph 12]
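As a purely illustrative sketch of the SR-IOV concept relied upon above (hypothetical names throughout; assuming only the quoted behavior that one physical function exposes virtual functions, each assigned to a separate virtual machine):

```python
# Illustrative sketch (hypothetical names): one physical function (PF)
# exposes several virtual functions (VFs); each VF is assigned to exactly
# one virtual machine, giving isolated access to the shared device.

class PhysicalFunction:
    def __init__(self, num_vfs):
        # Each VF has its own isolated queue, mimicking resource isolation.
        self.vfs = {i: [] for i in range(num_vfs)}
        self.assignments = {}  # VF index -> VM name

    def assign(self, vf_index, vm_name):
        if vf_index in self.assignments:
            raise ValueError("VF already assigned")  # one VF per VM
        self.assignments[vf_index] = vm_name

    def transmit(self, vf_index, frame):
        # A VM only touches its own VF queue; other VMs' traffic is isolated.
        self.vfs[vf_index].append(frame)

pf = PhysicalFunction(num_vfs=2)
pf.assign(0, "vm-a")
pf.assign(1, "vm-b")
pf.transmit(0, "frame-from-a")

assert pf.vfs[0] == ["frame-from-a"]
assert pf.vfs[1] == []            # vm-b's VF is untouched
assert pf.assignments[0] == "vm-a"
```

The point of the model is only the isolation property: traffic on one VF never appears on another VF's queue.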
Although Gabriela teaches virtualization of an Ethernet link, Gabriela does not explicitly disclose a distributed virtual switch.
Nainar teaches a distributed virtual switch and virtualization of an Ethernet link via one or more virtual Ethernet modules (VEMs). ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. 
The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
Nainar also teaches each SoC comprising one or more virtual machines, wherein at least one of the SoCs provides a connection to an Ethernet network,
and wherein the one or more virtual machines of each SoC are connected via a virtual Ethernet link, and in that each SoC comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective SoC. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the invention, to combine the teachings of Gabriela and Maroney, wherein Multi-Processor System(s) on Chip using the SR-IOV (Single Root Input/Output Virtualization) specification (root SoC/end point SoC) communicate point-to-point and each system on chip (SoC) comprises virtual machines that use virtualized links (i.e., I/O virtualization of Ethernet) to provide virtualized access to the Ethernet network, with the teachings of Nainar, wherein each SoC comprising one or more VMs provides a connection to the Ethernet network via a virtual Ethernet link and each SoC has an instance of a distributed virtual switch to provide virtualized access for the VMs of the respective SoC. Doing so would enhance the teachings of Gabriela and Maroney: incorporating distributed virtual switch (DVS) instances on each SoC would facilitate packet forwarding/redirecting/traffic steering by the DVS instances and provide connectivity to the various SoCs. [Nainar paragraphs 48, 49, 53]
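The distributed-virtual-switch arrangement described above (per-SoC instances with local switching and uplinks, cf. Nainar's VEMs) can be illustrated with the following toy model; all class and variable names are hypothetical:

```python
# Illustrative sketch (hypothetical names): a distributed virtual switch in
# which each SoC hosts its own switch instance. Frames between VMs on the
# same SoC are switched locally; frames to a VM on another SoC are handed
# to that SoC's instance over an uplink.

class SwitchInstance:
    def __init__(self, soc_name, fabric):
        self.soc_name = soc_name
        self.local_vms = {}   # VM name -> frames received by that VM
        self.fabric = fabric  # shared map: VM name -> owning instance

    def attach_vm(self, vm_name):
        self.local_vms[vm_name] = []
        self.fabric[vm_name] = self

    def send(self, dst_vm, frame):
        owner = self.fabric[dst_vm]
        if owner is self:
            self.local_vms[dst_vm].append(frame)   # local switching
        else:
            owner.local_vms[dst_vm].append(frame)  # via the uplink

fabric = {}
root_dvs = SwitchInstance("root-soc", fabric)
ep_dvs = SwitchInstance("endpoint-soc", fabric)
root_dvs.attach_vm("vm1")
ep_dvs.attach_vm("vm2")

root_dvs.send("vm2", "hello")   # crosses SoCs via the uplink
ep_dvs.send("vm2", "local")     # switched locally on the endpoint SoC

assert ep_dvs.local_vms["vm2"] == ["hello", "local"]
```

From each VM's point of view the two paths are indistinguishable, which is the "single virtual switch" behavior at issue.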
As per claim 2, rejection of claim 1 is incorporated:
Nainar teaches wherein each instance of the distributed virtual switch is configured to provide a virtual Ethernet link to each virtual machine of the respective SoC. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. 
[Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
Gabriela also teaches ([Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] Consequently, since the mostly used network in server environments is Ethernet, the virtualization methods for the CAN interface are derived from state-of-the-art techniques used for the Ethernet interface [9] . Virtual platforms have been introduced for isolating re- sources on a multi-processor platform and allocating them to individual applications [8]. [Page 525], In this case, we use a dedicated core to implement a CAN device, which operates as a CAN gateway at 100 kbit/s bit rate. As this core is not shared with other applications, the CAN controller runs bare-metal. Each of the other cores runs two applications. To send and receive CAN messages, the cores use the NoC for the communication with the dedicated CAN core.)
As per claims 15 and 16, these are vehicle claims corresponding to computing device claims 1 and 2 and are therefore rejected based on a similar rationale. [Gabriela discloses the vehicle/automotive field]
Claim(s) 3-14 and 17-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Gabriela in view of Maroney, in view of Nainar and further in view of Reddy et al. (Pub 20100278076) (hereafter Reddy).
As per claim 3, rejection of claim 2 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch at the root SoC is configured to discover the instance of the distributed virtual switch of the at least one end point SoC via the point-to-point data link and to establish a dedicated communication channel to each related instance of the distributed virtual switch of the at least one end point SoC. ([Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] Consequently, since the mostly used network in server environments is Ethernet, the virtualization methods for the CAN interface are derived from state-of-the-art techniques used for the Ethernet interface [9] . Virtual platforms have been introduced for isolating resources on a multi-processor platform and allocating them to individual applications [8]. [Page 525], In this case, we use a dedicated core to implement a CAN device, which operates as a CAN gateway at 100 kbit/s bit rate. As this core is not shared with other applications, the CAN controller runs bare-metal. Each of the other cores runs two applications. To send and receive CAN messages, the cores use the NoC for the communication with the dedicated CAN core. [Page 523], The implementation of the protocol governing an I/O interface is usually done in hardware and therefore, sharing the I/O interface translates into sharing the hardware controller that drives the interface. When sharing a resource among applications with strict and diverse requirements, as in automotive, an important property of the sharing method is isolation. Isolated resource sharing is equivalent to virtualization and it means dividing the physical resource into multiple separate virtual resources that don’t interfere and allocating each one to an application. 
On the other hand, when deciding the I/O interfaces for a Multi-Processor System on Chip (MPSoC), one can choose to include a hardware controller and search for virtualization solutions, or, as an alternative, a given communication service can be obtained by implementing it in software on top of an existing interface. We call the latter solution software emulation. The emulated interface can then be further shared through virtualization.)
Maroney also teaches root and end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar teaches the instance of the distributed virtual switch and establishing communication to SoCs. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. 
[Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
However, Gabriela, Maroney and Nainar do not explicitly disclose discovering the instance of the distributed virtual switch of the at least one end point SoC via the point-to-point data link and establishing a dedicated communication channel to each related instance of the distributed virtual switch of the at least one end point SoC.
Reddy teaches ([Paragraph 5], In general, techniques are described by which a plurality of layer two ("L2") network switches automatically discover and configure themselves to operate as a single virtual L2 network switch. A virtual switch, as referred to herein, means a collection of individual L2 switch devices that are physically interconnected and configured (i.e., "stacked") to operate as a single L2 network switch as if the individual L2 switch devices were located within the same physical chassis. [Paragraph 6], Based on the discovered connection topology, the devices proceed to auto-provision themselves to operate as a virtual switch.)
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to combine the teachings of Gabriela, Maroney and Nainar, wherein Multiprocessor System(s) on Chip with the SR-IOV (Single Root Input/Output Virtualization) specification (root SoC/end point SoC) are used for point-to-point communication, each system on a chip (SoC) comprises virtual machines that use virtualized links (i.e., I/O virtualization of Ethernet) to provide virtualized access to the Ethernet network, and each SoC comprising one or more VMs provides a connection to the Ethernet network via a virtual Ethernet link, with each SoC having an instance of a distributed virtual switch to provide virtualized access to the VMs of the respective SoCs, with the teachings of Reddy, wherein auto-discovery of a distributed virtual switch with dedicated communication channel(s) is established. This combination would enhance the teachings of Gabriela, Maroney and Nainar because automatically discovering the connection topology allows auto-provisioning and isolation of connections. [Reddy paragraphs 5, 6, 27, 49]
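For illustration only, the auto-discovery and auto-provisioning concept described in Reddy (paragraphs 5-6), in which physically interconnected switch devices discover their connection topology and then configure themselves to operate as a single virtual switch, can be sketched as follows. This is a hypothetical model, not Reddy's implementation; all names are invented.

```python
# Illustrative sketch only: hypothetical model of topology discovery
# followed by auto-provisioning into a single virtual switch.

class SwitchDevice:
    """One stackable L2 switch device (hypothetical model)."""
    def __init__(self, device_id: str) -> None:
        self.device_id = device_id
        self.neighbors: set[str] = set()
        self.provisioned = False

def discover_topology(links: list[tuple[str, str]]) -> dict[str, SwitchDevice]:
    """Build each device's neighbor view from the physical interconnects."""
    devices: dict[str, SwitchDevice] = {}
    for a, b in links:
        devices.setdefault(a, SwitchDevice(a)).neighbors.add(b)
        devices.setdefault(b, SwitchDevice(b)).neighbors.add(a)
    return devices

def auto_provision(devices: dict[str, SwitchDevice]) -> str:
    """Mark every member provisioned and elect a master (lowest id),
    so the stack can operate as a single virtual switch."""
    for dev in devices.values():
        dev.provisioned = True
    return min(devices)

devices = discover_topology([("sw2", "sw1"), ("sw1", "sw3")])
master = auto_provision(devices)
print(master, sorted(devices["sw1"].neighbors))  # sw1 ['sw2', 'sw3']
```

The sketch separates the two steps Reddy's paragraphs describe: discovery builds the connection topology, and provisioning is driven entirely by that discovered topology.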
As per claim 4, the rejection of claim 3 is incorporated:
Gabriela teaches wherein, for each virtual Ethernet link, the instance of the distributed virtual switch is configured to handle frame transmission requests to local virtual machines using data transfer. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table. 
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router… [Page 531], In this paper we proposed how multiple applications can share a CAN port in a MPSoC platform. The shared CAN port can be on the local processor tile, or on a remote one. As part of our hardware and software design process, we tune the number of applications)
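For illustration only (hypothetical names, not Gabriela's implementation): the pipelined TDM rule quoted above, where router i forwards in slot j and router i + 1 forwards the same data in slot (j + 1) mod n, can be sketched as:

```python
# Illustrative sketch of the pipelined TDM slot rule quoted from Gabriela.

def tdm_slots(first_slot: int, num_routers: int, n_slots: int) -> list[int]:
    """TDM slot used by each router along the path: the slot index
    advances by one (modulo n_slots) at every hop."""
    return [(first_slot + i) % n_slots for i in range(num_routers)]

# As in the quoted figure: a 3-slot TDM table where the first router
# uses slot 3 (index 2 with 0-based slots) and the slot advances at
# each subsequent router.
print(tdm_slots(first_slot=2, num_routers=4, n_slots=3))  # [2, 0, 1, 2]
```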
Nainar also teaches the instance of the distributed virtual switch ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claim 5, the rejection of claim 3 is incorporated:
Gabriela teaches wherein, for each virtual Ethernet link, the instance of the distributed virtual switch at the root SoC is configured to serve frame transmission requests to virtual machines on a target SoC of the at least one end point SoC by forwarding the frame transmission request to the instance of the distributed virtual switch at the target SoC and providing frame metadata including a Peripheral Component Interconnect Express (PCIe) source address of an actual frame. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard.)
Maroney also teaches root and end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches the instance of the distributed virtual switch and forwarding the frame transmission request to the instance of the distributed virtual switch at the target SoC and providing frame metadata ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
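For illustration only, the mechanism recited in claim 5, a root-SoC instance of the distributed virtual switch forwarding a frame transmission request, together with frame metadata carrying a PCIe source address, to the instance on the target SoC, can be sketched as follows. This is neither Nainar's VEM/SFP code nor the applicant's implementation; all names are invented.

```python
# Illustrative sketch only: forwarding a transmission request plus
# metadata (not the frame payload itself) between hypothetical
# per-SoC instances of a distributed virtual switch.
from dataclasses import dataclass, field

@dataclass
class FrameMetadata:
    pcie_source_addr: int  # where the actual frame payload resides
    length: int            # payload length in bytes

@dataclass
class DvsInstance:
    soc_id: str
    inbox: list = field(default_factory=list)

    def receive(self, dest_vm: str, meta: FrameMetadata) -> None:
        # A real instance would fetch the payload via the PCIe source
        # address; here we only record the forwarded request.
        self.inbox.append((dest_vm, meta))

class RootDvs(DvsInstance):
    def __init__(self) -> None:
        super().__init__(soc_id="root")
        self.endpoints: dict[str, DvsInstance] = {}

    def transmit(self, target_soc: str, dest_vm: str, meta: FrameMetadata) -> None:
        # Forward only the request and its metadata to the target SoC.
        self.endpoints[target_soc].receive(dest_vm, meta)

root = RootDvs()
root.endpoints["soc1"] = DvsInstance(soc_id="soc1")
root.transmit("soc1", "vm0", FrameMetadata(pcie_source_addr=0x1000, length=64))
print(len(root.endpoints["soc1"].inbox))  # 1
```

The design point the sketch highlights is that only metadata travels with the request; the frame itself stays at the PCIe source address until the target side fetches it.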
As per claim 6, the rejection of claim 5 is incorporated:
Gabriela teaches wherein, for each virtual Ethernet link, the instance of the distributed virtual switch at the at least one end point SoC serves a frame transmission request to a remote virtual machine by forwarding the frame transmission request to the instance of the distributed virtual switch at the root SoC and providing the frame metadata including the PCIe source address of the actual frame. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table. 
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router)
Maroney also teaches root and end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches the instance of the distributed virtual switch and forwarding the frame transmission request to the instance of the distributed virtual switch at the root SoC and providing frame metadata ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claim 7, the rejection of claim 6 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch at the root SoC handles a frame transmission request received from the at least one end point SoC to a remote virtual machine by further forwarding the frame transmission request to the instance of the distributed virtual switch at the target end point SoC. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table. 
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router.)
Maroney also teaches root and end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches the instance of the distributed virtual switch and forwarding the frame transmission request to the instance of the distributed virtual switch at the target SoC ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claim 8, the rejection of claim 7 is incorporated:
Nainar teaches wherein, for each virtual Ethernet link, the instance of the distributed virtual switch fetches data targeted to a particular virtual Ethernet link provided to each virtual machine of one of the SoCs responsive to a request from the instances of the distributed virtual switch at another SoC. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420. [Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
Gabriela also teaches ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table. This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n.
In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router.)
As per claim 9, the rejection of claim 8 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch at the root SoC forwards fetch requests not targeting the instance of the distributed virtual switch at the root SoC to the instance of the distributed virtual switch at the at least one end point SoC. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources[6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4] , including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table. 
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router.)
Maroney also teaches end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches an instance of the distributed virtual switch and forwarding fetch requests not targeting the instance of the distributed virtual switch at the root SoC to the instance of the distributed virtual switch at the at least one end point SoC. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips. [Paragraph 52], Service controller 416 may segment the user configured service chain in DVS 414.
According to various embodiments, VEMs 422(1)-422(3) may generate headers for forwarding packets according to the configured service chain such that substantially all services in the service chain may be provided in a single service loop irrespective of the number of services, with respective VEMs 422(1)-422(3) making independent decisions (e.g., without referring to other VEMs or other network elements) about the next hop decisions in the service chain packet forwarding. As used herein, the term “service loop” refers to a path of the packet from a starting point (e.g., WL 420(1)) through various service nodes (e.g., SN 418(2), SN 418(4), SN 418(5)) of the service chain until termination at the starting point (e.g., WL 420(1)).)
As per claim 10, the rejection of claim 9 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch of the at least one end point SoC is configured to provide a spatial isolation of the communication related to the virtual machines of each SoC, to provide a temporal isolation between the virtual machines with regard to Ethernet communication, to scan outgoing and incoming Ethernet traffic from and to each virtual machine, or to scan ingress traffic and egress traffic and to perform plausibility checks. ([Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4], including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard… To be able to run the software CAN controller together with other applications on the same processor, we use the CoMik microkernel. CoMik divides the physical processor into multiple virtual processors scheduled in TDM fashion. Each virtual processor gets a fraction of the processor capacity based on the number of allocated TDM slots and it is fully temporally isolated from the other virtual processors. The TDM table duration determines the maximum sustainable CAN bit rate, as the software controller has to be fast enough to write or read every CAN bit in its allocated slot.)
Maroney also teaches end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches an instance of the distributed virtual switch and scanning/monitoring traffic (egress/ingress) to perform plausibility checks. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 15], For example, a non-reactive service function (NRSF) includes any function that does not, or in the context of the specific network, cannot modify a packet. NRSFs may include, for example, traffic monitoring functions, accounting or billing functions, transparent cache functions, and lawful intercept functions by way of nonlimiting example. In some cases, NRSFs may include “testbed” SFs that are intended to be reactive SFs in the future, but that are currently undergoing testing and thus should not be permitted to modify “live” flows. Rather, they may simply perform “dummy” operations on duplicate flows and log the results so that the function can be evaluated. Thus, while these functions may be intended to modify packets in a general sense, in the context of the specific network, they may not be permitted to modify a packet.)
As per claim 11, the rejection of claim 10 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch, of the at least one end point SoC, providing the connection to the Ethernet network has exclusive access to an Ethernet network device. ([Page 525], In this case, we use a dedicated core to implement a CAN device, which operates as a CAN gateway at 100 kbit/s bit rate. As this core is not shared with other applications, the CAN controller runs bare-metal. Each of the other cores runs two applications. To send and receive CAN messages, the cores use the NoC for the communication with the dedicated CAN core.)
Maroney also teaches end point SoCs. ([Paragraph 42], In an illustrative example, hosts systems 210-240 can be system on a chip (SOC) hosts and memory sub-system 110 can have four PCIe endpoint ports. Each interface port can have one lane and can auto detect each link to connect to each host SOC RC (root complex). In implementations, interface port link/lanes combinations can include: 4×ports, 1 lane; 3×ports, 1 lane; 2×ports, 2 lanes; and 1×port, 4 lanes…)
Nainar also teaches an instance of the distributed virtual switch that provides the connection to the Ethernet network and has exclusive access to an Ethernet network device. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.)
As per claim 12, the rejection of claim 11 is incorporated:
Gabriela teaches wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to serve frame transmission request to the Ethernet network by forwarding the frame transmission request to the instance of the distributed virtual switch of the SoC providing the connection to the Ethernet network. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources [6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4], including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard.)
Nainar also teaches the instances of the distributed virtual switch and forwarding the frame transmission request to the instance of the distributed virtual switch of the SoC providing the connection to the Ethernet network. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claim 13, the rejection of claim 12 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch of the SoC providing the connection to the Ethernet network is configured to fetch data targeted to the Ethernet network from local virtual machines and from instances of the distributed virtual switch of another SoC. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources [6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4], including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table.
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router.)
Nainar teaches the instance of the distributed virtual switch and fetching data targeted to the Ethernet network from local virtual machines and from instances of the distributed virtual switch of another SoC. ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claim 14, the rejection of claim 13 is incorporated:
Gabriela teaches wherein the instance of the distributed virtual switch of the SoC providing the connection to the Ethernet network is configured to serve received frames from the Ethernet network to local virtual machines using data transfer and to remote virtual machines by forwarding the frame metadata to the instance of the distributed virtual switch of the another SoC. ([Page 524], To reduce the performance overhead, Sander et al. offer the solution of hardware controller virtualization [17], based on Single Root I/O virtualization (SR-IOV). SR-IOV is an extension of the Peripheral Component Interconnect Express (PCIe) protocol and it is the state-of-the-art hardware I/O virtualization method for Ethernet. The implementation is done by extending a CAN controller to add virtualization support and connecting it to a multi-core processor via a PCIe interface. [Page 524], In terms of virtualization, the latest proposed methods in automotive systems are inspired by server environments where Virtual Machines (VMs) define an isolated set of resources [6] [https://dl.acm.org/doi/pdf/10.1145/1165389.945462 Xen and the Art of Virtualization] [Page 527], The CAN MAC layer was implemented in software in the C programming language and it consists of creating the CAN frame in the 2.0A format, as defined by the ISO 11898 standard [4], including bit stuffing, CRC computation and filtering of the received messages. We call the software implementation of the CAN MAC layer emulation since it acts as a CAN controller, which transmits the CAN frames sent by the application and returns back to it the received frames according to the configuration of the reception filter. To ensure a safe transfer of the data between the application and the controller, a simplified version of C-Heap is used. Further, we have implemented the driver API according to the AUTOSAR standard. [Page 528], The NoC is scheduled using a pipelined TDM table.
This means that across the path, each router forwards the data from one of its inputs to one of its outputs in a given TDM slot, such that for a TDM frame having n slots, router i forwards the data during slot j and router i + 1 forwards the same data in the following slot, (j + 1) mod n. In the figure, the NoC TDM table has 3 slots and the connection between the sender tile and the gateway tile uses slot 3 in the first router and it increases with 1 in every upcoming router… [Page 531], In this paper we proposed how multiple applications can share a CAN port in a MPSoC platform. The shared CAN port can be on the local processor tile, or on a remote one. As part of our hardware and software design process, we tune the number of applications)
Nainar also teaches ([Paragraph 48], FIG. 4 illustrates a network 180 (generally indicated by an arrow) comprising a distributed virtual switch (DVS) 414, which is provided as a non-limiting example of a platform for providing a service-chaining network. DVS 414 can include a service controller 416, which may be an SDN-C controller, such as the one provided by router 200 of FIG. 2, or any other suitable platform. A plurality of service nodes (SN) 418 (e.g., SNs 418(1)-418(5)) may provide various network services to packets entering or leaving network 180. A plurality of virtual machines (VMs) may provide respective workloads (WLs) 420 (e.g., WL 420(1)-420(5)) on DVS 414, for example, by generating or receiving packets through DVS 414. One or more virtual Ethernet modules (VEMs) 422 (e.g., VEMs 422(1)-422(3)) may facilitate packet forwarding by DVS 414. In various embodiments, DVS 414 may execute in one or more hypervisors in one or more servers (or other computing and networking devices) in network 180. Each hypervisor may be embedded with one or more VEMs 422 that can perform various data plane functions such as advanced networking and security, switching between directly attached virtual machines, and uplinking to the rest of the network. Each VEM 422(1)-422(3) may include respective service function paths (SFPs) 424(1)-424(3) that can redirect traffic to SNs 418 before DVS 414 sends the packets into WLs 420.
[Paragraph 57], (Examples of metadata include classification information used for policy enforcement and network context for forwarding post service delivery). According to embodiments of communication system 400, each NSH may include a service path identifier identifying the service chain to which a packet belongs, and a location of the packet on the service chain, which can indicate the service hop (NSH aware node to forward the packet) on service overlay 426. [Paragraph 97], All or part of any hardware element disclosed herein may readily be provided in a system-on-a-chip (SoC), including central processing unit (CPU) package. An SoC represents an integrated circuit (IC) that integrates components of a computer or other electronic system into a single chip. Thus, for example, router 200 may be, in whole or in part, in an SoC. The SoC may contain digital, analog, mixed-signal, and radio frequency functions, all of which may be provided on a single chip substrate. Other embodiments may include a multi-chip-module (MCM), with a plurality of chips located within a single electronic package and configured to interact closely with each other through the electronic package. In various other embodiments, the computing functionalities disclosed herein may be implemented in one or more silicon cores in Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), and other semiconductor chips.)
As per claims 17-20, these are vehicle claims corresponding to the computing device claims 3-6. Therefore, they are rejected based on a similar rationale. [Gabriela discloses the vehicle/automotive field.]
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DONG U KIM whose telephone number is (571)270-1313. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bradley Teets, can be reached at (571) 272-3338. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/DONG U KIM/Primary Examiner, Art Unit 2197