DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 7-10, 13-14, 21-24, and 27-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claims 7-8 and 21-22,
The claims recite “the parser engine is to parse the unknown header according to the given protocol based on the indication of the given protocol included in the parsing information yielding the second parsed data.” However, it is not clear what is meant by the term “unknown header,” including to what entity the header is unknown, what it means to be unknown, and how it would be recognized as unknown. The specification states as follows:
Still further in accordance with an embodiment of the present disclosure the parser engine is to parse multiple headers of the header section yielding the first parsed data until reaching an unknown header, the steering engine is to perform a multi-field lookup based on the first parsed data to identify the unknown header as a header of a given protocol, the steering engine is to generate the parsing information to include an indication of the given protocol and an indication of a location of the unknown header, the parser engine is to find the unknown header based on the indication of the location in the parsing information, and the parser engine is to parse the unknown header according to the given protocol based on the indication of the given protocol included in the parsing information yielding the second parsed data. Specification, page 35, line 30 - page 36, line 3, emphasis added.
Although this paragraph describes what will be done with the “unknown header,” it is not clear how it came to be unknown and what that terminology means, as indicated above.
Claims 8 and 22 depend from Claims 7 and 21, and also recite the “unknown header,” making them indefinite for the same reasons.
Regarding Claims 9 and 23,
Claims 9 and 23 both recite “the steering engine is to find a trailer of the decrypted packet; the steering engine is to find a next protocol of a next header of the header section based on the found trailer.” Although this subject matter was not found in the prior art, the claims are not recited in a manner that indicates clear-cut steps of a process. For example, reciting that the steering engine “is to find a trailer of the decrypted packet” and “is to find a next protocol of a next header of the header section based on the found trailer” is not standard process claiming; the limitations read as statements of intended result specifying what needs to happen, rather than as process steps. It is also not clear what is meant by the term “find,” which implies that the system knows a priori what it is looking for. Because no background is provided to explain the context of the need to find the trailer (which is presumably associated with security protocols) or to find the “next protocol,” the claims are indefinite. If the process relates to the detailed explanation regarding figure 11, which is provided on page 32 of the specification, then that context must be included in the recitations to achieve definiteness. If not, then an alternative context must be provided and the claims must be recited more clearly.
Regarding Claims 10 and 24,
Claims 10 and 24 both recite “the steering engine is to compute a weighted sum of the flags yielding an indication of a location of a given part of the header section.” These claims are recited in the same intended-result manner as Claims 9 and 23, rather than as ordinary process steps. In addition, it is not clear what flags are being referenced. More clarity is required to achieve definiteness.
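For context only: one common arrangement in which a location is computed as a weighted sum of flags is the handling of optional header fields, as in GRE, where each presence bit contributes a fixed field length to the offset of whatever follows. The sketch below is purely illustrative; its flag names and weights are assumptions for the example and are not drawn from the application or the record.

```python
# Illustrative only (not the applicant's implementation): computing the offset
# of a given part of a header as a weighted sum of presence flags, in the
# style of GRE, where the checksum, key, and sequence bits each add a
# 4-byte optional field. Names and weights here are invented assumptions.

def optional_part_offset(flags: dict, base_len: int = 4) -> int:
    """Return the byte offset of the part following all optional fields."""
    weights = {"checksum": 4, "key": 4, "sequence": 4}  # bytes added per flag
    return base_len + sum(weights[name] for name, is_set in flags.items() if is_set)

# A header with the key and sequence flags set: 4 + 4 + 4 = 12 bytes in.
offset = optional_part_offset({"checksum": False, "key": True, "sequence": True})
```

Under this reading, each flag's weight is the length of the optional field it announces, so the weighted sum directly yields the location of the given part.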
Regarding Claims 13 and 27,
Both claims recite “the header section includes multiple segments with corresponding addresses.” It is not clear what is meant by the term “corresponding addresses,” and the specification does not provide further information, but merely states as follows:
Moreover in accordance with an embodiment of the present disclosure the first parsed data includes a segment identification field, the header section includes multiple segments with corresponding addresses, the steering engine is to compute an indication of a location of a current segment of the multiple segments based on the segment identification field … Specification, page 4, lines 20-24, emphasis added.
Further in accordance with an embodiment of the present disclosure the first parsed data includes a segment identification field, and the header section includes multiple segments with corresponding addresses, the method further including computing an indication of a location of a current segment of the multiple segments based on the segment identification field … Specification, page 7, lines 18-22, emphasis added.
In neither case does the specification provide any information about the “corresponding addresses,” but merely recites the exact information in the claims with no further explanation. A person of ordinary skill in the art (POSITA) would wonder what the addresses correspond to.
Regarding Claims 14 and 28,
The claims both recite “wherein the segment identification field is a segments left value, the method further comprising reducing the segment left value by one.” It is not clear what is meant by the term “segment left value.”
Still further in accordance with an embodiment of the present disclosure the segment identification field is a segments left value, the method further including reducing the segment left value by one. Specification, page 7, lines 29-31, emphasis added.
In some embodiments, the steering engine 21 is configured to add the given destination address (included in the current segment) to destination address field 1514 in the header section 1502 of the packet (block 1412), reduce the segment left value of the segment identification field 1512 by one (block 1414), and cause the packet to be forwarded to a device identified by the destination address field 1514 (block 1416). Specification, page 35, line 30 - page 36, line 3, emphasis added.
Although it seems possible that the “segment left value” is being decremented, insufficient information is provided to explain how the process works, thereby making the claims indefinite.
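For context only: the quoted passage resembles segment-list steering of the kind used in source routing, where a “segments left” count selects the current segment from a list carried in the header and is decremented after each hop. The following sketch is illustrative under that assumption; the structure and names are invented and nothing in it is attributed to the application.

```python
# Illustrative sketch only (not the applicant's implementation): processing a
# header that carries a list of segment addresses and a "segments left" count,
# in the manner of source routing. The steering step copies the current
# segment's address into the destination field and reduces "segments left"
# by one, matching the decrement described in the quoted passage.

def steer_segment(header: dict) -> dict:
    segments = header["segments"]            # list of segment addresses
    left = header["segments_left"]           # how many segments remain
    if left == 0:
        return header                        # no further steering needed
    header["destination"] = segments[left - 1]   # current segment's address
    header["segments_left"] = left - 1           # reduce by one
    return header

hdr = {"segments": ["2001:db8::1", "2001:db8::2"], "segments_left": 2,
       "destination": None}
hdr = steer_segment(hdr)
```

On this reading, the segment identification field doubles as an index into the segment list, which is why decrementing it moves processing to the next segment.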
Regarding Claims 8, 11, 14, 22, 25, and 28,
Because the claims depend from rejected base claims, they are also rejected.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6 and 15-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Urman et al. (US 2021/0328923 A1, hereinafter known as Urman).
Regarding Claim 1,
Urman teaches:
“A network device, comprising: an interface to receive packets over a network” (paragraph [0040]). [A network device includes an interface which receives a data packet including a header section, and at least one parser which receives data of the header section of the packet, and parses the data of the header section yielding a first header portion and a second header portion ([0040]).]
“a parser engine to: receive data of a header section of a packet” (paragraphs [0040], [0053]; fig. 1, elements 18, 20, 22). [The network device includes at least one parser which receives data of the header section of the packet, and parses the data of the header section yielding a first header portion ([0040]). Header sections of the received packets are parsed by the hardware parsers 18 which are controlled by the controller 22, typically under instruction of the packet processing engine 20 ([0053]).] (NOTE: The hardware parser combined with the packet processing engine is equivalent to the “parser engine.”)
“parse at least one first part of the header section yielding first parsed data” (paragraphs [0040], [0053]; fig. 1, elements 18, 24). [At least one parser receives data of the header section of the packet and parses the data of the header section yielding a first header portion and a second header portion ([0040]). Hardware parsers 18 parse the header sections according to data loaded into the parser configuration registers 24 ([0053]).] (NOTE: The header portions resulting from the parsing are equivalent to the “first parsed data.”)
“a steering engine to: receive the first parsed data” (paragraph [0054]; fig. 1, elements 16, 20). [The parsed information is stored in the buffer 16 for retrieval by the packet processing engine 20 and/or sent to the packet processing engine 20 ([0054]).] (NOTE: The packet processing engine is equivalent to the “steering engine” and the parsed information is equivalent to the “first parsed data.”)
“generate parsing information for use in parsing at least one second part of the header section” (paragraphs [0040], [0041]). [A memory of the network device stores match-and-action tables, each of which includes respective indices and a respective steering action entry corresponding to each of the respective indices ([0040]). The packet processing engine also computes a cumulative lookup value based on the first header portion and the second header portion responsively to the first steering action entry; the first steering action entry indicates that the cumulative lookup value should be computed based on the second header portion, which is combined with the first header portion in a register ([0041]).] (NOTE: The match-and-action tables and the cumulative lookup value are equivalent to the “parsing information” which is used for “parsing at least one second part of the header section.”)
“provide the parsing information to the parser engine” (paragraphs [0053], [0040], [0041]; fig. 1, elements 18, 24). [Hardware parsers 18 parse the header sections according to data loaded into the parser configuration registers 24 ([0053]). A packet processing engine of the network device receives the first header portion, the second header portion, fetches from the memory a first match-and-action table, and matches a first index having a corresponding first steering action entry in the first match-and-action table responsively ([0040]). The packet processing engine also computes a cumulative lookup value based on the first header portion and the second header portion ([0041]).] (NOTE: The hardware parser combined with the packet processing engine are equivalent to the “parser engine” and the match-and-action table and the cumulative lookup value are equivalent to the “parsing information.”)
“wherein: the parser engine is to parse the at least one second part of the header section based on the parsing information yielding second parsed data; and the steering engine is to perform an action based on the second parsed data” (paragraphs [0053], [0040], [0055]; fig. 1, elements 18, 20, 24, 28). [Hardware parsers 18 parse the header sections according to data loaded into the parser configuration registers 24 ([0053]). At least one parser receives data of the header section of the packet and parses the data of the header section yielding a first header portion and a second header portion ([0040]). The packet processing engine 20 uses the match and action tables 28 to determine how each packet should be processed according to the parsed information generated by the hardware parsers 18; the match and action tables 28 include indexes to match to the parsed information, and associated actions to be performed when a match is found ([0055]).] (NOTE: The packet processing engine is equivalent to the “steering engine,” the second parsed header portion to the “second parsed data,” and the action performed when a match is found is equivalent to the “action based on the second parsed data.”)
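The parse/steer interaction mapped above can be sketched as follows. This is an illustrative sketch only: the table contents, offsets, and function names are invented for the example and are not drawn from Urman or from the claims.

```python
# Hypothetical sketch of the mapped interaction: a parser yields parsed data
# for one part of the header, a steering step matches it against a
# match-and-action table, and the matched entry both selects an action and
# supplies parsing information (offset, length) for the next part.
# All table contents and offsets below are invented for illustration.

def parse(header: bytes, offset: int, length: int) -> bytes:
    return header[offset:offset + length]        # "parsed data" for one part

def steer(parsed: bytes, table: dict):
    return table.get(parsed, ("drop", None))     # (action, parsing info)

match_action = {b"\x08\x00": ("continue", (14, 20)),  # e.g. an EtherType ->
                b"\x06":     ("forward", None)}       # parse 20 bytes at 14

header = b"\xaa" * 12 + b"\x08\x00" + b"\x45" + b"\x00" * 19
first = parse(header, 12, 2)                     # first parsed data
action, info = steer(first, match_action)        # yields parsing information
if action == "continue":
    offset, length = info
    second = parse(header, offset, length)       # second parsed data
```

The sketch shows why the mapping treats the match-and-action entry as “parsing information”: the matched entry is what tells the parser where and how much to parse next.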
Regarding Claim 15,
Urman teaches:
“receiving packets over a network” (paragraph [0040]). [A network device includes an interface which receives a data packet including a header section, and at least one parser which receives data of the header section of the packet, and parses the data of the header section yielding a first header portion and a second header portion ([0040]).]
“parsing at least one first part of a header section of a packet yielding first parsed data” (paragraphs [0040], [0053]; fig. 1, elements 18, 20, 22). [The network device includes at least one parser which receives data of the header section of the packet, and parses the data of the header section yielding a first header portion ([0040]). Header sections of the received packets are parsed by the hardware parsers 18 which are controlled by the controller 22, typically under instruction of the packet processing engine 20 ([0053]).] (NOTE: The hardware parser combined with the packet processing engine is equivalent to the “parser engine,” since both are involved in the data parsing.)
“parse at least one first part of the header section yielding first parsed data” (paragraphs [0040], [0054]; fig. 1, elements 18, 24). [At least one parser receives data of the header section of the packet and parses the data of the header section yielding a first header portion and a second header portion ([0040]). Hardware parsers 18 parse the header sections according to data loaded into the parser configuration registers 24 ([0054]).] (NOTE: The header portions resulting from the parsing are equivalent to the “first parsed data.”)
“generating parsing information in a steering engine for use in parsing at least one second part of the header section” (paragraphs [0040], [0041]; fig. 1, elements 16, 20). [At least one parser receives data of the header section of the packet and parses the data of the header section yielding a first header portion and a second header portion; the packet processing engine of the network device receives the first header portion and the second header portion, fetches from the memory a first match-and-action table, and matches a first index having a corresponding first steering action entry in the first match-and-action table responsively ([0040]). The packet processing engine also computes a cumulative lookup value based on the first header portion and the second header portion responsively to the first steering action entry ([0041]).] (NOTE: The packet processing engine is equivalent to the “steering engine.” The parsed data, the match-and-action table, and the cumulative lookup information based on the headers are equivalent to the “parsing information.”)
“providing the parsing information to a parser engine” (paragraphs [0053], [0054], [0040], [0041]; fig. 1, elements 18, 24, 26, 32). [Hardware parsers 18 parse the header sections according to data loaded into the parser configuration registers 24, and cache memory 26 caches a selection of parsing configuration data sets 32 ([0053]). The parsed information is stored in the buffer 16 for retrieval by the packet processing engine 20 and/or sent to the packet processing engine 20 ([0054]). A packet processing engine of the network device receives the first header portion, the second header portion, fetches from the memory a first match-and-action table, and matches a first index having a corresponding first steering action entry in the first match-and-action table responsively ([0040]). The packet processing engine also computes a cumulative lookup value based on the first header portion and the second header portion ([0041]).] (NOTE: The hardware parser and the packet processing engine combined are equivalent to the “parser engine,” since both are involved in the data parsing, and the match-and-action table and the cumulative lookup value are equivalent to the “parsing information.”)
“parsing the at least one second part of the header section based on the parsing information yielding second parsed data; and performing an action in the steering engine based on the second parsed data” (paragraphs [0040], [0055]; fig. 1, elements 18, 20, 28). [At least one parser receives data of the header section of the packet and parses the data of the header section yielding a first header portion and a second header portion; the packet processing engine fetches from the memory a first match-and-action table and matches a first index having a corresponding first steering action entry in the first match-and-action table responsively ([0040]). The packet processing engine 20 uses the match and action tables 28 to determine how each packet should be processed according to the parsed information generated by the hardware parsers 18; the match and action tables 28 include indexes to match to the parsed information, and associated actions to be performed when a match is found ([0055]).] (NOTE: The packet processing engine is equivalent to the “steering engine,” the second parsed header portion to the “second parsed data,” and the associated action performed when a match is found to the “action based on the second parsed data.”)
Regarding Claims 2 and 16,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“wherein the steering engine is to: perform a computation based on the first parsed data; and generate the parsing information based on a result of the computation” (paragraphs [0048], [0055]; fig. 1, elements 18, 20, 28). [The cumulative lookup value may be comprised of two, three or more hash values of header portions depending on the configuration of the tables and the steering stage; the packet processing engine computes the cumulative lookup value based on the header portions ([0048]). The packet processing engine 20 uses the match and action tables 28 to determine how each packet should be processed according to the parsed information generated by the hardware parsers 18 ([0055]).] (NOTE: The packet processing engine is equivalent to the “steering engine,” the cumulative lookup value to “perform a computation,” and using the match and action tables to determine how each packet should be processed according to the parsed information to the “generate the parsing information based on a result of the computation.”)
Regarding Claims 3 and 17,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“wherein the parsing information includes: an indication of a location in the header section of a given header to parse; and an indication of a protocol of the given header” (paragraphs [0069], [0034]; fig. 1, elements 18, 20, 28). [When the parser receives the header section with the tunneling bit, the parser processes the header according to tunneling, and data produced from the parsing process are saved to a location in the buffer defined by the tunneling ([0069]). The network interface controller (NIC) implements a hash function using header information, including the protocol in use ([0034]).]
Regarding Claims 4 and 18,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“wherein the parsing information includes an indication of a protocol of a given header” (paragraph [0034]; fig. 1, elements 18, 20, 28). [The network interface controller (NIC) implements a hash function using header information including the protocol in use ([0034]).]
Regarding Claims 5 and 19,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“wherein the parsing information includes: an indication of a location in the header section of a given field to parse; and a length of the given field” (paragraph [0065], [0069]). [The "header size field" in the header gives the size of the header ([0065]). When the parser receives the header section with the tunneling bit, the parser processes the header according to tunneling, and data produced from the parsing process are saved to a location in the buffer defined by the tunneling ([0069]).]
Regarding Claims 6 and 20,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“wherein the parsing information includes: an indication of a first header of the header section; an offset in the header section from the first header to a second header to parse; and an indication of a protocol of the second header” (paragraphs [0008], [0065], [0066]; fig. 4, elements 40-1, 50, 52, 58, 62). [A parser coupled to receive data of the header section of the packet is configured to parse the data of the header section yielding a first header portion and a second header portion ([0008]). The header size field in the header gives the size of the header, and a header size offset field 52 provides the offset of the "header size field" in the header, which the flexible hardware parser 40-1 is configured to parse ([0065]). The next header offset field 58 provides the relative offset of a next header identification field in the header giving the identification of the next header to be parsed in the header section; the data subset 50 also includes a next protocol table 62, which maps next header identifications with protocols ([0066]).] (NOTE: The first header portion of the header section is equivalent to “an indication of a first header of the header section,” the header size offset field to the “offset in the header section from the first header to a second header,” and the next protocol table to the “protocol of the second header.”)
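The header-walk mechanism attributed to Urman above (a header size field, a next header identification field, and a next-protocol table) can be sketched as follows. The field layouts, offsets, and protocol names are invented for illustration and are not drawn from the reference.

```python
# Illustration only: walking a chain of headers using per-protocol metadata of
# the kind characterized above -- an offset to a header-size field, an offset
# to a next-header identification, and a table mapping identifications to
# protocols. All positions and values below are invented for the example.

meta = {  # per-protocol parsing metadata (assumed layout)
    "proto_a": {"size_off": 0, "next_id_off": 1},
}
next_protocol = {0x11: "proto_b"}  # next-protocol table (invented mapping)

def next_header(header: bytes, start: int, proto: str):
    m = meta[proto]
    size = header[start + m["size_off"]]         # header-size field
    next_id = header[start + m["next_id_off"]]   # next-header identification
    return start + size, next_protocol[next_id]  # offset and protocol of next

buf = bytes([4, 0x11, 0, 0]) + b"\x00" * 8       # a 4-byte "proto_a" header
off, proto = next_header(buf, 0, "proto_a")
```

The sketch makes concrete why the size offset and next-protocol table together amount to “parsing information”: they locate the second header and name the protocol by which to parse it.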
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claims 12-13 and 26-27 are rejected under 35 U.S.C. 103 as being unpatentable over Urman et al. (US 2021/0328923 A1, hereinafter known as Urman) in view of Levy et al. (US 2015/0110113 A1, hereinafter known as Levy).
Regarding Claims 12 and 26,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“the steering engine is to compute/computing an indication of a location …” (paragraph [0069]). [When the next parser receives the header section with the tunneling bit, the parser processes the header according to a location in the buffer defined in the tunneling behavior ([0069]).] (NOTE: The processing of the header according to a location is equivalent to “compute/computing an indication of a location.”)
“the steering engine is to generate/generating the parsing information to include the indication of the location of the current segment” and “the parser engine is to find and parse /finding and parsing the current segment based on the indication of the location in the parsing information yielding the second parsed data” (paragraphs [0076], [0077]). [The flexible hardware parser is coupled to retrieve the next header ID, which is located in the header of the header section at the next header offset, from the header section responsively to the retrieved next header offset; the flexible hardware parser is coupled to transfer the header section to one of the hardware parsers, which is configured to parse the next header of the header section ([0076]). During the multi-stage steering process on the network device, hardware parsers are coupled to receive data of the header sections of the packets, and to parse the data of the header section of each packet yielding a plurality of header portions, for example, a first header portion, a second header portion, and optionally a third header portion ([0077]).] (NOTE: The parsing of the header that is found in the header section is equivalent to “generating the parsing information to include the indication of the location of the current segment,” the particular header portion being parsed to the “find and parse /finding and parsing the current segment based on the indication of the location,” and the parsing of the second header portion to “parsing information yielding the second parsed data.”)
Urman does not teach:
“the first parsed data includes a segment identification field.”
“the header section includes multiple segments.”
“a current segment of the multiple segments based on the segment identification field.”
Levy teaches:
“the first parsed data includes a segment identification field” and “a current segment of the multiple segments based on the segment identification field” (paragraph [0042]). [The retrieved entry of the profile table includes one or more key segment fields that identify one or more fields to be extracted from a header of the packet ([0042]).]
“the header section includes multiple segments” (paragraphs [0017], [0031]; fig. 2A, elements 50, 51, 52, 54, 56). [The network device is generally a computer networking device that connects network segments ([0017]). A packet format 50 corresponds to an Ethernet packet encapsulating an internet protocol (IP), such as an IP version 4 (IPv4) or IP version 6 (IPv6), packet encapsulating a transmission control protocol (TCP) packet; a header 51 of the packet format 50 includes an Ethernet header 52, followed by an IP (e.g., IPv4 or IPv6) header 54, followed by a TCP header 56 ([0031]).] (NOTE: The header sections 52, 54, and 56 are equivalent to “header section includes multiple segments.”)
Both Urman and Levy teach systems which perform parsing of data packets, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains, to include in the Urman disclosure, the ability to divide a packet header into multiple segments, as taught by Levy. Such inclusion would have increased the flexibility of the parsing system by providing the ability to support multiple protocols, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding Claims 13 and 27,
Urman teaches all the limitations of parent Claims 1 and 15.
Urman teaches:
“the header section includes multiple segments with corresponding addresses” (paragraph [0034]). [Header information includes source IP address/port number, destination IP address/port number and the protocol in use ([0034]).]
“the steering engine is to compute/computing an indication of a location …” (paragraph [0069]). [When the next parser receives the header section with the tunneling bit, the parser processes the header according to a location in the buffer defined in the tunneling behavior ([0069]).] (NOTE: The processing of the header according to a location is equivalent to “compute/computing an indication of a location.”)
“the steering engine is to generate/generating the parsing information to include the indication of the location of the current segment” and “the parser engine is to find and parse /finding and parsing the current segment based on the indication of the location in the parsing information yielding the second parsed data” (paragraphs [0076], [0077]). [The flexible hardware parser is coupled to retrieve the next header ID, which is located in the header of the header section at the next header offset, from the header section responsively to the retrieved next header offset; the flexible hardware parser is coupled to transfer the header section to one of the hardware parsers, which is configured to parse the next header of the header section ([0076]). During the multi-stage steering process on the network device, hardware parsers are coupled to receive data of the header sections of the packets, and to parse the data of the header section of each packet yielding a plurality of header portions, for example, a first header portion, a second header portion, and optionally a third header portion ([0077]).] (NOTE: The parsing of the header that is found in the header section is equivalent to “generating the parsing information to include the indication of the location of the current segment,” the particular header portion being parsed to the “find and parse /finding and parsing the current segment based on the indication of the location,” and the parsing of the second header portion to “parsing information yielding the second parsed data.”)
“the steering engine is to add/ adding the given destination address to a destination address field in the header section” (paragraphs [0034], [0038]). [Header information includes destination IP address/port number ([0034]). The steering action entry indicates a destination port and/or node to which to send the packet ([0038]).]
“the steering engine is to cause/causing the packet to be forwarded to a device identified by the destination address field” (paragraph [0022]). [The first steering action entry includes forwarding the packet to at least one selected destination ([0022]).]
Urman does not teach:
“the first parsed data includes a segment identification field.”
“the header section includes multiple segments.”
“a current segment of the multiple segments based on the segment identification field.”
Levy teaches:
“the first parsed data includes a segment identification field” and “a current segment of the multiple segments based on the segment identification field” (paragraph [0042]). [The retrieved entry of the profile table includes one or more key segment fields that identify one or more fields to be extracted from a header of the packet ([0042]).]
“the header section includes multiple segments” (paragraphs [0017], [0031]; fig. 2A, elements 50, 51, 52, 54, 56). [The network device is generally a computer networking device that connects network segments ([0017]). A packet format 50 corresponds to an Ethernet packet encapsulating an internet protocol (IP), such as an IP version 4 (IPv4) or IP version 6 (IPv6), packet encapsulating a transmission control protocol (TCP) packet; a header 51 of the packet format 50 includes an Ethernet header 52, followed by an IP (e.g., IPv4 or IPv6) header 54, followed by a TCP header 56 ([0031]).] (NOTE: The header sections 52, 54, and 56 are equivalent to “header section includes multiple segments.”)
Both Urman and Levy teach systems that perform parsing of data packets, and those systems are comparable to that of the instant application. Because the two cited references are analogous to the instant application, it would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains to include in the Urman disclosure the ability to divide a packet header into multiple segments, as taught by Levy. Such inclusion would have increased the flexibility of the parsing system by providing the ability to support multiple protocols, and would have been consistent with the rationale of using known techniques to improve similar devices (methods, or products) in the same way to show a prima facie case of obviousness (MPEP 2143(I)(C)) under KSR International Co. v. Teleflex Inc., 127 S. Ct. 1727, 82 USPQ2d 1385, 1395-97 (2007).
Regarding Claims 14 and 28,
Because the claims depend from rejected base Claims 13 and 27, they are also rejected.
Examiner Note
Claims 11 and 25 depend from Claims 10 and 24, which are rejected under 35 U.S.C. 112(b). The two claims recite the following subject matter, which was found in Kfir et al. (US 2019/0215384 A1), paragraph [0004]:
wherein the given part is an optional type-length-value (TLV) header.
Potentially Allowable Subject Matter
Claims 7-10, 14, 21-24, and 28 recite subject matter that was not found in the prior art; however, because those claims are also rejected under 35 U.S.C. 112(b), that subject matter cannot be indicated as allowable until the rejections under 35 U.S.C. 112(b) are resolved.
The subject matter not found in the prior art for Claims 7 and 21 is as follows:
the steering engine is to perform a multi-field lookup based on the first parsed data to identify the unknown header as a header of a given protocol.
The subject matter not found in the prior art for Claims 8 and 22 is as follows:
the steering engine is to compute additional parsing information including an indication of a location of the next header and an indication of the next protocol.
It should be noted that Claims 8 and 22 depend from Claims 7 and 21, and are therefore also rejected under 35 U.S.C. 112(b).
The subject matter not found in the prior art for Claims 9 and 23 is as follows:
the steering engine is to find a next protocol of a next header of the header section based on the found trailer.
The subject matter not found in the prior art for Claims 10 and 24 is as follows:
the steering engine is to compute a weighted sum of the flags yielding an indication of a location of a given part of the header section.
It should be noted that Claims 11 and 25 depend from Claims 10 and 24, and are therefore also rejected under 35 U.S.C. 112(b). As noted in the Examiner Note above, prior art was found that discloses the subject matter of those claims.
The subject matter not found in the prior art for Claims 14 and 28 is as follows:
the steering engine is to reduce the segment left value by one.
It should be noted that Claims 14 and 28 depend from Claims 13 and 27, which are rejected under 35 U.S.C. 103.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The additional prior art references listed on Form PTO-892 and not used in the prior art rejections are also relevant to this application.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHYLLIS A BOOK whose telephone number is (571)272-0698. The examiner can normally be reached M-F 10:00 am - 7:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, GLENTON BURGESS can be reached at 571-272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHYLLIS A BOOK/Primary Examiner, Art Unit 2454