Prosecution Insights
Last updated: April 19, 2026
Application No. 17/709,639

METHODS, APPARATUS, AND ARTICLES OF MANUFACTURE TO IMPROVE BANDWIDTH FOR PACKET TIMESTAMPING

Non-Final OA (§103, §112)

Filed: Mar 31, 2022
Examiner: LEE, CHAE S
Art Unit: 2415
Tech Center: 2400 — Computer Networks
Assignee: Intel Corporation
OA Round: 2 (Non-Final)

Grant Probability: 87% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 87% (above average; 315 granted / 363 resolved; +28.8% vs TC avg)
Interview Lift: +14.5% (moderate; on resolved cases with interview)
Typical Timeline: 2y 9m avg prosecution; 18 currently pending
Career History: 381 total applications across all art units

Statute-Specific Performance

§101: 4.8% (-35.2% vs TC avg)
§103: 71.3% (+31.3% vs TC avg)
§102: 2.8% (-37.2% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 363 resolved cases.

Office Action (§103, §112)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Reopen Prosecution - After Notice of Allowance

Prosecution on the merits of this application is reopened as to claims 1, 3-8, 10-15, 17-22, 24, 25 and 36-39, which are considered unpatentable for the reasons indicated below: The amended claims dated 8/20/2025 contain an intended-use feature that does not place meaningful limits on the scope of the claims. There are also § 112(b) issues in the amended claims that need to be addressed.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and

(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1, 3-8, 10-15, 17-22, 24, 25 and 36-39 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

In independent claims 1, 8, 15 and 22, the limitation “where a timestamp is to be stored separately from a second address in the shared storage circuitry” is unclear as to whether it is describing the shared storage circuitry or the first address. Also, the limitation “where the descriptor is stored” is unclear as to whether it is describing the second address or the shared storage circuitry. Hence these claims, including all dependent claims, are indefinite.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-6, 8, 10-13, 15, 17-20, 22, 24, 25 and 36-38 are rejected under 35 U.S.C. 103 as being unpatentable over “Kasichainula” (US 2021/0014177) in view of Nakibly et al. (US 10,298,496, hereinafter “Nakibly”).

For claims 1, 8, 15 and 22, Kasichainula discloses an apparatus (FIG. 4 illustrates an example embodiment of a network interface controller (NIC) 400; see Kasichainula par. 0091 and Fig. 4) comprising:

a cache (a descriptor cache 410; see Kasichainula par. 0092 and Fig. 4);

machine readable instructions (the disclosed program code (or data to create the program code) are intended to encompass such machine readable instructions and/or program(s); see Kasichainula par. 0189); and

at least one processor circuit to be programmed by the machine-readable instructions to (the processors (or cores) of the processor circuitry 1402 may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the platform 1400; see Kasichainula par. 0180 and Fig. 14):

parse a descriptor of data (the NIC fetches the descriptor, parses it, and then initiates DMA for fetching the data payload from memory; see Kasichainula par. 0122; upon receiving the descriptor, the MAC 604 parses the descriptor and schedules DMA (e.g., via DMA engine 608) for the packet payload; see Kasichainula par. 0143 and Fig. 6) to determine a pointer indicative of a first address in shared storage circuitry (the tail pointer is advanced by the NIC driver whenever it updates the descriptors with address pointers to the fresh data payload. When the tail pointer is ahead of the head pointer, the NIC starts fetching the descriptors from the corresponding transfer ring buffer (TRB) and stores them in its local prefetch buffer. In the synthetic test, the test software executes the "read timestamp counter" (RDTSC) instruction just before it updates the tail pointer in the NIC and thus captures the starting point of the data transfer with reference to the time stamp counter clock; see Kasichainula par. 0122) where a timestamp is to be stored separately from a second address in the shared storage circuitry where the descriptor is stored (Examiner’s note: the phrase “is to be” indicates a future event. Hence the manner of the storage of the timestamp and the descriptor is an intended use that does not place meaningful limits on the scope of the claim), the timestamp to indicate a time at which the data was transmitted to a second device (Examiner’s note: not given patentable weight because the claim does not require the transmission of data to a second device) (the packet latencies and jitter for each packet are computed by the NIC and updated in newly defined fields in the transmit descriptor named "transmit latency" and "transmit jitter," along with a timestamp corresponding to when the packet was transmitted, as shown in FIG. 7, which illustrates an example of the transmit descriptor write-back format; see Kasichainula par. 0130); and

cause storage of the pointer in the cache (the descriptors are stored in a prefetch cache inside the NIC until it is time to parse the descriptor and fetch data payload; see Kasichainula par. 0125).

Kasichainula does not explicitly disclose indicate that the descriptor may be overwritten. Nakibly discloses indicate that the descriptor may be overwritten (Cache control logic 220 can determine to preserve that entry, and instead evict another entry that may have a zero lock counter value (or a value indicating that entry is unlocked) … cache control logic 220 can deassert the valid bit in that entry (e.g., set the valid bit to zero), which would allow the content of the entry including queue ID, the memory descriptors, the update indicator, and the lock counter to be allocated to another queue ID and be overwritten with data corresponding to requests associated with that queue ID; see Nakibly page 17 col. 10 lines 4-23).
It would have been obvious to one of ordinary skill in the art before the effective filing date to use Nakibly's arrangement in Kasichainula's invention to improve the performance of the network adapter and lead to more efficient usage of the networking resources provided by the network interface controller (see Nakibly col. 3 lines 21-24).

Specifically for claim 8, Kasichainula discloses network interface circuitry (NIC) comprising (FIG. 4 illustrates an example embodiment of a network interface controller (NIC) 400; see Kasichainula par. 0091 and Fig. 4).

Specifically for claim 15, Kasichainula discloses at least one non-transitory computer readable medium comprising instructions to cause at least one processor circuit to (the instructions 1482 provided via the memory circuitry 1404 and/or the storage circuitry 1408 of FIG. 14 are embodied as one or more non-transitory computer readable storage media (see, e.g., NTCRSM 1460) including program code, a computer program product or data to create the computer program, with the computer program or data, to direct the processor circuitry 1402 of platform 1400 to perform electronic operations in the platform 1400; see Kasichainula par. 0187).

For claims 3, 10, 17 and 24, Kasichainula does not explicitly disclose the apparatus of claim 1, wherein one or more of the at least one processor circuit is to, in response to transmission of the data to the second device, cause storage of the timestamp at the first address in the shared storage circuitry indicated by the pointer. Nakibly discloses this limitation (Prefetch cache 212 can also store other management information not shown in FIG. 2B. For example, prefetch cache 212 can also store a least-recently-used (LRU) indicator (e.g., a timestamp) for each of entries 212a-212i. If prefetch cache 112 is full, cache control logic 220 can determine which entry to deallocate or evict based on a LRU eviction policy. For example, control logic 220 can evict entries that least recently provided memory descriptors to descriptor cache 214 in comparison with other entries. Cache control logic 220 can evict the least-recently-used entry among the unlocked entries based on the LRU information. In some embodiments, if all of the entries in descriptor cache 214 are locked (e.g., have non-zero lock counter values), an entry can be selected for eviction based on the LRU information and the lowest lock counter value. Reference is now made to FIG. 2C, which shows an example structure of descriptor cache 214, according to certain aspects of the disclosure. Descriptor cache 214 may include multiple entries where each entry is associated with a queue ID. In the example shown in FIG. 2C, descriptor cache 214 may include entries 214a and 214b. Each entry in descriptor cache 214 may store a queue ID, a set of memory descriptors (or other configuration data) associated with that queue ID, and the head pointer associated with the set of memory descriptors; see Nakibly col. 9 lines 8-41).

It would have been obvious to one of ordinary skill in the art before the effective filing date to use Nakibly's arrangement in Kasichainula's invention to improve the performance of the network adapter and lead to more efficient usage of the networking resources provided by the network interface controller (see Nakibly col. 3 lines 21-24).
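The descriptor/timestamp separation that the independent claims recite can be pictured with a small working model. The sketch below is illustrative only (the ring, field, and method names are hypothetical and come from neither Kasichainula nor Nakibly): a descriptor occupies one slot (the "second address"), the completion timestamp is written to a separate write-back slot (the "first address") reached through a cached pointer, and the descriptor slot is marked overwritable once that pointer is cached.

```python
import time

class TxRing:
    """Toy transmit path: descriptors and timestamps live at separate addresses."""

    def __init__(self, size: int = 8):
        self.descriptors = [None] * size   # "second addresses": descriptor slots
        self.timestamps = [None] * size    # "first addresses": separate write-back area
        self.ptr_cache = {}                # cached pointers into the timestamp area
        self.reusable = [True] * size      # whether a descriptor slot may be overwritten

    def post(self, slot: int, payload_addr: int) -> None:
        # Driver posts a descriptor; the slot is busy until the NIC parses it.
        self.descriptors[slot] = {"payload": payload_addr}
        self.reusable[slot] = False

    def parse(self, slot: int) -> int:
        # Parse the descriptor to determine the pointer to its timestamp slot,
        # cache that pointer, then indicate the descriptor may be overwritten.
        desc = self.descriptors[slot]
        self.ptr_cache[slot] = slot        # pointer into self.timestamps
        self.reusable[slot] = True
        return desc["payload"]

    def complete(self, slot: int) -> None:
        # On transmit completion, store the timestamp at the cached address,
        # not over the descriptor itself.
        self.timestamps[self.ptr_cache[slot]] = time.monotonic_ns()

ring = TxRing()
ring.post(0, payload_addr=0x1000)
ring.parse(0)
ring.complete(0)
assert ring.reusable[0] and ring.timestamps[0] is not None
```

In this toy model the descriptor slot becomes reusable as soon as its timestamp pointer is cached, so the later timestamp write-back never blocks descriptor reuse, which is consistent with the application's stated goal of improving bandwidth for packet timestamping.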
For claims 4, 11, 18 and 25, Kasichainula does not explicitly disclose the apparatus of claim 1, wherein the pointer is a first pointer, the cache is to store a second pointer indicative of a third address in the shared storage circuitry where a status of transmission of the data is to be stored, and one or more of the at least one processor circuit is to, in response to the transmission of the data to the second device, cause storage of the timestamp at the first address in the shared storage circuitry and the status at the third address in the shared storage circuitry. Nakibly discloses this limitation (Prefetch cache 212 can also store other management information not shown in FIG. 2B. For example, prefetch cache 212 can also store a least-recently-used (LRU) indicator (e.g., a timestamp) for each of entries 212a-212i. If prefetch cache 112 is full, cache control logic 220 can determine which entry to deallocate or evict based on a LRU eviction policy; see Nakibly col. 9 lines 8-14. Prefetch cache 412 also stores the memory addresses associated with the memory descriptors in queue 408, as well as the queue ID of queue 408. Descriptor cache 414 can store a set of memory descriptors obtained from prefetch cache 412 for a particular packet processing task, one or more head pointers 408b associated with the set of memory descriptors, as well as the queue ID of the queue from which the set of memory descriptors are fetched. Cache control logic 420 can receive requests from packet processor(s) 402, obtain the requested memory descriptors from packet descriptor cache 414, and then transmit the requested memory descriptors to packet processor(s) 402 in response to the requests. In some embodiments, packet descriptor cache 414 can be dedicated to an individual port, and the memory descriptors stored in packet descriptor cache 414 can be retrieved from the next level memory that is shared between the multiple ports such as the main memory and/or a shared mid-level cache; see Nakibly col. 13 lines 25-42).

It would have been obvious to one of ordinary skill in the art before the effective filing date to use Nakibly's arrangement in Kasichainula's invention to improve the performance of the network adapter and lead to more efficient usage of the networking resources provided by the network interface controller (see Nakibly col. 3 lines 21-24).

For claims 5, 12, 19 and 37, Kasichainula does not explicitly disclose the apparatus of claim 1, wherein the cache is a first cache, and one or more of the at least one processor circuit is to cause storage of the pointer in the first cache according to an index, the index based on at least a queue of a second cache of the apparatus and a position of the data in the queue, the queue corresponding to a traffic class of the data. Nakibly discloses this limitation (A cache-hit can be detected if at least one entry of the descriptor cache is storing the queue ID corresponding to the queue ID included in the request received at operation 502. In a multiport system, a cache-hit can be detected if an entry in the descriptor cache is storing the queue ID and the packet index corresponding to the request. A cache-miss can be determined if, for example, the system does not find an entry in the descriptor cache that has the matching queue ID, or a matching packet index for a multiport system. If the system determines that there is a cache-miss (at operation 504), the system can allocate a new entry in the descriptor cache at operation 506. When allocating a new entry, the update bit for the new entry is initially deasserted. For a multiport system, the packet index of the new entry is set to the packet index corresponding to the request. The system also obtains the current head pointer and tail pointer of the queue corresponding to the queue ID at operation 508. The system then determines whether the prefetch cache (e.g., prefetch cache 212 of FIG. 2A, prefetch cache 412 of FIG. 4A) is storing the requested memory descriptors at operation 510; see Nakibly col. 16 lines 8-61).

It would have been obvious to one of ordinary skill in the art before the effective filing date to use Nakibly's arrangement in Kasichainula's invention to improve the performance of the network adapter and lead to more efficient usage of the networking resources provided by the network interface controller (see Nakibly col. 3 lines 21-24).

For claims 6, 13, 20 and 38, Kasichainula does not explicitly disclose the apparatus of claim 1, wherein the cache is a first cache, the descriptor includes an offset indicative of a first time at which the data is to be transmitted, and one or more of the at least one processor circuit is to cause storage of the data in a second cache of the apparatus at a second time, the second time different from the first time.
Nakibly discloses this limitation (At time 306, based on the detection of cache-hit, cache control logic 220 updates the current head pointer associated with the requested memory descriptors. The update can be based on the head pointer value associated with the prior request. For example, cache control logic 220 can obtain the updated head pointer value ("X11") by offsetting the previous current head pointer value ("X1") with the number of memory descriptors (3) included in the first request as the first request progresses to the execution stage. Cache control logic 220 can also search for the memory descriptors associated with the updated head pointer in prefetch cache 212 using the queue ID and the updated head pointer value. Cache control logic 220 then obtains the memory descriptors ("A11," "B11," and "C11") from prefetch cache 212 based on the updated head pointer and the queue ID, and stores the memory descriptors together with the updated head pointer value in entry 214a. Cache control logic 220 also asserts the update indicator, to indicate that the memory descriptors stored in entry 214a are the most up-to-date and are ready to be consumed by packet processor 202; see Nakibly col. 11 lines 26-45).

It would have been obvious to one of ordinary skill in the art before the effective filing date to use Nakibly's arrangement in Kasichainula's invention to improve the performance of the network adapter and lead to more efficient usage of the networking resources provided by the network interface controller (see Nakibly col. 11 lines 21-24).
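The queue-and-position index recited in claims 5, 12, 19 and 37 (an index based on a queue of a second cache and the position of the data in that queue) can be sketched as a flat keyed lookup. The scheme below is a hypothetical illustration, not taken from either reference; the function name and the ring dimensions are assumptions:

```python
def cache_index(queue_id: int, position: int,
                num_queues: int = 8, ring_size: int = 256) -> int:
    """Derive a flat cache index from a traffic-class queue and a position in it."""
    assert 0 <= queue_id < num_queues and 0 <= position < ring_size
    return queue_id * ring_size + position

# Each (queue, position) pair maps to a distinct slot in the pointer cache.
assert cache_index(0, 0) == 0
assert cache_index(2, 5) == 2 * 256 + 5
assert len({cache_index(q, p) for q in range(8) for p in range(256)}) == 8 * 256
```

Any injective map from (queue, position) pairs to cache slots would serve here; the multiply-and-add form is simply the most direct such map.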
For claim 36, Kasichainula discloses the apparatus of claim 1, wherein the shared storage circuitry is first shared storage circuitry, and the at least one processor circuit includes one or more of: at least one of a central processor unit (CPU), a graphics processor unit (GPU), or a digital signal processor (DSP), the at least one of the CPU, the GPU, or the DSP having control circuitry to control data movement within the at least one processor circuit, arithmetic and logic circuitry to perform one or more first operations corresponding to instructions, and one or more registers to store a first result of the one or more first operations; a Field Programmable Gate Array (FPGA), the FPGA including first logic gate circuitry, a plurality of configurable interconnections, and second storage circuitry, the first logic gate circuitry and the interconnections to perform one or more second operations, the second storage circuitry to store a second result of the one or more second operations (The processor(s) of processor circuitry 1402 may include, for example, one or more processor cores (CPUs), application processors, GPUs, RISC processors, Acorn RISC Machine (ARM) processors, CISC processors, one or more DSPs, one or more FPGAs, one or more PLDs, one or more ASICs, one or more baseband processors, one or more radio-frequency integrated circuits (RFIC), one or more microprocessors or controllers, or any suitable combination thereof. The processors (or cores) of the processor circuitry 1402 may be coupled with or may include memory/storage and may be configured to execute instructions stored in the memory/storage to enable various applications or operating systems to run on the platform 1400; see Kasichainula par. 0180); or Application Specific Integrated Circuitry (ASIC) including second logic gate circuitry to perform one or more third operations, one or more of the at least one processor circuit to perform at least one of the first operations, the second operations, or the third operations to instantiate the machine-readable instructions (In some embodiments, the memory circuitry 1404 and/or storage circuitry 1408 may be divided into one or more trusted memory regions for storing applications or software modules of the TEE 1490. Although the instructions 1482 are shown as code blocks included in the memory circuitry 1404 and the computational logic 1483 is shown as code blocks in the storage circuitry 1408, it should be understood that any of the code blocks may be replaced with hardwired circuits, for example, built into an FPGA, ASIC, or some other suitable circuitry; see Kasichainula par. 0193-0194).

Allowable Subject Matter

Claims 7, 14, 21 and 39 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims, and if they also overcome the § 112(b) rejections above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Marcondes et al. (US 2008/0144624).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHAE S LEE, whose telephone number is (571) 272-8236. The examiner can normally be reached 8:30AM - 5:00PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jeffrey Rutkowski, can be reached at (571) 270-1215.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/CHAE S LEE/
Primary Examiner, Art Unit 2415

Prosecution Timeline

Mar 31, 2022: Application Filed
May 19, 2022: Response after Non-Final Action
May 16, 2025: Non-Final Rejection (§103, §112)
Aug 11, 2025: Applicant Interview (Telephonic)
Aug 11, 2025: Examiner Interview Summary
Aug 20, 2025: Response Filed
Mar 10, 2026: Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604330: MULTI-BEAM TECHNIQUES FOR SMALL DATA TRANSFER OVER PRECONFIGURED UPLINK RESOURCES (2y 5m to grant; granted Apr 14, 2026)
Patent 12598565: SYNCHRONIZATION COMMUNICATION WAVEFORMS FOR SIDELINK UNLICENSED (SL-U) (2y 5m to grant; granted Apr 07, 2026)
Patent 12598043: UPLINK SYMBOLS FOR DEMODULATION REFERENCE SIGNAL ON OPEN RADIO ACCESS NETWORK (2y 5m to grant; granted Apr 07, 2026)
Patent 12592803: USER EQUIPMENT AND METHOD THEREOF FOR WIRELESS COMMUNICATION (2y 5m to grant; granted Mar 31, 2026)
Patent 12587965: LOW-POWER MODES FOR VULNERABLE ROAD USER EQUIPMENT (2y 5m to grant; granted Mar 24, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 87% (99% with interview; +14.5% lift)
Median Time to Grant: 2y 9m
PTA Risk: Moderate

Based on 363 resolved cases by this examiner. Grant probability derived from career allow rate.
