Prosecution Insights
Last updated: April 19, 2026
Application No. 17/885,243

SYSTEM ARCHITECTURES FOR BIG DATA PROCESSING

Status: Non-Final Office Action under §103
Filed: Aug 10, 2022
Examiner: CAO, DIEM K
Art Unit: 2196
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Trustees of the University of Pennsylvania
OA Round: 1 (Non-Final)

Grant Probability: 80% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 7m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 80% (above average): 531 granted / 663 resolved, +25.1% vs Tech Center average
Interview Lift: +19.4% (strong): allow rate among resolved cases with vs. without an interview
Typical Timeline: 3y 7m average prosecution; 29 applications currently pending
Career History: 692 total applications across all art units
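The headline figures above reduce to simple ratios over the counts shown. As a sanity check, here is a minimal Python sketch; note that the 55% Tech Center baseline is an assumption inferred from the +25.1% delta, not a number displayed on the page:

```python
# Reproduce the headline examiner statistics from the raw counts shown above.
granted = 531         # applications granted by this examiner
resolved = 663        # total resolved cases (granted + abandoned)
tc_avg_allow = 0.55   # ASSUMED Tech Center 2100 baseline implied by the +25.1% delta

allow_rate = granted / resolved          # career allow rate
delta_vs_tc = allow_rate - tc_avg_allow  # percentage-point gap vs the TC baseline

print(f"Career allow rate: {allow_rate:.1%}")   # ~80.1%
print(f"Delta vs TC avg:   {delta_vs_tc:+.1%}") # ~+25.1%
```

The displayed "80%" is simply this ratio rounded to the nearest whole percent.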

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 46.7% (+6.7% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 20.5% (-19.5% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 663 resolved cases.

Office Action: Non-Final Rejection under §103
DETAILED ACTION

Claims 1-20 are pending.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Election/Restrictions

Applicant's election without traverse of Group I in the reply filed on 8/28/2025 is acknowledged. Claims 21-76 are canceled by Applicant.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;

(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and

(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. That presumption is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: "a first interconnect configured to" and "a second interconnect configured to" perform various functions in claim 8 (see specification: "A first interconnect can be a storage interconnect and a second interconnect is a memory interconnect"; paragraphs [0006]-[0007]).

Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, 4, 8-16 and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Haywood et al. (US 2023/0305891 A1) in view of Khatri et al. (US 2022/0138013 A1).

As to claim 1, Haywood teaches a method, comprising: receiving a request to utilize at least one of a memory (during server boot-up or sometime after, memory allocation and access requests; paragraphs [0016], [0014]), wherein the request is received at a computing system comprising a local memory and local storage (memory-pooling server may include various hierarchically accessed storage devices … memory 115 is implemented by DRAM, storage technology; paragraph [0015]); determining availability of a remote memory at one or more remote nodes accessible by the computing system (each memory virtualizer is informed, at any given time, of volume and location of allocable memory on specific remote servers; paragraph [0018]); determining a distribution among the local memory and one or more remote nodes to fulfill the request (the process-hosting operating system may apply policy-based rules or algorithms to determine … a blend of local and remote memory is to be allocated to a given process; paragraphs [0013], [0019]-[0020], [0025] and [0028]); and, based on the determination, utilizing at least one of: a memory associated with a first set of one or more remote nodes via a first interconnect (an MP server maps those extra (beyond the local physical memory) LPAs to the memory virtualizer components which, in turn, associate the LPAs with remote memory; paragraphs [0019]-[0020]; the allocation engine coordinates with one or more allocation engines within remote MP servers to fulfill the allocation request in whole or in part out of the collective memory pool; paragraph [0028]; interconnect fabric 103; see Fig. 1 and paragraphs [0014], [0015], [0017]; fabric interface 331L then transmits over the interconnect fabric 103; paragraph [0038]).

Haywood does not teach a request to utilize at least a storage, determining availability of remote storage, or that local storage is used to fulfill the request. However, Khatri teaches a request to utilize at least a memory and a storage (workload requirement for the workload; paragraph [0041]; includes memory and storage devices; paragraphs [0042]-[0043]), that network storage devices are used to fulfill the request (network-attached storage devices; paragraphs [0023] and [0041]), and that local storage is used to fulfill the request (during runtime of the computing system (e.g., during the performance of the workload discussed above), the SCP system may confirm that … storage key management policies are being complied with; paragraph [0060]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Khatri to the system of Haywood, because Khatri teaches a workload compliance governor system that ensures that the components used in a disaggregated system to perform a workload comply with any of a variety of workload requirements prior to performing the workload (paragraphs [0003] and [0005]). 
As to claim 2, Haywood as modified by Khatri teaches the method of claim 1, further comprising utilizing at least one of the local memory and the local storage to fulfill the request (see Haywood: the process-hosting operating system may apply policy-based rules or algorithms to determine … a blend of local and remote memory is to be allocated to a given process; paragraphs [0013], [0019]-[0020], [0025]; and see Khatri: workload requirement for the workload; paragraph [0041]; includes memory and storage devices; paragraphs [0042]-[0043]).

As to claim 4, Haywood as modified by Khatri teaches the method of claim 1, wherein the request comprises access to at least one of a local memory and a local storage via a primary processing unit (see Haywood: CPU, requests to allocate/access memory; paragraph [0016]).

As to claim 8, Haywood teaches a system, comprising: at least one processing unit (one or more CPUs 111; paragraph [0014] and Fig. 1); a local memory (memory, DRAM, DIMMs; paragraph [0014] and Fig. 1); a local storage (local storage; paragraph [0034]); a first interconnect configured to access a remote memory at a first set of one or more remote nodes (interconnect fabric 103; paragraph [0015] and Fig. 1); and instructions that, when executed on the at least one processing unit, cause the system to at least (instruction received by CPU; paragraph [0014] and Fig. 1): receive a request to utilize at least one of a memory (during server boot-up or sometime after, memory allocation and access requests; paragraphs [0016], [0014]); determine availability of at least one of the remote memory at the first and second set of remote nodes (each memory virtualizer is informed, at any given time, of volume and location of allocable memory on specific remote servers; paragraph [0018]); determine a distribution among the local memory and one or more remote nodes to fulfill the request (the process-hosting operating system may apply policy-based rules or algorithms to determine … a blend of local and remote memory is to be allocated to a given process; paragraphs [0013], [0019]-[0020], [0025] and [0028]); and utilize at least one of: the remote memory associated with a first set of one or more remote nodes (an MP server maps those extra (beyond the local physical memory) LPAs to the memory virtualizer components which, in turn, associate the LPAs with remote memory; paragraphs [0019]-[0020]; the allocation engine coordinates with one or more allocation engines within remote MP servers to fulfill the allocation request in whole or in part out of the collective memory pool; paragraph [0028]; interconnect fabric 103; see Fig. 1 and paragraphs [0014], [0015], [0017]; fabric interface 331L then transmits over the interconnect fabric 103; paragraph [0038]).

Haywood does not teach a request to utilize at least a storage, a second interconnect configured to access a remote storage at a second set of one or more remote nodes, determining availability of remote storage, or that local storage is used to fulfill the request. 
However, Khatri teaches a request to utilize at least a memory and a storage (workload requirement for the workload; paragraph [0041]; includes memory and storage devices; paragraphs [0042]-[0043]), a second interconnect configured to access a remote storage at a second set of one or more remote nodes (NIC subsystem; paragraphs [0036] and [0031]), that network storage devices are used to fulfill the request (network-attached storage devices; paragraphs [0023] and [0041]), and that local storage is used to fulfill the request (during runtime of the computing system (e.g., during the performance of the workload discussed above), the SCP system may confirm that … storage key management policies are being complied with; paragraph [0060]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Khatri to the system of Haywood, because Khatri teaches a workload compliance governor system that ensures that the components used in a disaggregated system to perform a workload comply with any of a variety of workload requirements prior to performing the workload (paragraphs [0003] and [0005]).

As to claim 9, Haywood as modified by Khatri teaches the system of claim 8, wherein the at least one processing unit is one or more of an Intelligence Processing Unit (IPU) and a Central Processing Unit (CPU) (see Haywood: CPU, requests to allocate/access memory; paragraph [0016]).

As to claim 10, Haywood as modified by Khatri teaches wherein the first interconnect is a memory (storage) interconnect (see Haywood: PCIe; paragraph [0015]) and the second interconnect is a storage (memory) interconnect (see Khatri: NIC subsystem; paragraphs [0036] and [0031]).

As to claim 11, Haywood as modified by Khatri teaches the system of claim 8, wherein the first set of one or more remote nodes utilizes an RDMA network (see Haywood: direct memory access to remote memory; paragraph [0038]). 
As to claim 12, Haywood as modified by Khatri teaches the system of claim 8, wherein the second set of one or more remote nodes utilizes a Peripheral Component Interconnect Express (PCIe) network (see Haywood: PCIe; paragraph [0015]).

As to claim 13, Haywood as modified by Khatri teaches the system of claim 8, wherein at least one of a soft PCIe switch and a hard PCIe switch enables access to the second set of the one or more remote nodes (see Haywood: switches; paragraph [0039]).

As to claim 14, see the rejection of claim 2 above. As to claim 15, see the rejection of claim 4 above.

As to claim 16, Haywood as modified by Khatri does not teach the system of claim 8, wherein the system is a search engine. However, Haywood does not limit the system to a certain type of system. Therefore, the system taught by Haywood could be a search engine.

As to claim 19, Haywood as modified by Khatri teaches operating a system according to claim 8 (see Haywood: Fig. 3 illustrates an exemplary memory allocation flow within fabric-interconnected memory-pooling servers 101-1 and 101-2 (local and remote servers, respectively, as discussed in reference to Fig. 1); see Fig. 3 and paragraphs [0025]-[0028]).

As to claim 20, Haywood as modified by Khatri does not teach the method of claim 19, wherein the operating comprises executing a search. However, Haywood does not limit the system to a certain type of operation. Therefore, the system taught by Haywood could execute a search in response to a request, for example, as a search engine performing a search.

Claims 3, 5-7 and 17-18 are rejected under 35 U.S.C. 103 as being unpatentable over Haywood et al. (US 2023/0305891 A1) in view of Weerasinghe et al. ("Disaggregated FPGAs: Network Performance Comparison against Bare-Metal Servers, Virtual Machine and Linux Containers").

As to claim 3, Haywood as modified by Khatri does not clearly teach reducing a latency period for the request using the first and/or second interconnect. 
However, Weerasinghe teaches reducing a latency period for the request using the first and/or second interconnect (by bypassing the primary processing unit; abstract and section IV). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Weerasinghe to the system of Haywood and Khatri, because Weerasinghe teaches an architecture that decouples the FPGA from the CPU of the server by connecting the FPGA directly to the data center network, which improves the latency and throughput performance of the system (abstract).

As to claim 5, Haywood as modified by Khatri does not teach disaggregating the local memory from the primary processing unit of the computing system using the first interconnect, and disaggregating the local storage from the primary processing unit using the second interconnect. However, Weerasinghe teaches disaggregating the memory of FPGAs from the primary processing unit (CPU, processor) using the interconnect (see Figs. 1 and 2 and sections III and IV). Weerasinghe further teaches that the FPGAs are implemented in a data center (abstract). Haywood also teaches a data center including multiple servers (paragraph [0014]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to apply the teaching of Weerasinghe to the system of Haywood and Khatri, because Weerasinghe teaches an architecture that decouples the FPGA from the CPU of the server by connecting the FPGA directly to the data center network, which improves the latency and throughput performance of the system (abstract).

As to claim 6, Haywood as modified by Khatri and Weerasinghe teaches the method of claim 5, wherein the local memory and the local storage are disaggregated into a Field Programmable Gate Array (FPGA)-independent storage and memory (see Weerasinghe: Figs. 1 and 2). 
As to claim 7, Haywood as modified by Khatri and Weerasinghe teaches the method of claim 4, further comprising reducing data access overhead by bypassing the primary processing unit (see Weerasinghe: abstract and section IV).

As to claim 17, see the rejection of claim 5 above. As to claim 18, see the rejection of claim 6 above.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Das et al. (US 2017/0277655 A1) teaches memory sharing for working data using RDMA. Stabrawa et al. (US 9,836,217 B2) teaches a method for dynamically provisionable and allocatable memory external to a requesting apparatus. Erickson et al. (US 2021/0389887 A1) teaches concurrent remote-local memory allocation operations. Haywood et al. (US 2021/0390066 A1) teaches a local memory allocation device that may select a remote node to provide a memory allocation.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to DIEM K CAO, whose telephone number is (571) 272-3760. The examiner can normally be reached Monday-Friday, 8:00am-4:00pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, April Blair, can be reached at 571-270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DIEM K CAO/
Primary Examiner, Art Unit 2196
December 1, 2025

Prosecution Timeline

Aug 10, 2022: Application Filed
Nov 05, 2025: Non-Final Rejection under §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology:

Patent 12596576: TECHNIQUES TO EXPOSE APPLICATION TELEMETRY IN A VIRTUALIZED EXECUTION ENVIRONMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596585: DATA PROCESSING AND MANAGEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12561178: SYSTEM AND METHOD FOR MANAGING DATA RETENTION IN DISTRIBUTED SYSTEMS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12547445: AUTO TIME OPTIMIZATION FOR MIGRATION OF APPLICATIONS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12541396: RESOURCE ALLOCATION METHOD AND SYSTEM AFTER SYSTEM RESTART AND RELATED COMPONENT (granted Feb 03, 2026; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 80% (99% with interview, reflecting the +19.4% interview lift)
Median Time to Grant: 3y 7m
PTA Risk: Low

Based on 663 resolved cases by this examiner. Grant probability is derived from the career allow rate.
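The note above says the grant probability is derived from the career allow rate, and the with-interview figure appears to be that base probability plus the interview lift, capped at 100%. That additive relationship is an inference from the displayed numbers, not a documented formula; a minimal sketch under that assumption:

```python
# Combine the displayed projection inputs under the assumed additive model.
base_grant = 531 / 663   # career allow rate, ~80.1%
interview_lift = 0.194   # +19.4% lift shown in the examiner stats

# Assumed model: probabilities add, capped at 100%.
with_interview = min(base_grant + interview_lift, 1.0)

print(f"Grant probability: {base_grant:.0%}")      # 80%
print(f"With interview:    {with_interview:.0%}")  # 99%
```

Rounded to whole percents, this reproduces both displayed projections (80% and 99%).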
