Prosecution Insights
Last updated: April 19, 2026
Application No. 17/491,094

PRIVATE SPLIT CLIENT-SERVER INFERENCING

Non-Final OA (§101, §103)

Filed: Sep 30, 2021
Examiner: ALSHAHARI, SADIK AHMED
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: Qualcomm Incorporated
OA Round: 3 (Non-Final)

Grant Probability: 35% (At Risk)
OA Rounds: 3-4
To Grant: 4y 5m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 35% (grants only 35% of cases; 12 granted / 34 resolved; -19.7% vs TC avg)
Interview Lift: +47.1% (strong lift across resolved cases with an interview)
Avg Prosecution: 4y 5m (typical timeline)
Currently Pending: 24
Total Applications: 58 (career history, across all art units)
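As a quick sanity check, the career allow rate shown above follows directly from the granted/resolved counts reported for this examiner. A minimal sketch (the helper name is illustrative, not part of the report):

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

# 12 granted out of 34 resolved cases, per the examiner stats above.
rate = allow_rate(12, 34)
print(round(rate, 1))  # ~35.3, displayed on the dashboard as "35%"
```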

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC avg)
§103: 41.7% (+1.7% vs TC avg)
§102: 4.1% (-35.9% vs TC avg)
§112: 16.7% (-23.3% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 34 resolved cases

Office Action

§101 §103
DETAILED ACTION

Status of Claims
Claim(s) 1-4, 7-10, and 13-16 are pending and are examined herein. Claim(s) 1-4, 7-10, and 13-16 have been amended. Claim(s) 5-6, 11-12, and 17-23 were previously canceled. Claim(s) 1-4, 7-10, and 13-16 remain rejected under 35 U.S.C. § 101 and 35 U.S.C. § 103.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Acknowledgment is made of the applicant's claim for priority to U.S. Provisional Patent Application No. 63/086,362, filed on October 1, 2020.

Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 09/23/2025 has been entered.

Response to Amendment
The amendment filed on September 23, 2025 has been entered. Claims 1-4, 7-10, and 13-16 are pending in the application. Applicant's amendments to the claims have been fully considered and are addressed in the rejections below.

Response to Arguments
Applicant's arguments with respect to the rejection under 35 U.S.C. § 101, filed on 09/23/2025, have been fully considered but they are not persuasive.
Applicant argues: Applicant cites paragraphs [0005], [0023], and [0037] of the specification and asserts that the claims embody the solution described in the specification by reciting: (1) generating an initial feature vector based on a client-side split inference model component; (2) generating a modified feature vector by modifying a null-space component of the initial feature vector; (3) providing the modified feature vector to a convolution layer of a server-side split inference model component on a remote server; and (4) receiving an inference from the remote server. Further, Applicant submits the aforementioned features do not recite mental processes because they cannot be practically performed in the human mind, nor with pen and paper. (See Remarks pp. 6-10).

The examiner respectfully disagrees with the applicant's argument. While the specification discusses a technical problem related to privacy-preserving machine learning inference and proposes a potential solution involving split inference, the claims as currently drafted do not recite sufficient additional limitations to integrate the identified abstract idea into a practical application. Specifically, the claim recites a process that includes: generating an initial feature vector based on a client-side split inference model component; generating a modified feature vector by determining a null-space component of the initial feature vector based at least in part on a weight matrix and modifying the null-space component; transmitting the modified feature vector to a server-side split inference model component; and receiving an inference result. The primary focus of the claim is the generation of a modified feature vector by determining and modifying a null-space component based on a weight matrix.
The steps recite mathematical transformations and relationships involving vector representation and decomposition, which fall within the abstract idea groupings of mathematical concepts and mental processes. (See spec [0052]-[0064]). The additional elements recited in the claim, including a client-side split inference model component at the client device and a server-side split inference model component on a remote server, amount to no more than the use of generic computer components or software instructions to perform the abstract idea on a computer, or merely limit the use of the abstract idea to a particular technological environment. Further, the limitations directed to "providing" the modified feature vector and "receiving" an inference result amount to adding insignificant extra-solution activities to the judicial exception, as they represent data gathering and outputting steps incidental to the abstract process. Such additional limitations do not impose a meaningful limit on the judicial exception. See MPEP § 2106.05(f), (g), and (h). Although the claim references inferencing using a split inference model and corresponding split components on a client device and a remote server, the claim does not recite any specific details of the model split architecture, training, or inference model implementation. Moreover, the claim does not define the technical implementation details of how the modified feature vector is generated and the null-space component is determined using such components in a way that would demonstrate an improvement to the functioning of a computer or another technology. Instead, the claimed additional elements are recited at a high level of generality and fail to transform the claim into patent-eligible subject matter.

Applicant further argues that the claims are not directed to the abstract idea of a mental process because they cannot be practically performed in the human mind, nor with pen and paper. Applicant references the USPTO Memo dated August 4, 2025.
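The null-space operation at the center of the dispute (determining a null-space component of the feature vector from the server-side weight matrix via singular value decomposition, then modifying that component without changing what the server computes) can be sketched as follows. This is an illustrative reconstruction based on the claim language quoted in this action, not the actual claimed implementation; all variable names, shapes, and the noise-replacement strategy are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 10))   # stand-in server-side weight matrix (wide, so it has a null space)
x = rng.standard_normal(10)        # stand-in initial feature vector from the client-side component

# SVD of W: rows of Vt beyond rank(W) form an orthonormal basis of W's null space.
U, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-10))
null_basis = Vt[rank:]             # shape (10 - rank, 10)

# Decompose x into its null-space component and the remaining signal-space component.
x_null = null_basis.T @ (null_basis @ x)
x_signal = x - x_null

# Modify the null-space component, e.g. replace it with random noise confined to the null space.
noise = null_basis.T @ rng.standard_normal(null_basis.shape[0])
x_mod = x_signal + noise

# Because the modification lives entirely in the null space of W, the
# server-side product W @ x is numerically unchanged.
print(np.allclose(W @ x, W @ x_mod))  # True
```

The design point the sketch illustrates: anything added inside the null space of W is annihilated by the server-side multiplication, so privacy-motivated perturbation there costs nothing in downstream accuracy.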
Applicant further argues that the claims merely involve mathematical concepts rather than recite them, comparing them to Example 39 of the USPTO. (See Remarks pp. 10-11). The examiner respectfully disagrees. The Examiner notes that the claim as currently drafted is not solely directed to the mental process grouping, but also to the mathematical concept grouping. The claim recites operations involving a feature vector, null-space component determination comprising singular value decomposition, and null-space component modification using mathematical operations. These concepts fall within the abstract idea groupings of mathematical concepts and mental processes. Furthermore, it is noted that the use of a physical aid (e.g., pencil and paper or a slide rule) to help perform a mental step (e.g., a mathematical calculation) does not negate the mental nature of the limitation. See MPEP § 2106.04(a)(2)(III). Additionally, Applicant's assertion that the claims "encompass AI in a way that cannot be practically performed in the human mind" is unpersuasive because the claims do not recite specific details of an AI architecture, training mechanism, or specific AI computational technique. Instead, the elements recited in the claim merely apply abstract mathematical operations (i.e., vector decomposition and null-space determination) to generate a modified feature vector using computer components in the context of a split inference model. The recited client-side and server-side split inference model components are described at a high level of generality and function only as computer components and environments in which the abstract idea is applied. The claim does not recite any specific technical implementation of how the null-space component is technically determined, how the weight matrix is structured or obtained, or how modifying the null-space component results in an improvement to the functioning of a computer or the split inference model itself.
Rather, the limitations are directed to manipulating mathematical representations of data and merely limit the use of the abstract idea to a particular technological environment. With respect to the applicant's analogy to Example 39 of the USPTO, the Examiner notes that Example 39 and the current claim differ in their context and claimed process. Example 39 involves iterative training and dynamically updating a training set in facial detection. In contrast, the current claim mainly focuses on the concept of generating a modified feature vector by determining the null-space component and modifying the null-space component. Additionally, the dependent claims reciting the use of singular value decomposition (SVD) to determine the null-space component and modifying or removing null-space features merely apply a mathematical operation to determine the null space and then modify or remove the null-space features. While the claims do not express equations using mathematical symbols or formulas, the specification defines these operations using mathematical relationships and calculations. (See spec [0052]-[0064]). Accordingly, the split inference model components amount to no more than using a computer as a tool to perform the abstract idea or limiting the abstract idea to a particular technological environment, which does not integrate the judicial exception into a practical application.

Applicant further argues that additional limitations to the claims that reflect an improvement in technology, such as an improvement in the functioning of a computer or an improvement to other technology or a technical field, integrate the judicial exception into a practical application and thus impose a meaningful limit on the judicial exception.
Specifically, Applicant asserts that the claims reflect the solution described in the specification (e.g., [0025], [0031], and [0037]), asserting that the claimed solution reflects "modifying the signal space component of the client-side split inference model component output" to "further obfuscate private attributes" such that "[t]he amount of signal space modification is tunable based on the level of privacy desired by the client." (See Remarks pp. 11-13). The examiner respectfully disagrees. As noted above, the claims, as currently drafted, do not include additional elements that integrate the identified abstract idea into a practical application or amount to significantly more. While the applicant refers to modifying a signal space component to obfuscate private attributes and tune privacy, the described improvement comes from the abstract idea itself. According to MPEP § 2106.05(a), it is important to note that a judicial exception alone cannot provide the improvement; the improvement can be provided by one or more additional elements. MPEP § 2106.04(d) further recites that the "improvements" analysis in Step 2A determines whether the claim pertains to an improvement to the functioning of a computer or to another technology without reference to what is well-understood, routine, conventional activity. In the present case, the claims fail to recite sufficient additional elements that specify a technical implementation amounting to an improvement to the functioning of a computer or other technology. Rather, the claims recite generic processing components. For example, the elements including "a client-side split inference model" and "a server-side split inference component" of the split inference model merely define software components configured on a computer. These elements amount to merely using generic computer components and/or computer instructions at a high level of generality.
Additionally, the recited steps of providing the modified feature vector to a convolutional layer of a server-side split inference model on a remote server and receiving inference results amount to insignificant extra-solution activity, as described in MPEP § 2106.05(g). These limitations merely define a generic computer function being executed to perform routine data transmission (i.e., sending inputs and receiving outputs), which represents activities incidental to the primary process or product that are merely a nominal or tangential addition to the claim. Therefore, the additional elements, when considered individually or in combination, do not demonstrate integration into a practical application as required under Step 2A, Prong Two, nor do they amount to significantly more under Step 2B. In view of the above, Applicant's arguments are not persuasive, and the rejection under 35 U.S.C. § 101 is maintained.

Applicant's arguments with respect to the rejection under 35 U.S.C. § 103, filed on 09/23/2025 (see Remarks pp. 14-16), have been fully considered but are not persuasive and are moot in view of the new grounds of rejection necessitated by amendments. The Examiner refers to the updated rejection under 35 U.S.C. § 103 for more details.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter (Step 1). If the claim does fall within one of the statutory categories, the second step in the analysis is to determine whether the claim is directed to a judicial exception (Step 2A).
The Step 2A analysis is broken into two prongs. In the first prong (Step 2A, Prong 1), it is determined whether or not the claims recite a judicial exception (e.g., mathematical concepts, mental processes, certain methods of organizing human activity). If it is determined in Step 2A, Prong 1 that the claims recite a judicial exception, the analysis proceeds to the second prong (Step 2A, Prong 2), where it is determined whether or not the claims integrate the judicial exception into a practical application. If it is determined at Step 2A, Prong 2 that the claims do not integrate the judicial exception into a practical application, the analysis proceeds to determining whether the claim is a patent-eligible application of the exception (Step 2B). If an abstract idea is present in the claim, any element or combination of elements in the claim must be sufficient to ensure that the claim integrates the judicial exception into a practical application, or else amounts to significantly more than the abstract idea itself. Applicant is advised to consult MPEP 2106 for more details of the analysis.

Under the Step 1 analysis, Claims 1-4 recite a method (i.e., a process); Claims 7-10 recite a processing system (i.e., a machine); and Claims 13-16 recite a non-transitory computer-readable medium (i.e., an article of manufacture). Therefore, each of the claims falls into one of the four statutory categories (i.e., process, machine, article of manufacture, or composition of matter).

Claims 1-4, 7-10, and 13-16 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and hence is not patent-eligible subject matter.

Regarding Currently Amended Claim 1, Step 2A Prong 1: The claim recites an abstract idea enumerated in the 2019 PEG.
generating, at a client device, an initial feature vector based on a client-side split inference model component of the split inference model; (The "generating" step covers the abstract idea of a mathematical concept and mental process. Examiner's note: the "generating" limitation, as drafted and under its broadest reasonable interpretation, covers concepts that can be practically performed in the human mind and/or with a physical aid (e.g., pen and paper). The recitation of "a client-side split inference model component of the split inference model" is nothing more than using a computer component or instructions to perform the abstract idea on a computer. Other than reciting the use of a computer component, nothing in the claim precludes the generating step from being performed in the human mind with the aid of pen and paper. See MPEP § 2106.04(a)(2)(III).)

generating, at the client device, a modified feature vector based on: determining a null-space component of the initial feature vector based at least in part on a weight matrix associated with a server-side split inference model component of the split inference model; and modifying the null-space component of the initial feature vector to generate the modified feature vector; (The "generating" step covers the abstract idea of a mathematical concept and mental process. The "generating" limitations, as drafted and under their broadest reasonable interpretation, cover concepts that would fall under the mental process and mathematical concept groupings. Examiner's note: the "determining a null-space component" and "generating a modified feature vector" steps recite mathematical relationships and operations, including vector decomposition and manipulating information based on a weight matrix, which constitute abstract mathematical concepts. Such operations can be practically performed in the human mind with the aid of pen and paper. (See spec [0052]-[0064]).
The recitation of "at the client device" and "a server-side split inference model component of the split inference model" amounts to merely using a computer component or instructions to perform the abstract idea on a computer. Other than reciting the computer components, nothing in the claim precludes the determining and generating steps from being performed manually by a human. See MPEP § 2106.04(a)(2)(I) & (III).)

Step 2A Prong 2: Under this prong, we evaluate whether the claim recites additional elements that integrate the abstract idea into a practical application by considering the claim as a whole. The judicial exception is not integrated into a practical application. Additional Elements Analysis: The claim recites additional elements such as "a client-side split inference model component of the split inference model" and "a server-side split inference model component on a remote server", which amount to merely reciting the words "apply it" (or an equivalent) with the judicial exception, merely including instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP § 2106.05(f), and/or amount to generally linking the use of a judicial exception to a particular technological environment or field of use, as discussed in MPEP § 2106.05(h). Examiner's Note: These elements merely apply abstract mathematical operations (i.e., vector decomposition and null-space determination) using computer components in the context of a machine learning model.
The claim recites additional limitations such as "providing, by the client device, the modified feature vector to a convolution layer of the server-side split inference model component of the split inference model on a remote server;" and "receiving, by the client device, an inference from the remote server in response to the modified feature vector." The steps of "providing" and "receiving" amount to no more than adding insignificant extra-solution activities to the judicial exception, as discussed in MPEP § 2106.05(g). These limitations merely define data gathering and/or outputting steps in conjunction with the abstract idea. Such data gathering and/or outputting steps do not impose a meaningful limit on the scope of the claim (i.e., all uses of the recited judicial exception require such data gathering or data output).

Step 2B: Under this step, the claim must include additional elements that amount to significantly more than the judicial exception. These elements must not be well-understood, routine, or conventional in the relevant field. When viewed individually and as an ordered combination, the claim does not include any such additional elements that are sufficient to amount to significantly more (i.e., an inventive concept). Additional Elements Analysis: As explained above, the additional elements such as the "providing" and "receiving" steps amount to insignificant extra-solution activities to the judicial exception. These additional elements merely represent generic computer functions (i.e., data gathering and/or outputting). These steps do not impose meaningful limitations on the abstract idea (generating the modified feature vector), as they merely represent generic computer functions recited at a high level of generality. The courts have recognized generic computer functions such as "receiving or transmitting data over a network …" as well-understood, routine, and conventional functions in the field. See MPEP § 2106.05(d).
Even when considered in combination with the judicial exception, the additional elements are not sufficient to integrate the judicial exception into a practical application or amount to significantly more (i.e., an inventive concept). Therefore, claim 1 does not recite patent-eligible subject matter.

Regarding Currently Amended Claim 2, Step 2A Prong 1: Claim 2, which incorporates the rejection of claim 1, recites a further limitation: determining the null-space component comprises performing a singular value decomposition of the weight matrix. (This limitation is part of the abstract idea recited in claim 1. Claim 2 further suggests using a singular value decomposition to determine the null-space component. This step falls under the mathematical concepts and/or mental process groupings. In other words, the claim involves the use of a mathematical transformation (e.g., SVD) to determine or calculate the null-space values. (See spec [0052]-[0064]). See MPEP § 2106.04(a)(2)(I) & (III).) Step 2A Prong 2: The claim does not recite an additional element that integrates the judicial exception into a practical application. Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 2 is ineligible.

Regarding Currently Amended Claim 3, Step 2A Prong 1: Claim 3, which incorporates the rejection of claim 1, recites a further limitation: wherein modifying the null-space component comprises modifying a plurality of null-space features with randomly generated noise. (This limitation is part of the abstract idea recited in claim 1. Claim 3 further suggests modifying the feature vector by adding randomly generated noise to the features, which, under its broadest reasonable interpretation, covers concepts that can be performed in the human mind and/or with a physical aid (e.g., pen and paper). This step encompasses the abstract ideas of the mental process and mathematical concept groupings.)
Step 2A Prong 2: The claim does not recite an additional element that integrates the judicial exception into a practical application. Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 3 is ineligible.

Regarding Currently Amended Claim 4, Step 2A Prong 1: Claim 4, which incorporates the rejection of claim 1, recites a further limitation: wherein modifying the null-space component comprises removing a plurality of null-space feature values from the initial feature vector. (This limitation is part of the abstract idea recited in claim 1. Claim 4 merely suggests modifying the feature vector by removing null-space features, which, under its broadest reasonable interpretation, encompasses the mental process and/or mathematical concept groupings; see MPEP § 2106.04(a)(2)(III).) Step 2A Prong 2: The claim does not recite an additional element that integrates the judicial exception into a practical application. Step 2B: The claim does not recite additional elements that amount to significantly more than the judicial exception. Therefore, claim 4 is ineligible.

Regarding Currently Amended Claim 7, the claim recites similar limitations as corresponding claim 1. Therefore, the same subject matter eligibility analysis that was utilized for claim 1, as described above, is equally applicable to claim 7. The only difference is that claim 1 is drawn to a method, and claim 7 is drawn to a processing system. Step 2A Prong 2: The judicial exception is not integrated into a practical application. Claim 7 further recites: a memory comprising computer-executable instructions; and a processor configured to execute the computer-executable instructions … (These additional elements merely describe generic computer components and/or computer instructions to perform the method.
Merely adding a generic computer, generic computer components, or a programmed computer to perform generic computer functions does not automatically overcome an eligibility rejection. See MPEP § 2106.05(f).) Step 2B: The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As explained above in Step 2A, Prong Two, the additional elements merely describe generic computer components that are configured to perform the aforementioned abstract idea. Mere instructions to apply the exception on a computer cannot provide an inventive concept. Therefore, claim 7 is ineligible.

Regarding Currently Amended Claim 8, the claim recites similar limitations as corresponding claim 2. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 2, as described above, is equally applicable to claim 8. Therefore, claim 8 is ineligible.

Regarding Currently Amended Claim 9, the claim recites similar limitations as corresponding claim 3. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 3, as described above, is equally applicable to claim 9. Therefore, claim 9 is ineligible.

Regarding Currently Amended Claim 10, the claim recites similar limitations as corresponding claim 4. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 4, as described above, is equally applicable to claim 10. Therefore, claim 10 is ineligible.

Regarding Currently Amended Claim 13, the claim recites similar limitations as corresponding claim 1. Therefore, the same subject matter eligibility analysis that was utilized for claim 1, as described above, is equally applicable to claim 13. The only difference is that claim 1 is drawn to a method, and claim 13 is drawn to a non-transitory computer-readable medium. Therefore, claim 13 is ineligible.
Regarding Currently Amended Claim 14, the claim recites similar limitations as corresponding claim 2. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 2, as described above, is equally applicable to claim 14. Therefore, claim 14 is ineligible.

Regarding Currently Amended Claim 15, the claim recites similar limitations as corresponding claim 3. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 3, as described above, is equally applicable to claim 15. Therefore, claim 15 is ineligible.

Regarding Currently Amended Claim 16, the claim recites similar limitations as corresponding claim 4. Therefore, the same subject matter eligibility analysis (including the abstract idea) that was utilized for claim 4, as described above, is equally applicable to claim 16. Therefore, claim 16 is ineligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows: 1. Determining the scope and contents of the prior art. 2. Ascertaining the differences between the prior art and the claims at issue. 3. Resolving the level of ordinary skill in the pertinent art. 4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering the patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claim(s) 1, 3-4, 7, 9-10, 13, and 15-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wang et al. (NPL: "Not just privacy: Improving performance of private deep learning in mobile cloud." (2019)) in view of Xu et al. (IDS: "Cleaning the null space: A privacy mechanism for predictors." (2017)), and further in view of Lyu et al. (IDS: "FORESEEN: Towards differentially private deep inference for intelligent Internet of Things." (2020)).

Regarding Currently Amended Claim 1, Wang discloses the following: A method of inferencing with a split inference model, comprising: (Wang, [P. 2, Section: 1] "To address the above problems, we take both privacy and performance into consideration, and propose a privAte infeRence framework based on Deep nEural Networks in mobile cloud, named Arden. Arden partitions a deep neural network across the mobile device and the cloud data center. On the mobile device side, the raw data is transformed by the shallow portions of the DNN to extract their lower-level features. The local neural networks are derived from the pretrained neural networks to avoid local training, and can be regarded as a feature extractor for different inference tasks based on the idea of transfer learning [44].
In order to preserve privacy, we introduce the differential privacy mechanism [10] on the local side, which adds deliberate noise into the data before uploading. The large portions of the DNN are deployed in the cloud data center to run the complex and resource-hungry inference tasks. We propose a noisy training method to make the DNN robust to the additional noise in the data revealed by mobile devices, and so to improve the inference performance. Our main contributions are listed as follows: A framework enabling deep learning on mobile devices. We take the privacy, performance, and overhead into consideration, and design a framework to partition the very large deep neural networks in mobile cloud environment. All the resource-hungry tasks, e.g., network training and complex inference are offloaded to the cloud data centers. The works in the cloud data center are transparent to end mobile devices, which enables the online model upgrade.") [Examiner's Note: See Figure 3: The overview of Arden.]

generating, at a client device, an initial feature vector based on a client-side split inference model component of the split inference model; (Wang, [P. 1, Section: Abstract] "To benefit from the cloud data center without the privacy risk, we design, evaluate, and implement a cloud-based framework Arden which partitions the DNN across mobile devices and cloud data centers." [P. 2, Section: 1] "the shallow portions of a DNN are deployed on mobile devices while the complex and large parts are offloaded to the cloud data center. ... On the mobile device side, the raw data is transformed by the shallow portions of the DNN to extract their lower-level features. The local neural networks are derived from the pretrained neural networks to avoid local training, and can be regarded as a feature extractor for different inference tasks based on the idea of transfer learning [44]." [Pp.
3-4, Section: 3.1] “In the inference phase, the sensitive data is transformed by the local neural network to extract the general features embedded in it.” [P. 2, Section: 3.2] “To preserve privacy, the data transformation on the local side is perturbed. One of the key techniques in Arden is how to inject the perturbation that satisfies the differential privacy and measure the privacy budget of the perturbation. Regarding the local neural network as a deterministic function x_r = M(x_s), where x_s represents sensitive input data, ... Algorithm 1 outlines the differentially private data transformation on the local side. For each sensitive data x_s, some data items are masked by the nullification operation. Then the data is fed into the local neural network for feature extraction.”) [Examiner’s Note: The proposed framework is directed toward DNN-based private inference in mobile cloud. It breaks down large, complex deep models for cooperative, privacy-preserving analytics. This represents a split inference model on the client side (i.e., mobile) and the server side (i.e., cloud center). The local neural network located on the mobile side is used to extract the general features embedded in the input, x_r = M(x_s) (i.e., the initial feature vector), using the feature extraction module M of the partitioned privacy-preserving pretrained inference model.] generating, at the client device, a modified feature vector based on: ... modifying the null-space component of the initial feature vector to generate the modified feature vector; (Wang, [P. 2, Section: 1] “In order to preserve privacy, we introduce the differential privacy mechanism [10] on the local side, which adds deliberate noise into the data before uploading. ... A differentially private local transformation mechanism. To preserve privacy, we propose a new mechanism to perturb the local data transformation based on the differential privacy mechanism. 
Compared with the existing ones, our proposed mechanism can be customized to protect specific data items, and fits well with the stacking structure of neural networks.” [P. 3, Section: 3.1] “In the inference phase, the sensitive data is transformed by the local neural network to extract the general features embedded in it. For preserving privacy, the transformation is perturbed by both nullification and random noise which are consistent with differential privacy.” [P. 2, Section: 3.2] “To preserve privacy, the data transformation on the local side is perturbed. One of the key techniques in Arden is how to inject the perturbation that satisfies the differential privacy and measure the privacy budget of the perturbation. Regarding the local neural network as a deterministic function x_r = M(x_s), where x_s represents sensitive input data, ... Therefore, we propose a more sophisticated mechanism including nullification and layer-wise perturbation. The corresponding privacy budget analysis is given in detail. ... Nullification: Given the input sensitive data x_s that consists of N data items, nullification performs item-wise multiplication of x_s with I_n, where I_n is a binary matrix constituted of 0 and 1 with the same dimensions as x_s. I_n can be either specified by end users to nullify the highly sensitive data items or generated randomly.”) [Examiner’s Note: The perturbed representation generated on the local device represents the generated modified feature vector. The modification/perturbation is performed based on the nullification operation (i.e., modifying the null-space component) using Algorithm 1. Thus, the output perturbed representation is the modified feature vector. The null-space component is broadly interpreted as a component to be nullified/removed; Wang nullifies components of the extracted feature vector to generate a modified vector.] 
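As an illustrative aside (not part of the Office Action record), the Wang-style local transformation mapped above, nullification of input items followed by shallow feature extraction, clipping, and Laplace noise, can be sketched in a few lines. The function name `perturb_features`, all parameter values, and the toy linear extractor standing in for the shallow client-side layers are assumptions for illustration only:

```python
import numpy as np

def perturb_features(x_s, extractor, nullify_prob=0.1, clip_bound=1.0,
                     noise_scale=0.5, rng=None):
    """Sketch of a differentially-private local transformation in the style of
    Wang's Algorithm 1: nullify, extract features, clip, add Laplace noise."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Nullification: item-wise multiplication of x_s with a random binary mask I_n.
    mask = (rng.random(x_s.shape) >= nullify_prob).astype(x_s.dtype)
    x_masked = x_s * mask
    # Feature extraction by the client-side (shallow) model portion.
    x_r = extractor(x_masked)
    # Bound the infinity norm, then inject Laplace noise during the transformation.
    x_clipped = np.clip(x_r, -clip_bound, clip_bound)
    return x_clipped + rng.laplace(0.0, noise_scale, size=x_clipped.shape)

# Toy "shallow model": a fixed linear feature extractor (hypothetical).
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_hat = perturb_features(np.array([0.5, -2.0]), lambda v: W @ v)
print(x_hat.shape)  # the perturbed (modified) feature vector sent to the server
```

In a real split deployment the extractor would be the pretrained shallow layers of the DNN rather than a single matrix multiply.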
providing, by the client device, the modified feature vector to a convolution layer of the server-side split inference model component of the split inference model on a remote server; (Wang, [P. 2, Section: 1] “In order to preserve privacy, we introduce the differential privacy mechanism [10] on the local side, which adds deliberate noise into the data before uploading. The large portions of the DNN are deployed in the cloud data center to run the complex and resource-hungry inference tasks.” [P. 4, Section: 3.1] “For preserving privacy, the transformation is perturbed by both nullification and random noise which are consistent with differential privacy. Then, the perturbed representations are transmitted to the cloud for further complex inference.” [P. 4, Section: IV. FOG-BASED PRIVACY-PRESERVING DEEP LEARNING] “For image classification tasks, three widely used convolutional deep neural networks (Conv-Small, Conv-Middle, and Conv-Large) are implemented in Arden [22, 36]. We derive the local neural network from Conv-Small, which is pretrained on CIFAR-100 dataset [20]. ... we use Conv-Middle as the cloud-side DNN.” [Pp. 7-8, Section: V-B] “Algorithm 1. In particular, the first two convolutional layers including the pooling layers are taken as the feature extractor that is deployed on end devices for representation extraction, the perturbed representation is further processed by the remaining fully connected layers of the classifier module. For MVMC dataset, we use the Binary Neural Network (BNN) in [30].”) [Examiner’s Note: Wang teaches transmitting the perturbed representation generated by the local neural network on the mobile side to the cloud neural network on the cloud side for inference. The cloud-side neural network is implemented in the cloud data center (i.e., remote server) and consists of convolutional layers (e.g., Conv-Middle) of Arden (DNN-based private inference in mobile cloud). 
This reads on the claim limitation of providing the modified feature vector to a convolution layer of the server-side split inference model.] While Wang teaches the process of generating and providing a perturbed representation using the Arden framework (i.e., split inference model) and implicitly provides an inference result by the cloud-side neural network, Wang does not appear to explicitly teach the following: determining a null-space component of the initial feature vector based at least in part on a weight matrix associated with a server-side split inference model component of the split inference model; receiving, by the client device, an inference from the remote server in response to the modified feature vector. However, Wang in view of Xu teaches the following: determining a null-space component of the initial feature vector based at least in part on a weight matrix associated with a server-side split inference model component of the split inference model; (Xu, [Abstract] “We describe two algorithms aimed at providing such privacy when the predictors have a linear operator in the first stage. The desired effect can be achieved by zeroing out feature components in the approximate null space of the linear operator.” [P. 1, Section: Problem Statement] “the cleaner, the ally, and the adversary. In the example discussed earlier the cleaner is the client and the ally is the company. The cleaner has a single feature vector denoted by x. In addition to x the cleaner has knowledge of the linear operator of the predictor used by the ally. The cleaner cleans x and produces x̃. We write the computation performed by the cleaner as: x̃ = clean(x) (1) The ally has a predictor f_d that can predict desired information, denoted by y_d, from the uncleaned data x. This can be expressed as: y_d = f_d(x).” [P. 2, Section: Our results] “Our cleaning algorithms require that the predictor f_d() starts with a linear operator. 
This means that it can be written as: f_d(x) = f_1(A_d^T x), where A_d is a matrix. ... We describe two cleaning algorithms that work intuitively as follows. The algorithms take as input the feature vector x and the matrix A_d (used by the ally in the initial linear step). The cleaning of x is achieved by subtracting from it projections on the approximate null space of A_d^T. These projections are irrelevant to the prediction of the desired information.” [P. 3, Section: Algorithm 1] “The first algorithm is shown in Fig. 2. It identifies t eigenvectors and a fraction α_{t+1} of another eigenvector that are in the approximate null space of A_d. Zeroing out the projection of the feature vector x on these eigenvectors produces the desired cleaned feature vector.”) [Examiner’s Note: Xu explicitly computes eigenvectors/eigenvalues to identify which components of the feature vector lie in the null space (Steps 1-3 of Algorithm 1). This is a determination process of the null space. The vector x represents the initial feature vector before the privacy-preserving transformation (i.e., the feature vector to be cleaned). Xu uses the matrix representing the linear operator of the ally (i.e., the company-side model). The null-space determination is computed based on the weight matrix A_d (i.e., “Compute eigenvectors/eigenvalues of B_d = A_d A_d^T.”). The split architecture of Xu is described as follows: the cleaner (client device) determines the null space and modifies the feature vector, and the ally (company, e.g., server side) performs inference with predictor f_d. The weight matrix belongs to the ally’s predictor (which acts as the “weight matrix associated with the server-side component”). The cleaner uses knowledge of the weight matrix A_d to identify the null space.] modifying the null-space component of the initial feature vector to generate the modified feature vector; (Xu, [P. 2, Section: Our results, Col. 
2] “We describe two cleaning algorithms that work intuitively as follows. The algorithms take as input the feature vector x and the matrix A_d (used by the ally in the initial linear step). The cleaning of x is achieved by subtracting from it projections on the approximate null space of A_d^T. These projections are irrelevant to the prediction of the desired information. Both algorithms remove components in the exact null space. They behave differently in identifying the approximate null space.” [Pp. 2-3, Col. 1] “Unlike these studies we consider the privacy of information that can be extracted from a single feature vector. Clearly, to increase privacy we can add noise to the feature vector or use fewer features.” [P. 3, Col. 2, Section: Cleaning] “In this section we describe the algorithms for cleaning the feature components in the approximate null space of the linear predictor” See Figure 2: Algorithm 1 for cleaning a feature vector x.) [Examiner’s Note: Xu teaches the process of identifying the null space of the feature vector x and producing a “cleaned feature vector x̃”. The cleaned feature vector reads on the modified feature vector.] Therefore, at the effective filing date, it would have been prima facie obvious to one of ordinary skill in the art to modify the Arden private inference framework of Wang to incorporate the privacy mechanism as taught by Xu. One would have been motivated to make such a combination in order to zero out projections on the null space of predictors, thereby enhancing privacy without affecting prediction (Xu [P. 6]). As noted above, while Wang implicitly teaches providing, by the cloud-side neural network, an inference result to the mobile device of the end user, Wang in view of Xu does not appear to explicitly state: receiving, by the client device, an inference from the remote server in response to the modified feature vector. However, it would have been obvious in view of Lyu. 
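As an illustrative aside (not part of the Office Action record), the Xu-style cleaning mapped above, subtracting the projections of x onto the null-space eigenvectors of B_d = A_d A_d^T so that the prediction input A_d^T x is unchanged, can be sketched as follows. The helper name `clean` echoes Xu's terminology; the toy matrix and tolerance are assumptions:

```python
import numpy as np

def clean(x, A_d, tol=1e-10):
    """Sketch of Xu-style cleaning: remove the components of x lying in the
    (approximate) null space of A_d^T, leaving A_d^T x unchanged."""
    # Eigendecompose B_d = A_d A_d^T; eigenvectors with ~zero eigenvalues
    # span the null space of A_d^T.
    eigvals, eigvecs = np.linalg.eigh(A_d @ A_d.T)
    null_basis = eigvecs[:, eigvals < tol]
    # Subtract the projection of x onto the null-space basis.
    return x - null_basis @ (null_basis.T @ x)

A_d = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])  # A_d^T ignores x[2]
x = np.array([3.0, -1.0, 7.0])
x_clean = clean(x, A_d)
print(x_clean)  # -> [ 3. -1.  0.]
```

The hidden third component is zeroed while A_d^T x_clean equals A_d^T x, which is the sense in which the null-space projections are "irrelevant to the prediction."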
Hereinafter, Lyu, in combination with Wang in view of Xu, teaches: receiving, by the client device, an inference from the remote server in response to the modified feature vector. (Lyu, [Abstract] “In FORESEEN, the intermediate fog nodes and the cloud collaboratively perform noisy training of deep neural networks (DNNs), while each end device and its connected fog node collaboratively perform fast, private yet accurate inference.” [Pp. 5-6, Section: IV-B] “As shown in Fig. 3, end devices and the nearby fog node collaborate on private inference in the following steps: 1) End devices extract differentially private test representation, and send it to the nearby fog node; 2) Fog node processes the received representation and returns the produced fog inference back to end devices.”) [Examiner’s Note: Lyu’s FORESEEN partitions a DNN across end devices and remote fog nodes (i.e., a remote server). The fog node processes the perturbed representation x̂_r (i.e., the modified feature vector) and returns the produced inference result back to the end devices.] Wang, Xu, and Lyu are analogous art because they are from the same field of endeavor and their disclosures generally relate to privacy-preserving deep learning frameworks. Accordingly, at the effective filing date, it would have been prima facie obvious to one ordinarily skilled in the art of machine learning to modify the combination of Wang and Xu to incorporate the proposed FORESEEN framework as taught by Lyu. One would have been motivated to make such a combination in order to protect the inference privacy contained in the features sent from end devices. Doing so would provide private and efficient inference (Lyu [Section VI]). Regarding Currently Amended Claim 3, the combination of Wang, Xu, and Lyu teaches the elements of claim 1 as outlined above, and further teaches: wherein modifying the null-space component comprises modifying a plurality of null-space features with randomly generated noise. (Wang, [P. 4, Col. 
2, Section: 3.2] “Algorithm 1 outlines the differentially private data transformation on the local side. For each sensitive data x_s, some data items are masked by the nullification operation. Then the data is fed into the local neural network for feature extraction. At a specific layer l, for the output of M_l (the neural network from the first layer to the l-th layer), we bound its infinity norm by B, and inject noise to protect privacy. ... Nullification: Given the input sensitive data x_s that consists of N data items, nullification performs item-wise multiplication of x_s with I_n, where I_n is a binary matrix constituted of 0 and 1 with the same dimensions as x_s. I_n can be either specified by end users to nullify the highly sensitive data items or generated randomly. ... We add random noise sampled from the Laplace distribution into the bounded output x'_l to protect the privacy. Different from the existing works where the noise is added into the final output of the deterministic function, the noise is added during the transformation in Algorithm 1.”) [Examiner’s Note: Wang teaches modifying a plurality of components with randomly generated noise. Specifically, Algorithm 1 shows that multiple components (identified by the nullification operation) are modified by adding random noise sampled from the Laplace distribution. This process of adding randomly generated noise to multiple feature components directly corresponds to “modifying a plurality of null-space features with randomly generated noise.”] Regarding Currently Amended Claim 4, the combination of Wang, Xu, and Lyu teaches the elements of claim 1 as outlined above, and further teaches: wherein modifying the null-space component comprises removing a plurality of null-space feature values from the initial feature vector. (Xu, [Abstract] “We describe two algorithms aimed at providing such privacy when the predictors have a linear operator in the first stage. 
The desired effect can be achieved by zeroing out feature components in the approximate null space of the linear operator.” [P. 1, Section: 1, Col. 2] “Our idea is to use knowledge about the predictor of desired information to help hide confidential information. In particular, if the predictor has a linear operator in the first stage then there is a transformation of the data that reveals feature components not needed for the prediction. Specifically, combinations of features that lie in the operator null space do not affect the prediction and need not be provided. Removing this information is what we call cleaning.” [P. 2, Col. 2, Section: Our results] “We describe two cleaning algorithms that work intuitively as follows. The algorithms take as input the feature vector x and the matrix A_d (used by the ally in the initial linear step). The cleaning of x is achieved by subtracting from it projections on the approximate null space of A_d^T. These projections are irrelevant to the prediction of the desired information. Both algorithms remove components in the exact null space. They behave differently in identifying the approximate null space.”) Regarding Currently Amended Claim 7, the claim recites substantially similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. Claim 1 is directed to a method, and claim 7 is directed to “A processing system, comprising: a memory comprising computer-executable instructions; and a processor configured to execute the computer-executable instructions.” Wang describes the implementation of the proposed Arden framework using memory and a processing unit (CPU). Wang specifically teaches [Pp. 9-10, Section: 4.5] “We implement the Arden in a demo system composed by HUAWEI HONOR 8 and DELL INSPIRON 15. The mobile device is equipped with ARM Cortex-A53@2.3GHz and ARM Cortex-A53@1.81GHz. The laptop is equipped with Core i7-7700HQ@2.80GHz and NVIDIA GTX 1050Ti. 
The mobile device is connected to the laptop through the IEEE 802.11 wireless network. We use TensorFlow to generate the deployable model for the Android system.” Regarding Currently Amended Claim 9, the claim recites substantially similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale. Regarding Currently Amended Claim 10, the claim recites substantially similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale. Regarding Currently Amended Claim 13, the claim recites similar limitations as corresponding claim 1 and is rejected for similar reasons as claim 1 using similar teachings and rationale. The only difference is that claim 13 is directed to a non-transitory computer-readable medium. Regarding Currently Amended Claim 15, the claim recites substantially similar limitations as corresponding claim 3 and is rejected for similar reasons as claim 3 using similar teachings and rationale. Regarding Currently Amended Claim 16, the claim recites substantially similar limitations as corresponding claim 4 and is rejected for similar reasons as claim 4 using similar teachings and rationale. Claim(s) 2, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of Wang, Xu, and Lyu as outlined above, and further in view of Chou et al., (NPL: "Low-complexity privacy-preserving compressive analysis using subspace-based dictionary for ECG telemonitoring system." (2018)). Regarding Currently Amended Claim 2, the combination of Wang, Xu, and Lyu teaches the elements of claim 1 as outlined above. As outlined above, Xu, in combination with Wang and Lyu, teaches the limitation of determining a null-space component of the initial feature vector based at least in part on a weight matrix associated with a server-side split inference model component. 
The combination of Wang, Xu, and Lyu does not appear to explicitly teach: wherein determining the null-space component comprises performing a singular value decomposition of the weight matrix. However, it would have been obvious in view of Chou. Hereinafter, Chou, in combination with Wang, Xu, and Lyu, teaches the limitation: wherein determining the null-space component comprises performing a singular value decomposition of the weight matrix. (Chou, [P. 804, Col. 1, Section: III] “we will propose the modification methods. 2) Subspace Learning: Consider a dataset of n vectors X = [x_1 x_2 · · · x_n], x_i ∈ R^N. The subspace of the dataset can be found through subspace learning, which is composed of two parts: signal space learning and signal space division. The signal space is learnt by PCA and further divided by LPP and SVD.” [P. 804, Col. 2] “From the assumptions of Section III-A-1, the left null space of the discriminative subspace needs to be found. On the other hand, though we can project data in signal space to the discriminative subspace by W_LPP, the projections learnt aren’t orthonormal. To make the unsuitable matrix W_LPP satisfy the assumptions, we propose the modification methods by performing singular value decomposition (SVD) of matrix W_LPP. We can not only derive the left null space of W_LPP, but also find the orthonormal basis of W_LPP, etc.”) Accordingly, it would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, having the combination of Wang, Xu, and Lyu before them, to incorporate the privacy-preserving method as taught by Chou. One would have been motivated to make such a combination in order to separate the signal space into discriminative and complementary subspaces. This would protect privacy from an information-theoretic perspective while delivering the classification capability (Chou [Abstract]). 
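As an illustrative aside (not part of the Office Action record), determining a null space by singular value decomposition of a weight matrix, as mapped to Chou for claim 2, can be sketched as follows. The helper name `null_space_via_svd`, the tolerance, and the toy weight matrix are assumptions for illustration:

```python
import numpy as np

def null_space_via_svd(W, tol=1e-10):
    """Sketch of SVD-based null-space determination: the rows of Vh whose
    singular values are ~zero span the null space of W."""
    _, s, Vh = np.linalg.svd(W)
    rank = int(np.sum(s > tol))
    return Vh[rank:].T  # orthonormal basis of the null space of W

W = np.array([[1.0, 1.0, 0.0], [0.0, 0.0, 1.0]])  # toy server-side weight matrix
N = null_space_via_svd(W)
print(np.allclose(W @ N, 0))  # -> True
```

Components of a feature vector projected onto this basis do not affect W @ x, which is the property the claim-2 mapping relies on.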
Regarding Currently Amended Claim 8, the claim recites similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale. Regarding Currently Amended Claim 14, the claim recites substantially similar limitations as corresponding claim 2 and is rejected for similar reasons as claim 2 using similar teachings and rationale. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: (Pub. No.: US 20210319098 A1) – “Oleg POGORELIK” relates to “Securing systems employing artificial intelligence.” (Pub. No.: US 20180336463 A1) – “Joshua Simon Bloom” relates to “Systems and methods for domain-specific obscured data transport.” NPL: Zhao, Jianxin, et al. "Privacy-preserving machine learning based data analytics on edge devices." (2018). Any inquiry concerning this communication or earlier communications from the examiner should be directed to SADIK ALSHAHARI whose telephone number is (703)756-4749. The examiner can normally be reached Monday - Friday, 9 a.m. - 6 p.m. ET. Examiner interviews are available via telephone, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Li Zhen, can be reached on (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /S.A.A./Examiner, Art Unit 2121 /Li B. Zhen/Supervisory Patent Examiner, Art Unit 2121

Prosecution Timeline

Sep 30, 2021
Application Filed
Feb 24, 2025
Non-Final Rejection — §101, §103
May 16, 2025
Response Filed
Jul 22, 2025
Final Rejection — §101, §103
Sep 23, 2025
Request for Continued Examination
Oct 02, 2025
Response after Non-Final Action
Jan 12, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596930
SENSOR COMPENSATION USING BACKPROPAGATION
2y 5m to grant Granted Apr 07, 2026
Patent 12493786
Visual Analytics System to Assess, Understand, and Improve Deep Neural Networks
2y 5m to grant Granted Dec 09, 2025
Patent 12462199
ADAPTIVE FILTER BASED LEARNING MODEL FOR TIME SERIES SENSOR SIGNAL CLASSIFICATION ON EDGE DEVICES
2y 5m to grant Granted Nov 04, 2025
Patent 12437199
Activation Compression Method for Deep Learning Acceleration
2y 5m to grant Granted Oct 07, 2025
Patent 12430552
Processing Data Batches in a Multi-Layer Network
2y 5m to grant Granted Sep 30, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
35%
Grant Probability
82%
With Interview (+47.1%)
4y 5m
Median Time to Grant
High
PTA Risk
Based on 34 resolved cases by this examiner. Grant probability derived from career allow rate.
