Prosecution Insights
Last updated: April 19, 2026
Application No. 18/732,412

METHOD, APPARATUS, DEVICE AND STORAGE MEDIUM FOR DATA REQUEST PROCESSING

Non-Final OA — §103, §112
Filed: Jun 03, 2024
Examiner: QIAN, SHELLY X
Art Unit: 2154
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING VOLCANO ENGINE TECHNOLOGY CO., LTD.
OA Round: 5 (Non-Final)
Grant Probability: 37% (At Risk)
Expected OA Rounds: 5-6
Time to Grant: 3y 11m
Grant Probability With Interview: 57%

Examiner Intelligence

Career Allow Rate: 37% (47 granted / 126 resolved; -17.7% vs TC avg)
Interview Lift: +19.4% (strong; grant rate across resolved cases with interview vs without)
Avg Prosecution: 3y 11m typical; 28 applications currently pending
Total Applications: 154, across all art units

Statute-Specific Performance

§101: 16.7% (-23.3% vs TC avg)
§103: 64.0% (+24.0% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)
Tech Center averages are estimates. Based on career data from 126 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 1/16/2026 have been fully considered but they are not persuasive.

Applicant states (p. 11) that Mathew does not teach the limitation "directly accessing, by the userspace process, the target cached data stored in the kernel module without context switching between the user space process and the kernel module." Examiner respectfully disagrees. Context switching is necessary in a multitasking OS such as Linux, but can be avoided in a single-user, single-task OS (Mathew: 5, 8/12). Notice that the type of OS (i.e., single- or multi-task) is not claimed. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Mathew to Malhotra. One having ordinary skill in the art would have found motivation to avoid the negative performance impact of context switching, at least for the special case of a single-user, single-task OS in Malhotra.

Applicant further states (p. 12) that Aziz does not teach the limitation "in response to the userspace process being abnormal closed, restarting of the userspace process, and obtaining, from the record information, the location information of the data request that is being processed, in the data request list." Examiner respectfully disagrees. Aziz provides queue persistency using a variety of implementations, such as mirrored disks or a DBMS (Aziz: [0196]). If the server fails (i.e., abnormal closed), in-progress requests and state information (e.g., location) are forwarded to (i.e., restarted at) a different server for processing (i.e., continued processing) (Aziz: [0210]).

In summary, the cited prior art of record, combined, teaches the argued limitations of independent claims 1, 10 and 19.
Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 1-4, 7-13 and 16-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Applicant admits in the arguments (p. 11) that the claim element "without context switching" is in fact an intended result of the invention, achieved by reducing or even eliminating the need for context switching. This makes the claims indefinite (see MPEP § 2173.05(g)).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 4, 7-11, 13 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Malhotra et al., US patent application 2016/0321291 [herein "Malhotra"], in view of Amaral, "What is a FUSE filesystem?", Jan. 2023, pp. 1-9, https://medium.com/@goamaral/fuse-filesystem-b44768f27aa2 [herein "FUSE"], and Mathew, "Types of Operating Systems: A Comprehensive Guide", Feb. 2023, pp. 1-12, https://merwin.hashnode.dev/types-of-operating-systems-a-comprehensive-guide [herein "Mathew"], and further in view of Aziz et al., US patent application 2003/0126265 [herein "Aziz"].

Claim 1 recites "A method of data request processing, the method being applied at a filesystem in userspace, the filesystem in userspace including a kernel module and a userspace process, the kernel module being configured to send a data request to the userspace process, and the userspace process being configured to process the data request, the method comprising: obtaining, by the userspace process, a data request list in the kernel module, wherein the data request list includes a plurality of data requests to be processed and virtual address information corresponding to respective data requests of the plurality of data requests;"

The instant specification does not define "filesystem in userspace", but describes it as widely applied in the prior art (spec. [0003]). Examiner thus interprets it to refer to a software interface in industry-standard UNIX and UNIX-like computer operating systems that lets non-privileged users create their own file systems without editing kernel code (FUSE: p. 1/9). Malhotra teaches a virtual file system for cloud-based shared content using OS-specific metadata local to a user device (i.e., filesystem in userspace) [0033], which is a layer between a user device's native file system and the file storage system of the cloud [0034]. An application issues function calls in user/kernel space to access remotely hosted content objects [0042] by read, write, or modify operations. An adapter layer (i.e., kernel module) extends the kernel to redirect (i.e., send) these calls to the virtual file system (i.e., userspace process) [0131]. The virtual file system contains a file system interface, local/cloud executors and data managers, and a local storage [0062]. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of FUSE to Malhotra. One having ordinary skill in the art would have found motivation to implement Malhotra in the industry-standard FUSE file system.

Claim 1 further recites "querying, for each of the data requests, by the userspace process, physical address information corresponding to the virtual address information, according to the virtual address information corresponding to the data request, wherein the kernel module stores a plurality of cached data and a one-to-one correspondence between the cached data and the physical address information;"

Malhotra places file access requests (i.e., data requests) in an event queue (i.e., list) to be handled (i.e., processed) remotely by the cloud [0091]. A mapping table exists as metadata to map (i.e., one-to-one correspondence) file IDs (i.e., OS-specific physical address) in the cloud to (i.e., according to) filenames (i.e., virtual address) locally requested (i.e., queried) [0092].

Claim 1 further recites "querying, by the physical address information, target cached data corresponding to the data request, from the plurality of cached data stored in the kernel module;"

Local storage in Malhotra comprises a local cache with a set of local metadata. The local cache stores certain portions of objects for faster access [0066]. Local metadata attributes for each cached object include a "nodeID" uniquely identifying a certain node (i.e., file) in a file tree, a "type" attribute describing the node and/or object type, a "remoteID" uniquely identifying the corresponding object in the cloud, etc. [0063].

Claim 1 further recites "directly accessing, by the userspace process, the target cached data stored in the kernel module without context switching between the user space process and the kernel module; and processing the data request based on the target cached data,"

In response to a local object operation, the virtual file system of Malhotra updates (i.e., accesses) the local cache with modified portions (i.e., target cached data), and uploads the updated portions to the cloud [0080]. Malhotra does not disclose claim element "without context switching"; however, context switching is necessary in a multitasking OS such as Linux, but can be avoided in a single-user, single-task OS (Mathew: 5, 8/12). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Mathew to Malhotra. One having ordinary skill in the art would have found motivation to avoid the negative performance impact of context switching for a single-user, single-task OS in Malhotra.

Malhotra does not disclose the limitation "wherein the method further comprises: determining location information of a data request of the data requests that is being processed, in the data request list; labelling the location information in recorded information; in response to processing of the data request being completed, recording, in the recorded information, information that the processing of the data request is completed;"

However, Aziz uses a processing queue for work requests (i.e., data requests). Each request includes an object and all methods required to process the object (Aziz: [0194]). The queue is stored in a queue table, where each entry contains (i.e., records) metadata (i.e., labels) such as request ID (i.e., location), source/destination, and attributes such as state (e.g., completed) and priority (Aziz: [0203]). Periodically, requests are selected and sent to servers for processing, using selection criteria such as FIFO, FILO, or priority-based (Aziz: [0197]).

Malhotra does not disclose the limitation "in response to the userspace process being abnormal closed, restarting of the userspace process, and obtaining, from the record information, the location information of the data request that is being processed, in the data request list; and continuing processing of the data request corresponding to the location information."

However, Aziz provides queue persistency using a variety of implementations, such as mirrored disks or a DBMS (Aziz: [0196]). If the server fails (i.e., abnormal closed), in-progress requests and state information (e.g., location) are forwarded to (i.e., restarted at) a different server for processing (i.e., continued processing) (Aziz: [0210]). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Aziz to Malhotra. One having ordinary skill in the art would have found motivation to adopt the processing queue of Aziz as the event queue of Malhotra, with entry-level metadata such as entry location and processing status, such that server failure can be recovered by continuing processing of the queue at a different server.

Claims 10 and 19 are analogous to claim 1, and are similarly rejected.
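For orientation, the disputed "without context switching" pattern (a userspace process reading data that lives in a shared region without a per-request kernel crossing) can be sketched as follows. The anonymous mapping below stands in for a kernel-exported cache region; the offsets and names are assumptions for illustration, not the applicant's implementation or Malhotra's.

```python
import mmap

# Stand-in for a kernel-exported cache region. In a real FUSE-style
# design this mapping would be established once (e.g., by mmap-ing a
# character device); after that, reads touch the mapped memory
# directly instead of issuing a read() syscall per request.
region = mmap.mmap(-1, 4096)  # anonymous shared mapping (illustrative)

# "Kernel side" populates cached data at a known offset.
offset, payload = 128, b"cached-block-0001"
region[offset:offset + len(payload)] = payload

def read_cached(region: mmap.mmap, offset: int, length: int) -> bytes:
    # Userspace reads the mapped bytes directly; once the mapping
    # exists, no per-access kernel crossing is needed.
    return bytes(region[offset:offset + length])

data = read_cached(region, 128, len(payload))
```

The point of contention in the arguments is whether this effect requires a single-task OS (the examiner's Mathew position) or follows from the shared-mapping design itself.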
Claim 2 recites "The method of claim 1, wherein the method further comprises: before obtaining, by the userspace process, the data request list in the kernel module, configuring, by the kernel module, a memory management unit (MMU) page table, wherein the MMU page table comprises the virtual address information corresponding to the physical address information for each of the plurality of cached data; and querying, by the userspace process, for each of the data requests, the physical address information corresponding to the virtual address information, according to the virtual address information corresponding to the data request comprises: querying, by the userspace process, for each of the data requests, the physical address information corresponding to the virtual address information from the MMU page table, according to the virtual address information corresponding to the data request."

An adapter layer (i.e., kernel module) of Malhotra extends the kernel to redirect file access requests (i.e., data requests) to the virtual file system (i.e., userspace process) [0131]. Malhotra places these requests in an event queue (i.e., list) to be handled by the cloud [0091]. A mapping table (i.e., MMU page table) is used as metadata to map (i.e., query) file IDs (i.e., physical address) in the cloud to filenames (i.e., virtual address) locally requested (i.e., cached data) [0092].

Claims 11 and 20 are analogous to claim 2, and are similarly rejected.
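As a toy illustration of the lookup chain recited in claim 2, where virtual address information resolves to physical address information via a page-table structure, and the physical address in turn keys the cached data, consider this dict-based sketch. All names and addresses here are illustrative, taken from neither the application nor Malhotra.

```python
# Hypothetical page table: virtual address info -> physical address info,
# mirroring the claimed "MMU page table" role.
page_table = {"va:0x1000": "pa:0x9f000", "va:0x2000": "pa:0xa3000"}

# Hypothetical kernel-side cache keyed by physical address
# (the claimed one-to-one correspondence).
cache = {"pa:0x9f000": b"inode-metadata", "pa:0xa3000": b"file-block"}

def resolve(virtual_addr: str) -> bytes:
    physical = page_table[virtual_addr]  # claim 2: query PA from VA
    return cache[physical]               # claim 1: PA -> target cached data
```

Malhotra's mapping table (file ID to filename, [0092]) plays an analogous two-step role in the examiner's reading.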
Claim 4 recites "The method of claim 1, wherein a character device interface is configured in the kernel module, and a data transmission layer connected to the character device interface is configured in the userspace process; and obtaining, by the userspace process, the data request list in the kernel module comprises: obtaining, by the data transmission layer in the userspace process and the character device interface in the kernel module, the data request list from the kernel module."

An adapter layer (i.e., kernel module) of Malhotra extends the kernel to redirect file access requests (i.e., data requests) to the virtual file system (i.e., userspace process) [0131]. Malhotra places these requests in an event queue (i.e., list) to be handled by the cloud [0091]. Network operations are performed using a communications interface, with a communications link (i.e., data transmission layer) configured to transmit communication packets encoded to fit into byte or word boundaries (i.e., character device interface) [0164].

Claim 13 is analogous to claim 4, and is similarly rejected.

Claim 7 recites "The method of claim 1, wherein the method further comprises: before obtaining, by the userspace process, the data request list in the kernel module, in response to the kernel module receiving a data request of any application program, determining a data description label of the data request, wherein the data description label comprises the virtual address information; associating the virtual address information with the data request; and adding the associated data request and the virtual address information into the data request list."

An adapter layer (i.e., kernel module) of Malhotra extends the kernel to redirect file access requests (i.e., data requests) to the virtual file system (i.e., userspace process) [0131]. Malhotra places these requests in an event queue (i.e., list) to be handled by the cloud [0091], and sent/received as communications packets. Every communications packet contains a payload, a destination address (i.e., virtual address), a flow label (i.e., data description label), etc. [0164].

Claim 16 is analogous to claim 7, and is similarly rejected.

Claim 8 recites "The method of claim 7, wherein the data description label further comprises one or more of data length information, data identification information and data type information."

Malhotra sends/receives file access requests (i.e., data requests) as communications packets. Every communications packet contains a payload (i.e., data identification), a destination address (i.e., virtual address), a flow label (i.e., data description label), packet/payload length, a traffic class (i.e., type), etc. [0164].

Claim 17 is analogous to claim 8, and is similarly rejected.

Claim 9 recites "The method of claim 1, wherein the data request comprises a data read request or a data write request; accessing, by the userspace process, the target cached data and processing the data request comprises: in response to the data request being the data read request, reading, by the userspace process, the target cached data; and in response to the data request being the data write request, writing, by the userspace process, the target cached data."

An application issues function calls (i.e., data requests) in user/kernel space to access remotely hosted content objects [0042] by read, write, or modify operations. An adapter layer extends the kernel to redirect these calls to the virtual file system (i.e., userspace process) [0131]. The virtual file system handles these calls by accessing requested files in the cloud [0132], or in a local cache containing requested portions of objects (i.e., target cached data) for faster access [0066].

Claim 18 is analogous to claim 9, and is similarly rejected.

Claims 3 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Malhotra as applied to claim 2 above, in view of FUSE, Mathew and Aziz, and further in view of Tallamraju et al., US patent application 2016/0321311 [herein "Tallamraju"].

Claim 3 recites "The method of claim 2, wherein configuring, by the kernel module, the memory management unit (MMU) page table comprises: copying, by the kernel module, each of the plurality of cached data to obtain a plurality of rebound cached data; and configuring the corresponding virtual address information for the physical address information for each of the rebound cached data to obtain the MMU page table; and querying, by the physical address information, the target cached data corresponding to the data request from the plurality of cached data stored in the kernel module comprises: querying, by the physical address information, the target cache data corresponding to the data request, from the plurality of the rebound cached data stored in the kernel module."

According to the instant specification [0048], performing backup of the cached data generates rebound cached data (i.e., a copy), which can be equally queried to satisfy the data request. Examiner thus interprets "rebound cached data" to mean copies of cached data. An adapter layer (i.e., kernel module) of Malhotra extends the kernel to redirect file access requests (i.e., data requests) to the virtual file system [0131]. Malhotra teaches claim 2, but does not disclose this claim; however, Tallamraju can configure a file opened locally (i.e., cached data) to perform autosave operations on a periodic basis, which can create a backup copy (i.e., rebound cached data) of the file such that changes can be recovered from device failure, with the file (i.e., virtual address) renamed to point to (i.e., via MMU table) the backup copy (i.e., physical address) (Tallamraju: [0106]). Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to apply the teachings of Tallamraju to Malhotra. One having ordinary skill in the art would have found motivation to incorporate the autosave operation of Tallamraju in the virtual file system of Malhotra to enable device failure recovery.

Claim 12 is analogous to claim 3, and is similarly rejected.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. For example: "Understanding overhead cost of context switching", Dec. 2021, https://unix.stackexchange.com/questions/681096/understanding-overhead-cost-of-context-switching.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHELLY X. QIAN, whose telephone number is (408) 918-7599. The examiner can normally be reached Monday - Friday, 8-5 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Boris Gorney, can be reached at (571) 270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHELLY X QIAN/ Examiner, Art Unit 2154
/BORIS GORNEY/ Supervisory Patent Examiner, Art Unit 2154
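The record-information limitation argued over Aziz (label the location of the in-progress request, mark completion on finish, and after an abnormal close restart and resume from the recorded location) can be sketched as follows. The class, persistence format, and file layout below are hypothetical illustrations, not Aziz's queue table or the claimed design.

```python
import json
import os
import tempfile

class RequestList:
    """Toy request list that persists the in-progress location so a
    restarted process can resume where it stopped (illustrative only)."""

    def __init__(self, requests, record_path):
        self.requests = list(requests)
        self.record_path = record_path

    def _record(self, info):
        # "Recorded information": location label plus processing status.
        with open(self.record_path, "w") as f:
            json.dump(info, f)

    def process(self, handler, crash_at=None):
        start = 0
        if os.path.exists(self.record_path):
            with open(self.record_path) as f:
                rec = json.load(f)
            if rec["status"] == "in-progress":
                start = rec["location"]  # resume at the recorded location
        for i in range(start, len(self.requests)):
            self._record({"location": i, "status": "in-progress"})
            if crash_at == i:  # simulate an abnormal close mid-request
                raise RuntimeError("abnormal close")
            handler(self.requests[i])
        self._record({"location": len(self.requests), "status": "completed"})

# Demo: crash while processing index 2, then restart and resume there.
path = os.path.join(tempfile.mkdtemp(), "record.json")
handled = []
try:
    RequestList(["r0", "r1", "r2", "r3"], path).process(handled.append, crash_at=2)
except RuntimeError:
    pass  # the "userspace process" died; the record file survives
RequestList(["r0", "r1", "r2", "r3"], path).process(handled.append)
```

Aziz achieves the analogous effect with a persistent queue table and failover to a different server ([0196], [0210]); the sketch simply restarts the same process against the surviving record.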

Prosecution Timeline

Jun 03, 2024 — Application Filed
Sep 27, 2024 — Non-Final Rejection (§103, §112)
Dec 27, 2024 — Response Filed
Feb 11, 2025 — Non-Final Rejection (§103, §112)
May 15, 2025 — Response Filed
May 29, 2025 — Final Rejection (§103, §112)
Jul 29, 2025 — Response after Non-Final Action
Sep 04, 2025 — Request for Continued Examination
Sep 12, 2025 — Response after Non-Final Action
Oct 12, 2025 — Non-Final Rejection (§103, §112)
Jan 16, 2026 — Response Filed
Mar 25, 2026 — Non-Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12578892 — FINGERPRINT TRACKING STRUCTURE FOR STORAGE SYSTEM — granted Mar 17, 2026 (2y 5m to grant)
Patent 12475044 — Method And System For Estimating Garbage Collection Suspension Contributions Of Individual Allocation Sites — granted Nov 18, 2025 (2y 5m to grant)
Patent 12450197 — BACKGROUND DATASET MAINTENANCE — granted Oct 21, 2025 (2y 5m to grant)
Patent 12386904 — SYSTEMS AND METHODS FOR MEASURING COLLECTED CONTENT SIGNIFICANCE — granted Aug 12, 2025 (2y 5m to grant)
Patent 12314225 — CONTINUOUS INGESTION OF CUSTOM FILE FORMATS — granted May 27, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 37% (57% with interview, +19.4% lift)
Median Time to Grant: 3y 11m
PTA Risk: High
Based on 126 resolved cases by this examiner. Grant probability derived from career allow rate.
