Prosecution Insights
Last updated: April 19, 2026
Application No. 18/634,880

ARTIFICIAL INTELLIGENCE QUERY PROCESSING BY PROCESSING-NEAR-MEMORY STORAGE

Status: Non-Final OA (§103)
Filed: Apr 12, 2024
Examiner: GURMU, MULUEMEBET
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Samsung Electronics Co., Ltd.
OA Round: 3 (Non-Final)
Grant Probability: 79% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 2m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 79% (377 granted / 475 resolved), +24.4% vs TC avg (above average)
Interview Lift: +18.1% among resolved cases with an interview (strong)
Typical Timeline: 3y 2m average prosecution; 30 applications currently pending
Career History: 505 total applications across all art units
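The headline figures above hang together arithmetically. A quick sanity check in Python, assuming the interview lift adds directly to the career allow rate (an assumption; the tool does not state its formula):

    # Reproduce the dashboard's headline examiner statistics.
    granted, resolved = 377, 475
    allow_rate = granted / resolved          # 0.794 -> displayed as the 79% allow rate
    tc_average = allow_rate - 0.244          # implied Tech Center average, ~55%
    with_interview = allow_rate + 0.181      # +18.1% lift -> ~97.5%, displayed as 98%
    print(f"allow {allow_rate:.1%} | TC avg ~{tc_average:.1%} | interview ~{with_interview:.1%}")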

Statute-Specific Performance

§101: 18.8% (-21.2% vs TC avg)
§103: 61.2% (+21.2% vs TC avg)
§102: 3.3% (-36.7% vs TC avg)
§112: 1.6% (-38.4% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 475 resolved cases
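One detail worth noting: all four "vs TC avg" deltas are consistent with a single Tech Center baseline of 40.0%, presumably where the chart's black line sits. A quick check, assuming each delta is the statute's rate minus the TC average (the chart itself is not reproduced here):

    # Each statute's rate minus its stated delta lands on the same baseline.
    rates  = {"§101": 18.8, "§103": 61.2, "§102": 3.3, "§112": 1.6}
    deltas = {"§101": -21.2, "§103": 21.2, "§102": -36.7, "§112": -38.4}
    for statute, rate in rates.items():
        print(statute, round(rate - deltas[statute], 1))   # prints 40.0 in every case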

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 03/02/26 has been entered.

Response to Amendment

This action is in response to applicant's arguments and amendments filed on 03/02/26, which are in response to the USPTO Office Action mailed on 12/02/25. Applicant's arguments and amendments have been considered with the results that follow: THIS ACTION IS MADE NON-FINAL.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 5-16, and 18-20 are rejected under 35 U.S.C. § 103 as being unpatentable over KOTRA et al. (US 2023/0393849 A1) in view of YAN et al. (US 2022/0318601 A1).

Regarding claim 1, KOTRA teaches a method of query processing, the method comprising: receiving at a first processing-near-memory (PNM) storage device (See KOTRA paragraph [0040], processing near-memory (PNM), or processing in or near-memory (PINM)) data from a processing device based on the processing device offloading to the first PNM storage device attention operations for a query associated with the data (See KOTRA paragraph [0064], the execution unit 150 is a component of a PIM device 280 that is implemented in a processing-near-memory (PNM) fashion...have been offloaded to by the host processor 132); processing, at the first PNM storage device, first values from the data with transposed second values from the data (See KOTRA paragraph [0012], architectures allow for improved computational efficiency through reduced data transfer as well as reduced power consumption; in some implementations, a PIM architecture supports offloading instructions from a host processor for execution in memory or near memory); determining, at the first PNM storage device (See KOTRA paragraph [0040]); and generating, at the first PNM storage device (See KOTRA paragraph [0040]).
KOTRA does not explicitly disclose the processing device being configured to process weight operations for the query; a probability distribution of a result of the processing of the first values from the data with the transposed second values from the data; and an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data. However, YAN teaches the processing device being configured to process weight operations for the query (See YAN paragraph [0009], a self-attention operation performed by a decoder using a full processing path; this example primarily serves to introduce the concepts of query information, key information, and value information); a probability distribution of a result of the processing of the first values from the data with the transposed second values from the data (See YAN paragraph [0060], an output probability generation component 432 can use a combination of a linear transformation operation and the softmax function to map the decoder output information into a probability distribution); and an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data (See YAN paragraph [0060], as above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate these features, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 2, KOTRA taught the data processing system according to claim 1, as described above. KOTRA further teaches wherein the first PNM storage device (See KOTRA paragraph [0040]). KOTRA does not explicitly disclose a memory that stores key-value data from the data. However, YAN teaches a memory that stores key-value data from the data (See YAN paragraph [0026], stores at least the instances of head-specific key information 110 and the instances of head-specific value information 112 in cache memory). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to include a memory that stores key-value data from the data, as taught by YAN, in order to make efficient use of processing and memory resources.

Claim 15 recites the same limitations as claim 2 above. Therefore, claim 15 is rejected based on the same reasoning.

Regarding claim 3, KOTRA taught the data processing system according to claim 2, as described above.
KOTRA further teaches wherein the memory and a processor of the first PNM storage device (See KOTRA paragraph [0040]) comprise an integrated circuit based on the memory being stacked on top of the processor and the memory being communicatively connected to the processor (See KOTRA paragraph [0040], a PNM device could include a stacked memory having several memory layers stacked on a base die, where the base die includes a processing device that provides near-memory processing capabilities).

Claim 16 recites the same limitations as claim 3 above. Therefore, claim 16 is rejected based on the same reasoning.

Regarding claim 5, KOTRA taught the data processing system according to claim 1, as described above. KOTRA does not explicitly disclose further comprising receiving, from a processing device, a trigger based on a hint ahead operation, wherein the trigger includes at least one of a user identifier or layer number information associated with the data. However, YAN teaches receiving, from a processing device, a trigger based on a hint ahead operation (See YAN paragraph [0076], the downstream system(s) 1006 can leverage the synthesized text for various purposes, such as by sending a user an advertisement or other type of information item based on triggering keyword information in the synthesized text), wherein the trigger includes at least one of a user identifier or layer number information associated with the data (See YAN paragraph [0044], number of layers to map the original query information Q into plural respective instances of FFN_i^Q(Q), per the following equation). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate this trigger, as taught by YAN, in order to make efficient use of processing and memory resources.

Claim 18 recites the same limitations as claim 5 above. Therefore, claim 18 is rejected based on the same reasoning.

Regarding claim 6, KOTRA taught the method according to claim 1, as described above. KOTRA further teaches wherein the data is a portion of multi-head attention data that is distributed among multiple PNM storage devices (See KOTRA paragraph [0049], the host device 130 can include multiple memory controllers, each corresponding to a different memory channel in the PIM-enabled memory device 180).

Regarding claim 7, KOTRA taught the method according to claim 1, as described above. KOTRA does not explicitly disclose wherein the data includes a portion of query attention data from a first attention layer, a portion of key attention data from a second attention layer, and a portion of value attention data from a third attention layer.
However, YAN teaches wherein the data includes a portion of query attention data from a first attention layer (See YAN paragraph [0061], the self-attention mechanism performs self-attention, e.g., by mapping input information into head-specific query, key...layer normalization entails adjusting values in a layer based on the mean and deviation of those values in the layer), a portion of key attention data from a second attention layer (See YAN paragraph [0054], a decoder system includes plural layers of decoder-based processing, each of which may include one or more attention mechanisms), and a portion of value attention data from a third attention layer (See YAN paragraph [0061], the output information provided by the self-attention mechanism...layer normalization entails adjusting values in a layer based on the mean and deviation of those values in the layer). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate these attention-data portions, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 8, KOTRA taught the method according to claim 1, as described above. KOTRA further teaches generated based on partial outputs generated by multiple PNM storage devices that are combined to form a unified output (See KOTRA paragraph [0049], the host device 130 can include multiple memory controllers, each corresponding to a different memory channel in the PIM-enabled memory device 180). KOTRA does not explicitly disclose wherein the data is based on an iteration of activation values. However, YAN teaches wherein the data is based on an iteration of activation values (See YAN paragraph [0055], any layer can use any activation function (such as an ReLU activation function)...some layers may operate based on machine-trained weighting values produced by the training system 134). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to base the data on an iteration of activation values, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 9, KOTRA taught the method according to claim 8, as described above. KOTRA further teaches wherein the multiple PNM storage devices (See KOTRA paragraph [0040]) include an array of solid-state drives (See KOTRA paragraph [0048], the memory array 182 can be one or more arrays of memory cells of a bank, channel, or other memory hierarchy partition).

Regarding claim 10, KOTRA taught the method according to claim 1, as described above. KOTRA further teaches wherein the first PNM storage device is a system-on-chip die (See KOTRA paragraph [0040]). KOTRA does not explicitly disclose a solid-state drive and at least one processor that performs query processing. However, YAN teaches a solid-state drive and at least one processor that performs query processing (See YAN paragraph [0057], the head-specific query information processed...).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to include a solid-state drive and at least one processor that performs query processing, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 11, KOTRA taught the method according to claim 1, as described above. KOTRA further teaches wherein the first PNM storage device (See KOTRA paragraph [0040]) communicatively connects to a processing device via a high-bandwidth expansion bus (See KOTRA paragraph [0022], memory bandwidth utilizations and locality of the system services).

Regarding claim 12, KOTRA taught the method according to claim 11, as described above. KOTRA further teaches communicatively connected to high-bandwidth memory (See KOTRA paragraph [0022], memory bandwidth utilizations and locality of the system services). KOTRA does not explicitly disclose wherein the processing device includes at least one graphical processing unit (GPU). However, YAN teaches wherein the processing device includes at least one graphical processing unit (GPU) (See YAN paragraph [0091], one or more Graphics Processing Units (GPUs)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to include at least one GPU, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 13, KOTRA taught the method according to claim 2, as described above. KOTRA does not explicitly disclose wherein: the data includes attention data, the first values include key values, the second values include query values, and the key-value data includes a key-value matrix. However, YAN teaches wherein the data includes attention data and the first values include key values (See YAN paragraph [0003], generates attention information based on head-specific query information and shared key and value (KV) information), the second values include query values and the key-value data (See YAN paragraph [0009], the concepts of query information, key information, and value information), and the key-value data includes a key-value matrix (See YAN paragraph [0034], generates the plural instances of key information 214 using plural respective head-specific key matrices (W_1^K, W_2^K, ..., W_h^K)). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate these features, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 14, KOTRA teaches a query processing system, the query processing system comprising: a processing device communicatively connected to a first processing-near-memory (PNM) storage device (See KOTRA paragraph [0040], processing near-memory (PNM), or processing in or near-memory (PINM)), the processing device to transmit data to the first PNM storage device (See KOTRA paragraph [0012], architectures allow for improved computational efficiency through reduced data transfer as well as reduced power consumption; in some implementations, a PIM architecture supports offloading instructions from a host processor for execution in memory or near memory) based on the processing device offloading to the first PNM storage device attention operations for a query associated with the data, and the first PNM storage device to (See KOTRA paragraph [0064], the execution unit 150 is a component of a PIM device 280 that is implemented in a processing-near-memory (PNM) fashion...have been offloaded to by the host processor 132) process first values from the data with transposed second values from the data (See KOTRA paragraph [0012], as above). KOTRA does not explicitly disclose the processing device being configured to process weight operations for the query; determine a probability distribution of a result of the processing; and generate an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data. However, YAN teaches the processing device being configured to process weight operations for the query (See YAN paragraph [0009], a self-attention operation performed by a decoder using a full processing path; this example primarily serves to introduce the concepts of query information, key information, and value information); determine a probability distribution of a result of the processing (See YAN paragraph [0060], an output probability generation component 432 can use a combination of a linear transformation operation and the softmax function to map the decoder output information into a probability distribution); and generate an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data (See YAN paragraph [0060], as above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate these features, as taught by YAN, in order to make efficient use of processing and memory resources.
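Editorial note: the pipeline recited in claims 1, 14, and 19 (first values multiplied against transposed second values, the result mapped to a probability distribution via softmax, and activation values expressing correlations between units of text) tracks standard scaled dot-product attention. A minimal NumPy sketch under that reading, illustrative only and not the applicant's or the examiner's implementation:

    import numpy as np

    def scaled_dot_product_attention(q, k, v):
        # "Processing first values with transposed second values": Q @ K^T,
        # scaled by the square root of the key dimension.
        scores = (q @ k.T) / np.sqrt(k.shape[-1])
        # Softmax maps each row of scores to a probability distribution
        # (the claimed "probability distribution of a result of the processing").
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        # The weighted sum of values yields the "activation value" reflecting
        # how strongly each unit of text correlates with the others.
        return weights @ v

    tokens, dim = 4, 8                    # four units of text, 8-dim embeddings
    rng = np.random.default_rng(0)
    q, k, v = (rng.standard_normal((tokens, dim)) for _ in range(3))
    out = scaled_dot_product_attention(q, k, v)   # shape (4, 8)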
Regarding claim 19, KOTRA teaches a non-transitory computer-readable medium storing code, the code comprising instructions executable by a processor of a first processing-near-memory (PNM) storage device to (See KOTRA paragraph [0040], processing near-memory (PNM), or processing in or near-memory (PINM)): receive data from a processing device based on the processing device offloading to the first PNM storage device (See KOTRA paragraph [0040]); process, at the first PNM storage device, first values from the data with transposed query values from the data (See KOTRA paragraph [0012], architectures allow for improved computational efficiency through reduced data transfer as well as reduced power consumption; in some implementations, a PIM architecture supports offloading instructions from a host processor for execution in memory or near memory); determine, at the first PNM storage device (See KOTRA paragraph [0040]); and generate, at the first PNM storage device (See KOTRA paragraph [0040]). KOTRA does not explicitly disclose attention operations for a query associated with the data, the processing device being configured to process weight operations for the query; a probability distribution of a result of the processing; and an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data. However, YAN teaches attention operations for a query associated with the data, the processing device being configured to process weight operations for the query (See YAN paragraph [0009], a self-attention operation performed by a decoder using a full processing path; this example primarily serves to introduce the concepts of query information, key information, and value information); a probability distribution of a result of the processing (See YAN paragraph [0060], an output probability generation component 432 can use a combination of a linear transformation operation and the softmax function to map the decoder output information into a probability distribution); and an activation value based on the probability distribution, the activation value indicating a correlation between units of text in a query associated with the data (See YAN paragraph [0060], as above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to incorporate these features, as taught by YAN, in order to make efficient use of processing and memory resources.

Regarding claim 20, KOTRA taught the non-transitory computer-readable medium according to claim 19, as described above.
KOTRA further teaches wherein the first PNM storage device includes (See KOTRA paragraph [0031], the data storage device 200 may be processing-near-memory (PNM)), and the memory and a processor of the first PNM storage device (See KOTRA paragraph [0041], each processor core 102, 104, 106, 108 of the host device 130 executes a different process) comprise an integrated circuit based on the memory being stacked on top of the processor and the memory being communicatively connected to the processor (See KOTRA paragraph [0040], (PIM), processing near-memory (PNM), or processing in or near-memory (PINM), all refer to a device (or unit) which includes a non-transitory computer readable memory device). KOTRA does not explicitly disclose a memory that stores key-value data from the data. However, YAN teaches a memory that stores key-value data from the data (See YAN paragraph [0026], stores at least the instances of head-specific key information 110 and the instances of head-specific value information 112 in cache memory). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to include a memory that stores key-value data from the data, as taught by YAN, in order to make efficient use of processing and memory resources.

Claims 4 and 17 are rejected under 35 U.S.C. § 103 as being unpatentable over KOTRA et al. (US 2023/0393849 A1) in view of YAN et al. (US 2022/0318601 A1), and further in view of Fang et al. (US 2024/0394210 A1).

Regarding claim 4, KOTRA taught the data processing system according to claim 1, as described above. KOTRA further teaches comprising: at least a second activation value from a second PNM storage device (See KOTRA paragraph [0031], the data storage device 200 may be processing-near-memory (PNM)), of the first PNM storage device and the second activation value of the second PNM storage device (See KOTRA paragraph [0012], architectures allow for improved computational efficiency through reduced data transfer as well as reduced power consumption; in some implementations, a PIM architecture supports offloading instructions from a host processor for execution in memory or near memory), and enables the first PNM storage device and the second PNM storage device to communicate (See KOTRA paragraph [0002], processing performance can be improved by offloading operations that would normally be executed in the functional units to a processing-in-memory (PIM) device...to process memory communications between the processor and the memory). KOTRA does not explicitly disclose forming a third activation value based at least in part on a combination of the activation value. However, YAN teaches forming a third activation value based at least in part on a combination of the activation value (See YAN paragraph [0055], any layer can use any activation function (such as an ReLU activation function)...some layers may operate based on machine-trained weighting values produced by the training system 134). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to form a third activation value based at least in part on a combination of the activation value, as taught by YAN, in order to make efficient use of processing and memory resources. KOTRA together with YAN does not explicitly disclose receiving, via a die-to-die (D2D) communication interface, and wherein the D2D communication interface.
However, Fang teaches receiving, via a die-to-die (D2D) communication interface, and wherein the D2D communication interface (See Fang paragraph [0006], receiving the first data unit stream from the data channel in the communication interface by a first receiving device disposed at a second die, wherein the first die and the second die are disposed in a same integrated circuit package). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA and YAN to receive via a D2D communication interface, as taught by Fang, in order to manage a data unit stream transmitted between different dies in real time.

Regarding claim 17, KOTRA taught the query processing system according to claim 14, as described above. KOTRA further teaches wherein the first PNM storage device is configured to (See KOTRA paragraph [0040], processing near-memory (PNM), or processing in or near-memory (PINM)): at least a second activation value from a second PNM storage device (See KOTRA paragraph [0040]), of the second PNM storage device, and enables the first PNM storage device and the second PNM storage device to communicate (See KOTRA paragraph [0002], processing performance can be improved by offloading operations that would normally be executed in the functional units to a processing-in-memory (PIM) device...to process memory communications between the processor and the memory). KOTRA does not explicitly disclose forming a unified activation value based at least in part on a combination of the activation value of the first PNM storage device and the second activation value. However, YAN teaches forming a unified activation value based at least in part on a combination of the activation value of the first PNM storage device and the second activation value (See YAN paragraph [0055], any layer can use any activation function (such as an ReLU activation function)...some layers may operate based on machine-trained weighting values produced by the training system 134). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA to form a unified activation value based at least in part on a combination of the activation value of the first PNM storage device and the second activation value, as taught by YAN, in order to make efficient use of processing and memory resources. KOTRA together with YAN does not explicitly disclose receive, via a die-to-die (D2D) communication interface, and wherein the D2D communication interface.
However, Fang teaches receive, via a die-to-die (D2D) communication interface, and wherein the D2D communication interface (See Fang paragraph [0006], receiving the first data unit stream from the data channel in the communication interface by a first receiving device disposed at a second die, wherein the first die and the second die are disposed in a same integrated circuit package). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify KOTRA and YAN to receive via a D2D communication interface, as taught by Fang, in order to manage a data unit stream transmitted between different dies in real time.

Conclusion / Points of Contact

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. See form PTO-892.

AGA et al. (US 2024/0168639 A1): the disclosure provides mechanisms and primitives that harness near-memory computation to enable processing units (e.g., CPU, GPU, etc.) to perform all-reduce primitives efficiently. Accordingly, implementations in accordance with the disclosure provide for offloading of distributed reduction operations, such as a reduce-scatter operation, to in- or near-memory compute nodes such as PIM (Processing-in-Memory) enabled memory.

Ding et al. (US 2021/0081142 A1): methods that can offload data operations to a storage system are also provided. One method includes performing, by a processor, a set of non-storage operations at a storage system for data stored in a set of storage devices on the storage system to generate a set of results, in which the storage system is separate from a client device that owns the data, and transmitting the result(s) to the client device.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MULUEMEBET GURMU, whose telephone number is (571) 270-7095. The examiner can normally be reached M-F, 9am-5pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at (571) 272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MULUEMEBET GURMU/
Primary Examiner, Art Unit 2163
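Editorial note on claims 4, 8, and 17: the distributed pattern at issue (multiple PNM devices each producing a partial activation value and exchanging results, over a die-to-die interface in claims 4 and 17, to form a unified output) reduces to a shard-and-combine pattern. A rough NumPy sketch under that reading; every device and function name here is hypothetical and not taken from the application:

    import numpy as np

    # Hypothetical stand-in for one PNM device: it computes attention for the
    # heads it owns and emits a partial activation over those heads.
    def pnm_device_partial(head_outputs):
        return np.concatenate(head_outputs, axis=-1)

    rng = np.random.default_rng(0)
    tokens, head_dim = 4, 8
    device_shards = [
        [rng.standard_normal((tokens, head_dim)) for _ in range(2)],  # device 0: heads 0-1
        [rng.standard_normal((tokens, head_dim)) for _ in range(2)],  # device 1: heads 2-3
    ]
    partials = [pnm_device_partial(shard) for shard in device_shards]
    # Combining the per-device partials into the unified activation, as a host
    # (or a D2D exchange between devices) would after the offloaded compute.
    unified = np.concatenate(partials, axis=-1)   # shape (4, 32)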

Prosecution Timeline

Apr 12, 2024: Application Filed
May 17, 2025: Non-Final Rejection — §103
Aug 18, 2025: Applicant Interview (Telephonic)
Aug 21, 2025: Response Filed
Aug 23, 2025: Examiner Interview Summary
Nov 28, 2025: Final Rejection — §103
Jan 28, 2026: Response after Non-Final Action
Mar 02, 2026: Request for Continued Examination
Mar 03, 2026: Response after Non-Final Action
Mar 21, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591601: SYSTEM AND METHOD FOR HYBRID MULTILINGUAL SEARCH INDEXING (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591621: GENERATIVE ARTIFICIAL INTELLIGENCE AND PREFERENCE AWARE HASHTAG GENERATION (granted Mar 31, 2026; 2y 5m to grant)
Patent 12591591: DISTRIBUTING LARGE AMOUNTS OF GLOBAL METADATA USING OBJECT FILES (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585652: AUTOMATIC QUERY PERFORMANCE REGRESSION MANAGEMENT (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585671: SYSTEM AND METHOD FOR CLOUD-BASED REPLICATION OF DATA (granted Mar 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 98% (+18.1%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 475 resolved cases by this examiner. Grant probability derived from career allow rate.
