DETAILED ACTION
This non-final rejection is responsive to the Request for Continued Examination (RCE) filed December 22, 2025. Claims 1, 7, 11, and 17 are currently amended. Claims 2, 5, 6, 8-10, 12, 15, 16, and 18-20 are canceled. Claims 21-28 have been added. Claims 1, 3, 4, 7, 11, 13, 14, 17 and 21-28 are pending in this application.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3, 4, 11, 13, 14, 21, 22, 25, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Bortnikov et al. (US 20170140012 A1) (‘Bortnikov’) in view of Guo et al. (US 11,151,608 B1) (‘Guo’), and further in view of Borkovic et al. (US 11,182,314 B1) (‘Borkovic’).
With respect to claims 1 and 11, Bortnikov teaches a method performed by an information processing device including a first memory and a second memory capable of operating at a higher speed than the first memory (claim 1), and an information processing device (claim 11) comprising:
a first memory configured to store a group of first data (paragraph 26);
a second memory capable of operating at a higher speed than the first memory (paragraph 30); and
a processor (paragraph 26) connected to the first memory and configured to:
receive a query from outside the information processing device (paragraph 43);
identify a first object (i.e. subset of clusters) associated with one or more pieces of first data including first data closest to the query in a group of the first data from a plurality of first objects (paragraph 45), and determine a score for identifying the first object (i.e. distance), each of the first objects being associated with one or more pieces of first data in the group of first data stored in the first memory (paragraph 45);
select the first object (i.e. subset of clusters) in accordance with the score (paragraph 45),
determine one or more pieces of second data being one or more pieces of first data associated with a second object, the second object corresponding to the selected first object and being the one of the first objects having been selected (paragraphs 20, 47, and 50);
calculate a metric of a distance between the query and the one or more pieces of second data (paragraphs 20, 47, and 50), and
identify third data on the basis of the metric of the distance, the third data being first data closest to the query in the group of the first data (query results pertaining to data points) (paragraph 51), wherein
each of the first objects is associated with a different one of sub-groups of one or more pieces of first data in the group of the first data (Bortnikov, paragraph 45),
the score (i.e. distance) represents a possibility that the one or more pieces of first data represented by a first object includes the first data closest to the query in the group of the first data (Bortnikov, paragraph 45),
the identifying the third data includes identifying, as the third data, second data closest to the query among the one or more pieces of second data (Bortnikov, paragraph 51), and
the method further comprises outputting the third data to outside the information processing device (Bortnikov, paragraphs 51 and 71).
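For orientation, the limitations mapped above describe a cluster-based nearest-neighbor flow: score the first objects (clusters) against the query, select a subset, gather the associated data points, and return the point closest to the query. The following is only an illustrative sketch of that flow; the cluster names, coordinates, and the use of centroid distance as the score are assumptions for exposition, not material from the references:

```python
import math

# Illustrative toy data: clusters ("first objects"), each associated with
# a sub-group of vectors ("pieces of first data") held in the first memory.
clusters = {
    "c0": {"centroid": (0.0, 0.0), "points": [(0.1, 0.2), (0.3, 0.1)]},
    "c1": {"centroid": (5.0, 5.0), "points": [(4.9, 5.1), (5.2, 4.8)]},
}

def search(query, n_probe=1):
    # Score each first object (here, centroid distance stands in for the
    # learned score) and select the best-scoring subset.
    ranked = sorted(clusters,
                    key=lambda c: math.dist(query, clusters[c]["centroid"]))
    selected = ranked[:n_probe]
    # Gather the candidate points ("second data") of the selected objects...
    candidates = [p for c in selected for p in clusters[c]["points"]]
    # ...and identify the candidate closest to the query ("third data").
    return min(candidates, key=lambda p: math.dist(query, p))

print(search((5.0, 5.0)))  # → (4.9, 5.1)
```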
Although Bortnikov teaches selecting objects based on machine learning (paragraphs 52 and 55), Bortnikov does not explicitly teach inputting the query into a neural network model, the neural network model having been trained to identify a first object, the neural network model being configured to output a score for identifying the first object.
Guo teaches inputting the query into a neural network model, the neural network model having been trained to identify a first object, the neural network model being configured to output a score for identifying the first object (an item concept relatedness score for each item concept of the plurality of item concepts) (col. 8 lines 34-37 and 63-66).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the selection of Bortnikov to be based on a neural network model as taught by Guo to enable improved, more relevant search results that filter out less relevant search results (Guo, col. 4 lines 36-43). Further, the combination would entail incorporating a known prior art technique (neural networks) to achieve predictable results (result selections).
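As applied, the teaching supplies a learned scoring function: the query is input to a model that outputs one score per candidate first object, and the highest-scoring object is selected. A toy stand-in follows; the linear "model," its weight vectors, and the object names are illustrative assumptions only, not the cited architecture:

```python
# Toy stand-in for a trained scoring model: one weight vector per first
# object; the "score" is the dot product of the query with that vector.
weights = {"obj0": (1.0, 0.0), "obj1": (0.0, 1.0)}

def score_objects(query):
    # Output a score for identifying each first object given the query.
    return {obj: sum(q * w for q, w in zip(query, wv))
            for obj, wv in weights.items()}

def select(query):
    # Select the first object in accordance with the score.
    scores = score_objects(query)
    return max(scores, key=scores.get)

print(select((0.9, 0.1)))  # → "obj0"
```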
Further regarding claims 1 and 11, Bortnikov in view of Guo does not explicitly teach the neural network model being in the second memory capable of operating at a higher speed than the first memory; or transferring one or more pieces of second data from the first memory to the second memory.
Borkovic teaches loading the neural network model into a second memory capable of operating at a higher speed than the first memory and inputting data to the neural network model in the second memory (col. 2 lines 39-42; col. 4 lines 9-29; col. 6 lines 29-36; col. 7 lines 7-13); and
transferring one or more pieces of second data from the first memory to the second memory (col. 4 lines 22-29; col. 5 lines 23-46; col. 14 lines 49-59).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the neural network of Guo to be loaded into a faster memory and for selected data to be transferred to the second memory as taught by Borkovic to enable reduced latency and more efficient neural network processing (Borkovic, abstract). Further, a person having ordinary skill in the art would have been motivated to make the modification because it would only entail swapping one memory type for another memory type to achieve faster processing.
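The asserted combination stages the selected candidates from the slower first memory into the faster second memory before distances are computed there. A rough sketch of that staging pattern follows; the two dictionaries standing in for the slow and fast memories, and the keys and coordinates, are illustrative assumptions:

```python
import math

slow_memory = {  # e.g., the first memory: holds the full group of first data
    "p0": (0.1, 0.2), "p1": (0.3, 0.1), "p2": (4.9, 5.1), "p3": (5.2, 4.8),
}
fast_memory = {}  # e.g., the second memory: holds only staged data

def stage(keys):
    # Transfer the selected pieces of second data from the first (slow)
    # memory into the second (fast) memory.
    for k in keys:
        fast_memory[k] = slow_memory[k]

def nearest_in_fast(query):
    # Distances are computed only against data resident in the fast memory.
    return min(fast_memory, key=lambda k: math.dist(query, fast_memory[k]))

stage(["p2", "p3"])
print(nearest_in_fast((5.0, 5.0)))  # → "p2"
```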
With respect to claims 3 and 13, Bortnikov in view of Guo and Borkovic teaches transferring the one or more pieces of second data from the first memory to the second memory (Borkovic, col. 4 lines 22-29; col. 5 lines 23-46),
wherein the calculating includes calculating the metric of the distance between the query and each of the one or more pieces of second data in a second memory (Bortnikov, paragraphs 20, 47 and 50).
With respect to claims 4 and 14, Bortnikov in view of Guo and Borkovic teaches wherein the one or more pieces of first data associated with each of the first objects are stored in a continuous area in an address space of the first memory (Bortnikov, paragraph 26; Borkovic, col. 11 lines 17-20; col. 19 lines 42-45).
With respect to claims 21 and 25, Bortnikov in view of Guo and Borkovic teaches wherein the first memory includes a NAND flash memory device (Guo, col. 20 lines 64-65; Borkovic, col. 18 lines 27-32).
With respect to claims 22 and 26, Bortnikov in view of Guo and Borkovic teaches wherein the second memory includes a DRAM (Dynamic Random Access Memory) (Borkovic, col. 6 lines 1-2; col. 14 line 65 – col. 15 line 9; col. 18 lines 38-40).
Claims 7, 17, 23, 24, 27 and 28 are rejected under 35 U.S.C. 103 as being unpatentable over Bortnikov et al. (US 20170140012 A1) (‘Bortnikov’) in view of Guo et al. (US 11,151,608 B1) (‘Guo’), further in view of Solmer et al. (US 12,141,732 B1) (‘Solmer’), and further in view of Borkovic et al. (US 11,182,314 B1) (‘Borkovic’).
With respect to claims 7 and 17, Bortnikov teaches:
a first memory configured to store a group of first data (paragraph 26);
a second memory capable of operating at a higher speed than the first memory (paragraph 30); and
a processor (paragraph 26) connected to the first memory and configured to:
receive a query from outside the information processing device (paragraph 43);
identify a first object (i.e. subset of clusters) associated with one or more pieces of first data including first data closest to the query in a group of the first data from a plurality of first objects (paragraph 45), and determine a score for identifying the first object (i.e. distance), each of the first objects being associated with one or more pieces of first data in the group of first data stored in the first memory (paragraph 45);
select the first object (i.e. subset of clusters) in accordance with the score (paragraph 45),
determine one or more pieces of second data being one or more pieces of first data associated with a second object, the second object corresponding to the selected first object and being the one of the first objects having been selected (paragraphs 20, 47, and 50);
calculate a metric of a distance between the query and the one or more pieces of second data (paragraphs 20, 47, and 50), and
identify third data on the basis of the metric of the distance, the third data being first data closest to the query in the group of the first data (query results pertaining to data points) (paragraph 51), wherein
each of the first objects is associated with a different piece of first data in the group of the first data (Bortnikov, paragraph 45);
the score (i.e. distance) represents a possibility that a hop count (distance) to the third data is the minimum (Bortnikov, paragraph 45),
the identifying the third data includes identifying, as the third data, second data closest to the query among the one or more pieces of second data (Bortnikov, paragraph 51), and
the method further comprises outputting the third data to outside the information processing device (Bortnikov, paragraphs 51 and 71).
Although Bortnikov teaches selecting objects based on machine learning (paragraphs 52 and 55), Bortnikov does not explicitly teach inputting the query into a neural network model, the neural network model having been trained to identify a first object, the neural network model being configured to output a score for identifying the first object.
Guo teaches inputting the query into a neural network model, the neural network model having been trained to identify a first object, the neural network model being configured to output a score for identifying the first object (an item concept relatedness score for each item concept of the plurality of item concepts) (col. 8 lines 34-37 and 63-66).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the selection of Bortnikov to be based on a neural network model as taught by Guo to enable improved, more relevant search results that filter out less relevant search results (Guo, col. 4 lines 36-43). Further, the combination would entail incorporating a known prior art technique (neural networks) to achieve predictable results (result selections).
Further regarding claims 7 and 17, Bortnikov in view of Guo does not explicitly teach the group of the first data constitutes a graph; or the identifying includes identifying the third data by performing, on the basis of the graph, a search whose entry point is a piece of second data associated with the second object.
Solmer teaches the group of the first data constitutes a graph (Solmer, col. 21 lines 8-10); and
the identifying includes identifying the third data by performing, on the basis of the graph, a search whose entry point is a piece of second data associated with the second object (Solmer, col. 50 line 62 – col. 51 line 21).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the search of Bortnikov to be performed on the basis of a graph as taught by Solmer to enable improved result retrieval using a plurality of compatible combinations of methodologies for result retrieval ranking or scoring of data (Solmer, col. 29 lines 16-38). Further, the combination would entail incorporating a known prior art technique (graph-based search) to achieve predictable results (result retrieval).
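The graph-based search as applied amounts to a traversal that begins at an entry point (a piece of second data associated with the second object) and moves toward the query until no neighbor is closer. The sketch below is a minimal greedy variant for illustration only; the graph, node coordinates, and entry point are assumptions, not material from the cited reference:

```python
import math

# Toy proximity graph over the group of first data: node -> neighbors.
points = {"a": (0.0, 0.0), "b": (1.0, 0.0), "c": (2.0, 0.5), "d": (3.0, 1.0)}
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

def greedy_search(query, entry):
    # Walk the graph from the entry point, moving to whichever neighbor is
    # closest to the query; stop at a local minimum ("third data").
    current = entry
    while True:
        best = min(graph[current] + [current],
                   key=lambda n: math.dist(query, points[n]))
        if best == current:
            return current
        current = best

print(greedy_search((3.0, 1.0), entry="a"))  # → "d"
```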
Further regarding claims 7 and 17, Bortnikov in view of Guo and Solmer does not explicitly teach the neural network model being in the second memory capable of operating at a higher speed than the first memory; or transferring one or more pieces of second data from the first memory to the second memory.
Borkovic teaches loading the neural network model into a second memory capable of operating at a higher speed than the first memory and inputting data to the neural network model in the second memory (col. 2 lines 39-42; col. 4 lines 9-29; col. 6 lines 29-36; col. 7 lines 7-13); and
transferring one or more pieces of second data from the first memory to the second memory (col. 4 lines 22-29; col. 5 lines 23-46; col. 14 lines 49-59).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the neural network of Guo to be loaded into a faster memory and for selected data to be transferred to the second memory as taught by Borkovic to enable reduced latency and more efficient neural network processing (Borkovic, abstract). Further, a person having ordinary skill in the art would have been motivated to make the modification because it would only entail swapping one memory type for another memory type to achieve faster processing.
With respect to claims 23 and 27, Bortnikov in view of Guo, Solmer and Borkovic teaches wherein the first memory includes a NAND flash memory device (Guo, col. 20 lines 64-65; Solmer, col. 68 line 25; Borkovic, col. 18 lines 27-32).
With respect to claims 24 and 28, Bortnikov in view of Guo, Solmer and Borkovic teaches wherein the second memory includes a DRAM (Dynamic Random Access Memory) (Borkovic, col. 6 lines 1-2; col. 14 line 65 – col. 15 line 9; col. 18 lines 38-40).
Response to Arguments
Applicant’s arguments with respect to claims 1, 3, 4, 7, 11, 13, 14, 17 and 21-28 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALICIA M WILLOUGHBY whose telephone number is (571) 272-5599. The examiner can normally be reached Monday through Friday, 9:00 a.m. to 5:30 p.m. ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ajay Bhatia can be reached at 571-272-3906. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALICIA M WILLOUGHBY/Primary Examiner, Art Unit 2156 February 3, 2026