Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 2/29/24 has been considered by the examiner.
Election/Restrictions
This Office Action is in response to Applicant’s initial filing submission of 6/29/2022 and the subsequent restriction/election made on 2/10/26 during a telephone interview with Attorney Deidra Ritcherson. Applicant elected Group 1, comprising claims 1-8 and 14-20, without traverse and withdrew claims 9-13.
INVENTION 1: Claims 1-8 and 14-20 are directed to query response generation using searchable logical graphs generated by a single machine learning model.
INVENTION 2: Claims 9-13 are directed to query response generation using two machine learning models, where the first machine learning model generates a second logical graph and the first graph is searched with the second graph.
Inventions 1 and 2 are directed to response generation using logical graphs. The related inventions are distinct if: (1) the inventions as claimed are either not capable of use together or can have a materially different design, mode of operation, function, or effect; (2) the inventions do not overlap in scope, i.e., are mutually exclusive; and (3) the inventions as claimed are not obvious variants. See MPEP § 806.05(j). Furthermore, in the instant application the inventions as claimed do not encompass overlapping subject matter and there is nothing of record to show them to be obvious variants.
Inventions 1 and 2 are related to each other in that both use logical graphs for the generation of responses. However, the inventions are not capable of being used together: Invention 1 deals with query response generation using searchable logical graphs, wherein a single machine learning model creates a single logical graph. Invention 2 is fundamentally different in that it uses two machine learning models, where the first machine learning model generates a second logical graph and the first graph is searched with the second graph.
As can be seen, the modes of operation are different. The claims further do not overlap in scope and are mutually exclusive. Further, the claims are not obvious variants of each other, as they recite different modes of operation.
Restriction for examination purposes as indicated is proper because all the inventions listed in this action are independent or distinct for the reasons given above and there would be a serious search and/or examination burden if restriction were not required because one or more of the following reasons apply:
--the inventions have acquired a separate status in the art in view of their different classification
--the inventions have acquired a separate status in the art due to their recognized divergent subject matter
--the inventions require a different field of search (e.g., searching different classes/subclasses or electronic resources, or employing different search strategies or search queries).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Jia (US 20220318275 A1) in view of Costabello (US 10157226 B1).
With respect to claims 1 and 14, Jia teaches
(Claim 1) A computer-implemented method comprising:
(Claim 14) A computer system, comprising: a processor set ([0005] The disclosure provides an electronic device. The electronic device includes: at least one processor and a memory coupled in communication with the at least one processor. The memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor is caused to execute the method according to the first aspect, ¶[0006] The disclosure provides a non-transitory computer-readable storage medium storing computer instructions. The computer instructions are configured to cause a computer to execute the method according to the first aspect.) ; a set of one or more computer-readable storage media ([0005] The disclosure provides an electronic device. The electronic device includes: at least one processor and a memory coupled in communication with the at least one processor. The memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor is caused to execute the method according to the first aspect, ¶[0006] The disclosure provides a non-transitory computer-readable storage medium storing computer instructions. The computer instructions are configured to cause a computer to execute the method according to the first aspect.) ; and program instructions, collectively stored in the set of one or more computer-readable storage media, to cause the processor set to perform computer operations comprising ([0005] The disclosure provides an electronic device. The electronic device includes: at least one processor and a memory coupled in communication with the at least one processor. 
The memory stores instructions executable by the at least one processor, and when the instructions are executed by the at least one processor, the at least one processor is caused to execute the method according to the first aspect, ¶[0006] The disclosure provides a non-transitory computer-readable storage medium storing computer instructions. The computer instructions are configured to cause a computer to execute the method according to the first aspect.):
receiving, at a machine learning model, first text data ([0102]and the input from the user may be received in any form (including acoustic input, voice input, or tactile input).);
determining, via the machine learning model, a token length of the first text data ([0070] For example, if the target search result is plain text data, and the text length is greater than a preset length threshold);
in response to determining that the token length is greater than a threshold token length of a first language machine learning model, generating via the machine learning model a first logical graph from the first text data, wherein the first logical graph incorporates tokens exceeding the threshold token length of the first language machine learning model without a portion of the first text data becoming eliminated, and the first logical graph is incorporated within the machine learning model ([0070] For example, if the target search result is plain text data, and the text length is greater than a preset length threshold, the target search result and the corresponding knowledge graph can be displayed, the user can selectively decide whether to read the knowledge graph or the target search result, and reading the knowledge graph corresponding to the target search result can save the time for the users to read the target search result and extract the key information¶[0098] Some examples of computing unit 501 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms,); and
Jia does not explicitly disclose, however Costabello teaches wherein the first logical graph is searchable for generating responses to one or more queries to the machine learning model (Costabello, Col. 14, ll. 54-65: As further shown in FIG. 4, process 400 may include generating candidate responses to the query based on the knowledge graph (block 470).).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia to include the searchable knowledge graph of Costabello in order to remove subjectivity and improve speed and efficiency (Costabello, Col. 8, ll. 25-28).
Claims 2 and 15 are rejected as being unpatentable over Jia and Costabello, further in view of Allen and Zhai.
With respect to claims 2 and 15, Jia and Costabello do not explicitly disclose, however Allen teaches generating, via the machine learning model, subtexts from the portion of the first text data that exceeds the threshold token length of the first language machine learning model (Allen, Claim 1: generating, by the one or more computer processors, one or more additional knowledge graphs associated with the one or more subsets of the set of text of the original passage; determining, by the one or more computer processors, a first subset of the one or more subsets of the set of text of the original passage is associated with a knowledge graph most similar to the first knowledge graph associated with the set of text of the original passage);
generating, via the machine learning model, respective logical subgraphs from the subtexts (Allen, Claim 1: generating, by the one or more computer processors, one or more additional knowledge graphs associated with the one or more subsets of the set of text of the original passage; determining, by the one or more computer processors, a first subset of the one or more subsets of the set of text of the original passage is associated with a knowledge graph most similar to the first knowledge graph associated with the set of text of the original passage); and
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello to include the subgraph of Allen in order to gain efficiency (Allen, Col. 2, ll. 17-24).
None of Jia, Costabello, and Allen explicitly discloses, however Zhai teaches combining, via the machine learning model, the respective logical subgraphs to form the first logical graph (Zhai, [0040]: In operation 235, if the similarity score is greater than a high threshold value, then the update knowledge graph module 155 can merge the data source topic entity sub-graph with the knowledge graph 160 at the candidate entity of the knowledge graph 160).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello in view of the subgraph of Allen to include the merging of Zhai in order to improve system responsiveness (Zhai, [0069]).
Claims 3 and 16 are rejected as being unpatentable over Jia, Costabello, Allen, and Zhai, further in view of Wang.
With respect to claims 3 and 16, none of Jia, Costabello, Allen, and Zhai explicitly discloses, however Wang teaches wherein the respective logical subgraphs are generated based on a prompt template that is input into the first language machine learning model ([0038] Product knowledge graph generation data 110D can include prompt templates that are used to generate prompts that are executed on the generative AI model 142. Product knowledge graph generation data 110D can be processed using prompt engineering operations associated with the knowledge graph engine operations 112).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello in view of the subgraph of Allen in view of the merging of Zhai to include the template of Wang in order to improve operational efficiency (Wang, [0001]).
Claims 4, 5, and 17 are rejected as being unpatentable over Jia, Costabello, Allen, and Zhai, further in view of Klein.
With respect to claims 4 and 17, none of Jia, Costabello, Allen, and Zhai explicitly discloses, however Klein teaches wherein the subtexts are generated from the first text data via a semantic integrity-driven sliding window comprising a lightweight pointer neural network ([0066] The scoring submodule 208, provides scores, such as those for the relatedness of words (based on their meaning), including, for example, window scores, for both sliding and successive windows (and similarity scores for successive windows), probabilities, and the like,¶ [0086] The process can move to block 306e-3, an optional process, where ads (advertisements), or other information items (e.g., public service announcements and the like), and other known break points are detected. This detection process is also performable, for example, by machine learning techniques for classification, including artificial intelligence (AI) software, such as a logistic regression classifier or a neural network. The process can also go directly from block 308e-1, if desired).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello in view of the subgraph of Allen in view of the merging of Zhai to include the lightweight pointer of Klein in order to look for consistent patterns of words that are relevant (Klein, [0073]).
With respect to claim 5, Klein further teaches wherein an input to the lightweight pointer neural network is sequence data, and wherein an output of the lightweight pointer neural network is a probability value that indicates whether semantic integrity of the first text data is preserved during truncation of the first text data into the subtexts ([0066] The scoring submodule 208, provides scores, such as those for the relatedness of words (based on their meaning), including, for example, window scores, for both sliding and successive windows (and similarity scores for successive windows), probabilities, and the like,¶ [0086] The process can move to block 306e-3, an optional process, where ads (advertisements), or other information items (e.g., public service announcements and the like), and other known break points are detected. This detection process is also performable, for example, by machine learning techniques for classification, including artificial intelligence (AI) software, such as a logistic regression classifier or a neural network. The process can also go directly from block 308e-1, if desired).
Claims 6 and 18 are rejected as being unpatentable over Jia and Costabello, further in view of Adel-Vu.
With respect to claims 6 and 18, Jia and Costabello do not explicitly disclose, however Adel-Vu teaches wherein the first logical graph is formed further by converting textual distances in the first text data into logical distances in the first logical graph without limiting the first logical graph by the token length of the first text data ([0008] For example, in the case of the sentence “Barack Obama Sr., the father of Barack Obama, was born in 1936.” the relational argument “Barack Obama” is not located between the relational arguments “Barack Obama Sr.” and “1936” in the dependence tree. The length of the path between “Barack Obama” and “1936” is not shorter than the length of the path between “Barack Obama Sr.” and “1936”. This additionally improves the decision of the model).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello to include the conversion of textual distances to logical distances of Adel-Vu in order to improve decision making (Adel-Vu, [0008]).
Claim 7 is rejected as being unpatentable over Jia and Costabello, further in view of You.
With respect to claim 7, Jia and Costabello do not explicitly disclose, however You teaches further comprising storing a long-term memory of the first language machine learning model as a task attention mechanism of the first language machine learning model ([0061] In some implementations, platform 220 may regularize the LSTM by using an attention mechanism that pays attention to a particular part of the source sentence. Each attention node captures a different portion of the sentence, while each portion of the sentence is considered as a target named entity (a node in knowledge graph).).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello to include the attention mechanism of You in order to improve generalization (You, [0001]).
Claims 8 and 19 are rejected as being unpatentable over Jia and Costabello, further in view of Saha.
With respect to claims 8 and 19, Jia and Costabello do not explicitly disclose, however Saha teaches wherein the first logical graph is stored in computer memory and serves as a logical index of knowledge comprised in the first text data for generating the responses to the one or more queries ([0002] and providing, to the user, a response to the received natural language query, wherein the returning the response comprises querying the produced knowledge graph using the natural language query).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello to include the response generation of Saha in order to reduce query search times (Saha, [0022]).
Claim 20 is rejected as being unpatentable over Jia and Costabello, further in view of Ramsl and Lao.
With respect to claim 20, Jia and Costabello do not explicitly disclose, however Ramsl teaches restoring continuous first text data from the first logical graph by inputting a matching portion of the first logical graph into a second language machine learning model ([0016] The knowledge graph may be converted back to natural language text using a trained machine learning model. The generation of the knowledge graph entities provides data for training the machine learning model.); and
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello to include the restoring of Ramsl in order to reduce consumption of computing resources (Ramsl, [0016]).
None of Jia, Costabello, and Ramsl explicitly discloses, however Lao teaches inputting the continuous first text data and the one or more queries into the first language machine learning model ([0006] Another example aspect of the present disclosure is directed to a computing system for responding to a natural language query. The computing system can include an encoding system configured to receive a natural language text body and generate, using a machine-learned natural language encoder model, a knowledge graph based on the natural language text body. The computing system can include a query programming system configured to receive a natural language input query, and generate, using a machine-learned natural language query programmer model, a program for querying the knowledge graph based on the natural language input query. The query execution system can be configured to execute the generated program on the generated knowledge graph and to output a query response.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to modify the knowledge graph of Jia in view of the searchable knowledge graph of Costabello in view of the restoring of Ramsl to include the inputting of text and queries of Lao in order to allow for a faster processing time and/or reduction in the computational demands during operation (Lao, [0025]).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ATHAR N PASHA whose telephone number is (408)918-7675. The examiner can normally be reached Monday-Thursday and alternate Fridays, 7:30-4:30 PT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Washburn can be reached on (571)272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ATHAR N PASHA/ Primary Examiner, Art Unit 2657