DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see p. 13, filed March 3, 2026, with respect to Claim 24 have been fully considered and are persuasive.
Applicant argues that Claim 24 would require an eight-way combination of references in view of the currently cited references for these features. Allowance of the claims is requested (p. 13).
In reply, the Examiner agrees that Claim 24 contains allowable subject matter.
Applicant’s arguments with respect to claim(s) 1-6, 8-16, and 20-23 have been considered but are moot because new grounds of rejection are made in view of Kimmel (US 20150095346A1), Grigore (US 20250199774A1), Ross (US 20230333900A1), Rosenberg (US 20210026707A1), and Wooten (US 20110209213A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-3, 11-13, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1).
As per Claim 1, Shiell teaches a computing platform comprising: at least one processor, a communication interface communicatively coupled to the at least one processor, and memory (computing devices may include one or more central processing units, memories, input/output interfaces, bus interfaces, network connections, [0069]). It would have been obvious to one of ordinary skill in the art that the memory stores computer-readable instructions that, when executed by the at least one processor, cause the computing platform to perform the method because it is well-known in the art that the instructions must be stored in order for the processor to access and execute them to perform the method. Shiell teaches receiving a graphics processing unit (GPU) processing request for processing by a GPU system (25); identifying an operation requested by the GPU processing request (AAM machine 25 may be embodied by a computing device, such computing devices may include GPUs, [0069], AAM 25 includes an align module 29, [0070], alignment module 29 searches for the best alignment of the model face onto the test image by simultaneously minimizing misalignments in shape and appearance, [0074]); identifying whether or not the operation is stored in a hash table, wherein the hash table stores a plurality of operations and corresponding keys, and where each key corresponds to a pre-generated solution (computed binary feature) to a corresponding operation; based on identifying that the operation is not stored in the hash table, identifying whether an approximate match of the operation is stored in the hash table; based on identifying that the approximate match is stored in the hash table, identifying a first key stored, in the hash table, along with the approximate match; identifying, using the first key, a location of the pre-generated solution (computed binary feature) to the approximate match of the operation; obtaining, from the location, the 
pre-generated solution (computed binary feature) to the approximate match of the operation (creating a different hash key for each computed binary feature, the hash key being based on patch location in which the binary feature was computed and the choice of pixel patterns used in its computation and the binary value of the computed binary feature, using the created hash keys to access their corresponding entries in a hash table, each entry including identification information identifying an identity of a previously registered specific sample of the item class and the corresponding log probability of that identity generating a binary feature similar to the computed binary feature of the hash key at a specific patch location, summing the log probabilities of each identity found in the hash table using the created hash keys, sorting the found identities by cumulative log probabilities to find the highest probability match, deems the identity having the highest probability match to most closely match the specific item, [0018]); and applying the solution (find the highest probability match, the person corresponding to the highest found probability may be deemed to correspond to the submitted test image (S7), [0103], alignment module 29 searches for the best alignment of the model face onto the test image by simultaneously minimizing misalignments in shape and appearance, [0074]).
However, Shiell does not teach wherein each key indicates a distributed storage location of the pre-generated solution; and the location is the distributed storage location. However, Kimmel teaches the hash table stores keys, and wherein each key indicates a distributed storage location of data; identify, using the first key, the distributed storage location of the data; obtain, from the distributed storage location, the data (extracts a hash table index to index into the selected hash table and lookup a table entry having an extent key 810 identifying a storage location 830 of SSD 260 for the extent, if a table entry with a matching extent key is found, then the SSD location 830 mapped from the extent key 810 is used to retrieve an existing extent from SSD, [0038]; hash tables that embody mappings of identifiers associated with storage locations are stored for write data organized into extents, the hash value is used for multiple purposes within the distributed storage architecture, including a hash table index computed from the hash value to select an entry from a plurality of entries of the selected hash table having an identifier identifying a storage location for the extent, Abstract).
Since Shiell teaches wherein each key corresponds to a pre-generated solution to a corresponding operation; identifying, using the first key, the location of the pre-generated solution to the approximate match of the operation; obtaining, from the location, the pre-generated solution to the approximate match of the operation [0018], this teaching of the distributed storage location from Kimmel can be implemented into the device of Shiell so that each key indicates a distributed storage location of a pre-generated solution to a corresponding operation; identifying, using the first key, the distributed storage location of the pre-generated solution to the approximate match of the operation; obtaining, from the distributed storage location, the pre-generated solution to the approximate match of the operation.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell so that each key indicates a distributed storage location of the pre-generated solution; and the location is the distributed storage location because Kimmel suggests that this way, the processing load on the storage systems is reduced since the storage consumption is distributed throughout the cluster [0006], and this way, the data that is needed is efficiently found and retrieved from a distributed storage location (Abstract).
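For illustration only, the lookup flow recited in Claim 1 may be sketched as follows; all names and data below are hypothetical and are not drawn from Shiell or Kimmel. A dictionary stands in for the distributed storage of Kimmel, and the approximate-match fallback mirrors the claimed order of operations.

```python
# Illustrative sketch only: a hash table maps operations to keys, and each key
# identifies a storage location holding a pre-generated solution. An
# approximate match is consulted when the exact operation is absent.
import hashlib

hash_table = {}   # operation signature -> key
storage = {}      # key -> pre-generated solution (stands in for distributed storage)

def make_key(op_signature):
    return hashlib.sha256(op_signature.encode()).hexdigest()

def store_solution(op_signature, solution):
    key = make_key(op_signature)
    hash_table[op_signature] = key
    storage[key] = solution

def lookup(op_signature, approximate=None):
    """Return the pre-generated solution for the operation, falling back to an
    approximate match when the exact operation is not in the hash table."""
    if op_signature in hash_table:                        # exact match
        key = hash_table[op_signature]
    elif approximate is not None and approximate in hash_table:
        key = hash_table[approximate]                     # approximate match
    else:
        return None                                       # miss
    return storage[key]                                   # obtain solution from its location

store_solution("matmul(A,B)", [[19, 22], [43, 50]])
assert lookup("matmul(A,B)") == [[19, 22], [43, 50]]
assert lookup("matmul(A,B')", approximate="matmul(A,B)") == [[19, 22], [43, 50]]
```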
As per Claim 2, Shiell teaches wherein the hash table is pre-populated with the plurality of operations and corresponding solution keys (hash table is constructed by: applying the first patch pattern onto the canonical sample, applying the second patch pattern onto the canonical sample, defining a plurality of random pixel pair clusters for each patch in the first patch pattern and in the second patch pattern, computing a separate binary feature for each pixel pair cluster, defining the hash table using an inverted indexing scheme, wherein a hash key is computed for each computed binary feature based on the patch location in which the binary feature was computed and the pair cluster used in its computation and the binary value of the computed binary feature, accessing a library of registrable samples of the item class, aligning the registrable sample to the canonical sample to create a registrable fitted sample, applying the first patch pattern onto the registrable fitted sample, applying the second patch pattern onto the fitted sample, computing a separate binary feature for each of the random pixel pair clusters, computing a probability of the current registrable sample generating a binary feature from the pixel pair cluster at the current specific patch location, storing the probability and the identity of the current registrable sample at the hash table location corresponding to the current binary feature, [0028]).
As per Claim 3, Shiell teaches using the created hash keys to access their corresponding entries in a hash table, each entry including ID information identifying an identity of a previously registered specific sample of the item class and the corresponding log probability of that identity generating a binary feature similar to the computed binary feature of the hash key at the specific patch location; summing the log probabilities of each identity found in the hash table using the created hash keys, sorting the found identities by cumulative log probabilities to find the highest probability match; deeming the identity having the highest probability match to most closely match the specific item [0018]. It would have been obvious to one of ordinary skill in the art that when there is a key stored, in the hash table, along with the matching operation stored in the hash table, then the highest probability match that is identified is the actual matching operation. Thus, Shiell teaches wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: based on identifying that the operation is stored in the hash table, identify a second key stored, in the hash table, along with the matching operation [0018].
As per Claims 11-13, these claims are similar in scope to Claims 1-3 respectively, and therefore are rejected under the same rationale. As per Claim 20, Claim 20 is similar in scope to Claim 1, and therefore is rejected under the same rationale.
Claim(s) 4 and 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Pham (US 20250077435A1) and Bolz (US 20150089151A1).
As per Claim 4, Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach based on identifying that the approximate match is not stored in the hash table: send an operation execution request to the GPU system, wherein the GPU system is configured to identify a solution to the operation execution request, receive the solution to the operation execution request, update the hash table to include the operation execution request and a second key corresponding to a location of the solution to the operation execution request, and apply the solution to the operation execution request. However, Pham teaches wherein the memory stores additional computer-readable instructions that, when executed by the at least one processor, cause the computing platform (one embodiment may be implemented as software including instructions that are stored in a storage medium that is readable by a machine, [0169]) to: based on identifying that the approximate match is not stored in the hash table: send an operation execution request to the GPU system, wherein the GPU system is configured to identify a solution to the operation execution request, receive the solution to the operation execution request, apply the solution to the operation execution request (if the hash table lookup results in a table miss, this indicates that the data for the data request is likely not included in the device cache, in response, an SSD request can then be generated for the data request and provided to an SSD request queue, [0143], graphics processing unit (GPU), [0149]). Pham teaches the hash table is updated frequently to correspond to the device cache [0142]. Thus, when the device cache is updated with data, the hash table is updated to correspond to the device cache. It would have been obvious to one of ordinary skill in the art that updating the hash table involves updating the hash table to include the data and a second key corresponding to a location of the data, since it is well-known in the art that hash tables include keys.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that the memory stores additional computer-readable instructions that, when executed by the at least one processor, cause the computing platform to: based on identifying that the approximate match is not stored in the hash table: send an operation execution request to the GPU system, wherein the GPU system is configured to identify a solution to the operation execution request, receive the solution to the operation execution request, update the hash table to include the data and a second key corresponding to a location of the data, and apply the solution to the operation execution request because Pham suggests that this improves throughput performance and efficient resource allocation [0002].
However, Shiell, Kimmel, and Pham do not teach updating the hash table to include the operation execution request and the second key corresponding to the location of the solution to the operation execution request. However, Bolz teaches based on identifying that the match is not stored in the cache: send an operation request to the GPU system, wherein the GPU system is configured to identify a solution to the operation execution request, receive the solution to the operation execution request, update the cache to include the solution to the operation execution request, and apply the solution to the operation execution request (GPU writes to the cache, accesses of invalidated cache lines result in a cache miss, forcing the cache to retrieve updated data from memory, [0130]). Since Pham teaches that the hash table is updated to correspond to the device cache, and that the hash table is updated to include the data and a second key corresponding to a location of the data, as discussed above, this teaching of updating the cache to include the solution to the operation execution request from Bolz can be implemented into the device of Pham so that the cache is updated to include the solution to the operation execution request, the hash table is updated to correspond to the cache, and the hash table is updated to include the operation execution request and a second key corresponding to a location of the solution to the operation execution request.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell, Kimmel, and Pham to include updating the hash table to include the operation execution request and the second key corresponding to the location of the solution to the operation execution request because Bolz suggests that this way, cache coherency and consistency of a cache is maintained [0011].
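For illustration only, the miss path addressed in Claim 4 may be sketched as follows; the function and variable names are hypothetical and do not come from Pham or Bolz. On a hash-table miss, the request goes to the GPU system, the returned solution is stored, and the hash table gains the operation together with a key locating the solution.

```python
# Illustrative sketch only: the Claim 4 miss path. A miss dispatches the
# operation to the GPU system; the table and storage are then updated so the
# next request for the same operation is served from the hash table.
hash_table = {}   # operation -> key
storage = {}      # key -> solution

def gpu_execute(operation):
    # Stand-in for the GPU system computing a solution (hypothetical)
    return f"solution({operation})"

def resolve(operation):
    if operation not in hash_table:            # miss: no exact or approximate match
        solution = gpu_execute(operation)      # send request; receive solution
        key = f"loc-{len(storage)}"            # second key: location of the solution
        storage[key] = solution                # store the solution at that location
        hash_table[operation] = key            # update the hash table
    return storage[hash_table[operation]]      # apply the solution

first = resolve("vecmul(A,B)")    # computed via the GPU path
second = resolve("vecmul(A,B)")   # served from the hash table
assert first == second
```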
As per Claim 14, Claim 14 is similar in scope to Claim 4, and therefore is rejected under the same rationale.
Claim(s) 5 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Tang (see citation below).
As per Claim 5, Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein applying the solution comprises training a large language model based on the solution. However, Tang teaches wherein applying the solution comprises training a large language model based on the solution (various types of data require communication among compute nodes, including training raw data, etc., our system adopts a decentralized communication topology, for efficient management of distributed storage and lookup of data, we leverage the power of Distributed Hash Table (DHT), DHT enables the distribution of data across the network by employing key-value pairs and hashing, allowing for rapid and efficient retrieval based on unique keys, by utilizing DHT, our system achieves a decentralized and self-organizing architecture, the use of DHT enhances scalability, fault tolerance, and flexibility in our decentralized communication framework, 3.4 Decentralized Communication, p. 4-5; large language models (LLMs), decentralized system unlocking the potential of vast untapped consumer-level GPUs in pre-training, inference and fine-tuning of LLMs, Abstract, p. 1).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that applying the solution comprises training a large language model based on the solution because Tang suggests that large language models (LLMs) are well-known in the art, and training a LLM by using a distributed hash table has the advantage of being able to use a consumer-level GPU to train an LLM, which is significantly less expensive than using a large-scale high-end GPU (Abstract, p. 1; 3.4 Decentralized Communication, p. 4-5).
As per Claim 15, Claim 15 is similar in scope to Claim 5, and therefore is rejected under the same rationale.
Claim(s) 6 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Tang (see citation below) and Kenefick (US 20250279111A1).
As per Claim 6, Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein applying the solution comprises sending, to a user device, an indication of the solution. However, Tang teaches wherein applying the solution comprises using a large language model to indicate the solution (various types of data require communication among compute nodes, including training raw data, etc., our system adopts a decentralized communication topology, for efficient management of distributed storage and lookup of data, we leverage the power of Distributed Hash Table (DHT), DHT enables the distribution of data across the network by employing key-value pairs and hashing, allowing for rapid and efficient retrieval based on unique keys, by utilizing DHT, our system achieves a decentralized and self-organizing architecture, the use of DHT enhances scalability, fault tolerance, and flexibility in our decentralized communication framework, 3.4 Decentralized Communication, p. 4-5; large language models (LLMs), decentralized system unlocking the potential of vast untapped consumer-level GPUs in pre-training, inference and fine-tuning of LLMs, Abstract, p. 1). This would be obvious for the reasons given in the rejection of Claim 5.
However, Shiell, Kimmel, and Tang do not teach wherein applying the solution comprises sending, to a user device, an indication of the solution. However, Kenefick teaches sending, to a user device (117), an output from a large language model (server 101 includes the trained large language model and provides information to the mobile device 117 about audio profiles to take advantage of greater processing power provided by the server 101, [0032]). Since Tang teaches wherein applying the solution comprises using a large language model to indicate the solution, as discussed above, this teaching of the user device from Kenefick can be implemented into the system of Tang so that applying the solution comprises sending, to a user device, an indication of the solution.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell, Kimmel, and Tang so that applying the solution comprises sending, to a user device, an indication of the solution because Kenefick suggests that this way, a user using his mobile device can take advantage of greater processing power provided by the server [0032].
As per Claim 16, Claim 16 is similar in scope to Claim 6, and therefore is rejected under the same rationale.
Claim(s) 8 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Burge (US 20180101742A1).
As per Claim 8, Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein identifying the match comprises identifying that a vector, corresponding to the operation, matches a vector in the hash table. However, Burge teaches wherein identifying the match comprises identifying that a vector, corresponding to the operation, matches a vector in the hash table (compare sub-strings of the binary vector to hash tables created from sub-strings of the images, enabling sub-linear searching that allows locating the closest matches from among the entire gallery, Abstract).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that identifying the match comprises identifying that a vector, corresponding to the operation, matches a vector in the hash table as suggested by Burge. It is well-known in the art that vectors are fundamental to machine learning, serving as the bedrock for various data science tasks and algorithms.
As per Claim 18, Claim 18 is similar in scope to Claim 8, and therefore is rejected under the same rationale.
Claim(s) 9 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Harmanci (US 20130039584A1).
As per Claim 9, Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein identifying the approximate match comprises: identifying a first vector, corresponding to the operation; identifying a second vector in the hash table; normalizing the first vector and the second vector to produce normalized vectors; comparing values of the normalized vectors to produce a comparison score; compare the comparison score to a comparison threshold; and based on identifying that the comparison score meets or exceeds the comparison threshold, identify that the first vector and the second vector comprise the approximate match. However, Harmanci teaches wherein identifying the approximate match comprises: identifying a first vector, corresponding to the operation; identifying a second vector in the hash table (44); normalizing the first vector and the second vector to produce normalized vectors; comparing values of the normalized vectors to produce a comparison score; compare the comparison score to a comparison threshold; and based on identifying that the comparison score meets or exceeds the comparison threshold, identify that the first vector and the second vector comprise the approximate match (feature vector is computed based on a normalized region around the interest point, a feature of additional points of the image within a preselected distance is numerically represented according to its brightness, these numerical values are stored within the feature vector, having obtained the feature vector over the preselected distance, a normalized feature vector is determined as a function of the first feature vector, the normalized feature vector is used as an input to the hash function which determines an index within the hash table 44 at which the data from the interest point is stored, [0037]; converts the normalized feature vector to an index value for comparing data against the hash table 44, threshold may be selected and the binary quantization function may return a one if the value in the feature vector 
is greater than the threshold and a zero if the value in the feature vector is less than the threshold, [0038]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that identifying the approximate match comprises: identifying a first vector, corresponding to the operation; identifying a second vector in the hash table; normalizing the first vector and the second vector to produce normalized vectors; comparing values of the normalized vectors to produce a comparison score; compare the comparison score to a comparison threshold; and based on identifying that the comparison score meets or exceeds the comparison threshold, identify that the first vector and the second vector comprise the approximate match because Harmanci suggests that this way, the search can be performed more rapidly [0005].
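For illustration only, the approximate-match comparison of Claim 9 may be sketched as follows. The claim does not fix a particular comparison score, so cosine similarity of the min-max-normalized vectors is assumed here; the names and threshold are hypothetical.

```python
# Illustrative sketch only: normalize two vectors, compute a comparison score
# (cosine similarity assumed), and declare an approximate match when the score
# meets or exceeds a threshold, per the Claim 9 sequence.
import math

def normalize(v):
    # Min-max normalization onto [0, 1]; assumes the vector is not constant
    lo, hi = min(v), max(v)
    return [(x - lo) / (hi - lo) for x in v]

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def is_approximate_match(a, b, threshold=0.99):
    score = cosine_similarity(normalize(a), normalize(b))
    return score >= threshold        # meets or exceeds the comparison threshold

assert is_approximate_match([1, 2, 3], [10, 20, 30])      # identical after normalization
assert not is_approximate_match([1, 2, 3], [3, 2, 1])     # reversed ordering: low score
```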
As per Claim 19, Claim 19 is similar in scope to Claim 9, and therefore is rejected under the same rationale.
Claim(s) 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1), Kimmel (US 20150095346A1), and Harmanci (US 20130039584A1) in view of Grigore (US 20250199774A1).
Shiell, Kimmel, and Harmanci are relied upon for the teachings as discussed above relative to Claim 9.
However, Shiell, Kimmel, and Harmanci do not teach wherein comparing the values comprises comparing: Euclidean distances, cosine distances, dot product, a Manhattan value, and an L2 squared value. However, Grigore teaches wherein comparing the values comprises comparing: Euclidean distances, cosine distances, dot product, a Manhattan value, and an L2 squared value (distance between the vector embeddings may be calculated using various techniques including Euclidean or L2 squared distance, Manhattan, cosine similarity, dot product, [0075]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell, Kimmel, and Harmanci so that comparing the values comprises comparing: Euclidean distances, cosine distances, dot product, a Manhattan value, and an L2 squared value as suggested by Grigore. It is well-known in the art that the advantages of Euclidean distance include its simplicity, ease of interpretation, and applicability across various dimensions. It is well-known in the art that the advantages of cosine distance include scale-invariance, ease of interpretation, and computational efficiency. It is well-known in the art that the dot product’s versatility and simplicity make it a fundamental tool in modern technology and mathematics. It is well-known in the art that the advantages of a Manhattan value include its robustness to outliers, ease of calculation, and suitability for high-dimensional data. It is well-known in the art that the advantages of an L2 squared value include its mathematical convenience, simplicity in differentiation, simplified optimization, and regularization.
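For illustration only, the five comparison measures recited in Claim 10 may be sketched in plain Python; the function names are hypothetical.

```python
# Illustrative sketch only: the five vector comparison measures of Claim 10.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def l2_squared(a, b):
    # Euclidean distance without the square root
    return sum((x - y) ** 2 for x, y in zip(a, b))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine_distance(a, b):
    # 1 minus the cosine similarity of the two vectors
    return 1 - dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

assert math.isclose(euclidean([1, 2], [3, 4]), math.sqrt(8))
assert l2_squared([1, 2], [3, 4]) == 8
assert manhattan([1, 2], [3, 4]) == 4
assert dot([1, 2], [3, 4]) == 11
```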
Claim(s) 21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Ross (US 20230333900A1).
Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein the operation comprises multiplication of vectors corresponding to model parameters. However, Ross teaches wherein the operation comprises multiplication of vectors corresponding to model parameters (performing multiplication operations on vectors, [0121], hash tables on GPUs, [0014]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that the operation comprises multiplication of vectors corresponding to model parameters as suggested by Ross. It is well-known in the art that this allows for the calculation of scalar quantities and vector quantities in a single operation, simplifying complex calculations and providing a more intuitive understanding of vector interactions.
Claim(s) 22 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1), Kimmel (US 20150095346A1), and Harmanci (US 20130039584A1) in view of Rosenberg (US 20210026707A1).
Shiell, Kimmel, and Harmanci are relied upon for the teachings as discussed above relative to Claim 9.
However, Shiell, Kimmel, and Harmanci do not teach wherein normalizing the first vector and the second vector comprises, for each number in the first vector and the second vector, subtracting the number from a vector maximum and dividing by a result of subtracting a vector minimum from the vector maximum. However, Rosenberg teaches wherein normalizing the first vector and the second vector comprises, for each number in the first vector and the second vector, subtracting the number from a vector maximum and dividing by a result of subtracting a vector minimum from the vector maximum (rescaling (min-max normalization), in rescaling, the minimum value may be subtracted from a value of interest to obtain an intermediate value, then, the intermediate value is divided by the difference between the difference in the maximum and minimum values in the range to obtain a normalized value, vector, [0073]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell, Kimmel, and Harmanci so that normalizing the first vector and the second vector comprises, for each number in the first vector and the second vector, subtracting the number from a vector maximum and dividing by a result of subtracting a vector minimum from the vector maximum as suggested by Rosenberg. It is well-known in the art that the advantages of this include fairness across features, improved model convergence, enhanced interpretability, and standardization.
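For illustration only, min-max rescaling as Rosenberg describes it may be sketched as follows: the minimum is subtracted from the value of interest, and the result is divided by the difference between the maximum and minimum. The function name is hypothetical.

```python
# Illustrative sketch only: min-max rescaling of a vector onto [0, 1].
def rescale(v):
    lo, hi = min(v), max(v)          # vector minimum and maximum
    return [(x - lo) / (hi - lo) for x in v]

assert rescale([2, 4, 6, 10]) == [0.0, 0.25, 0.5, 1.0]
```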
Claim(s) 23 is/are rejected under 35 U.S.C. 103 as being unpatentable over Shiell (US 20130266195A1) and Kimmel (US 20150095346A1) in view of Wooten (US 20110209213A1).
Shiell and Kimmel are relied upon for the teachings as discussed above relative to Claim 1.
However, Shiell and Kimmel do not teach wherein a hash key corresponding to the operation includes all parameters of the operation and current values of the parameters. However, Wooten teaches wherein a hash key corresponding to the operation includes all parameters of the operation and current values of the parameters (use of a key, the current values in a storage location hash can be checked against a value that was put in the key when created, [0020], GPU, [0088]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Shiell and Kimmel so that a hash key corresponding to the operation includes all parameters of the operation and current values of the parameters because Wooten suggests that this way, current values can be quickly found and retrieved, and thus this increases the processing speed [0020].
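For illustration only, a hash key built from all parameters of an operation and their current values, as recited in Claim 23, may be sketched as follows; the operation name and parameters are hypothetical.

```python
# Illustrative sketch only: a hash key derived from every parameter of an
# operation and its current value, so any changed value yields a new key.
import hashlib
import json

def operation_key(op_name, params):
    # Canonical JSON (sorted keys) so the same parameter/value set always
    # produces the same key regardless of parameter ordering
    payload = json.dumps({"op": op_name, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

k1 = operation_key("scale", {"factor": 2.0, "axis": 0})
k2 = operation_key("scale", {"axis": 0, "factor": 2.0})
assert k1 == k2            # parameter order does not change the key
k3 = operation_key("scale", {"factor": 3.0, "axis": 0})
assert k1 != k3            # a changed current value changes the key
```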
Allowable Subject Matter
Claim 24 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: Claim 24 contains allowable subject matter for the reasons discussed in Applicant’s Remarks. Claim 24 would require a nine-way combination of references. One of ordinary skill in the art would not reasonably combine all nine references without impermissible hindsight.
Prior Art of Record
Tang, Zhenheng; FusionAI: Decentralized Training and Deploying LLMs with Massive Consumer-Level GPUs; September 2023; Symposium on Large Language Models (LLM-IJCAI workshop 2023); p. 1-8; https://arxiv.org/pdf/2309.01172
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU whose telephone number is (571)272-7785. The examiner can normally be reached M-F 10am-6:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
JH
/JONI HSU/Primary Examiner, Art Unit 2611