DETAILED ACTION
A. This action is in response to the following communications: Transmittal of New Application filed 01/11/2024.
B. Claims 1-20 remain pending.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 11-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention does not fall within at least one of the four categories of patent eligible subject matter recited in 35 U.S.C. 101 (process, machine, manufacture, or composition of matter). In the specification, the computer program product is mentioned only in paragraph 7 of the Summary and is not otherwise described, and the claim limitations note that the computer program product comprises computer code. A computer program per se is not included in one of the statutory categories of invention.
More information about this matter is covered in Annex IV of the Interim Guidelines for Subject Matter Eligibility.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-6, 8-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Hughes, Cory et al. (US Pub. 2020/0202171 A1), herein referred to as “Hughes”.
As for claims 1 and 11, Hughes teaches a method and corresponding computer program product executed by a document understanding engine implemented as a computer program within a computing environment, the method comprising (Fig. 2; par. 145, computer hardware environment; a computer architecture 200 to facilitate data annotation and creation of machine learning models, in accordance with an example embodiment of the disclosed technology; the computer architecture 200 includes an annotation server 202 that executes the annotation process described herein):
pre-annotating, by a specialized model of the document understanding engine, one or more files with one or more annotation suggestions (par. 145: a specialized model is termed an intermediate model herein, which includes model types as further described in par. 156; these types are used to establish a second model called a baseline model (e.g., Fig. 9, a sentence classifier is an intermediate model/algorithm used to create a baseline model, wherein both are used in training on pre-annotated data from a data stack/file); for example, logistic regression with bigram features may be selected as a baseline algorithm for text classification, whereas a hidden Markov model with spectrogram features may be selected as a baseline algorithm for automatic speech recognition; beyond baselines, each model type has an associated list of applicable algorithms that are predetermined by the annotation server 202);
Par. 145: The annotation server 202 interacts with the annotation client 206 through one or more graphical user interfaces to facilitate generation of the annotated data 104. Upon sufficient annotation of the unannotated data 102, as specified by one or more annotation training criteria (e.g., 20 annotations for each class), the annotation server 202 is configured to generate one or more intermediate models.
Par. 146: These intermediate models generate predictions on unannotated data which may be communicated over the network 208 to the annotation client 206 or another client computer (not shown) to facilitate production annotation. During normal production operation on the client computer 206, additional production annotated data is generated and stored in a production annotation database 210. For example, as new data is entered or manipulated on the client computer 206, the baseline model presents a prediction of an annotation for the new data which is accepted or amended to generate additional production annotated data. Periodically, the production annotations are fed back to the annotation server 202 and used to generate an updated model that takes into account the additional production annotated data. The production annotations may be fed back to the annotation server 202 by importing a file with the production annotations or through a standard API exposed on the annotation server 202…
presenting, in a user interface by the document understanding engine, the one or more files with the one or more annotation suggestions for validation, the user interface comprising a scoring for the one or more files (par. 168-170, Fig. 4: at 404, a prediction set is generated by a model 406 predicting an annotation for samples in the set of training candidates or a subset thereof; a sampled prediction set is generated by sampling the prediction set according to one of a plurality of sampling algorithms and arranging each sample in the sampled prediction set in a queue in order of a sampling score; the sampling score may be equal to the confidence score or may be derived from a prediction vector to represent how well a prediction fits in the sampling algorithm; user input is used in conjunction with a scoring (e.g., confidence score) for validation of predictions created by the intermediate model to create the baseline model; Fig. 17 shows a user interface example of user interaction with pre-annotated data for training the model); and
[media_image1.png (greyscale): screenshot of the annotation user interface]
training, by the document understanding engine, the specialized model based on one or more inputs to improve the scoring and the pre-annotating, the one or more inputs being received in response to the one or more annotation suggestions (par. 150 and 152: training data based upon pre-processed data (e.g., pre-annotating) is used to train the model, creating updated baseline models that were originally created from intermediate model types (image, text, sentence, voice, etc.)).
As for claims 2 and 12, Hughes teaches the method of claim 1, wherein pre-annotating of the file is performed by the specialized model and a language model (par. 146: the intermediate model and baseline model are used together in training by presenting a prediction of an annotation for the new data, which is accepted or amended to generate additional production annotated data).
As for claims 3 and 13, Hughes teaches the method of claim 2, wherein the language model predicts a field from a common set of fields for a feature in the one or more files when the specialized model is unable to provide at least one suggestion of the one or more annotation suggestions (par. 146: the baseline model presents a prediction of an annotation for the new data which is accepted or amended to generate additional production annotated data; periodically, the production annotations are fed back to the annotation server 202 and used to generate an updated model that takes into account the additional production annotated data).
As for claims 4 and 14, Hughes teaches the method of claim 1, wherein pre-annotating of the one or more files comprises one or more color coded underlining or outlining of features in the one or more files (par. 218: FIG. 20 illustrates an example graphical user interface 2000 depicting the ability to annotate adjacent entries, as well as colored feedback on annotations, in accordance with an example embodiment of the disclosed technology; for example, upon being presented an example, a user may highlight 2002 an adjacent entry and provide a selection from a menu 2004 to annotate the adjacent entry as a positive example, a negative example, or clear the highlight of the adjacent entry).
As for claims 5 and 15, Hughes teaches the method of claim 1, wherein the method comprises pre-annotating the one or more files after a domain of the file is determined (par. 156: given training data, test data, and a model type (e.g., text classifier, image classifier, semantic role labeling), the annotation server 202 selects an appropriate algorithm and loss function to use to establish a baseline).
As for claims 6 and 16, Hughes teaches the method of claim 5, wherein pre-annotating of the one or more files comprises receiving the one or more inputs during a browsing of the one or more annotation suggestions from the pre-annotating, the one or more inputs comprising one or more confirmations or clarifications that fine tune the document understanding engine during training (par. 152: at 312 and 318, a data review is performed on the annotated training set and the annotated test set; the data review includes annotation “cleaning” that identifies inconsistencies between annotations across multiple reviewers, even if the underlying samples are semantically similar but not identical).
As for claims 8 and 18, Hughes teaches the method of claim 1, wherein the one or more files are received from a device scanning one or more corresponding paper documents into a non-standardized format (par. 142: the text data may come from plain text files, such as from electronic communications through email or chat, flat files, or other types of document files (e.g., .pdf, .doc, etc.)).
As for claims 9 and 19, Hughes teaches the method of claim 1, wherein the document understanding engine provides the user interface to receive the one or more files (Fig. 10, dataset selection user interface).
As for claims 10 and 20, Hughes teaches the method of claim 1, wherein the method comprises digitizing the one or more files from non-standardized formats into usable data structures for the document understanding engine (par. 142, 201-202: image library and image model used to annotate sections/pieces of images of a dataset).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Hughes in view of Hodos, Rachel et al. (US Pub. 2023/0289619 A1), herein referred to as “Hodos”.
As for claims 7 and 17, Hughes teaches the method of claim 1, wherein the one or more files are stored in a non-standardized format (par. 142: any format can be used, which would include standardized and non-standardized formats).
Hughes does not specifically teach that the one or more files are received via a drag and drop operation; however, in the same field of endeavor, Hodos teaches in paragraph 57 that selection of models and files may be performed by a user selecting the two or more desired data model configurations they believe might be effective using a graphical user interface. This may be performed for each designed data model configuration via a GUI process of dragging-and-dropping, or otherwise selecting, data representative of desired parameters, attributes, relationships and/or configurations of the knowledge graph from a list of potential relationships, nodes, edges, attributes, filters and/or limits that may be used to generate a suitable subset of the knowledge graph.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Hodos into Hughes because Hodos suggests in paragraph 5 that there is a desire for a more efficient and robust system for generating and selecting a data model from a knowledge graph for optimizing the training of one or more ML predictive model(s), resulting in a downstream workflow with robust ML predictive model(s) for inferring relationships and the like from an ever-changing and/or updated knowledge graph and the like.
Note: It is noted that any citation to specific pages, columns, lines, or figures in the prior art references, and any interpretation of the references, should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
User Interface Configured To Facilitate User Annotation For Instance Segmentation Within Biological Sample
Document ID
US 20200327671 A1
Date Published
2020-10-15
Abstract
Novel tools and techniques are provided for implementing digital microscopy imaging using deep learning-based segmentation via multiple regression layers, implementing instance segmentation based on partial annotations, and/or implementing user interface configured to facilitate user annotation for instance segmentation. In various embodiments, a computing system might generate a user interface configured to collect training data for predicting instance segmentation within biological samples, and might display, within a display portion of the user interface, the first image comprising a field of view of a biological sample. The computing system might receive, from a user via the user interface, first user input indicating a centroid for each of a first plurality of objects of interest and second user input indicating a border around each of the first plurality of objects of interest. The computing system might train an AI system to predict instance segmentation of objects of interest in images of biological samples.
ADAPTIVE DATA MODELS AND SELECTION THEREOF
Document ID
US 20230289619 A1
Date Published
2023-09-14
Abstract
Method(s), apparatus, and system(s) are provided for selecting a data model configuration for use in training predictive models comprise receiving two or more data model configurations, extracting a data model for each of the two or more data model configurations from a knowledge graph, generating a separate predictive model for each of the extracted data models, scoring the output of each separate predictive model based on a benchmark data set, and selecting at least one data model configuration of the two or more data model configurations based on the output scores.
Inquiries
Any inquiry concerning this communication should be directed to NICHOLAS AUGUSTINE at telephone number (571)270-1056.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/NICHOLAS AUGUSTINE/Primary Examiner, Art Unit 2178 January 7, 2026